Sample records for input function quantification

  1. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
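
    The quantification route mentioned above (graphical analysis of the dynamic FDG data against an input function) can be illustrated with a short Patlak-plot sketch. This is a generic, minimal illustration under assumed frame times and curves, not the 3D Slicer extension described in the record.

    ```python
    # Minimal sketch of Patlak graphical analysis for the net influx constant Ki.
    # All arrays and the t_star cutoff are hypothetical illustration values.
    import numpy as np

    def patlak_ki(t, c_tissue, c_plasma, t_star=20.0):
        """Estimate Ki as the slope of the Patlak plot.

        t         : frame mid-times (min)
        c_tissue  : tissue time-activity curve (kBq/ml)
        c_plasma  : plasma input function sampled at t (kBq/ml)
        t_star    : time (min) after which the plot is assumed linear
        """
        # Running integral of the input function (trapezoidal rule).
        int_cp = np.concatenate(
            ([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
        mask = (t >= t_star) & (c_plasma > 0)
        x = int_cp[mask] / c_plasma[mask]      # "normalized time"
        y = c_tissue[mask] / c_plasma[mask]
        ki, intercept = np.polyfit(x, y, 1)    # slope = Ki, intercept ~ initial volume
        return ki, intercept
    ```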

  2. Quantification of regional myocardial blood flow estimation with three-dimensional dynamic rubidium-82 PET and modified spillover correction model.

    PubMed

    Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara

    2012-08-01

    Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
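
    The one-compartment quantification described above can be sketched as a convolution model fitted to a myocardial time-activity curve. The sketch below adds a simple blood-pool spillover fraction on the tissue side as an assumption; the record's modified correction (myocardium-to-blood spillover on the image-derived blood curve) and the extraction correction from K1 to MBF are not reproduced.

    ```python
    # Minimal sketch of a one-tissue-compartment fit with a blood-pool spillover
    # term (hypothetical data and parameters, not the authors' implementation).
    import numpy as np
    from scipy.optimize import curve_fit

    def one_compartment(t, ca, k1, k2):
        """Myocardial concentration: Cm(t) = k1 * (ca convolved with exp(-k2 t))."""
        dt = t[1] - t[0]                      # assumes uniform frame spacing
        kernel = np.exp(-k2 * t)
        return k1 * np.convolve(ca, kernel)[: len(t)] * dt

    def fit_mbf_model(t, ca, c_meas):
        """Fit K1, k2 and a spillover fraction fv to a measured myocardial TAC."""
        def model(t_, k1, k2, fv):
            return (1.0 - fv) * one_compartment(t_, ca, k1, k2) + fv * ca
        p0 = (0.5, 0.1, 0.3)                  # initial guesses (ml/min/g, 1/min, unitless)
        bounds = ([0, 0, 0], [5, 5, 1])
        popt, _ = curve_fit(model, t, c_meas, p0=p0, bounds=bounds)
        return dict(K1=popt[0], k2=popt[1], fv=popt[2])
    ```

    For 82Rb, K1 would still need an extraction-fraction correction before being reported as MBF; that step is tracer-specific and omitted here.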

  3. [11C]Harmine Binding to Brain Monoamine Oxidase A: Test-Retest Properties and Noninvasive Quantification.

    PubMed

    Zanderigo, Francesca; D'Agostino, Alexandra E; Joshi, Nandita; Schain, Martin; Kumar, Dileep; Parsey, Ramin V; DeLorenzo, Christine; Mann, J John

    2018-02-08

    Inhibition of the isoform A of monoamine oxidase (MAO-A), a mitochondrial enzyme catalyzing deamination of monoamine neurotransmitters, is useful in the treatment of depression and anxiety disorders. [11C]harmine, a MAO-A PET radioligand, has been used to study mood disorders and antidepressant treatment. However, [11C]harmine binding test-retest characteristics have to date only been partially investigated. Furthermore, since MAO-A is ubiquitously expressed, no reference region is available, thus requiring arterial blood sampling during PET scanning. Here, we investigate the test-retest properties of [11C]harmine binding measurements; assess effects of using a minimally invasive input function estimation on binding quantification and repeatability; and explore binding potentials estimation using a reference region-free approach. Quantification of [11C]harmine distribution volume (VT) via kinetic models and graphical analyses was compared based on absolute test-retest percent difference (TRPD), intraclass correlation coefficient (ICC), and identifiability. The optimal procedure was also used with a simultaneously estimated input function in place of the measured curve. Lastly, an approach for binding potentials quantification in the absence of a reference region was evaluated. [11C]harmine VT estimates quantified using arterial blood and kinetic modeling showed average absolute TRPD values of 7.7 to 15.6%, and ICC values between 0.56 and 0.86, across brain regions. Using simultaneous estimation (SIME) of input function resulted in VT estimates close to those obtained using arterial input function (r = 0.951, slope = 1.073, intercept = -1.037), with numerically but not statistically higher test-retest difference (range 16.6 to 22.0%), but with overall poor ICC values, between 0.30 and 0.57. Prospective studies using [11C]harmine are possible given its test-retest repeatability when binding is quantified using arterial blood. Results with SIME of input function show potential for simplifying data acquisition by replacing arterial catheterization with one arterial blood sample at 20 min post-injection. Estimation of [11C]harmine binding potentials remains a challenge that warrants further investigation.
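
    The two repeatability metrics named in this record, absolute test-retest percent difference (TRPD) and the intraclass correlation coefficient (ICC), can be computed as in the following sketch (one-way random-effects ICC; the example VT values are hypothetical).

    ```python
    # Minimal sketch of TRPD and a one-way random-effects ICC(1,1).
    import numpy as np

    def trpd(test, retest):
        """Absolute test-retest percent difference per subject (in %)."""
        test, retest = np.asarray(test, float), np.asarray(retest, float)
        return 100.0 * np.abs(test - retest) / (0.5 * (test + retest))

    def icc_1_1(test, retest):
        """One-way random-effects ICC for two repeated measurements."""
        x = np.column_stack([test, retest]).astype(float)   # (n_subjects, 2)
        n, k = x.shape
        subj_mean = x.mean(axis=1)
        grand = x.mean()
        bms = k * np.sum((subj_mean - grand) ** 2) / (n - 1)        # between-subject MS
        wms = np.sum((x - subj_mean[:, None]) ** 2) / (n * (k - 1)) # within-subject MS
        return (bms - wms) / (bms + (k - 1) * wms)

    # Example with hypothetical VT values:
    vt_test   = [12.1, 15.3, 9.8, 20.4, 11.0, 14.2]
    vt_retest = [13.0, 14.1, 10.5, 19.0, 12.2, 13.8]
    print(trpd(vt_test, vt_retest).mean(), icc_1_1(vt_test, vt_retest))
    ```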

  4. Combining image-derived and venous input functions enables quantification of serotonin-1A receptors with [carbonyl-11C]WAY-100635 independent of arterial sampling.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert

    2012-08-01

    Image-derived input functions (IDIFs) represent a promising technique for a simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims for a direct validation of IDIFs and venous data for the quantification of serotonin-1A receptor binding (5-HT(1A)) with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. Image-derived input functions were derived automatically from cerebral blood vessels, corrected for partial volume effects and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05 and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001). Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both regions of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification by full independence of arterial cannulation. This offers great potential for routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
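
    A minimal sketch of the IDIF+VIF idea above: keep the image-derived curve for the early phase and switch to interpolated venous samples from about 10 min onward. The continuity scaling at the switch point is an assumption of this sketch, not necessarily part of the published procedure.

    ```python
    # Minimal sketch of merging an image-derived input function (early phase) with
    # venous manual samples (late phase). Curves and switch time are illustrative.
    import numpy as np

    def combine_idif_vif(t_idif, idif, t_venous, venous, t_switch=10.0):
        """Return a combined input function on the IDIF time grid (minutes)."""
        venous_on_grid = np.interp(t_idif, t_venous, venous)
        # Scale the venous tail so it is continuous with the IDIF at t_switch
        # (an assumption of this sketch; the published method may use the
        # venous values directly).
        idif_at_switch = np.interp(t_switch, t_idif, idif)
        ven_at_switch = np.interp(t_switch, t_venous, venous)
        scale = idif_at_switch / ven_at_switch if ven_at_switch > 0 else 1.0
        return np.where(t_idif < t_switch, idif, scale * venous_on_grid)
    ```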

  5. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  6. Evaluation of limited blood sampling population input approaches for kinetic quantification of [18F]fluorothymidine PET data.

    PubMed

    Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula

    2012-03-24

    Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived a POP-IF3M, the only input function which was not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.
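
    The limited-sampling idea above can be sketched as follows: build a population-average input function from other subjects' curves and rescale it to the individual's few discrete samples by least squares. Sample times, normalization and the scaling rule are illustrative assumptions.

    ```python
    # Minimal sketch of a population-based input function (POP-IF) scaled to a few
    # individual blood samples; leave-one-out averaging is done by the caller.
    import numpy as np

    def population_if(all_ifs):
        """Average a stack of normalized input functions (rows = subjects)."""
        return np.mean(np.asarray(all_ifs, float), axis=0)

    def scale_to_samples(t_grid, pop_if, t_samples, measured):
        """Scale the population curve so it best matches the individual's
        discrete blood samples in a least-squares sense."""
        pop_at_samples = np.interp(t_samples, t_grid, pop_if)
        scale = np.dot(pop_at_samples, measured) / np.dot(pop_at_samples, pop_at_samples)
        return scale * pop_if

    # Leave-one-out usage (hypothetical data): for subject i, build the population
    # curve from all other subjects, then calibrate with that subject's 3/5/7 samples.
    ```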

  7. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  8. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). Preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.
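
    A minimal sketch of the SIF approach in this record: average AIFs after normalization by injected dose and body mass, then rescale for a new animal either by its dose and mass (EIF(NS)) or by one blood sample (EIF(1S)). The exact normalization convention is an assumption here.

    ```python
    # Minimal sketch of the standard-input-function (SIF) calibration described above.
    import numpy as np

    def build_sif(aifs, ids, bms):
        """aifs: (n_rats, n_times) kBq/ml; ids: injected doses; bms: body masses."""
        aifs, ids, bms = map(lambda a: np.asarray(a, float), (aifs, ids, bms))
        normalized = aifs * (bms[:, None] / ids[:, None])    # remove dose/size effects
        return normalized.mean(axis=0)

    def eif_ns(sif, injected_dose, body_mass):
        """Estimated individual input function without blood sampling."""
        return sif * injected_dose / body_mass

    def eif_1s(sif, t_grid, t_sample, measured_activity):
        """Calibrate the SIF with a single blood sample drawn at t_sample."""
        return sif * measured_activity / np.interp(t_sample, t_grid, sif)
    ```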

  9. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    We use functional (Fréchet) derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high-pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
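
    The functional-sensitivity idea can be illustrated with a toy functional in place of the MD simulation: estimate the discretized functional derivative of an output with respect to the input function by localized perturbations, then predict the output for a different input function to first order. Nothing below reproduces the authors' potentials or simulations.

    ```python
    # Minimal sketch of first-order functional sensitivity with a toy functional.
    import numpy as np

    r = np.linspace(0.9, 3.0, 200)                     # pair-distance grid (arbitrary units)

    def output_functional(v):
        """Toy 'thermodynamic output' as a functional of a pair potential v(r)."""
        weight = np.exp(-r)                            # stand-in for a pair-distribution weight
        return np.trapz(weight * v, r)

    v1 = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)      # Lennard-Jones-like reference potential
    a1 = output_functional(v1)

    # Discretized functional derivative dA/dv(r) estimated by bump perturbations.
    eps, dr = 1e-4, r[1] - r[0]
    chi = np.array([(output_functional(v1 + eps * (np.arange(r.size) == i)) - a1) / (eps * dr)
                    for i in range(r.size)])

    v2 = 4.5 * ((1.05 / r) ** 12 - (1.05 / r) ** 6)    # a different potential (different form)
    a2_predicted = a1 + np.trapz(chi * (v2 - v1), r)   # first-order functional correction
    a2_direct = output_functional(v2)                  # available here; not needed in general
    ```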

  10. Application of image-derived and venous input functions in major depression using [carbonyl-(11)C]WAY-100635.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert

    2013-04-01

    Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-(11)C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms 1) OSEM with subsequent partial volume correction (OSEM+PVC) and 2) reconstruction based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment (2TCM) and reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R(2)=0.82) but not TrueX (R(2)=0.57, p<0.001), which is further reflected by lower IDIF peaks for TrueX (p<0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of the left ventricular signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). In these volunteers, no difference in distributed parameter myocardial blood flow was observed between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
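
    For reference, Fermi-model quantification of the kind compared in this record can be sketched as fitting the tissue curve with the AIF convolved with a Fermi-shaped impulse response and reading MBF off its initial height. Parametrizations differ between implementations; this is an illustrative version only, not the study's code.

    ```python
    # Minimal sketch of Fermi-model MBF estimation from first-pass perfusion curves.
    import numpy as np
    from scipy.optimize import curve_fit

    def fermi_mbf(t, aif, tissue):
        """Fit a Fermi impulse response; return MBF as its value at t = 0."""
        dt = t[1] - t[0]                                  # assumes uniform sampling

        def model(t_, amp, k, t0):
            h = amp / (1.0 + np.exp(k * (t_ - t0)))       # Fermi-shaped impulse response
            return np.convolve(aif, h)[: len(t_)] * dt

        p0 = (0.02, 0.5, 5.0)                             # illustrative initial guesses
        popt, _ = curve_fit(model, t, tissue, p0=p0, maxfev=10000)
        amp, k, t0 = popt
        return amp / (1.0 + np.exp(-k * t0))              # impulse response at t = 0
    ```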

  12. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
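
    A minimal sketch of the pairwise-correlation idea: compute Pearson correlations between voxel time-activity curves and keep voxels that correlate strongly with an early-peaking, blood-like seed population. The seed-selection rule and thresholds are illustrative assumptions, not the published algorithm.

    ```python
    # Minimal sketch of locating "blood-like" voxels by pairwise TAC correlation.
    import numpy as np

    def blood_like_voxels(tacs, seed_frac=0.01, corr_thresh=0.95):
        """tacs: array (n_voxels, n_frames) of voxel time-activity curves."""
        n_vox, _ = tacs.shape
        # Seed candidates: voxels with the earliest, highest peaks (blood-like shape).
        peak_time = np.argmax(tacs, axis=1)
        peak_val = tacs.max(axis=1)
        order = np.lexsort((-peak_val, peak_time))          # earliest peaks first, then highest
        seeds = order[: max(1, int(seed_frac * n_vox))]

        corr = np.corrcoef(tacs)                            # pairwise Pearson correlations
        mean_corr_to_seeds = corr[:, seeds].mean(axis=1)
        return np.where(mean_corr_to_seeds > corr_thresh)[0]

    # The IDIF can then be taken as the mean TAC of the selected voxels:
    # idif = tacs[blood_like_voxels(tacs)].mean(axis=0)
    ```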

  13. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) method for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide closer results to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans of subjects provided fair image quality of both 2D images for arterial input function and portal venous input function and 3D whole liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Through the eye of the needle: a review of isotope approaches to quantify microbial processes mediating soil carbon balance.

    PubMed

    Paterson, Eric; Midwood, Andrew J; Millard, Peter

    2009-01-01

    For soils in carbon balance, losses of soil carbon from biological activity are balanced by organic inputs from vegetation. Perturbations, such as climate or land use change, have the potential to disrupt this balance and alter soil-atmosphere carbon exchanges. As the quantification of soil organic matter stocks is an insensitive means of detecting changes, certainly over short timescales, there is a need to apply methods that facilitate a quantitative understanding of the biological processes underlying soil carbon balance. We outline the processes by which plant carbon enters the soil and critically evaluate isotopic methods to quantify them. Then, we consider the balancing CO(2) flux from soil and detail the importance of partitioning the sources of this flux into those from recent plant assimilate and those from native soil organic matter. Finally, we consider the interactions between the inputs of carbon to soil and the losses from soil mediated by biological activity. We emphasize the key functional role of the microbiota in the concurrent processing of carbon from recent plant inputs and native soil organic matter. We conclude that quantitative isotope labelling and partitioning methods, coupled to those for the quantification of microbial community substrate use, offer the potential to resolve the functioning of the microbial control point of soil carbon balance in unprecedented detail.

  15. Model-free arterial spin labelling for cerebral blood flow quantification: introduction of regional arterial input functions identified by factor analysis.

    PubMed

    Knutsson, Linda; Bloch, Karin Markenroth; Holtås, Stig; Wirestam, Ronnie; Ståhlberg, Freddy

    2008-05-01

    To identify regional arterial input functions (AIFs) using factor analysis of dynamic studies (FADS) when quantification of perfusion is performed using model-free arterial spin labelling. Five healthy volunteers and one patient were examined on a 3-T Philips unit using quantitative STAR labelling of arterial regions (QUASAR). Two sets of images were retrieved, one where the arterial signal had been crushed and another where it was retained. FADS was applied to the arterial signal curves to acquire the AIFs. Perfusion maps were obtained using block-circulant SVD deconvolution and regional AIFs obtained by FADS. In the volunteers, the ASL experiment was repeated within 24 h. The patient was also examined using dynamic susceptibility contrast MRI. In the healthy volunteers, CBF was 64+/-10 ml/[min 100 g] (mean+/-S.D.) in GM and 24+/-4 ml/[min 100 g] in WM, while the mean aBV was 0.94% in GM and 0.25% in WM. Good CBF image quality and reasonable quantitative CBF values were obtained using the combined QUASAR/FADS technique. We conclude that FADS may be a useful supplement in the evaluation of ASL data using QUASAR.
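
    The deconvolution step used above to turn a tissue curve and a regional AIF into perfusion can be sketched with a block-circulant SVD, as below. The zero-padding and singular-value threshold are illustrative choices.

    ```python
    # Minimal sketch of block-circulant SVD (cSVD) deconvolution.
    import numpy as np

    def csvd_deconvolution(aif, tissue, dt, sv_thresh=0.1):
        """Return the flow-scaled residue function f*R(t); CBF ~ its maximum."""
        n = len(aif)
        m = 2 * n                                         # zero-padding to avoid wrap-around
        a_pad = np.concatenate([aif, np.zeros(m - n)])
        c_pad = np.concatenate([tissue, np.zeros(m - n)])
        # Block-circulant convolution matrix built from the padded AIF.
        A = dt * np.array([np.roll(a_pad, i) for i in range(m)]).T
        u, s, vt = np.linalg.svd(A)
        s_inv = np.where(s > sv_thresh * s.max(), 1.0 / s, 0.0)   # truncate small singular values
        residue = vt.T @ (s_inv * (u.T @ c_pad))
        return residue[:n]

    # Per-voxel CBF estimate: cbf = csvd_deconvolution(aif, tissue_curve, dt).max()
    ```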

  16. Functional DNA quantification guides accurate next-generation sequencing mutation detection in formalin-fixed, paraffin-embedded tumor biopsies

    PubMed Central

    2013-01-01

    The formalin-fixed, paraffin-embedded (FFPE) biopsy is a challenging sample for molecular assays such as targeted next-generation sequencing (NGS). We compared three methods for FFPE DNA quantification, including a novel PCR assay (‘QFI-PCR’) that measures the absolute copy number of amplifiable DNA, across 165 residual clinical specimens. The results reveal the limitations of commonly used approaches, and demonstrate the value of an integrated workflow using QFI-PCR to improve the accuracy of NGS mutation detection and guide changes in input that can rescue low quality FFPE DNA. These findings address a growing need for improved quality measures in NGS-based patient testing. PMID:24001039

  17. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, graphical validation of the threshold on a two-dimensional plot is discussed.

  18. Quantification of 11C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transporter P-glycoprotein may play an important role in pharmacoresistance. (11)C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of (11)C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic (11)C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. (11)C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of (11)C-laniquidar was low. (11)C-laniquidar time-activity curves were best fitted to an irreversible single-tissue compartment (1T1K) model using conventional models. Nevertheless, significantly better fits were obtained using 2 parallel single-tissue compartments, one for parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of the (11)C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of (11)C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model, accounting for uptake of (11)C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  19. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  21. Photon path distribution and optical responses of turbid media: theoretical analysis based on the microscopic Beer-Lambert law.

    PubMed

    Tsuchiya, Y

    2001-08-01

    A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses by time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength of l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses (the total attenuation, the mean pathlength, etc.) are expressed as functions of the PPD and the absorption distribution.

  22. The transfer functions of cardiac tissue during stochastic pacing.

    PubMed

    de Lange, Enno; Kucera, Jan P

    2009-01-01

    The restitution properties of cardiac action potential duration (APD) and conduction velocity (CV) are important factors in arrhythmogenesis. They determine alternans, wavebreak, and the patterns of reentrant arrhythmias. We developed a novel approach to characterize restitution using transfer functions. Transfer functions relate an input and an output quantity in terms of gain and phase shift in the complex frequency domain. We derived an analytical expression for the transfer function of interbeat intervals (IBIs) during conduction from one site (input) to another site downstream (output). Transfer functions can be efficiently obtained using a stochastic pacing protocol. Using simulations of conduction and extracellular mapping of strands of neonatal rat ventricular myocytes, we show that transfer functions permit the quantification of APD and CV restitution slopes when it is difficult to measure APD directly. We find that the normally positive CV restitution slope attenuates IBI variations. In contrast, a negative CV restitution slope (induced by decreasing extracellular [K(+)]) amplifies IBI variations with a maximum at the frequency of alternans. Hence, it potentiates alternans and renders conduction unstable, even in the absence of APD restitution. Thus, stochastic pacing and transfer function analysis represent a powerful strategy to evaluate restitution and the stability of conduction.
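
    The transfer-function estimate described above (gain and phase between interbeat-interval sequences at an upstream and a downstream site) can be sketched with standard cross-spectral estimators; the stochastic-pacing signals here are placeholders.

    ```python
    # Minimal sketch of a transfer-function estimate between interbeat-interval sequences.
    import numpy as np
    from scipy.signal import csd, welch

    def ibi_transfer_function(ibi_in, ibi_out, fs=1.0, nperseg=64):
        """Return frequencies (cycles per beat if fs=1), gain and phase of H(f).

        ibi_in, ibi_out : interbeat intervals (ms) at the input and output sites,
                          one value per beat (beat index treated as 'time').
        """
        x = np.asarray(ibi_in, float) - np.mean(ibi_in)
        y = np.asarray(ibi_out, float) - np.mean(ibi_out)
        f, pxy = csd(x, y, fs=fs, nperseg=nperseg)     # cross-spectral density
        _, pxx = welch(x, fs=fs, nperseg=nperseg)      # input power spectral density
        h = pxy / pxx                                  # H1 transfer-function estimate
        return f, np.abs(h), np.angle(h)
    ```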

  23. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore, it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
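
    A variance-based sensitivity index of the sampling type described here can be sketched with the common Saltelli A/B/AB estimator for first-order Sobol indices; the toy dose-effect model and the assumed input distributions below are illustrative, not the study's RBE/EQD2 implementation.

    ```python
    # Minimal sketch of sampling-based first-order Sobol sensitivity indices.
    import numpy as np

    def sobol_first_order(model, sample_inputs, n=10000, seed=0):
        """model: f(X) -> scalar per row; sample_inputs: (rng, n) -> array (n, d)."""
        rng = np.random.default_rng(seed)
        a = sample_inputs(rng, n)
        b = sample_inputs(rng, n)
        ya, yb = model(a), model(b)
        var_y = np.var(np.concatenate([ya, yb]))
        s1 = []
        for i in range(a.shape[1]):
            abi = a.copy()
            abi[:, i] = b[:, i]                   # replace the i-th column with B's column
            yabi = model(abi)
            s1.append(np.mean(yb * (yabi - ya)) / var_y)   # Saltelli first-order estimator
        return np.array(s1)

    # Toy "effect" model depending on alpha, beta and dose per fraction (illustrative).
    def toy_model(x):
        alpha, beta, dose = x[:, 0], x[:, 1], x[:, 2]
        return alpha * dose + beta * dose ** 2

    def sampler(rng, n):
        return np.column_stack([
            rng.normal(0.2, 0.2 * 0.3, n),        # alpha with ~30% relative uncertainty
            rng.normal(0.02, 0.02 * 0.3, n),      # beta with ~30% relative uncertainty
            rng.normal(2.0, 0.1, n),              # dose per fraction (Gy)
        ])

    print(sobol_first_order(toy_model, sampler))
    ```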

  24. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  25. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  26. Comparison of deterministic and stochastic approaches for isotopic concentration and decay heat uncertainty quantification on elementary fission pulse

    NASA Astrophysics Data System (ADS)

    Lahaye, S.; Huynh, T. D.; Tsilanizara, A.

    2016-03-01

    Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate the uncertainty from nuclear data inputs to isotopic concentrations and decay heat by two different methods. This paper presents comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also examined. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.
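
    The stochastic (Monte Carlo) propagation compared in this record can be sketched on a toy two-nuclide decay chain: sample decay constants and decay energies from assumed distributions and evaluate the Bateman solution for each sample. The nuclear data values below are purely illustrative and are not taken from any of the evaluations named above.

    ```python
    # Minimal sketch of Monte Carlo propagation of decay-data uncertainty onto decay heat
    # for a toy chain A -> B -> stable.
    import numpy as np

    def bateman_two(n0, lam_a, lam_b, t):
        """Analytical Bateman solution with N_A(0) = n0, N_B(0) = 0."""
        na = n0 * np.exp(-lam_a * t)
        nb = n0 * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
        return na, nb

    def decay_heat_samples(t, n_samples=5000, seed=1):
        rng = np.random.default_rng(seed)
        heats = np.empty((n_samples, t.size))
        for k in range(n_samples):
            lam_a = rng.normal(1.0e-2, 1.0e-3)    # 1/s, ~10% uncertainty (illustrative)
            lam_b = rng.normal(1.0e-3, 0.5e-4)
            q_a = rng.normal(0.5, 0.05)           # MeV per decay (illustrative)
            q_b = rng.normal(1.2, 0.12)
            na, nb = bateman_two(1.0e15, lam_a, lam_b, t)
            heats[k] = q_a * lam_a * na + q_b * lam_b * nb   # MeV/s
        return heats.mean(axis=0), heats.std(axis=0)

    t = np.logspace(0, 5, 50)                     # 1 s to ~1 day after the pulse
    mean_heat, sigma_heat = decay_heat_samples(t)
    ```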

  27. Validation and quantification of [18F]altanserin binding in the rat brain using blood input and reference tissue modeling

    PubMed Central

    Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I

    2011-01-01

    The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement was documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562

  28. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2(15)O or C(15)O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2(15)O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve and a rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of multiple tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs were well reproduced against the measured ones. The difference between the calculated CBF values obtained using the two methods was small, around <8%, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H2(15)O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
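
    One reading of the reconstruction formula described above, for a one-tissue model, is Ca(t) = (dCt/dt + k2*Ct)/K1; the rate constants of several tissue curves are then adjusted so that the inputs they reproduce agree, and their mean is taken as the IDIF. The sketch below follows that reading and fixes one K1 to pin the overall scale, which is an assumption of this illustration, not the authors' exact procedure.

    ```python
    # Minimal sketch of reconstructing an input function from multiple tissue curves.
    import numpy as np
    from scipy.optimize import minimize

    def input_from_tissue(t, ct, k1, k2):
        """Candidate input function implied by one tissue curve and (K1, k2)."""
        dct = np.gradient(ct, t)
        return (dct + k2 * ct) / k1

    def reconstruct_idif(t, tissue_curves, k1_ref=0.5, k0=(0.5, 0.1)):
        """tissue_curves: (n_regions, n_frames). K1 of the first region is fixed to
        k1_ref to pin the overall scale (an assumption of this sketch)."""
        n = len(tissue_curves)

        def unpack(p):
            k1s = np.concatenate([[k1_ref], p[: n - 1]])
            k2s = p[n - 1:]
            return k1s, k2s

        def disagreement(p):
            k1s, k2s = unpack(p)
            inputs = np.array([input_from_tissue(t, c, a, b)
                               for c, a, b in zip(tissue_curves, k1s, k2s)])
            return np.sum((inputs - inputs.mean(axis=0)) ** 2)

        x0 = np.concatenate([np.full(n - 1, k0[0]), np.full(n, k0[1])])
        bounds = [(1e-3, 5.0)] * (2 * n - 1)
        res = minimize(disagreement, x0, bounds=bounds, method="L-BFGS-B")
        k1s, k2s = unpack(res.x)
        inputs = np.array([input_from_tissue(t, c, a, b)
                           for c, a, b in zip(tissue_curves, k1s, k2s)])
        return inputs.mean(axis=0)
    ```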

  29. A Python Interface for the Dakota Iterative Systems Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E.; Syvitski, J. P.

    2016-12-01

    Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user not only must understand the structure and syntax of the Dakota input file, but also must develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface. It simplifies the process of configuring and running a Dakota experiment. A user can program to the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed by the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six Dakota analysis methods have been implemented for examples from the much larger Dakota library. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.

  30. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    PubMed Central

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
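
    The basic measurement behind the estimate above, low-frequency coherence between composite spike trains of two groups of motor units, can be sketched as follows. The binning, duration and group assignment are illustrative; the published method additionally fits a model to coherences over many group permutations.

    ```python
    # Minimal sketch of <5 Hz coherence between composite spike trains of two groups.
    import numpy as np
    from scipy.signal import coherence

    def composite_spike_train(spike_times_list, fs=1000.0, duration=30.0):
        """Sum binary spike trains (1 ms bins) of the units in one group."""
        n_bins = int(duration * fs)
        cst = np.zeros(n_bins)
        for spike_times in spike_times_list:
            idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, n_bins - 1)
            np.add.at(cst, idx, 1.0)          # accumulate spikes even in shared bins
        return cst

    def group_coherence(group_a, group_b, fs=1000.0, duration=30.0, nperseg=4096):
        cst_a = composite_spike_train(group_a, fs, duration)
        cst_b = composite_spike_train(group_b, fs, duration)
        f, coh = coherence(cst_a - cst_a.mean(), cst_b - cst_b.mean(),
                           fs=fs, nperseg=nperseg)
        low = f < 5.0                          # the low-frequency band considered above
        return f[low], coh[low]
    ```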

  31. A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Chen, X.; Zhao, C.

    2014-12-01

    The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loève expansion (KLE) and Central Limit Theorems (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty sources and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
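
    The sketch below shows the generic Karhunen-Loève step described above (eigendecomposition of the input covariance and perturbation along the leading modes only) on synthetic vertically resolved profiles; the MICROBASE inputs themselves are not used, and the ensemble, grid, and truncation level are assumptions for illustration.

```python
# Sketch: Karhunen-Loeve expansion of correlated, vertically resolved input
# profiles, and perturbation along the leading modes only. Synthetic profiles
# stand in for the MICROBASE retrieval inputs; the truncation is illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_levels, n_profiles = 50, 500
z = np.linspace(0, 10, n_levels)                      # height (km)

# Synthetic ensemble with smooth vertical correlation.
base = np.exp(-z / 5.0)
ensemble = base + 0.1 * np.cumsum(
    rng.standard_normal((n_profiles, n_levels)), axis=1) / np.sqrt(n_levels)

mean = ensemble.mean(axis=0)
cov = np.cov(ensemble, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)                  # ascending order
order = np.argsort(eigval)[::-1]
eigval, eigvec = np.clip(eigval[order], 0, None), eigvec[:, order]

k = 5                                                 # keep leading modes only
explained = eigval[:k].sum() / eigval.sum()
print(f"{k} modes explain {100 * explained:.1f}% of the input variance")

# Draw perturbed profiles from the truncated expansion; mode amplitudes are
# treated as independent standard normals.
xi = rng.standard_normal((10, k))
perturbed = mean + xi @ (eigvec[:, :k] * np.sqrt(eigval[:k])).T
print(perturbed.shape)                                # (10, n_levels)
```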

  12. Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround.

    PubMed

    Boucsein, Clemens; Nawrot, Martin P; Schnepel, Philipp; Aertsen, Ad

    2011-01-01

    Current concepts of cortical information processing and most cortical network models largely rest on the assumption that well-studied properties of local synaptic connectivity are sufficient to understand the generic properties of cortical networks. This view seems to be justified by the observation that the vertical connectivity within local volumes is strong, whereas horizontally, the connection probability between pairs of neurons drops sharply with distance. Recent neuroanatomical studies, however, have emphasized that a substantial fraction of synapses onto neocortical pyramidal neurons stems from cells outside the local volume. Here, we discuss recent findings on the signal integration from horizontal inputs, showing that they could serve as a substrate for reliable and temporally precise signal propagation. Quantification of connection probabilities and parameters of synaptic physiology as a function of lateral distance indicates that horizontal projections constitute a considerable fraction, if not the majority, of inputs from within the cortical network. Taking these non-local horizontal inputs into account may dramatically change our current view on cortical information processing.

  13. Quantification of net annual C input in terrestrial ecosystems of the Italian Peninsula under different land-uses

    USDA-ARS?s Scientific Manuscript database

    Soil organic matter (SOM) is a very important compartment of the biosphere: it represents the largest dynamic carbon (C) pool where the C is stored for the longest time period. Root inputs, such as exudates and root slush, represent a major, if not the largest, annual contribution to soil C input. Roo...

  14. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
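
    The correlation-coefficient sensitivity measures named above (Pearson, Spearman, and a simple regression-based partial correlation) are sketched below on a toy surrogate; VERA-CS is not run, only three of the fourteen uncertain inputs are mimicked, and all distributions and coefficients are assumptions for illustration.

```python
# Sketch: correlation-coefficient sensitivity measures of the kind used in
# the VERA-CS study. A toy surrogate stands in for the coupled code and the
# input distributions are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
inlet_T = rng.normal(565.0, 2.0, n)       # coolant inlet temperature (K)
power = rng.normal(1.00, 0.02, n)         # relative assembly power
flow = rng.normal(1.00, 0.03, n)          # relative coolant flow

# Toy figure of merit standing in for MDNBR.
mdnbr = (2.0 - 0.015 * (inlet_T - 565.0) - 0.8 * (power - 1.0)
         + 0.5 * (flow - 1.0) + 0.02 * rng.standard_normal(n))

X = {"inlet_T": inlet_T, "power": power, "flow": flow}
for name, x in X.items():
    r_p, _ = stats.pearsonr(x, mdnbr)
    r_s, _ = stats.spearmanr(x, mdnbr)
    # Partial correlation: correlate residuals after regressing the remaining
    # inputs out of both x and the figure of merit.
    others = np.column_stack([v for k, v in X.items() if k != name])
    A = np.column_stack([np.ones(n), others])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = mdnbr - A @ np.linalg.lstsq(A, mdnbr, rcond=None)[0]
    r_part, _ = stats.pearsonr(rx, ry)
    print(f"{name:8s}  Pearson {r_p:+.2f}  Spearman {r_s:+.2f}  partial {r_part:+.2f}")
```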

  15. Quantification of Degeneracy in Biological Systems for Characterization of Functional Interactions Between Modules

    PubMed Central

    Li, Yao; Dwivedi, Gaurav; Huang, Wen; Yi, Yingfei

    2012-01-01

    There is an evolutionary advantage in having multiple components with overlapping functionality (i.e., degeneracy) in organisms. While theoretical considerations of degeneracy have been well established in neural networks using information theory, the same concepts have not been developed for differential systems, which form the basis of many biochemical reaction network descriptions in systems biology. Here we establish mathematical definitions of degeneracy, complexity and robustness that allow for the quantification of these properties in a system. By exciting a dynamical system with noise, the mutual information associated with a selected observable output and the interacting subspaces of input components can be used to define both complexity and degeneracy. The calculation of degeneracy in a biological network is a useful metric for evaluating features such as the sensitivity of a biological network to environmental evolutionary pressure. Using a two-receptor signal transduction network, we find that redundant components will not yield high degeneracy whereas compensatory mechanisms established by pathway crosstalk will. This form of analysis permits interrogation of large-scale differential systems for non-identical, functionally equivalent features that have evolved to maintain homeostasis during disruption of individual components. PMID:22619750

  16. Compositional Solution Space Quantification for Probabilistic Software Analysis

    NASA Technical Reports Server (NTRS)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.

  17. Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...

  18. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    PubMed

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
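
    A minimal version of the K-means step for automatic AIF detection is sketched below on simulated concentration-time curves: cluster the curves and pick the cluster whose mean curve peaks earliest and highest. Preprocessing, the FCM comparison, and the clinical pipeline are not reproduced, and the curve shapes and selection score are assumptions.

```python
# Sketch: K-means selection of arterial-like voxels from simulated
# concentration-time curves (clustering step of automatic AIF detection only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.arange(0, 60, 1.0)                 # seconds

def gamma_variate(t0, peak, width):
    s = np.clip(t - t0, 0, None)
    return peak * (s / width) ** 3 * np.exp(3 * (1 - s / width))

# Simulated voxels: a few arterial curves (early, tall) and many tissue curves.
curves = [gamma_variate(5, 10, 6) + 0.3 * rng.standard_normal(t.size)
          for _ in range(30)]
curves += [gamma_variate(12, 3, 12) + 0.3 * rng.standard_normal(t.size)
           for _ in range(470)]
X = np.array(curves)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_

# Arterial cluster: highest peak, weighted by how early the peak occurs.
score = centers.max(axis=1) / (1.0 + t[centers.argmax(axis=1)])
aif_cluster = int(np.argmax(score))
aif = X[km.labels_ == aif_cluster].mean(axis=0)
print("voxels in AIF cluster:", int((km.labels_ == aif_cluster).sum()))
```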

  19. Comparison of K-Means and Fuzzy c-Means Algorithm Performance for Automated Determination of the Arterial Input Function

    PubMed Central

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection. PMID:24503700

  20. Quantification of 18F-fluorocholine kinetics in patients with prostate cancer.

    PubMed

    Verwer, Eline E; Oprea-Lager, Daniela E; van den Eertwegh, Alfons J M; van Moorselaar, Reindert J A; Windhorst, Albert D; Schwarte, Lothar A; Hendrikse, N Harry; Schuit, Robert C; Hoekstra, Otto S; Lammertsma, Adriaan A; Boellaard, Ronald

    2015-03-01

    Choline kinase is upregulated in prostate cancer, resulting in increased (18)F-fluoromethylcholine uptake. This study used pharmacokinetic modeling to validate the use of simplified methods for quantification of (18)F-fluoromethylcholine uptake in a routine clinical setting. Forty-minute dynamic PET/CT scans were acquired after injection of 204 ± 9 MBq of (18)F-fluoromethylcholine from 8 patients with histologically proven metastasized prostate cancer. Plasma input functions were obtained using continuous arterial blood-sampling as well as using image-derived methods. Manual arterial blood samples were used for calibration and correction for plasma-to-blood ratio and metabolites. Time-activity curves were derived from volumes of interest in all visually detectable lymph node metastases. (18)F-fluoromethylcholine kinetics were studied by nonlinear regression fitting of several single- and 2-tissue plasma input models to the time-activity curves. Model selection was based on the Akaike information criterion and measures of robustness. In addition, the performance of several simplified methods, such as standardized uptake value (SUV), was assessed. Best fits were obtained using an irreversible compartment model with a blood volume parameter. Parent fractions were 0.12 ± 0.4 after 20 min, necessitating individual metabolite corrections. Correspondence between venous and arterial parent fractions was low as determined by the intraclass correlation coefficient (0.61). Results for image-derived input functions that were obtained from volumes of interest in blood-pool structures distant from tissues of high (18)F-fluoromethylcholine uptake yielded good correlation to those for the blood-sampling input functions (R(2) = 0.83). SUV showed poor correlation to parameters derived from full quantitative kinetic analysis (R(2) < 0.34). In contrast, lesion activity concentration normalized to the integral of the blood activity concentration over time (SUVAUC) showed good correlation (R(2) = 0.92 for metabolite-corrected plasma; 0.65 for whole-blood activity concentrations). SUV cannot be used to quantify (18)F-fluoromethylcholine uptake. A clinical compromise could be SUVAUC derived from 2 consecutive static PET scans, one centered on a large blood-pool structure during 0-30 min after injection to obtain the blood activity concentrations and the other a whole-body scan at 30 min after injection to obtain lymph node activity concentrations. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
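
    The two simplified measures compared above are easy to state explicitly; the sketch below computes SUV (tissue activity normalised to injected dose per body weight) and SUVAUC (tissue activity normalised to the time integral of the blood activity) from synthetic time-activity curves; all numbers and curve shapes are illustrative assumptions, not study data.

```python
# Sketch: SUV and SUVAUC from synthetic time-activity curves.
import numpy as np

t = np.linspace(0, 40, 81)                       # minutes
blood = 50.0 * np.exp(-t / 4.0) + 5.0            # kBq/mL, synthetic blood curve
lesion = 12.0 * (1.0 - np.exp(-t / 8.0))         # kBq/mL, synthetic lesion curve

injected_dose_kbq = 204e3                        # ~204 MBq
body_weight_g = 80e3

c_late = lesion[t >= 30].mean()                  # late-frame lesion activity
suv = c_late / (injected_dose_kbq / body_weight_g)

# Integral of the blood activity over 0-30 min (trapezoidal rule).
mask = t <= 30
tb, cb = t[mask], blood[mask]
auc_blood = np.sum(0.5 * (cb[1:] + cb[:-1]) * np.diff(tb))
suv_auc = c_late / auc_blood

print(f"SUV    ~ {suv:.2f}")
print(f"SUVAUC ~ {suv_auc:.4f} 1/min")
```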

  1. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  2. Suitability of [18F]altanserin and PET to determine 5-HT2A receptor availability in the rat brain: in vivo and in vitro validation of invasive and non-invasive kinetic models.

    PubMed

    Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas

    2013-08-01

    While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague Dawley rats underwent 180 min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by invasive and non-invasive models with the cerebellum as the reference region, as shown by linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred, so that brain activity derived solely from the parent compound. PET data correlated very well with in vitro autoradiographic data of the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.

  3. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the output characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
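
    The sample-surrogate-resample pattern described above is sketched below with a toy stand-in for the MEMS model: Latin Hypercube samples of two parameters feed an assumed pull-in voltage function, a quadratic polynomial surrogate replaces the MARS response surface, and dense Monte Carlo on the surrogate yields the output distribution. All functions, ranges, and distributions are illustrative assumptions.

```python
# Sketch: Latin Hypercube design -> polynomial surrogate -> Monte Carlo on the
# surrogate, as a simplified analogue of the LHS/MARS/DSMC workflow above.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(5)

def pull_in_voltage(thickness_um, gap_um):
    # Hypothetical smooth dependence of pull-in voltage on thickness and gap.
    return 8.0 * thickness_um**1.5 * gap_um**1.5 / 3.0

# 1) Latin Hypercube design over the assumed parameter ranges.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=64)
lo, hi = [1.8, 1.5], [2.2, 2.5]          # thickness (um), gap (um)
X = qmc.scale(unit, lo, hi)
y = pull_in_voltage(X[:, 0], X[:, 1])

# 2) Quadratic polynomial surrogate fit by least squares (in place of MARS).
def features(X):
    t, g = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(t), t, g, t * g, t**2, g**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3) Monte Carlo on the surrogate with assumed input distributions.
t_mc = rng.normal(2.0, 0.05, 100000)
g_mc = rng.normal(2.0, 0.10, 100000)
y_mc = features(np.column_stack([t_mc, g_mc])) @ coef
print(f"pull-in voltage: mean {y_mc.mean():.2f} V, std {y_mc.std():.2f} V")
```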

  4. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly...parameters and windfields will drive our simulations. We will use uncertainty quantification methodology – polynomial chaos quadrature in combination with data integration to complete the DDDAS loop.

  5. Feasibility study of TSPO quantification with [18F]FEPPA using population-based input function.

    PubMed

    Mabrouk, Rostom; Strafella, Antonio P; Knezevic, Dunja; Ghadery, Christine; Mizrahi, Romina; Gharehgazlou, Avideh; Koshimori, Yuko; Houle, Sylvain; Rusjan, Pablo

    2017-01-01

    The input function (IF) is a core element in the quantification of Translocator protein 18 kDa with positron emission tomography (PET), as no suitable reference region with negligible binding has been identified. Arterial blood sampling is indeed needed to create the IF (ASIF). In the present manuscript we study individualization of a population-based input function (PBIF) with a single arterial manual sample to estimate total distribution volume (VT) for [18F]FEPPA and to replicate previously published clinical studies in which the ASIF was used. Data from 3 previous [18F]FEPPA studies (39 healthy controls (HC), 16 patients with Parkinson's disease (PD) and 18 with Alzheimer's disease (AD)) were reanalyzed with the new approach. PBIF was used with the Logan graphical analysis (GA) neglecting the vascular contribution to estimate VT. Time of linearization of the GA was determined with the maximum error criteria. The optimal calibration of the PBIF was determined based on the area under the curve (AUC) of the IF and the agreement range of VT between methods. The shape of the IF between groups was studied while taking into account genotyping of the polymorphism (rs6971). PBIF scaled with a single value of activity due to unmetabolized radioligand in arterial plasma, calculated as the average of a sample taken at 60 min and a sample taken at 90 min post-injection, yielded a good interval of agreement between methods and optimized the area under the curve of IF. In HC, gray matter VTs estimated by PBIF highly correlated with those using the standard method (r2 = 0.82, p = 0.0001). Bland-Altman plots revealed PBIF slightly underestimates (~1 mL/cm3) VT calculated by ASIF (including a vascular contribution). It was verified that the AUC of the ASIF was independent of genotype and disease (HC, PD, and AD). Previous clinical results were replicated using PBIF but with lower statistical power. A single arterial blood sample taken 75 minutes post-injection contains enough information to individualize the IF in the groups of subjects studied; however, the higher variability produced requires an increase in sample size to reach the same effect size.
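
    The Logan graphical analysis used above (with the vascular contribution neglected) reduces to a slope estimate after the linearisation time t*. The sketch below shows that step on synthetic curves; the input would in practice be the PBIF scaled by an arterial sample, and the kinetic constants, t*, and curve shapes here are assumptions.

```python
# Sketch: Logan graphical analysis of a synthetic tissue time-activity curve
# against a synthetic plasma input function; VT is the slope after t*.
import numpy as np

t = np.linspace(0.01, 90, 300)                              # minutes
cp = 30.0 * np.exp(-t / 3.0) + 2.0 * np.exp(-t / 60.0)      # synthetic plasma input

# Synthetic one-tissue-compartment tissue curve (K1 = 0.1, k2 = 0.05).
K1, k2 = 0.10, 0.05
dt = np.diff(t, prepend=0.0)
ct = np.zeros_like(t)
for i in range(1, t.size):
    ct[i] = ct[i - 1] + dt[i] * (K1 * cp[i] - k2 * ct[i - 1])

def cumtrapz(y, x):
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

int_cp = cumtrapz(cp, t)
int_ct = cumtrapz(ct, t)
use = t > 30                                                # assumed t*
x = int_cp[use] / ct[use]
y = int_ct[use] / ct[use]
slope, intercept = np.polyfit(x, y, 1)
print(f"Logan VT ~ {slope:.2f} (true K1/k2 = {K1 / k2:.2f})")
```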

  6. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.

  7. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
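
    The edge-detection-then-Hough-transform sequence mentioned above can be illustrated with OpenCV on a synthetic image, as in the sketch below; this is only an illustration of that pipeline, not the patented tool, and all thresholds and the test image are arbitrary assumptions.

```python
# Sketch: Canny edge detection followed by a probabilistic Hough transform,
# applied to a synthetic test image with OpenCV.
import numpy as np
import cv2

# Synthetic image: a dark background with two bright line features.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 30), (180, 60), color=255, thickness=2)
cv2.line(img, (40, 180), (160, 40), color=255, thickness=2)

edges = cv2.Canny(img, threshold1=50, threshold2=150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=5)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    length = np.hypot(x2 - x1, y2 - y1)
    print(f"detected segment ({x1},{y1})-({x2},{y2}), length {length:.0f} px")
```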

  8. Landscape and flux reveal a new global view and physical quantification of mammalian cell cycle

    PubMed Central

    Li, Chunhe; Wang, Jin

    2014-01-01

    Cell cycles, essential for biological function, have been investigated extensively. However, enabling a global understanding and defining a physical quantification of the stability and function of the cell cycle remains challenging. Based upon a mammalian cell cycle gene network, we uncovered the underlying Mexican hat landscape of the cell cycle. We found the emergence of three local basins of attraction and two major potential barriers along the cell cycle trajectory. The three local basins of attraction characterize the G1, S/G2, and M phases. The barriers characterize the G1 and S/G2 checkpoints, respectively, of the cell cycle, thus providing an explanation of the checkpoint mechanism for the cell cycle from the physical perspective. We found that the progression of a cell cycle is determined by two driving forces: curl flux for acceleration and potential barriers for deceleration along the cycle path. Therefore, the cell cycle can be promoted (suppressed), either by enhancing (suppressing) the flux (representing the energy input) or by lowering (increasing) the barrier along the cell cycle path. We found that both the entropy production rate and energy per cell cycle increase as the growth factor increases. This reflects that cell growth and division are driven by energy or nutrition supply. More energy input increases flux and decreases barrier along the cell cycle path, leading to faster oscillations. We also identified certain key genes and regulations for stability and progression of the cell cycle. Some of these findings were evidenced from experiments whereas others lead to predictions and potential anticancer strategies. PMID:25228772

  9. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.

  10. miR-MaGiC improves quantification accuracy for small RNA-seq.

    PubMed

    Russell, Pamela H; Vestal, Brian; Shi, Wen; Rudra, Pratyaydipta D; Dowell, Robin; Radcliffe, Richard; Saba, Laura; Kechris, Katerina

    2018-05-15

    Many tools have been developed to profile microRNA (miRNA) expression from small RNA-seq data. These tools must contend with several issues: the small size of miRNAs, the small number of unique miRNAs, the fact that similar miRNAs can be transcribed from multiple loci, and the presence of miRNA isoforms known as isomiRs. Methods failing to address these issues can return misleading information. We propose a novel quantification method designed to address these concerns. We present miR-MaGiC, a novel miRNA quantification method, implemented as a cross-platform tool in Java. miR-MaGiC performs stringent mapping to a core region of each miRNA and defines a meaningful set of target miRNA sequences by collapsing the miRNA space to "functional groups". We hypothesize that these two features, mapping stringency and collapsing, provide more accurate quantification at a more meaningful unit (i.e., the miRNA family). We test miR-MaGiC and several published methods on 210 small RNA-seq libraries, evaluating each method's ability to accurately reflect global miRNA expression profiles. We define accuracy as total counts close to the total number of input reads originating from miRNAs. We find that miR-MaGiC, which incorporates both stringency and collapsing, provides the most accurate counts.

  11. Spatially resolved assessment of hepatic function using 99mTc-IDA SPECT

    PubMed Central

    Wang, Hesheng; Cao, Yue

    2013-01-01

    Purpose: 99mTc-iminodiacetic acid (IDA) hepatobiliary imaging is usually quantified for hepatic function on the entire liver or regions of interest (ROIs) in the liver. The authors presented a method to estimate the hepatic extraction fraction (HEF) voxel-by-voxel from single-photon emission computed tomography (SPECT)/CT with the 99mTc-labeled IDA agent mebrofenin and evaluated the spatially resolved HEF measurements with an independent physiological measurement. Methods: Fourteen patients with intrahepatic cancers were treated with radiation therapy (RT) and imaged by 99mTc-mebrofenin SPECT before and 1 month after RT. The dynamic SPECT volumes had a resolution of 3.9 × 3.9 × 2.5 mm3. Throughout the whole liver, with approximately 50 000 voxels, voxelwise HEF quantifications were estimated and compared between using an arterial input function (AIF) from the heart and a vascular input function (VIF) from the spleen. The correlation between mean of the HEFs over the nontumor liver tissue and the overall liver function measured by Indocyanine green clearance half-time (T1/2) was assessed. Variation of the voxelwise estimation was evaluated in ROIs drawn in relatively homogeneous regions of the livers. The authors also examined effects of the time range parameter on the voxelwise HEF quantification. Results: Mean of the HEFs over the liver estimated using AIF significantly correlated with the physiological measurement T1/2 (r = 0.52, p = 0.0004), and the correlation was greatly improved by using VIF (r = 0.79, p < 0.0001). The parameter of time range for the retention phase did not lead to a significant difference in the means of the HEFs in the ROIs. Using VIF and a retention phase time range of 7–30 min, the relative variation of the voxelwise HEF in the ROIs was 10% ± 6% of respective mean HEF. Conclusions: The voxelwise HEF derived from 99mTc-IDA SPECT by the deconvolution analysis is feasible to assess the spatial distribution of hepatic function in the liver. PMID:24007177

  12. New dual in-growth core isotopic technique to assess the root litter carbon input to the soil

    USDA-ARS?s Scientific Manuscript database

    The root-derived carbon (C) input to the soil, whose quantification is often neglected because of methodological difficulties, is considered a crucial C flux for soil C dynamics and net ecosystem productivity (NEP) studies. In the present study, we compared two independent methods to quantify this C...

  13. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  14. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
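
    The sketch below shows the kind of Gaussian-process prediction with confidence bounds that the PMML 4.3 GPR element is intended to carry, using scikit-learn on toy 1-D data; the PMML serialisation itself is not shown, and the data and kernel choices are assumptions.

```python
# Sketch: GPR prediction with mean and confidence bounds on toy data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(0, 10, 25).reshape(-1, 1)            # e.g. a process setting
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(X.shape[0])  # measured output

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x = {x:4.1f}: prediction {m:+.2f}, 95% bound +/- {1.96 * s:.2f}")
```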

  15. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  16. Exponential convergence rate (the spectral convergence) of the fast Padé transform for exact quantification in magnetic resonance spectroscopy.

    PubMed

    Belkić, Dzevad

    2006-12-21

    This study deals with the most challenging numerical aspect for solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it could be feasible to carry out a rigorous computation within finite arithmetic to reconstruct exactly all the machine accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of the level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve the spectral convergence, which represents the exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with the exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal composed of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10(-11) ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12 digit accuracy with the exponentially fast rate of convergence. This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.

  17. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Rodney O.; Passalacqua, Alberto

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in the industry. Various models were proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using extended quadrature method of moments (EQMOM) and extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low order statistics, and required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effects of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed are examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results.

  18. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    PubMed

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enables the detection of less than 10 pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10 pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
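
    A generic multilevel Monte Carlo estimator of a forward expectation is sketched below on a toy problem; the nonlocal elliptic solver and the MLSMC posterior sampler from the paper are not reproduced, and the toy quantity of interest, level costs, and sample sizes are assumptions chosen only to show the telescoping-sum structure.

```python
# Sketch: multilevel Monte Carlo estimate of E[Q] via a telescoping sum of
# level corrections, with a toy "solver" whose bias shrinks with the level.
import numpy as np

rng = np.random.default_rng(7)

def qoi(theta, level):
    # Toy solver: the exact quantity sin(theta) approximated on a grid with
    # 2**level cells; the discretisation bias decays as the level increases.
    h = 2.0 ** (-level)
    return np.sin(theta) + 0.5 * h * np.cos(theta)

levels = [0, 1, 2, 3, 4]
samples_per_level = [4000, 2000, 1000, 500, 250]   # fewer samples on costly levels

estimate = 0.0
for ell, n in zip(levels, samples_per_level):
    theta = rng.normal(0.5, 0.2, n)                # random input parameter
    fine = qoi(theta, ell)
    coarse = qoi(theta, ell - 1) if ell > 0 else np.zeros(n)
    estimate += np.mean(fine - coarse)             # telescoping sum of corrections

print(f"MLMC estimate of E[Q] ~ {estimate:.4f}")
print(f"single-level check     {np.mean(qoi(rng.normal(0.5, 0.2, 4000), levels[-1])):.4f}")
```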

  20. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  1. Quantification of fossil organic matter in contaminated sediments from an industrial watershed: validation of the quantitative multimolecular approach by radiocarbon analysis.

    PubMed

    Jeanneau, Laurent; Faure, Pierre

    2010-09-01

    The quantitative multimolecular approach (QMA), based on an exhaustive identification and quantification of molecules from the extractable organic matter (EOM), has been recently developed in order to investigate organic contamination in sediments by a more complete method than the restrictive quantification of target contaminants. Such an approach allows (i) the comparison between natural and anthropogenic inputs, (ii) the comparison between modern and fossil organic matter, and (iii) the differentiation between several anthropogenic sources. However, QMA is based on the quantification of molecules recovered by organic solvent and then analyzed by gas chromatography-mass spectrometry, which represent a small fraction of sedimentary organic matter (SOM). In order to extend the conclusions of QMA to SOM, radiocarbon analyses have been performed on organic extracts and decarbonated sediments. This analysis allows (i) the differentiation between modern biomass (contemporary (14)C) and fossil organic matter ((14)C-free) and (ii) the calculation of the modern carbon percentage (PMC). At the confluence of the Fensch and Moselle Rivers, in a catchment highly contaminated by both industrial activities and urbanization, PMC values in decarbonated sediments are well correlated with the percentage of natural molecular markers determined by QMA. This highlights that, for this type of contamination by fossil organic matter inputs, the conclusions of QMA can be scaled up to SOM. QMA is an efficient environmental diagnostic tool that leads to a more realistic quantification of fossil organic matter in sediments. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Towards quantitative [18F]FDG-PET/MRI of the brain: Automated MR-driven calculation of an image-derived input function for the non-invasive determination of cerebral glucose metabolic rates.

    PubMed

    Sundar, Lalith Ks; Muzik, Otto; Rischka, Lucas; Hahn, Andreas; Rausch, Ivo; Lanzenberger, Rupert; Hienert, Marius; Klebermass, Eva-Maria; Füchsel, Frank-Günther; Hacker, Marcus; Pilz, Magdalena; Pataraia, Ekaterina; Traub-Weidinger, Tatjana; Beyer, Thomas

    2018-01-01

    Absolute quantification of PET brain imaging requires the measurement of an arterial input function (AIF), typically obtained invasively via an arterial cannulation. We present an approach to automatically calculate an image-derived input function (IDIF) and cerebral metabolic rates of glucose (CMRGlc) from the [18F]FDG PET data using an integrated PET/MRI system. Ten healthy controls underwent test-retest dynamic [18F]FDG-PET/MRI examinations. The imaging protocol consisted of a 60-min PET list-mode acquisition together with a time-of-flight MR angiography scan for segmenting the carotid arteries and intermittent MR navigators to monitor subject movement. AIFs were collected as the reference standard. Attenuation correction was performed using a separate low-dose CT scan. Assessment of the percentage difference between area-under-the-curve of IDIF and AIF yielded values within ±5%. Similar test-retest variability was seen between AIFs (9 ± 8) % and the IDIFs (9 ± 7) %. Absolute percentage difference between CMRGlc values obtained from AIF and IDIF across all examinations and selected brain regions was 3.2% (interquartile range: (2.4-4.3) %, maximum < 10%). High test-retest intravariability was observed between CMRGlc values obtained from AIF (14%) and IDIF (17%). The proposed approach provides an IDIF, which can be effectively used in lieu of AIF.

  3. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. A probabilistic model of debris-flow delivery to stream channels, demonstrated for the Coast Range of Oregon, USA

    Treesearch

    Daniel J. Miller; Kelly M. Burnett

    2008-01-01

    Debris flows are important geomorphic agents in mountainous terrains that shape channel environments and add a dynamic element to sediment supply and channel disturbance. Identification of channels susceptible to debris-flow inputs of sediment and organic debris, and quantification of the likelihood and magnitude of those inputs, are key tasks for characterizing...

  5. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    NASA Astrophysics Data System (ADS)

    Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.

    2007-02-01

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration) in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region of interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.

  6. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters with CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.

  7. Spectral Analysis of Dynamic PET Studies: A Review of 20 Years of Method Developments and Applications.

    PubMed

    Veronese, Mattia; Rizzo, Gaia; Bertoldo, Alessandra; Turkheimer, Federico E

    2016-01-01

    In Positron Emission Tomography (PET), spectral analysis (SA) allows the quantification of dynamic data by relating the radioactivity measured by the scanner over time to the underlying physiological processes of the system under investigation. Among the different approaches to the quantification of PET data, SA is based on a linear solution of the Laplace transform inversion, whereby the measured arterial and tissue time-activity curves of a radiotracer are used to calculate the input response function of the tissue. In recent years SA has been used with a large number of PET tracers in brain and non-brain applications, demonstrating that it is a very flexible and robust method for PET data analysis. Unlike the most common PET quantification approaches, which adopt standard nonlinear estimation of compartmental models or linear simplifications of them, SA can be applied without defining a specific model configuration and has demonstrated very good sensitivity to the underlying kinetics. This characteristic makes it particularly useful as an investigative tool for the analysis of novel PET tracers. The purpose of this work is to offer an overview of SA, to discuss advantages and limitations of the methodology, and to describe its applications in the PET field.
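    As a minimal sketch of the underlying computation (not the authors' implementation), SA can be set up as a non-negative least-squares fit of the tissue curve to a library of basis functions, each being the input function convolved with an exponential on a fixed grid of decay rates; all names below are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def spectral_analysis(t, cp, ct, betas):
    """Fit ct(t) = sum_j alpha_j * (cp conv exp(-beta_j t)) with alpha_j >= 0.

    t     : uniformly spaced frame mid-times
    cp    : arterial input function sampled at t
    ct    : tissue time-activity curve sampled at t
    betas : grid of candidate decay rates, typically log-spaced
    """
    dt = t[1] - t[0]
    # One column per basis exponential convolved with the input function.
    basis = np.column_stack(
        [np.convolve(cp, np.exp(-b * t))[: len(t)] * dt for b in betas]
    )
    alphas, _ = nnls(basis, ct)
    # Tissue impulse response function implied by the estimated spectrum.
    irf = (alphas * np.exp(-np.outer(t, betas))).sum(axis=1)
    return alphas, irf
```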

  8. Assessment of spill flow emissions on the basis of measured precipitation and waste water data

    NASA Astrophysics Data System (ADS)

    Hochedlinger, Martin; Gruber, Günter; Kainz, Harald

    2005-09-01

    Combined sewer overflows (CSOs) are substantial contributors to the total emissions into surface water bodies. The emitted pollution results from dry-weather waste water loads, surface runoff pollution, and the remobilisation of sewer deposits and sewer slime during storm events. One possibility for estimating overflow loads is calculation with load quantification models. Input data for these models are pollution concentrations, e.g. Total Chemical Oxygen Demand (COD tot), Total Suspended Solids (TSS) or Soluble Chemical Oxygen Demand (COD sol), rainfall series, and flow measurements for model calibration and validation. Reliable input data are essential for modelling overflow loads; otherwise the results are inevitably poor. In this paper, the correction of precipitation measurements and of sewer online measurements is presented to satisfy the requirements of the load quantification models already described. The main focus is on tipping bucket gauge measurements and their corrections. The results demonstrate the importance of these corrections, owing to their effect on load quantification modelling, and show the difference between corrected and uncorrected data for storm events with high rain intensities.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
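    A hedged sketch of the correlation-based sensitivity ranking described above, assuming a sampled ensemble is available as a matrix `X` of input parameters and a vector `y` of a figure of merit such as MDNBR (the function and names are illustrative, not part of the VERA-CS toolkit):

```python
import numpy as np
from scipy import stats

def correlation_sensitivities(X, y, names):
    """Rank uncertain inputs by Pearson and Spearman correlation with a
    figure of merit (e.g. MDNBR) evaluated over a sampled ensemble."""
    ranking = []
    for j, name in enumerate(names):
        pearson = stats.pearsonr(X[:, j], y)[0]
        spearman = stats.spearmanr(X[:, j], y)[0]
        ranking.append((name, pearson, spearman))
    # Strongest (absolute) linear association first.
    return sorted(ranking, key=lambda r: abs(r[1]), reverse=True)
```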

  10. Direct quantification of long-term rock nitrogen inputs to temperate forest ecosystems.

    PubMed

    Morford, Scott L; Houlton, Benjamin Z; Dahlgren, Randy A

    2016-01-01

    Sedimentary and metasedimentary rocks contain large reservoirs of fixed nitrogen (N), but questions remain over the importance of rock N weathering inputs in terrestrial ecosystems. Here we provide direct evidence for rock N weathering (i.e., loss of N from rock) in three temperate forest sites residing on a N-rich parent material (820-1050 mg N kg(-1); mica schist) in the Klamath Mountains (northern California and southern Oregon), USA. Our method combines a mass balance model of element addition/depletion with a procedure for quantifying fixed N in rock minerals, enabling quantification of rock N inputs to bioavailable reservoirs in soil and regolith. Across all sites, approximately 37% to 48% of the initial bedrock N content has undergone long-term weathering in the soil. Combined with regional denudation estimates (the sum of physical and chemical erosion), these weathering fractions translate to 1.6-10.7 kg x ha(-1) x yr(-1) of rock N input to these forest ecosystems. These N input fluxes are substantial in light of estimates for atmospheric sources at these sites (4.5-7.0 kg x ha(-1) x yr(-1)). In addition, N depletion from rock minerals was greater than that of sodium, suggesting active, biologically mediated weathering of growth-limiting nutrients compared to nonessential elements. These results point to regional tectonics, biologically mediated weathering effects, and rock N chemistry in shaping the magnitude of rock N inputs to the forest ecosystems examined.
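    The flux calculation itself is a short mass balance; the sketch below uses round illustrative numbers in the ranges quoted above rather than the study's site-specific data:

```python
# Rock N input ~= weathered fraction * denudation rate * bedrock N content.
n_rock = 900e-6        # bedrock N, kg N per kg rock (~900 mg/kg, illustrative)
denudation = 14_000    # denudation, kg rock per ha per yr (illustrative)
f_weathered = 0.40     # fraction of bedrock N released during weathering

n_input = f_weathered * denudation * n_rock
print(f"rock N input ~ {n_input:.1f} kg N ha^-1 yr^-1")  # ~5.0, within 1.6-10.7
```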

  11. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  12. Land-use choices follow profitability at the expense of ecological functions in Indonesian smallholder landscapes

    NASA Astrophysics Data System (ADS)

    Clough, Yann; Krishna, Vijesh V.; Corre, Marife D.; Darras, Kevin; Denmead, Lisa H.; Meijide, Ana; Moser, Stefan; Musshoff, Oliver; Steinebach, Stefanie; Veldkamp, Edzo; Allen, Kara; Barnes, Andrew D.; Breidenbach, Natalie; Brose, Ulrich; Buchori, Damayanti; Daniel, Rolf; Finkeldey, Reiner; Harahap, Idham; Hertel, Dietrich; Holtkamp, A. Mareike; Hörandl, Elvira; Irawan, Bambang; Jaya, I. Nengah Surati; Jochum, Malte; Klarner, Bernhard; Knohl, Alexander; Kotowska, Martyna M.; Krashevska, Valentyna; Kreft, Holger; Kurniawan, Syahrul; Leuschner, Christoph; Maraun, Mark; Melati, Dian Nuraini; Opfermann, Nicole; Pérez-Cruzado, César; Prabowo, Walesa Edho; Rembold, Katja; Rizali, Akhmad; Rubiana, Ratna; Schneider, Dominik; Tjitrosoedirdjo, Sri Sudarmiyati; Tjoa, Aiyen; Tscharntke, Teja; Scheu, Stefan

    2016-10-01

    Smallholder-dominated agricultural mosaic landscapes are highlighted as model production systems that deliver both economic and ecological goods in tropical agricultural landscapes, but trade-offs underlying current land-use dynamics are poorly known. Here, using the most comprehensive quantification of land-use change and associated bundles of ecosystem functions, services and economic benefits to date, we show that Indonesian smallholders predominantly choose farm portfolios with high economic productivity but low ecological value. The more profitable oil palm and rubber monocultures replace forests and agroforests critical for maintaining above- and below-ground ecological functions and the diversity of most taxa. Between the monocultures, the higher economic performance of oil palm over rubber comes with the reliance on fertilizer inputs and with increased nutrient leaching losses. Strategies to achieve an ecological-economic balance and a sustainable management of tropical smallholder landscapes must be prioritized to avoid further environmental degradation.

  13. Land-use choices follow profitability at the expense of ecological functions in Indonesian smallholder landscapes

    PubMed Central

    Clough, Yann; Krishna, Vijesh V.; Corre, Marife D.; Darras, Kevin; Denmead, Lisa H.; Meijide, Ana; Moser, Stefan; Musshoff, Oliver; Steinebach, Stefanie; Veldkamp, Edzo; Allen, Kara; Barnes, Andrew D.; Breidenbach, Natalie; Brose, Ulrich; Buchori, Damayanti; Daniel, Rolf; Finkeldey, Reiner; Harahap, Idham; Hertel, Dietrich; Holtkamp, A. Mareike; Hörandl, Elvira; Irawan, Bambang; Jaya, I. Nengah Surati; Jochum, Malte; Klarner, Bernhard; Knohl, Alexander; Kotowska, Martyna M.; Krashevska, Valentyna; Kreft, Holger; Kurniawan, Syahrul; Leuschner, Christoph; Maraun, Mark; Melati, Dian Nuraini; Opfermann, Nicole; Pérez-Cruzado, César; Prabowo, Walesa Edho; Rembold, Katja; Rizali, Akhmad; Rubiana, Ratna; Schneider, Dominik; Tjitrosoedirdjo, Sri Sudarmiyati; Tjoa, Aiyen; Tscharntke, Teja; Scheu, Stefan

    2016-01-01

    Smallholder-dominated agricultural mosaic landscapes are highlighted as model production systems that deliver both economic and ecological goods in tropical agricultural landscapes, but trade-offs underlying current land-use dynamics are poorly known. Here, using the most comprehensive quantification of land-use change and associated bundles of ecosystem functions, services and economic benefits to date, we show that Indonesian smallholders predominantly choose farm portfolios with high economic productivity but low ecological value. The more profitable oil palm and rubber monocultures replace forests and agroforests critical for maintaining above- and below-ground ecological functions and the diversity of most taxa. Between the monocultures, the higher economic performance of oil palm over rubber comes with the reliance on fertilizer inputs and with increased nutrient leaching losses. Strategies to achieve an ecological-economic balance and a sustainable management of tropical smallholder landscapes must be prioritized to avoid further environmental degradation. PMID:27725673

  14. Land-use choices follow profitability at the expense of ecological functions in Indonesian smallholder landscapes.

    PubMed

    Clough, Yann; Krishna, Vijesh V; Corre, Marife D; Darras, Kevin; Denmead, Lisa H; Meijide, Ana; Moser, Stefan; Musshoff, Oliver; Steinebach, Stefanie; Veldkamp, Edzo; Allen, Kara; Barnes, Andrew D; Breidenbach, Natalie; Brose, Ulrich; Buchori, Damayanti; Daniel, Rolf; Finkeldey, Reiner; Harahap, Idham; Hertel, Dietrich; Holtkamp, A Mareike; Hörandl, Elvira; Irawan, Bambang; Jaya, I Nengah Surati; Jochum, Malte; Klarner, Bernhard; Knohl, Alexander; Kotowska, Martyna M; Krashevska, Valentyna; Kreft, Holger; Kurniawan, Syahrul; Leuschner, Christoph; Maraun, Mark; Melati, Dian Nuraini; Opfermann, Nicole; Pérez-Cruzado, César; Prabowo, Walesa Edho; Rembold, Katja; Rizali, Akhmad; Rubiana, Ratna; Schneider, Dominik; Tjitrosoedirdjo, Sri Sudarmiyati; Tjoa, Aiyen; Tscharntke, Teja; Scheu, Stefan

    2016-10-11

    Smallholder-dominated agricultural mosaic landscapes are highlighted as model production systems that deliver both economic and ecological goods in tropical agricultural landscapes, but trade-offs underlying current land-use dynamics are poorly known. Here, using the most comprehensive quantification of land-use change and associated bundles of ecosystem functions, services and economic benefits to date, we show that Indonesian smallholders predominantly choose farm portfolios with high economic productivity but low ecological value. The more profitable oil palm and rubber monocultures replace forests and agroforests critical for maintaining above- and below-ground ecological functions and the diversity of most taxa. Between the monocultures, the higher economic performance of oil palm over rubber comes with the reliance on fertilizer inputs and with increased nutrient leaching losses. Strategies to achieve an ecological-economic balance and a sustainable management of tropical smallholder landscapes must be prioritized to avoid further environmental degradation.

  15. IMPROVED DERIVATION OF INPUT FUNCTION IN DYNAMIC MOUSE [18F]FDG PET USING BLADDER RADIOACTIVITY KINETICS

    PubMed Central

    Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng

    2013-01-01

    Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in the urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of 4 exponentials was assumed for the plasma IF, which served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. [18F]FDG uptake constants in brain, myocardium, skeletal muscle and liver computed by the Patlak analysis using estimated and measured plasma TACs were in excellent agreement (slope ~ 1; R2 > 0.938). The IF estimated using only the last blood sample acquired at the end of the study and the use of liver TAC as plasma IF provided less reliable results. Conclusions The estimated plasma IFs obtained with the hybrid model agreed well with those derived from arterial blood sampling. Importantly, the proposed method obviates the need for arterial catheterization, making it possible to perform repeated dynamic [18F]FDG PET studies on the same animal. Liver TAC is unsuitable as an input function for absolute quantification of [18F]FDG PET data. PMID:23322346
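    As a sketch of the forcing-function form assumed above (a sum of four exponentials), the snippet below fits such a model to blood-sample estimates with scipy; the parameterization and starting values are illustrative and are not the authors' renal-compartment hybrid model:

```python
import numpy as np
from scipy.optimize import curve_fit

def plasma_if(t, a1, a2, a3, a4, l1, l2, l3, l4):
    """Plasma input function modeled as a sum of four decaying exponentials."""
    return (a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)
            + a3 * np.exp(-l3 * t) + a4 * np.exp(-l4 * t))

def fit_input_function(t_samples, cp_samples):
    """Non-negative least-squares fit of the 4-exponential model to
    blood-sample (or late-time estimate) data."""
    p0 = [cp_samples.max(), 1.0, 0.5, 0.1, 5.0, 1.0, 0.1, 0.01]  # rough start
    popt, _ = curve_fit(plasma_if, t_samples, cp_samples, p0=p0,
                        bounds=(0, np.inf))
    return popt
```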

  16. Characterizing stroke lesions using digital templates and lesion quantification tools in a web-based imaging informatics system for a large-scale stroke rehabilitation clinical trial

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent

    2015-03-01

    Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, prior multicenter stroke rehabilitation trials lacked any significant neuroimaging analysis infrastructure. In stroke-related clinical trials, identification of stroke lesion characteristics can be meaningful, as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled in large stroke rehabilitation trials. To enhance the system's capability for data analysis and reporting, we have integrated new features into the system: a digital brain template display, a lesion quantification tool, and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from a user-defined contour. The digital case report form stores user input in a database and then displays the contents in the interface to allow reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily accessible web-based tools to identify the vascular territory involved, estimate lesion area, and store these results in a web-based digital format.
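    A minimal sketch of planimetric area measurement from a user-defined contour, assuming the shoelace formula over contour vertices scaled by pixel spacing (the tool's actual implementation is not detailed above):

```python
import numpy as np

def lesion_area(contour_xy, pixel_spacing_mm=(1.0, 1.0)):
    """Planimetric area (mm^2) enclosed by a contour given as an (N, 2)
    array of vertex coordinates in pixel units."""
    x = contour_xy[:, 0] * pixel_spacing_mm[0]
    y = contour_xy[:, 1] * pixel_spacing_mm[1]
    # Shoelace formula; np.roll closes the polygon implicitly.
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```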

  17. Test-Retest Repeatability of Myocardial Blood Flow Measurements using Rubidium-82 Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Efseaff, Matthew

    Rubidium-82 positron emission tomography (PET) imaging has been proposed for routine myocardial blood flow (MBF) quantification. Few studies have investigated the test-retest repeatability of this method. Same-day repeatability of rest MBF imaging was optimized with a highly automated analysis program using image-derived input functions and a dual spillover correction (SOC). The effects of heterogeneous tracer infusion profiles and subject hemodynamics on test-retest repeatability were investigated at rest and during hyperemic stress. Factors affecting rest MBF repeatability included gender, suspected coronary artery disease, and dual SOC (p < 0.001). The best repeatability coefficient for same-day rest MBF was 0.20 mL/min/g, obtained using a six-minute scan-time, iterative reconstruction, dual SOC, resting rate-pressure-product (RPP) adjustment, and a left atrium image-derived input function. The serial study repeatabilities of the optimized protocol in subjects with homogeneous RPPs and tracer infusion profiles were 0.19 and 0.53 mL/min/g at rest and stress, respectively, and 0.95 for stress/rest myocardial flow reserve (MFR). Subjects with heterogeneous tracer infusion profiles and hemodynamic conditions had significantly less repeatable MBF measurements at rest, stress, and stress/rest flow reserve (p < 0.05).
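    The repeatability coefficients quoted above are commonly obtained from a Bland-Altman style analysis of paired test-retest measurements; the sketch below assumes that convention rather than the thesis's exact formula:

```python
import numpy as np

def repeatability_coefficient(test, retest):
    """Half-width of the interval expected to contain 95% of test-retest
    differences, assuming zero-mean, normally distributed differences."""
    d = np.asarray(test, dtype=float) - np.asarray(retest, dtype=float)
    return 1.96 * d.std(ddof=1)
```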

  18. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
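    Tolerance statements of the 95%/95% type are usually justified with Wilks' order-statistics formula, which also fixes the minimum number of code runs; the sketch below assumes that approach, since SUSA's exact settings are not given here:

```python
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Minimum number of runs so the sample maximum bounds `coverage` of the
    output distribution with probability `confidence` (one-sided, 1st order)."""
    return int(np.ceil(np.log(1.0 - confidence) / np.log(coverage)))

def upper_tolerance_limit(samples):
    """First-order one-sided Wilks limit: the largest observed value."""
    return np.max(samples)

print(wilks_sample_size())  # 59 runs for a one-sided 95%/95% statement
```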

  19. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626

  20. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    PubMed

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.

  1. Multi-muscle FES force control of the human arm for arbitrary goals.

    PubMed

    Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M

    2014-05-01

    We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
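    A hedged sketch of the feedforward inversion described above, assuming a linear stimulation-to-force map F = A·u identified from input/output data; activations are bounded to [0, 1] and redundancy is resolved by penalizing activation magnitude (illustrative, not the study's identified model):

```python
import numpy as np
from scipy.optimize import lsq_linear

def feedforward_stimulation(A, f_target, reg=1e-3):
    """Choose muscle activations u in [0, 1] so that A @ u ~ f_target, with a
    small penalty on activation magnitude to resolve muscle redundancy."""
    n_muscles = A.shape[1]
    # Stack the force-tracking objective with a Tikhonov term on u.
    A_aug = np.vstack([A, np.sqrt(reg) * np.eye(n_muscles)])
    b_aug = np.concatenate([f_target, np.zeros(n_muscles)])
    return lsq_linear(A_aug, b_aug, bounds=(0.0, 1.0)).x
```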

  2. 39 CFR 3050.2 - Documentation of periodic reports.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... traced back to public documents or to primary data sources; and (3) Be submitted in a form, and be... Postal Service shall identify any input data that have changed, list any quantification techniques that...

  3. Remote sensing-aided systems for snow quantification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantifying all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.

  4. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples.

    PubMed

    Reiman, Mario; Laan, Maris; Rull, Kristiina; Sõber, Siim

    2017-08-01

    RNA degradation is a ubiquitous process that occurs in living and dead cells, as well as during handling and storage of extracted RNA. Reduced RNA quality caused by degradation is an established source of uncertainty for all RNA-based gene expression quantification techniques. RNA sequencing is an increasingly preferred method for transcriptome analyses, and dependence of its results on input RNA integrity is of significant practical importance. This study aimed to characterize the effects of varying input RNA integrity [estimated as RNA integrity number (RIN)] on transcript level estimates and delineate the characteristic differences between transcripts that differ in degradation rate. The study used ribodepleted total RNA sequencing data from a real-life clinically collected set ( n = 32) of human solid tissue (placenta) samples. RIN-dependent alterations in gene expression profiles were quantified by using DESeq2 software. Our results indicate that small differences in RNA integrity affect gene expression quantification by introducing a moderate and pervasive bias in expression level estimates that significantly affected 8.1% of studied genes. The rapidly degrading transcript pool was enriched in pseudogenes, short noncoding RNAs, and transcripts with extended 3' untranslated regions. Typical slowly degrading transcripts (median length, 2389 nt) represented protein coding genes with 4-10 exons and high guanine-cytosine content.-Reiman, M., Laan, M., Rull, K., Sõber, S. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples. © FASEB.

  5. Exploring the Underlying Mechanisms of the Xenopus laevis Embryonic Cell Cycle.

    PubMed

    Zhang, Kun; Wang, Jin

    2018-05-31

    The cell cycle is an indispensable process in proliferation and development. Despite significant efforts, global quantification and physical understanding are still challenging. In this study, we explored the mechanisms of the Xenopus laevis embryonic cell cycle by quantifying the underlying landscape and flux. We uncovered the Mexican hat landscape of the Xenopus laevis embryonic cell cycle with several local basins and barriers on the oscillation path. The local basins characterize the different phases of the Xenopus laevis embryonic cell cycle, and the local barriers represent the checkpoints. The checkpoint mechanism of the cell cycle is revealed by the landscape basins and barriers. While landscape shape determines the stabilities of the states on the oscillation path, the curl flux force determines the stability of the cell cycle flow. Replication is fundamental to the biology of living cells. We quantify the input energy (through the entropy production) as the thermodynamic requirement for initiation and sustainability of single-cell life (the cell cycle). Furthermore, we also quantify the curl flux originating from the input energy as the dynamical requirement for the emergence of a new stable phase (the cell cycle). This can provide new quantitative insight into the origin of single-cell life. In fact, the curl flux originating from the energy input or nutrient supply determines the speed and guarantees the progression of the cell cycle. The speed of the cell cycle is a hallmark of cancer. We characterized the quality of the cell cycle by the coherence time and found it is supported by the flux and energy cost. We are also able to quantify the degree of time irreversibility from the forward and backward cross-correlation functions of the stochastic traces in simulations or experiments, providing a way to quantify both the time irreversibility and the flux. Through global sensitivity analysis of the landscape and flux, we can identify the key elements for controlling the cell cycle speed. This can help to design an effective strategy for drug discovery against cancer.

  6. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  7. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  8. Study of Environmental Data Complexity using Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2017-04-01

    The main goals of environmental data science using machine learning algorithms revolve, in a broad sense, around the calibration, prediction and visualization of hidden relationships between input and output variables. In order to optimize the models and to understand the phenomenon under study, the characterization of complexity (at different levels) should be taken into account. Therefore, identifying the linear or non-linear behavior between input and output variables adds valuable information about the complexity of the phenomenon. The present research highlights and investigates the different issues that can occur when identifying the complexity (linear/non-linear) of environmental data using machine learning algorithms. In particular, the main attention is paid to the description of a self-consistent methodology for the use of Extreme Learning Machines (ELM, Huang et al., 2006), which have recently gained great popularity. By applying two ELM models (with linear and non-linear activation functions) and comparing their efficiency, the degree of linearity can be quantified. The considered approach is accompanied by simulated and real high-dimensional and multivariate data case studies. In conclusion, the current challenges and future developments in complexity quantification using environmental data mining are discussed. References - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press; Lausanne, Switzerland, p.392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
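    A sketch of the linear-versus-non-linear ELM comparison described above, assuming the standard ELM recipe of a random hidden layer followed by a least-squares readout; names and sizes are illustrative:

```python
import numpy as np

def elm_fit_predict(X_train, y_train, X_test, n_hidden=200,
                    activation=np.tanh, seed=0):
    """Extreme Learning Machine: random hidden layer, least-squares readout.
    Passing activation=lambda z: z gives the linear variant for comparison."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_train.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(activation(X_train @ W + b), y_train, rcond=None)
    return activation(X_test @ W + b) @ beta

# Comparing held-out errors of the two variants gives a simple measure of how
# non-linear the input-output relationship is:
#   err_lin = mse(y_test, elm_fit_predict(Xtr, ytr, Xte, activation=lambda z: z))
#   err_nl  = mse(y_test, elm_fit_predict(Xtr, ytr, Xte))
```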

  9. Acceleration techniques and their impact on arterial input function sampling: Non-accelerated versus view-sharing and compressed sensing sequences.

    PubMed

    Benz, Matthias R; Bongartz, Georg; Froehlich, Johannes M; Winkel, David; Boll, Daniel T; Heye, Tobias

    2018-07-01

    The aim was to investigate the variation of the arterial input function (AIF) within and between various DCE MRI sequences. A dynamic flow-phantom and steady signal reference were scanned on a 3T MRI using fast low angle shot (FLASH) 2d, FLASH3d (parallel imaging factor (P) = P0, P2, P4), volumetric interpolated breath-hold examination (VIBE) (P = P0, P3, P2 × 2, P2 × 3, P3 × 2), golden-angle radial sparse parallel imaging (GRASP), and time-resolved imaging with stochastic trajectories (TWIST). Signal over time curves were normalized and quantitatively analyzed by full width half maximum (FWHM) measurements to assess variation within and between sequences. The coefficient of variation (CV) for the steady signal reference ranged from 0.07-0.8%. The non-accelerated gradient echo FLASH2d, FLASH3d, and VIBE sequences showed low within sequence variation with 2.1%, 1.0%, and 1.6%. The maximum FWHM CV was 3.2% for parallel imaging acceleration (VIBE P2 × 3), 2.7% for GRASP and 9.1% for TWIST. The FWHM CV between sequences ranged from 8.5-14.4% for most non-accelerated/accelerated gradient echo sequences except 6.2% for FLASH3d P0 and 0.3% for FLASH3d P2; GRASP FWHM CV was 9.9% versus 28% for TWIST. MRI acceleration techniques vary in reproducibility and quantification of the AIF. Incomplete coverage of the k-space with TWIST as a representative of view-sharing techniques showed the highest variation within sequences and might be less suited for reproducible quantification of the AIF. Copyright © 2018 Elsevier B.V. All rights reserved.
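    A sketch of the FWHM and coefficient-of-variation computation used to compare bolus curves, assuming uniformly sampled single-peak curves (the acquisition details above are unchanged):

```python
import numpy as np

def fwhm(curve, dt=1.0):
    """Full width at half maximum of a single-peak curve, with linear
    interpolation of the rising and falling half-maximum crossings."""
    s = np.asarray(curve, dtype=float)
    half = s.max() / 2.0
    above = np.where(s >= half)[0]
    i0, i1 = above[0], above[-1]
    left = i0 - (s[i0] - half) / (s[i0] - s[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (s[i1] - half) / (s[i1] - s[i1 + 1]) if i1 < len(s) - 1 else float(i1)
    return (right - left) * dt

def fwhm_cv(curves, dt=1.0):
    """Coefficient of variation (%) of FWHM across repeated acquisitions."""
    widths = np.array([fwhm(c, dt) for c in curves])
    return 100.0 * widths.std(ddof=1) / widths.mean()
```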

  10. Microbial Communities Are Well Adapted to Disturbances in Energy Input

    PubMed Central

    Vallino, Joseph J.

    2016-01-01

    ABSTRACT Although microbial systems are well suited for studying concepts in ecological theory, little is known about how microbial communities respond to long-term periodic perturbations beyond diel oscillations. Taking advantage of an ongoing microcosm experiment, we studied how methanotrophic microbial communities adapted to disturbances in energy input over a 20-day cycle period. Sequencing of bacterial 16S rRNA genes together with quantification of microbial abundance and ecosystem function were used to explore the long-term dynamics (510 days) of methanotrophic communities under continuous versus cyclic chemical energy supply. We observed that microbial communities appeared inherently well adapted to disturbances in energy input and that changes in community structure in both treatments were more dependent on internal dynamics than on external forcing. The results also showed that the rare biosphere was critical to seeding the internal community dynamics, perhaps due to cross-feeding or other strategies. We conclude that in our experimental system, internal feedbacks were more important than external drivers in shaping the community dynamics over time, suggesting that ecosystems can maintain their function despite inherently unstable community dynamics. IMPORTANCE Within the broader ecological context, biological communities are often viewed as stable and as only experiencing succession or replacement when subject to external perturbations, such as changes in food availability or the introduction of exotic species. Our findings indicate that microbial communities can exhibit strong internal dynamics that may be more important in shaping community succession than external drivers. Dynamic “unstable” communities may be important for ecosystem functional stability, with rare organisms playing an important role in community restructuring. Understanding the mechanisms responsible for internal community dynamics will certainly be required for understanding and manipulating microbiomes in both host-associated and natural ecosystems. PMID:27822558

  11. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    PubMed

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  12. Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, Guillaume; Whitesides, Russell

    UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench using the pressure trace and the estimated uncertainties of the measured quantities as inputs, and then propagates them through Bayesian inference and a mixing model.

  13. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
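    A compact sketch of Morris one-at-a-time screening on the unit hypercube (illustrative only; a climate model obviously cannot be driven this way directly), which makes explicit why the cost grows linearly rather than exponentially with the number of parameters N:

```python
import numpy as np

def morris_screening(model, n_params, n_trajectories=10, delta=0.5, seed=0):
    """Simplified Morris one-at-a-time screening on the unit hypercube.

    Each trajectory costs n_params + 1 model runs, so the total cost is
    n_trajectories * (n_params + 1) -- linear, not exponential, in N.
    Returns mu* (mean |elementary effect|) and sigma for each parameter.
    """
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        y = model(x)
        for j in rng.permutation(n_params):          # perturb one input at a time
            x_next = x.copy()
            x_next[j] += delta
            y_next = model(x_next)
            effects[j].append((y_next - y) / delta)  # elementary effect of input j
            x, y = x_next, y_next
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e, ddof=1) for e in effects])
    return mu_star, sigma
```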

  14. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  15. Estimating phosphorus loss in runoff from manure and fertilizer for a phosphorus loss quantification tool.

    PubMed

    Vadas, P A; Good, L W; Moore, P A; Widman, N

    2009-01-01

    Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.

  16. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decision for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) conduct tradeoff and sensitivity analysis; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.

  17. JAva GUi for Applied Research (JAGUAR) v 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JAGUAR is a Java software tool for automatically rendering a graphical user interface (GUI) from a structured input specification. It is designed as a plug-in to the Eclipse workbench to enable users to create, edit, and externally execute analysis application input decks and then view the results. JAGUAR serves as a GUI for Sandia's DAKOTA software toolkit for optimization and uncertainty quantification. It will include problem (input deck) set-up, option specification, analysis execution, and results visualization. Through the use of wizards, templates, and views, JAGUAR helps users navigate the complexity of DAKOTA's complete input specification. JAGUAR is implemented in Java, leveraging Eclipse extension points and the Eclipse user interface. JAGUAR parses a DAKOTA NIDR input specification and presents the user with linked graphical and plain text representations of problem set-up and option specification for DAKOTA studies. After the data have been input by the user, JAGUAR generates one or more input files for DAKOTA, executes DAKOTA, and captures and interprets the results.

  18. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
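    A sketch of the active-subspace construction, assuming gradients of the quantity of interest with respect to normalized inputs are available or approximated (this is the generic recipe, not the scramjet study's specific pipeline):

```python
import numpy as np

def active_subspace(gradients, n_keep=1):
    """Estimate the active subspace from sampled gradients of the quantity of
    interest with respect to normalized input parameters.

    gradients : (n_samples, n_params) array of gradient samples
    Returns the eigenvalues (descending) and the leading n_keep eigenvectors;
    projecting inputs onto these gives the derived 'active variables'.
    """
    G = np.asarray(gradients, dtype=float)
    C = G.T @ G / G.shape[0]             # Monte Carlo estimate of E[g g^T]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:n_keep]]
```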

  19. A tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel independent brake from moderate driving to limit handling

    NASA Astrophysics Data System (ADS)

    Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo

    2018-04-01

    This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopts a hierarchical structure: supervisor - desired motion tracking controller - optimisation-based control allocation. In the supervisor, desired vehicle motion is calculated by considering transient cornering characteristics. In the desired motion tracking controller, a virtual control input is determined in the manner of sliding mode control in order to track the desired vehicle motion. In the control allocation, the virtual control input is allocated to minimise a cost function. The cost function consists of two major parts. The first part is a slip-based quantification of tyre friction utilisation, which does not require tyre force estimation. The second part is an allocation guideline, which guides the optimally allocated inputs toward a predefined solution. The proposed algorithm has been investigated via simulation in scenarios ranging from moderate driving to limit handling. Compared to Base and a direct yaw moment control system, the proposed algorithm can effectively reduce tyre dissipation energy in the moderate driving situation. Moreover, the proposed algorithm enhances limit handling performance compared to Base and the direct yaw moment control system. In addition to the comparison with Base and direct yaw moment control, the proposed algorithm was compared with a control algorithm based on known tyre force information. The results show that the performance of the proposed algorithm is similar to that of the control algorithm with known tyre force information.

  20. Quantitative differentiation of multiple virus in blood using nanoporous silicon oxide immunosensor and artificial neural network.

    PubMed

    Chakraborty, W; Ray, R; Samanta, N; RoyChaudhuri, C

    2017-12-15

    In spite of the rapid developments in various nanosensor technologies, it still remains challenging to realize a reliable ultrasensitive electrical biosensing platform which will be able to detect multiple viruses in blood simultaneously with a fairly high reproducibility without using secondary labels. In this paper, we have reported quantitative differentiation of Hep-B and Hep-C viruses in blood using nanoporous silicon oxide immunosensor array and artificial neural network (ANN). The peak frequency output (f p ) from the steady state sensitivity characteristics and the first cut off frequency (f c ) from the transient characteristics have been considered as inputs to the multilayer ANN. Implementation of several classifier blocks in the ANN architecture and coupling them with both the sensor chips, functionalized with Hep-B and Hep-C antibodies have enabled the quantification of the viruses with an accuracy of around 95% in the range of 0.04fM-1pM and with an accuracy of around 90% beyond 1pM and within 25nM in blood serum. This is the most sensitive report on multiple virus quantification using label free method. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
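
    As a minimal illustration of the non-intrusive generalized polynomial chaos approach mentioned above, the sketch below projects a hypothetical one-dimensional model with a standard normal input onto probabilists' Hermite polynomials by Gauss quadrature; it is a sketch of the general technique, not the study's multi-input setup.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def pce_coefficients(model, order, n_quad):
    """Project model(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials
    using Gauss-HermiteE quadrature (non-intrusive spectral projection)."""
    nodes, weights = He.hermegauss(n_quad)               # weight exp(-xi^2 / 2)
    mass = np.sqrt(2.0 * np.pi)                          # total mass of the weight
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        basis_k = He.hermeval(nodes, [0.0] * k + [1.0])  # He_k at the nodes
        coeffs[k] = np.sum(weights * model(nodes) * basis_k) / (mass * factorial(k))
    return coeffs

# Hypothetical scalar response of one uncertain input xi ~ N(0, 1)
model = lambda xi: np.exp(0.3 * xi)
c = pce_coefficients(model, order=4, n_quad=12)
mean = c[0]
variance = sum(c[k] ** 2 * factorial(k) for k in range(1, 5))
```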

  2. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    PubMed

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places greater demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for handling shared peptides in order to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve quantification accuracy over a wider dynamic range.
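
    freeQuant's own spectral-count algorithm is not detailed in the abstract; as a generic illustration of length-corrected spectral counting (two of the inputs listed above), the sketch below computes normalized spectral abundance factors for hypothetical proteins.

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: (SpC / L) normalized over all
    proteins. A common length-corrected spectral-count measure, shown here as
    an illustration rather than freeQuant's own algorithm."""
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

# Hypothetical toy data: spectral counts and sequence lengths (residues)
counts = {"ATP5A1": 120, "NDUFS1": 45, "CYCS": 30}
lengths = {"ATP5A1": 553, "NDUFS1": 727, "CYCS": 105}
print(nsaf(counts, lengths))
```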

  3. Modeling transport phenomena and uncertainty quantification in solidification processes

    NASA Astrophysics Data System (ADS)

    Fezi, Kyle S.

    Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed-outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in the results and in those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5 wt.% Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.

  4. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts the VERA input into an XML file that is used as input to the different VERA codes.

  5. Tall Buildings Initiative

    Science.gov Websites

    Tasks include: Task 7 - Guidelines on Modeling and Acceptance Values; Task 8 - Input Ground Motions for Tall Buildings; Task 12 - Quantification of Seismic Performance. Report No. 2017/06, "Guidelines for Performance-Based Seismic Design of Tall Buildings," has been published.

  6. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  7. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  8. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
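
    As an illustration of the kind of variance-based global sensitivity analysis referred to here, the sketch below implements a Saltelli-type Monte Carlo estimator of first-order Sobol indices on a stand-in algebraic model; the actual analyses in these papers use surrogate-accelerated methods on the flow simulations, and everything named below is hypothetical.

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=4096, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices for
    a model with independent inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # replace column i of A with B's
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Stand-in model: a weighted sum plus an interaction term over 5 inputs
model = lambda X: 3*X[:, 0] + 2*X[:, 1] + X[:, 2] + X[:, 3]*X[:, 4]
print(first_order_sobol(model, n_params=5))
```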

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  10. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zabaras, Nicolas J.

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate, data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate, low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.

  11. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm soil moisture depth and 15% in RMSE for the 15 cm depth. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty, determined based on the dominant pathway, has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  12. Ignition criterion for heterogeneous energetic materials based on hotspot size-temperature threshold

    NASA Astrophysics Data System (ADS)

    Barua, A.; Kim, S.; Horie, Y.; Zhou, M.

    2013-02-01

    A criterion for the ignition of granular explosives (GXs) and polymer-bonded explosives (PBXs) under shock and non-shock loading is developed. The formulation is based on integration of a quantification of the distributions of the sizes and locations of hotspots in loading events, obtained using a recently developed cohesive finite element method (CFEM), with the characterization by Tarver et al. [C. M. Tarver et al., "Critical conditions for impact- and shock-induced hot spots in solid explosives," J. Phys. Chem. 100, 5794-5799 (1996)] of the critical size-temperature threshold of hotspots required for chemical ignition of solid explosives. The criterion, along with the CFEM capability to quantify the thermal-mechanical behavior of GXs and PBXs, allows the critical impact velocity for ignition, time to ignition, and critical input energy at ignition to be determined as functions of material composition, microstructure, and loading conditions. The applicability of the relation between the critical input energy (E) and impact velocity of James [H. R. James, "An extension to the critical energy criterion used to predict shock initiation thresholds," Propellants, Explos., Pyrotech. 21, 8-13 (1996)] for shock loading is examined, leading to a modified interpretation, which is sensitive to microstructure and loading condition. As an application, numerical studies are undertaken to evaluate the ignition threshold of the granular high melting point explosive octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) and an HMX/Estane PBX under loading with impact velocities up to 350 m s^-1 and strain rates up to 10^5 s^-1. Results show that, for the GX, the time to criticality (t_c) is strongly influenced by initial porosity, but is insensitive to grain size. Analyses also lead to a quantification of the differences between the responses of the GXs and PBXs in terms of critical impact velocity for ignition, time to ignition, and critical input energy at ignition. Since the framework permits explicit tracking of the influences of microstructure, loading, and mechanical constraints, the calculations also show the effects of stress wave reflection and confinement condition on the ignition behaviors of GXs and PBXs.

  13. Spermatozoa input concentrations and RNA isolation methods on RNA yield and quality in bull (Bos taurus).

    PubMed

    Parthipan, Sivashanmugam; Selvaraju, Sellappan; Somashekar, Lakshminarayana; Kolte, Atul P; Arangasamy, Arunachalam; Ravindra, Janivara Parameswaraiah

    2015-08-01

    Sperm RNA can be used to understand the past spermatogenic process, future successful fertilization, and embryo development. To study sperm RNA composition and function, isolation of good-quality RNA in sufficient quantity is essential. The objective of this study was to assess the influence of sperm input concentrations and RNA isolation methods on RNA yield and quality in bull sperm. Fresh semen samples from bulls (n = 6) were snap-frozen in liquid nitrogen and stored at -80 °C. The sperm RNA was isolated using membrane-based methods combined with TRIzol (RNeasy+TRIzol and PureLink+TRIzol) and conventional methods (TRIzol, Double TRIzol, and RNAzol RT). Based on fluorometric quantification, the combined methods resulted in significantly (P < 0.05) higher total RNA yields (800-900 ng per 30-40 × 10^6 sperm) as compared with the other methods and yielded 20 to 30 fg of RNA/spermatozoon. The quality of RNA isolated by the membrane-based methods was superior to that isolated by the conventional methods. The sperm RNA was observed to be intact as well as fragmented (50-2000 bp). The study revealed that the membrane-based methods with a cocktail of lysis solution and an optimal input concentration of 30 to 40 million sperm were optimal for maximum recovery of RNA from bull spermatozoa. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio-frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.

  15. Real-Time Reverse-Transcription Quantitative Polymerase Chain Reaction Assay Is a Feasible Method for the Relative Quantification of Heregulin Expression in Non-Small Cell Lung Cancer Tissue.

    PubMed

    Kristof, Jessica; Sakrison, Kellen; Jin, Xiaoping; Nakamaru, Kenji; Schneider, Matthias; Beckman, Robert A; Freeman, Daniel; Spittle, Cindy; Feng, Wenqin

    2017-01-01

    In preclinical studies, heregulin (HRG) expression was shown to be the most relevant predictive biomarker for response to patritumab, a fully human anti-epidermal growth factor receptor 3 monoclonal antibody. In support of a phase 2 study of erlotinib ± patritumab in non-small cell lung cancer (NSCLC), a reverse-transcription quantitative polymerase chain reaction (RT-qPCR) assay for relative quantification of HRG expression from formalin-fixed paraffin-embedded (FFPE) NSCLC tissue samples was developed and validated, as described herein. Test specimens included matched FFPE normal lung and NSCLC tissue, frozen NSCLC tissue, and HRG-positive and HRG-negative cell lines. Formalin-fixed paraffin-embedded tissue was examined for functional performance. Heregulin distribution was also analyzed across 200 NSCLC commercial samples. Applied Biosystems TaqMan Gene Expression Assays were run on the Bio-Rad CFX96 real-time PCR platform. Heregulin RT-qPCR assay specificity, PCR efficiency, PCR linearity, and reproducibility were demonstrated. The final assay parameters included the Qiagen FFPE RNA Extraction Kit for RNA extraction from FFPE NSCLC tissue, 50 ng of RNA input, and 3 reference (housekeeping) genes (HMBS, IPO8, and EIF2B1), which had expression levels similar to HRG expression levels and were stable among FFPE NSCLC samples. Using the validated assay, a unimodal HRG distribution was confirmed across 185 evaluable FFPE NSCLC commercial samples. Feasibility of an RT-qPCR assay for the quantification of HRG expression in FFPE NSCLC specimens was demonstrated.
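
    The abstract does not spell out the relative-quantification arithmetic; a common choice when several reference genes are used is the 2^-ddCt method with the target Ct normalized to the mean reference Ct, sketched below with hypothetical Ct values (this is a generic illustration, not the validated assay's exact calculation).

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Relative quantification by the 2^-ddCt method, normalizing the target
    gene Ct to the mean of several reference-gene Cts (assumes ~100% PCR
    efficiency); illustrative only."""
    d_ct_sample = ct_target - np.mean(ct_refs)         # dCt of the test sample
    d_ct_cal = ct_target_cal - np.mean(ct_refs_cal)    # dCt of the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Hypothetical Ct values: target HRG and three reference genes (HMBS, IPO8, EIF2B1)
fold_change = relative_expression(
    ct_target=26.1, ct_refs=[24.8, 25.3, 25.0],
    ct_target_cal=28.4, ct_refs_cal=[24.9, 25.1, 25.2],
)
print(round(fold_change, 2))   # expression relative to the calibrator sample
```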

  16. Cerebral blood flow with [15O]water PET studies using an image-derived input function and MR-defined carotid centerlines

    NASA Astrophysics Data System (ADS)

    Fung, Edward K.; Carson, Richard E.

    2013-03-01

    Full quantitative analysis of brain PET data requires knowledge of the arterial input function into the brain. Such data are normally acquired by arterial sampling with corrections for delay and dispersion to account for the distant sampling site. Several attempts have been made to extract an image-derived input function (IDIF) directly from the internal carotid arteries that supply the brain and are often visible in brain PET images. We have devised a method of delineating the internal carotids in co-registered magnetic resonance (MR) images using the level-set method and applying the segmentations to PET images using a novel centerline approach. Centerlines of the segmented carotids were modeled as cubic splines and re-registered in PET images summed over the early portion of the scan. Using information from the anatomical center of the vessel should minimize partial volume and spillover effects. Centerline time-activity curves were taken as the mean of the values for points along the centerline interpolated from neighboring voxels. A scale factor correction was derived from calculation of cerebral blood flow (CBF) using gold standard arterial blood measurements. We have applied the method to human subject data from multiple injections of [15O]water on the HRRT. The method was assessed by calculating the area under the curve (AUC) of the IDIF and the CBF, and comparing these to values computed using the gold standard arterial input curve. The average ratio of IDIF to arterial AUC (apparent recovery coefficient: aRC) across 9 subjects with multiple (n = 69) injections was 0.49 ± 0.09 at 0-30 s post tracer arrival, 0.45 ± 0.09 at 30-60 s, and 0.46 ± 0.09 at 60-90 s. Gray and white matter CBF values were 61.4 ± 11.0 and 15.6 ± 3.0 mL/min/100 g tissue using sampled blood data. Using IDIF centerlines scaled by the average aRC over each subjects’ injections, gray and white matter CBF values were 61.3 ± 13.5 and 15.5 ± 3.4 mL/min/100 g tissue. Using global average aRC values, the means were unchanged, and intersubject variability was noticeably reduced. This MR-based centerline method with local re-registration to [15O]water PET yields a consistent IDIF over multiple injections in the same subject, thus permitting the absolute quantification of CBF without arterial input function measurements.
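
    A minimal sketch of the AUC-ratio scale-factor correction described above (the apparent recovery coefficient, aRC); the time window, sampling, and curves are hypothetical, and the real workflow derives the scale factor against gold standard arterial samples.

```python
import numpy as np

def auc(t, y):
    """Trapezoidal area under a time-activity curve."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def apparent_recovery(t, idif, arterial, window=(0.0, 60.0)):
    """AUC ratio of the image-derived to the arterial input function over a
    time window (the abstract's aRC); dividing the IDIF by aRC rescales it."""
    m = (t >= window[0]) & (t <= window[1])
    return auc(t[m], idif[m]) / auc(t[m], arterial[m])

# Hypothetical curves (kBq/mL) sampled every 5 s after tracer arrival
t = np.arange(0.0, 95.0, 5.0)
arterial = 100.0 * np.exp(-((t - 30.0) / 12.0) ** 2)
idif = 0.47 * arterial                    # spill-out-attenuated carotid signal
aRC = apparent_recovery(t, idif, arterial)
idif_corrected = idif / aRC               # scaled IDIF used for CBF quantification
```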

  17. Interactions between Snow Chemistry, Mercury Inputs and Microbial Population Dynamics in an Arctic Snowpack

    PubMed Central

    Larose, Catherine; Prestat, Emmanuel; Cecillon, Sébastien; Berger, Sibel; Malandain, Cédric; Lyon, Delina; Ferrari, Christophe; Schneider, Dominique; Dommergue, Aurélien; Vogel, Timothy M.

    2013-01-01

    We investigated the interactions between snowpack chemistry, mercury (Hg) contamination and microbial community structure and function in Arctic snow. Snowpack chemistry (inorganic and organic ions) including mercury (Hg) speciation was studied in samples collected during a two-month field study in a high Arctic site, Svalbard, Norway (79°N). Shifts in microbial community structure were determined by using a 16S rRNA gene phylogenetic microarray. We linked snowpack and meltwater chemistry to changes in microbial community structure by using co-inertia analyses (CIA) and explored changes in community function due to Hg contamination by q-PCR quantification of Hg-resistance genes in metagenomic samples. Based on the CIA, chemical and microbial data were linked (p = 0.006) with bioavailable Hg (BioHg) and methylmercury (MeHg) contributing significantly to the ordination of samples. Mercury was shown to influence community function with increases in merA gene copy numbers at low BioHg levels. Our results show that snowpacks can be considered as dynamic habitats with microbial and chemical components responding rapidly to environmental changes. PMID:24282515

  18. User's Manual: Routines for Radiative Heat Transfer and Thermometry

    NASA Technical Reports Server (NTRS)

    Risch, Timothy K.

    2016-01-01

    Determining the intensity and spectral distribution of radiation emanating from a heated surface has applications in many areas of science and engineering. Areas of research in which the quantification of spectral radiation is used routinely include thermal radiation heat transfer, infrared signature analysis, and radiation thermometry. In the analysis of radiation, it is helpful to be able to predict the radiative intensity and the spectral distribution of the emitted energy. Presented in this report is a set of routines written in Microsoft Visual Basic for Applications (VBA) (Microsoft Corporation, Redmond, Washington) and incorporating functions specific to Microsoft Excel (Microsoft Corporation, Redmond, Washington) that are useful for predicting the radiative behavior of heated surfaces. These routines include functions for calculating quantities of primary importance to engineers and scientists. In addition, the routines also provide the capability to use such information to determine surface temperatures from spectral intensities and for calculating the sensitivity of the surface temperature measurements to unknowns in the input parameters.
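
    The report's routines are written in VBA; as an illustration of the kind of calculation they provide, the following Python sketch evaluates Planck's law for a graybody and inverts a measured spectral radiance to a brightness temperature (the wavelength and emissivity values are hypothetical).

```python
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k, emissivity=1.0):
    """Graybody spectral radiance, W / (m^2 sr m), from Planck's law."""
    c1 = 2.0 * H * C**2
    c2 = H * C / KB
    return emissivity * c1 / (wavelength_m**5 * (np.exp(c2 / (wavelength_m * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Temperature of an ideal blackbody that would emit the measured
    spectral radiance at this wavelength (inverted Planck's law)."""
    c1 = 2.0 * H * C**2
    c2 = H * C / KB
    return c2 / (wavelength_m * np.log(1.0 + c1 / (wavelength_m**5 * radiance)))

# Example: a surface at 1200 K viewed at 2.2 um with an assumed emissivity of 0.85
L_meas = spectral_radiance(2.2e-6, 1200.0, emissivity=0.85)
print(brightness_temperature(2.2e-6, L_meas))   # below 1200 K because emissivity < 1
```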

  19. Longitudinal Evaluation of Fatty Acid Metabolism in Normal and Spontaneously Hypertensive Rat Hearts with Dynamic MicroSPECT Imaging

    DOE PAGES

    Reutter, Bryan W.; Huesman, Ronald H.; Brennan, Kathleen M.; ...

    2011-01-01

    The goal of this project is to develop radionuclide molecular imaging technologies using a clinical pinhole SPECT/CT scanner to quantify changes in cardiac metabolism using the spontaneously hypertensive rat (SHR) as a model of hypertensive-related pathophysiology. This paper quantitatively compares fatty acid metabolism in hearts of SHR and Wistar-Kyoto normal rats as a function of age and thereby tracks physiological changes associated with the onset and progression of heart failure in the SHR model. The fatty acid analog, 123I-labeled BMIPP, was used in longitudinal metabolic pinhole SPECT imaging studies performed every seven months for 21 months. The uniqueness of this project is the development of techniques for estimating the blood input function from projection data acquired by a slowly rotating camera that is imaging fast circulation and the quantification of the kinetics of 123I-BMIPP by fitting compartmental models to the blood and tissue time-activity curves.
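
    The abstract mentions fitting compartmental models to the blood and tissue time-activity curves; the sketch below shows a minimal one-tissue-compartment fit in Python. The study's actual 123I-BMIPP model and its estimation of the input function from projection data are more involved, and the curves, time grid, and parameter values here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_model(t, K1, k2, blood):
    """Tissue curve from a one-tissue compartment model: C_T = K1 exp(-k2 t) (*) C_a."""
    dt = t[1] - t[0]                       # assumes a uniform time grid
    irf = K1 * np.exp(-k2 * t)             # impulse response function
    return np.convolve(irf, blood)[: len(t)] * dt

# Hypothetical uniformly resampled blood and tissue time-activity curves
t = np.arange(0.0, 30.0, 0.25)                       # minutes
blood = 50.0 * t * np.exp(-t / 1.5)                  # estimated blood input function
tissue_obs = one_tissue_model(t, 0.6, 0.08, blood)
tissue_obs += np.random.default_rng(1).normal(0, 0.5, t.size)   # measurement noise

popt, _ = curve_fit(lambda tt, K1, k2: one_tissue_model(tt, K1, k2, blood),
                    t, tissue_obs, p0=(0.3, 0.05), bounds=(0, np.inf))
print(popt)    # estimated K1 (uptake) and k2 (clearance)
```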

  20. Recurrence Quantification Analysis of Processes and Products of Discourse: A Tutorial in R

    ERIC Educational Resources Information Center

    Wallot, Sebastian

    2017-01-01

    Processes of naturalistic reading and writing are based on complex linguistic input, stretch out over time, and rely on an integrated performance of multiple perceptual, cognitive, and motor processes. Hence, naturalistic reading and writing performance is nonstationary and exhibits fluctuations and transitions. However, instead of being just…

  1. Single-Input and Multiple-Output Surface Acoustic Wave Sensing for Damage Quantification in Piezoelectric Sensors.

    PubMed

    Pamwani, Lavish; Habib, Anowarul; Melandsø, Frank; Ahluwalia, Balpreet Singh; Shelke, Amit

    2018-06-22

    The main aim of the paper is damage detection at the microscale in anisotropic piezoelectric sensors using surface acoustic waves (SAWs). A novel technique based on the single input and multiple output of Rayleigh waves is proposed to detect microscale cracks/flaws in the sensor. A convex-shaped interdigital transducer is fabricated for excitation of divergent SAWs in the sensor. An angularly shaped interdigital transducer (IDT) is fabricated at 0 degrees and ±20 degrees for sensing the convex shape evolution of SAWs. Precalibrated damage was introduced into the piezoelectric sensor material using a micro-indenter in the direction perpendicular to the pointing direction of the SAW. Damage detection algorithms based on empirical mode decomposition (EMD) and principal component analysis (PCA) are implemented to quantify the evolution of damage in the piezoelectric sensor material. The evolution of the damage was quantified using a proposed condition indicator (CI) based on the normalized Euclidean norm of the change in principal angles corresponding to pristine and damaged states. The CI provides a robust and accurate metric for the detection and quantification of damage.
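
    The condition indicator is described only as a normalized Euclidean norm of the change in principal angles between pristine and damaged states; the sketch below assumes PCA subspaces built from ensembles of sensed waveforms and a normalization by the maximum possible angle norm, both of which are assumptions rather than the authors' definition.

```python
import numpy as np
from scipy.linalg import subspace_angles

def pca_basis(signals, n_components=3):
    """Orthonormal basis of the dominant principal directions of an ensemble
    of sensed waveforms (rows = measurements, columns = time samples)."""
    centered = signals - signals.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components].T

def condition_indicator(baseline, current, n_components=3):
    """Normalized Euclidean norm of the principal angles between the pristine
    and current PCA subspaces (the normalization choice is assumed here)."""
    angles = subspace_angles(pca_basis(baseline, n_components),
                             pca_basis(current, n_components))
    return np.linalg.norm(angles) / np.linalg.norm(np.full(n_components, np.pi / 2))

# Hypothetical SAW measurement ensembles: 20 pristine and 20 post-indentation records
rng = np.random.default_rng(2)
pristine = rng.normal(size=(20, 256))
damaged = pristine + 0.3 * rng.normal(size=(20, 256))
print(condition_indicator(pristine, damaged))   # 0 = unchanged, 1 = orthogonal subspaces
```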

  2. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  3. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology

    NASA Astrophysics Data System (ADS)

    Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.

    2012-04-01

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.
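
    Spectral analysis of the kind referred to above expands the tissue curve on a nonnegative sum of input functions convolved with decaying exponentials; the sketch below is a minimal double-input version under assumptions (a uniform time grid, a hypothetical basis grid, and synthetic curves), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def spectral_analysis(t, tissue, inputs, betas=np.logspace(-3, 1, 40)):
    """Nonnegative spectral analysis of a tissue TAC against one or more
    plasma input functions (double-input when a metabolite curve is added).
    Returns the nonnegative amplitudes for every (input, beta) basis pair."""
    dt = t[1] - t[0]                                    # uniform grid assumed
    columns = []
    for cin in inputs:                                  # parent, metabolite, ...
        for b in betas:
            columns.append(np.convolve(cin, np.exp(-b * t))[: len(t)] * dt)
    A = np.column_stack(columns)
    amplitudes, _ = nnls(A, tissue)
    return amplitudes.reshape(len(inputs), len(betas))

# Hypothetical parent and metabolite plasma curves and a synthetic tissue TAC
t = np.arange(0.0, 60.0, 0.5)                           # minutes
parent = 30.0 * t * np.exp(-t / 3.0)
metabolite = 8.0 * (1.0 - np.exp(-t / 10.0))
tissue = (0.2 * np.convolve(parent, np.exp(-0.05 * t))[: len(t)]
          + 0.05 * np.convolve(metabolite, np.exp(-0.01 * t))[: len(t)]) * 0.5
amps = spectral_analysis(t, tissue, [parent, metabolite])
```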

  4. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  5. Quantification of Plasma miRNAs by Digital PCR for Cancer Diagnosis

    PubMed Central

    Ma, Jie; Li, Ning; Guarnera, Maria; Jiang, Feng

    2013-01-01

    Analysis of plasma microRNAs (miRNAs) by quantitative polymerase chain reaction (qPCR) provides a potential approach for cancer diagnosis. However, absolutely quantifying low-abundance plasma miRNAs is challenging with qPCR. Digital PCR offers a unique means for assessment of nucleic acids present at low levels in plasma. This study aimed to evaluate the efficacy of digital PCR for quantification of plasma miRNAs and the potential utility of this technique for cancer diagnosis. We used digital PCR to quantify the copy number of plasma microRNA-21-5p (miR-21-5p) and microRNA-335-3p (miR-335-3p) in 36 lung cancer patients and 38 controls. Digital PCR showed a high degree of linearity and quantitative correlation with miRNAs in a dynamic range from 1 to 10,000 copies/μL of input, with high reproducibility. qPCR exhibited a dynamic range from 100 to 1 × 10^7 copies/μL of input. Digital PCR had a higher sensitivity to detect copy number of the miRNAs compared with qPCR. In plasma, digital PCR could detect copy number of both miR-21-5p and miR-335-3p, whereas qPCR was only able to assess miR-21-5p. Quantification of the plasma miRNAs by digital PCR provided 71.8% sensitivity and 80.6% specificity in distinguishing lung cancer patients from cancer-free subjects. PMID:24277982
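
    Digital PCR converts the fraction of positive partitions to an absolute concentration through a Poisson correction; the sketch below shows that standard calculation with hypothetical droplet counts and partition volume (a generic illustration, not the study's exact pipeline).

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Copies per microliter of input from digital PCR partition counts,
    using the standard Poisson correction lambda = -ln(1 - p)."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)          # mean copies per partition
    return lam / partition_volume_ul

# Hypothetical droplet counts for a plasma miRNA assay (assumed ~0.85 nL droplets)
print(dpcr_concentration(n_positive=1200, n_total=15000, partition_volume_ul=0.00085))
```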

  6. A sensitive and accurate quantification method for the detection of hepatitis B virus covalently closed circular DNA by the application of a droplet digital polymerase chain reaction amplification system.

    PubMed

    Mu, Di; Yan, Liang; Tang, Hui; Liao, Yong

    2015-10-01

    The aim was to develop a sensitive and accurate assay system for the quantification of covalently closed circular HBV DNA (cccDNA) for future clinical monitoring of cccDNA fluctuation during antiviral therapy in the liver of infected patients. A droplet digital PCR (ddPCR)-based assay system detected template DNA input at the single-copy level (or ~10^-5 pg of plasmid HBV DNA) using serially diluted plasmid HBV DNA samples. Compared with the conventional quantitative PCR assay for the detection of cccDNA, which required at least 50 ng of template DNA input, a parallel experiment applying the ddPCR system demonstrates that the lowest detection limit of cccDNA from HepG2.2.15 cellular DNA samples is around 1 ng, which is equivalent to 0.54 ± 0.94 copies of cccDNA. In addition, we demonstrated that the addition of cccDNA-safe exonuclease and the use of cccDNA-specific primers in the ddPCR assay system significantly improved the detection accuracy of HBV cccDNA from HepG2.2.15 cellular DNA samples. The ddPCR-based cccDNA detection system is a sensitive and accurate assay for the quantification of cccDNA in HBV-transfected HepG2.2.15 cellular DNA samples and may represent an important method for future application in monitoring cccDNA fluctuation during antiviral therapy.

  7. Robust nonparametric quantification of clustering density of molecules in single-molecule localization microscopy

    PubMed Central

    Jiang, Shenghang; Park, Seongjin; Challapalli, Sai Divya; Fei, Jingyi; Wang, Yong

    2017-01-01

    We report a robust nonparametric descriptor, J′(r), for quantifying the density of clustering molecules in single-molecule localization microscopy. J′(r), based on nearest neighbor distribution functions, does not require any parameter as an input for analyzing point patterns. We show that J′(r) displays a valley shape in the presence of clusters of molecules, and the characteristics of the valley reliably report the clustering features in the data. Most importantly, the position of the J′(r) valley (rJm′) depends exclusively on the density of clustering molecules (ρc). Therefore, it is ideal for direct estimation of the clustering density of molecules in single-molecule localization microscopy. As an example, this descriptor was applied to estimate the clustering density of ptsG mRNA in E. coli bacteria. PMID:28636661
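
    J′(r) is the authors' descriptor and its exact definition is not reproduced in the abstract; the sketch below computes the classical ingredients such statistics build on, the nearest-neighbour distance distribution G(r) and the empty-space function F(r), for a hypothetical clustered localization pattern, and forms the classical J-function from them.

```python
import numpy as np
from scipy.spatial import cKDTree

def g_and_f(points, r, n_reference=20000, seed=0):
    """Empirical nearest-neighbour distance CDF G(r) and empty-space
    function F(r) for a 2-D point pattern (ingredients of J-type statistics)."""
    tree = cKDTree(points)
    # G: distance from each localization to its nearest other localization
    d_nn = tree.query(points, k=2)[0][:, 1]
    # F: distance from uniformly random reference positions to the nearest point
    lo, hi = points.min(axis=0), points.max(axis=0)
    ref = np.random.default_rng(seed).uniform(lo, hi, size=(n_reference, 2))
    d_ref = tree.query(ref, k=1)[0]
    G = np.array([(d_nn <= ri).mean() for ri in r])
    F = np.array([(d_ref <= ri).mean() for ri in r])
    return G, F

# Hypothetical single-molecule localizations (nm) with one clustered region
rng = np.random.default_rng(1)
background = rng.uniform(0, 2000, size=(500, 2))
cluster = rng.normal(1000, 40, size=(300, 2))
pts = np.vstack([background, cluster])
r = np.linspace(1, 60, 60)
G, F = g_and_f(pts, r)
J = (1 - G) / (1 - F)          # classical J-function; J < 1 indicates clustering
```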

  8. Introduction to the project DUNE, a DUst experiment in a low Nutrient, low chlorophyll Ecosystem

    NASA Astrophysics Data System (ADS)

    Guieu, C.; Dulac, F.; Ridame, C.; Pondaven, P.

    2013-07-01

    The main goal of the project DUNE was to estimate the impact of atmospheric deposition on an oligotrophic ecosystem, based on mesocosm experiments simulating strong atmospheric inputs of aeolian dust. Atmospheric deposition is now recognized as a significant source of macro- and micro-nutrients for the surface ocean, but the quantification of its role in the biological carbon pump is still poorly determined. In DUNE we proposed to investigate the role of atmospheric inputs in the functioning of an oligotrophic system particularly well suited to this kind of study: the Mediterranean Sea. The Mediterranean Sea - etymologically, the sea surrounded by land - is subject to atmospheric inputs that are highly variable in both frequency and intensity. During the thermal stratification period, only atmospheric deposition is likely to fertilize Mediterranean surface waters, which become very oligotrophic due to nutrient depletion after the spring bloom. This paper describes the objectives of DUNE and the implementation plan of a series of mesocosm experiments during which either wet or dry deposition, and a succession of two wet deposition fluxes, of 10 g m^-2 of Saharan dust were simulated. After the presentation of the main biogeochemical initial conditions of the site at the time of each experiment, a general overview of the papers published in this special issue is presented, including laboratory results on the solubility of trace elements in erodible soils in addition to results from the mesocosm experiments. Our mesocosm experiments aimed to be representative of real atmospheric deposition events onto the surface of oligotrophic marine waters and were an original attempt to consider the vertical dimension in the study of the fate of atmospheric deposition within surface waters. The results obtained can be more easily extrapolated to quantify budgets and parameterize processes such as particle migration through a "captured water column". The strong simulated dust deposition events were found to impact the dissolved concentrations of inorganic phosphorus, nitrogen, iron, and other trace elements. In the case of Fe, adsorption on sinking particles yields a decrease in dissolved concentration unless binding ligands were produced following a former deposition input and associated fertilization. For the first time, a quantification of the C export induced by the aerosol addition was possible. Description and parameterization of biotic (heterotrophs and autotrophs, including diazotrophs) and abiotic processes (ballast effect due to lithogenic particles) after dust addition in sea surface water result in a net particulate organic carbon export in part controlled by the "lithogenic carbon pump".

  9. Stationary plasma thruster evaluation in Russia

    NASA Technical Reports Server (NTRS)

    Brophy, John R.

    1992-01-01

    A team of electric propulsion specialists from U.S. government laboratories experimentally evaluated the performance of a 1.35-kW Stationary Plasma Thruster (SPT) at the Scientific Research Institute of Thermal Processes in Moscow and at 'Fakel' Enterprise in Kaliningrad, Russia. The evaluation was performed using a combination of U.S. and Russian instrumentation and indicated that the actual performance of the thruster appears to be close to the claimed performance. The claimed performance was a specific impulse of 16,000 m/s, an overall efficiency of 50 percent, and an input power of 1.35 kW, and is superior to the performance of western electric thrusters at this specific impulse. The unique performance capabilities of the stationary plasma thruster, along with claims that more than fifty of the 660-W thrusters have been flown in space on Russian spacecraft, attracted the interest of western spacecraft propulsion specialists. A two-phase program was initiated to evaluate the stationary plasma thruster performance and technology. The first phase of this program, to experimentally evaluate the performance of the thruster with U.S. instrumentation in Russia, is described in this report. The second phase objective is to determine the suitability of the stationary plasma thruster technology for use on western spacecraft. This will be accomplished by bringing stationary plasma thrusters to the U.S. for quantification of thruster erosion rates, measurements of the performance variation as a function of long-duration operation, quantification of the exhaust beam divergence angle, and determination of the non-propellant efflux from the thruster. These issues require quantification in order to maximize the probability for user application of the SPT technology and significantly increase the propulsion capabilities of U.S. spacecraft.
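
    As a quick consistency check on the quoted figures, the standard electric-propulsion relation T = 2*eta*P/v_e links the stated input power, efficiency, and exhaust velocity (specific impulse expressed in m/s) to thrust; the values below are taken from the abstract.

```python
# Thrust implied by the quoted SPT performance figures: T = 2 * eta * P / v_e
power_w = 1350.0          # input power, W
efficiency = 0.50         # overall efficiency
v_exhaust = 16000.0       # specific impulse expressed as exhaust velocity, m/s

thrust_n = 2.0 * efficiency * power_w / v_exhaust
print(f"{thrust_n * 1000:.1f} mN")   # roughly 84 mN for the 1.35-kW thruster
```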

  10. Expert review on poliovirus immunity and transmission.

    PubMed

    Duintjer Tebbens, Radboud J; Pallansch, Mark A; Chumakov, Konstantin M; Halsey, Neal A; Hovi, Tapani; Minor, Philip D; Modlin, John F; Patriarca, Peter A; Sutter, Roland W; Wright, Peter F; Wassilak, Steven G F; Cochi, Stephen L; Kim, Jong-Hoon; Thompson, Kimberly M

    2013-04-01

    Successfully managing risks to achieve wild polioviruses (WPVs) eradication and address the complexities of oral poliovirus vaccine (OPV) cessation to stop all cases of paralytic poliomyelitis depends strongly on our collective understanding of poliovirus immunity and transmission. With increased shifting from OPV to inactivated poliovirus vaccine (IPV), numerous risk management choices motivate the need to understand the tradeoffs and uncertainties and to develop models to help inform decisions. The U.S. Centers for Disease Control and Prevention hosted a meeting of international experts in April 2010 to review the available literature relevant to poliovirus immunity and transmission. This expert review evaluates 66 OPV challenge studies and other evidence to support the development of quantitative models of poliovirus transmission and potential outbreaks. This review focuses on characterization of immunity as a function of exposure history in terms of susceptibility to excretion, duration of excretion, and concentration of excreted virus. We also discuss the evidence of waning of host immunity to poliovirus transmission, the relationship between the concentration of poliovirus excreted and infectiousness, the importance of different transmission routes, and the differences in transmissibility between OPV and WPV. We discuss the limitations of the available evidence for use in polio risk models, and conclude that despite the relatively large number of studies on immunity, very limited data exist to directly support quantification of model inputs related to transmission. Given the limitations in the evidence, we identify the need for expert input to derive quantitative model inputs from the existing data. © 2012 Society for Risk Analysis.

  11. The Frog Vestibular System as a Model for Lesion-Induced Plasticity: Basic Neural Principles and Implications for Posture Control

    PubMed Central

    Lambert, François M.; Straka, Hans

    2011-01-01

    Studies of behavioral consequences after unilateral labyrinthectomy have a long tradition in the quest of determining rules and limitations of the central nervous system (CNS) to exert plastic changes that assist the recuperation from the loss of sensory inputs. Frogs were among the first animal models to illustrate general principles of regenerative capacity and reorganizational neural flexibility after a vestibular lesion. The continuous successful use of the latter animals is in part based on the easy access and identifiability of nerve branches to inner ear organs for surgical intervention, the possibility to employ whole brain preparations for in vitro studies and the limited degree of freedom of postural reflexes for quantification of behavioral impairments and subsequent improvements. Major discoveries that increased the knowledge of post-lesional reactive mechanisms in the CNS include alterations in vestibular commissural signal processing and activation of cooperative changes in excitatory and inhibitory inputs to disfacilitated neurons. Moreover, the observed increase of synaptic efficacy in propriospinal circuits illustrates the importance of limb proprioceptive inputs for postural recovery. Accumulated evidence suggests that the lesion-induced neural plasticity is not a goal-directed process that aims toward a meaningful restoration of vestibular reflexes but rather attempts a survival of those neurons that have lost their excitatory inputs. Accordingly, the reaction mechanism causes an improvement of some components but also a deterioration of other aspects as seen by spatio-temporally inappropriate vestibulo-motor responses, similar to the consequences of plasticity processes in various sensory systems and species. The generality of the findings indicate that frogs continue to form a highly amenable vertebrate model system for exploring molecular and physiological events during cellular and network reorganization after a loss of vestibular function. PMID:22518109

  12. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity - which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty - what is the uncertainty or variability in simulation output, given uncertainties in input parameters, and how safe, reliable, robust, or variable is my system (quantification of margins and uncertainty, QMU)?; Optimization - what parameter values yield the best performing design or operating condition, given constraints?; Calibration - what models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.

  13. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Treesearch

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  14. Use of Landsat and environmental satellite data in evapotranspiration estimation from a wildland area

    NASA Technical Reports Server (NTRS)

    Khorram, S.; Smith, H. G.

    1979-01-01

    A remote sensing-aided procedure was applied to the watershed-wide estimation of water loss to the atmosphere (evapotranspiration, ET). The approach involved a spatially referenced databank based on both remotely sensed and ground-acquired information. Physical models for both estimation of ET and quantification of input parameters are specified, and results of the investigation are outlined.

  15. Validating the Use of Performance Risk Indices for System-Level Risk and Maturity Assessments

    NASA Astrophysics Data System (ADS)

    Holloman, Sherrica S.

    With pressure on the U.S. Defense Acquisition System (DAS) to reduce cost overruns and schedule delays, system engineers' performance is only as good as their tools. Recent literature details a need for (1) objective, analytical risk quantification methodologies, rather than traditional subjective qualitative methods such as expert judgment, and (2) mathematically rigorous system-level maturity assessments. The Mahafza, Componation, and Tippett (2005) Technology Performance Risk Index (TPRI) ties the assessment of technical performance to the quantification of the risk of unmet performance; however, it is structured for component-level data as input. This study's aim is to establish a modified TPRI with system-level data as model input, and then to validate the modified index with actual system-level data from the Department of Defense's (DoD) Major Defense Acquisition Programs (MDAPs). This work's contribution is the establishment and validation of the System-level Performance Risk Index (SPRI). With the introduction of the SPRI, system-level metrics are better aligned, allowing for better assessment, tradeoff, and balance of time, performance, and cost constraints. This will allow system engineers and program managers to ultimately make better-informed system-level technical decisions throughout the development phase.

  16. Detection, location, and quantification of structural damage by neural-net-processed moiré profilometry

    NASA Astrophysics Data System (ADS)

    Grossman, Barry G.; Gonzalez, Frank S.; Blatt, Joel H.; Hooker, Jeffery A.

    1992-03-01

    The development of efficient high speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems such as ones used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: the reduction of nonuseful information present in the video image and automatic recognition and quantification of extent and location of damage. Artificial neural network processed moiré profilometry appears to be a promising technique to accomplish this. Real time video moiré techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moiré images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moiré images. The techniques employed and the results obtained are discussed.

  17. Decision peptide-driven: a free software tool for accurate protein quantification using gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry.

    PubMed

    Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L

    2010-09-15

    The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling and (4) matrix assisted laser desorption ionization time of flight mass spectrometry, MALDI analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just a few minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) to allow those peptides to be used as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
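
    The ratio comparison described above can be illustrated with a short sketch. The peptide names, intensities, and tolerance below are hypothetical, and this is not the DPD implementation; it only shows the idea of retaining peptides whose direct and inverse (18)O-labeling ratios mirror each other.

```python
# Illustrative sketch only (not the DPD code): compare direct and inverse
# 18O-labeling ratios per peptide and keep those with consistent behaviour.
# Peptide names, intensities and the tolerance are hypothetical.
direct = {"PEPTIDE_A": 1.05, "PEPTIDE_B": 0.48, "PEPTIDE_C": 2.10}   # light/heavy
inverse = {"PEPTIDE_A": 0.95, "PEPTIDE_B": 2.05, "PEPTIDE_C": 0.80}  # light/heavy

tolerance = 0.2  # fractional mismatch allowed between direct and 1/inverse ratios

selected = []
for peptide, r_direct in direct.items():
    r_inverse = inverse.get(peptide)
    if r_inverse is None or r_inverse == 0:
        continue
    # For paralleled losses, the direct ratio should mirror the inverse ratio.
    if abs(r_direct - 1.0 / r_inverse) / r_direct < tolerance:
        selected.append(peptide)

print("peptides retained for quantification:", selected)
```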

  18. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
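
    A minimal curve-fitting sketch of the proposed functional form (a linear rise followed by a triexponential decay) is given below; the time grid, noise level, and parameter values are synthetic and chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def aptac(t, t_peak, A1, A2, A3, l1, l2, l3):
    """Linear rise until t_peak, then a triexponential decay (per the abstract).
    All parameter values used here are purely illustrative."""
    peak = A1 + A2 + A3
    rise = peak * t / t_peak
    decay = (A1 * np.exp(-l1 * (t - t_peak)) +
             A2 * np.exp(-l2 * (t - t_peak)) +
             A3 * np.exp(-l3 * (t - t_peak)))
    return np.where(t < t_peak, rise, decay)

# Synthetic "measured" arterial samples (kBq/mL) on a 60 min grid.
t = np.linspace(0.1, 60, 120)
true = aptac(t, 0.8, 30.0, 10.0, 5.0, 2.0, 0.3, 0.01)
rng = np.random.default_rng(1)
measured = true + rng.normal(0, 0.5, t.size)

p0 = [1.0, 20.0, 10.0, 5.0, 1.0, 0.1, 0.01]
popt, _ = curve_fit(aptac, t, measured, p0=p0, bounds=(1e-6, np.inf))
print("fitted decay constants:", popt[4:])
```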

  19. Sample preparation and EFTEM of Meat Samples for Nanoparticle Analysis in Food

    NASA Astrophysics Data System (ADS)

    Lari, L.; Dudkiewicz, A.

    2014-06-01

    Nanoparticles are used in industry for personal care products and the preparation of food. In the latter application, their functions include preventing the growth of microbes and increasing the food's nutritional value and sensory quality. EU regulations require a risk assessment of the nanoparticles used in foods and food contact materials before the products can reach the market. However, the limited availability of validated analytical methodologies for detection and characterisation of nanoparticles in food hampers appropriate risk assessment. As part of research on the evaluation of methods for screening and quantification of Ag nanoparticles in meat, we tested a new TEM sample preparation alternative to resin embedding and cryo-sectioning. Energy filtered TEM analysis was applied to evaluate the thickness and uniformity of thin meat layers acquired at increasing sample input, demonstrating that the protocols used ensured good stability under the electron beam, reliable sample concentration and reproducibility.

  20. An inexpensive frequency-modulated (FM) audio monitor of time-dependent analog parameters.

    PubMed

    Langdon, R B; Jacobs, R S

    1980-02-01

    The standard method for quantification and presentation of an experimental variable in real time is the use of a visual display on the ordinate of an oscilloscope screen or chart recorder. This paper describes a relatively simple electronic circuit, using commercially available and inexpensive integrated circuits (ICs), which generates an audible tone, the pitch of which varies in proportion to a running variable of interest. This device, which we call an "Audioscope," can accept as input the monitor output from any instrument that expresses an experimental parameter as a dc voltage. The Audioscope is particularly useful in implanting microelectrodes intracellularly. It may also function to mediate the first step in data recording on magnetic tape, and/or data analysis and reduction by electronic circuitry. We estimate that this device can be built, with two-channel capability, for less than $50, and in less than 10 hr by an experienced electronics technician.

  1. Space-Time Data fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, H.; Cressie, N.

    2011-01-01

    NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point-level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.

  2. Quantification of Trapezius Muscle Innervation During Neck Dissections: Cervical Plexus Versus the Spinal Accessory Nerve.

    PubMed

    Svenberg Lind, Clara; Lundberg, Bertil; Hammarstedt Nordenvall, Lalle; Heiwe, Susanne; Persson, Jonas K E; Hydman, Jonas

    2015-11-01

    Despite increasing use of selective, nerve-sparing surgical techniques during neck dissections, the reported rate of postoperative paralysis of the trapezius muscle is still high. The aim of the study is to measure and compare motor inflow to the trapezius muscle, in order to better understand the peripheral neuroanatomy. Intraoperative nerve monitoring (electroneurography) in patients undergoing routine neck dissection (n=18). The innervation of the 3 functional parts of the trapezius muscle was mapped and quantified through compound muscle action potentials. In 18/18 (100%) of the patients, the spinal accessory nerve (SAN) innervated all parts of the trapezius muscle. In 7/18 (39%) of the patients, an active motor branch from the cervical plexus was detected, equally distributed to all functional parts of the trapezius muscle, at levels comparable to the SAN. Compared to the SAN, branches from cervical plexus provide a significant amount of neural input to all parts of the trapezius muscle. Intraoperative nerve monitoring can be used in routine neck dissections to detect these branches, which may be important following surgical injury to the SAN. © The Author(s) 2015.

  3. Chaos in the heart: the interaction between body and mind

    NASA Astrophysics Data System (ADS)

    Redington, Dana

    1993-11-01

    A number of factors influence the chaotic dynamics of heart function. Genetics, age, sex, disease, the environment, experience, and of course the mind, play roles in influencing cardiovascular dynamics. The mind is of particular interest because it is an emergent phenomenon of the body admittedly seated and co-occurrent in the brain. The brain serves as the body's controller, and commands the heart through complex multipathway feedback loops. Structures deep within the brain, the hypothalamus and other centers in the brainstem, modulate heart function, partially as a result of afferent input from the body but also a result of higher mental processes. What can chaos in the body, i.e., the nonlinear dynamics of the heart, tell of the mind? This paper presents a brief overview of the spectral structure of heart rate activity followed by a summary of experimental results based on phase space analysis of data from semi-structured interviews. This paper then describes preliminary quantification of cardiovascular dynamics during different stressor conditions in an effort to apply more quantitative methods to clinical data.

  4. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
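
    The deconvolution idea can be sketched with a truncated singular value decomposition: build a discrete convolution matrix from the input function, drop the small singular values, and apply the pseudo-inverse to the tissue curve. The example below uses synthetic curves and an arbitrary truncation level, and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

# Minimal sketch of nonparametric deconvolution by truncated SVD; synthetic
# curves and an arbitrary truncation level, not the authors' implementation.
dt = 0.5                                    # frame duration (min), hypothetical
t = np.arange(0, 60, dt)
c_input = t * np.exp(-t / 2.0)              # synthetic metabolite-corrected input
irf_true = 0.2 * np.exp(-0.05 * t)          # synthetic tissue impulse response

A = dt * np.tril(toeplitz(c_input))         # discrete convolution matrix
tac = A @ irf_true + np.random.default_rng(2).normal(0, 0.01, t.size)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 0.1 * s.max()                    # truncation threshold (arbitrary)
irf_est = Vt.T[:, keep] @ ((U[:, keep].T @ tac) / s[keep])

# A volume-of-distribution-like functional: the integral of the IRF.
print("VT true vs estimated:", irf_true.sum() * dt, irf_est.sum() * dt)
```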

  5. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
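
    A minimal sketch of sparse PC coefficient recovery is shown below, with Lasso (ℓ1-regularized least squares) standing in for the ℓ1-minimization problem and plain Monte Carlo sampling under the natural Gaussian measure; the coherence-optimal MCMC sampler itself is not reproduced, and the test function is synthetic.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

# Illustrative sketch: recover a sparse 1D Hermite PC expansion from Monte Carlo
# samples via l1-regularized regression (Lasso stands in for the paper's
# l1-minimization; the coherence-optimal sampler itself is not shown).
rng = np.random.default_rng(3)
order, n_samples = 15, 60
xi = rng.standard_normal(n_samples)        # natural sampling for the Hermite basis

def model(x):                              # synthetic sparse "simulation" output
    return 1.0 + 0.5 * x + 0.25 * (x**3 - 3 * x)   # He_0, He_1, He_3 terms

# Measurement matrix: probabilists' Hermite polynomials evaluated at the samples.
Psi = np.column_stack([hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])
y = model(xi)

coeffs = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Psi, y).coef_
print("recovered nonzero coefficients:",
      {k: round(c, 3) for k, c in enumerate(coeffs) if abs(c) > 1e-2})
```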

  6. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology.

    PubMed

    Tomasi, G; Kimberley, S; Rosso, L; Aboagye, E; Turkheimer, F

    2012-04-07

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [¹¹C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[¹⁸F]fluorouracil (5-[¹⁸F]FU) and [¹⁸F]fluorothymidine ([¹⁸F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[¹⁸F]FU and to tumor, vertebra and liver for [¹⁸F]FLT were analyzed. For 5-[¹⁸F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[¹⁸F]FU (R² = 0.91) and metabolite [¹⁸F]FBAL (R² = 0.99). For [¹⁸F]FLT, the DI methods provided notable improvements but less substantial than for 5-[¹⁸F]FU due to the lower rate of metabolism of [¹⁸F]FLT. On the basis of the AIC values, agreement between [¹⁸F]FLT K(i) estimated with the SI and DI models was good (R² = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [¹⁸F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R² = 0.33 for K(i)). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.

  7. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.

  8. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)

  9. Positron emission tomography quantification of serotonin transporter in suicide attempters with major depressive disorder.

    PubMed

    Miller, Jeffrey M; Hesselgrave, Natalie; Ogden, R Todd; Sullivan, Gregory M; Oquendo, Maria A; Mann, J John; Parsey, Ramin V

    2013-08-15

    Several lines of evidence implicate abnormal serotonergic function in suicidal behavior and completed suicide, including low serotonin transporter binding in postmortem studies of completed suicide. We have also reported low in vivo serotonin transporter binding in major depressive disorder (MDD) during a major depressive episode using positron emission tomography (PET) with [(11)C]McN5652. We quantified regional brain serotonin transporter binding in vivo in depressed suicide attempters, depressed nonattempters, and healthy controls using PET and a superior radiotracer, [(11)C]DASB. Fifty-one subjects with DSM-IV current MDD, 15 of whom were past suicide attempters, and 32 healthy control subjects underwent PET scanning with [(11)C]DASB to quantify in vivo regional brain serotonin transporter binding. Metabolite-corrected arterial input functions and plasma free-fraction were acquired to improve quantification. Depressed suicide attempters had lower serotonin transporter binding in midbrain compared with depressed nonattempters (p = .031) and control subjects (p = .0093). There was no difference in serotonin transporter binding comparing all depressed subjects with healthy control subjects considering six a priori regions of interest simultaneously (p = .41). Low midbrain serotonin transporter binding appears to be related to the pathophysiology of suicidal behavior rather than of major depressive disorder. This is consistent with postmortem work showing low midbrain serotonin transporter binding capacity in depressed suicides and may partially explain discrepant in vivo findings quantifying serotonin transporter in depression. Future studies should investigate midbrain serotonin transporter binding as a predictor of suicidal behavior in MDD and determine the cause of low binding. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  10. Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.

    PubMed

    Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger

    2016-09-01

    Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4 (-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  11. Estimation of contrast agent bolus arrival delays for improved reproducibility of liver DCE MRI

    NASA Astrophysics Data System (ADS)

    Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.

    2016-10-01

    Delays between contrast agent (CA) arrival at the site of vascular input function (VIF) sampling and the tissue of interest affect dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling. We investigate effects of altering VIF CA bolus arrival delays on liver DCE MRI perfusion parameters, propose an alternative approach to estimating delays and evaluate reproducibility. Thirteen healthy volunteers (28.7 ± 1.9 years, seven males) underwent liver DCE MRI using dual-input single compartment modelling, with reproducibility (n = 9) measured at 7 days. Effects of VIF CA bolus arrival delays were assessed for arterial and portal venous input functions. Delays were pre-estimated using linear regression, with restricted free modelling around the pre-estimated delay. Perfusion parameters and 7-day reproducibility were compared using this method, freely modelled delays and no delays using one-way ANOVA. Reproducibility was assessed using Bland-Altman analysis of agreement. Maximum percent changes relative to parameters obtained using zero delays were -31% for portal venous (PV) perfusion, +43% for total liver blood flow (TLBF), +3247% for hepatic arterial (HA) fraction, +150% for mean transit time and -10% for distribution volume. Differences were demonstrated between the 3 methods for PV perfusion (p = 0.0085) and HA fraction (p < 0.0001), but not other parameters. Improved mean differences and Bland-Altman 95% Limits-of-Agreement for reproducibility of PV perfusion (9.3 ml/min/100 g, ±506.1 ml/min/100 g) and TLBF (43.8 ml/min/100 g, ±586.7 ml/min/100 g) were demonstrated using pre-estimated delays with constrained free modelling. CA bolus arrival delays cause profound differences in liver DCE MRI quantification. Pre-estimation of delays with constrained free modelling improved 7-day reproducibility of perfusion parameters in volunteers.

  12. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.
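
    A toy random-walk Metropolis sampler, sketched below, illustrates the basic MCMC ingredient of such Bayesian estimation; it is not the DREAM algorithm (which evolves multiple chains with differential-evolution proposals), and the data and priors are synthetic.

```python
import numpy as np

# Toy random-walk Metropolis sketch for Bayesian parameter estimation.
# This is NOT the DREAM algorithm; it only illustrates the general idea.
rng = np.random.default_rng(4)
data = rng.normal(2.0, 1.0, 50)            # synthetic observations, sigma known

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2          # broad N(0, 10^2) prior
    log_like = -0.5 * np.sum((data - theta) ** 2)   # Gaussian likelihood, sigma = 1
    return log_prior + log_like

chain = np.empty(20000)
theta, lp = 0.0, log_posterior(0.0)
for i in range(chain.size):
    prop = theta + rng.normal(0, 0.3)               # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

burn = chain[5000:]
print(f"posterior mean ~ {burn.mean():.2f}, 95% CI ~ {np.percentile(burn, [2.5, 97.5])}")
```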

  13. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  14. Evaluation of four commercial quantitative real-time PCR kits with inhibited and degraded samples.

    PubMed

    Holmes, Amy S; Houston, Rachel; Elwick, Kyleen; Gangitano, David; Hughes-Stamm, Sheree

    2018-05-01

    DNA quantification is a vital step in forensic DNA analysis to determine the optimal input amount for DNA typing. A quantitative real-time polymerase chain reaction (qPCR) assay that can predict DNA degradation or inhibitors present in the sample prior to DNA amplification could aid forensic laboratories in creating a more streamlined and efficient workflow. This study compares the results from four commercial qPCR kits: (1) Investigator® Quantiplex® Pro Kit, (2) Quantifiler® Trio DNA Quantification Kit, (3) PowerQuant® System, and (4) InnoQuant® HY with high molecular weight DNA, low template samples, degraded samples, and DNA spiked with various inhibitors. The results of this study indicate that all kits were comparable in accurately predicting quantities of high quality DNA down to the sub-picogram level. However, the InnoQuant® HY kit showed the highest precision across the DNA concentration range tested in this study. In addition, all kits performed similarly with low concentrations of forensically relevant PCR inhibitors. However, in general, the Investigator® Quantiplex® Pro Kit was the most tolerant kit to inhibitors and provided the most accurate quantification results with higher concentrations of inhibitors (except with salt). PowerQuant® and InnoQuant® HY were the most sensitive to inhibitors, but they did indicate significant levels of PCR inhibition. When quantifying degraded samples, each kit provided different degradation indices (DI), with Investigator® Quantiplex® Pro indicating the largest DI and Quantifiler® Trio indicating the smallest DI. When the qPCR kits were paired with their respective STR kit to genotype highly degraded samples, the Investigator® 24plex QS and GlobalFiler® kits generated more complete profiles when the small target concentrations were used for calculating input amount.

  15. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.

  16. Optimally Repeatable Kinetic Model Variant for Myocardial Blood Flow Measurements with 82Rb PET.

    PubMed

    Ocneanu, Adrian F; deKemp, Robert A; Renaud, Jennifer M; Adler, Andy; Beanlands, Rob S B; Klein, Ran

    2017-01-01

    Purpose. Myocardial blood flow (MBF) quantification with 82Rb positron emission tomography (PET) is gaining clinical adoption, but improvements in precision are desired. This study aims to identify analysis variants producing the most repeatable MBF measures. Methods. 12 volunteers underwent same-day test-retest rest and dipyridamole stress imaging with dynamic 82Rb PET, from which MBF was quantified using 1-tissue-compartment kinetic model variants: (1) blood-pool versus uptake region sampled input function (Blood/Uptake-ROI), (2) dual spillover correction (SOC-On/Off), (3) right blood correction (RBC-On/Off), (4) arterial blood transit delay (Delay-On/Off), and (5) distribution volume (DV) constraint (Global/Regional-DV). Repeatability of MBF, stress/rest myocardial flow reserve (MFR), and stress/rest MBF difference (ΔMBF) was assessed using nonparametric reproducibility coefficients (RPCnp = 1.45 × interquartile range). Results. MBF using SOC-On, RBC-Off, Blood-ROI, Global-DV, and Delay-Off was most repeatable for combined rest and stress: RPCnp = 0.21 mL/min/g (15.8%). Corresponding MFR and ΔMBF RPCnp were 0.42 (20.2%) and 0.24 mL/min/g (23.5%). MBF repeatability improved with SOC-On at stress (p < 0.001) and tended to improve with RBC-Off at both rest and stress (p < 0.08). DV and ROI did not significantly influence repeatability. The Delay-On model was overdetermined and did not reliably converge. Conclusion. MBF and MFR test-retest repeatability were the best with dual spillover correction, left atrium blood input function, and global DV.
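
    The repeatability metric quoted in the abstract can be reproduced in a few lines; the sketch below computes RPCnp = 1.45 × interquartile range on synthetic test-retest MBF values.

```python
import numpy as np

# Sketch of the nonparametric reproducibility coefficient used in the abstract:
# RPC_np = 1.45 x interquartile range of the test-retest differences.
# The MBF values below are synthetic, for illustration only.
rng = np.random.default_rng(5)
mbf_test = rng.normal(1.0, 0.2, 12)              # rest MBF, scan 1 (mL/min/g)
mbf_retest = mbf_test + rng.normal(0, 0.08, 12)  # same-day repeat scan

diff = mbf_retest - mbf_test
q1, q3 = np.percentile(diff, [25, 75])
rpc_np = 1.45 * (q3 - q1)
print(f"RPC_np = {rpc_np:.3f} mL/min/g "
      f"({100 * rpc_np / mbf_test.mean():.1f}% of mean)")
```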

  17. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    PubMed Central

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
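
    A single-spot version of the fitting idea is sketched below: one two-dimensional Gaussian is fitted to a synthetic spot and its volume is taken from the fitted parameters. The compound-fitting algorithm of the paper (which handles overlapping spots, in MATLAB) is not reproduced; this Python example only illustrates the principle.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit one 2D Gaussian to a synthetic gel spot and report its
# volume. The paper's compound-fitting algorithm for overlapping spots is not
# shown; this is a single, isolated spot for illustration.
def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) +
                           (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

x, y = np.meshgrid(np.arange(64), np.arange(64))
rng = np.random.default_rng(6)
image = gauss2d((x, y), 200.0, 30.0, 34.0, 4.0, 6.0, 10.0).reshape(64, 64)
image += rng.normal(0, 2.0, image.shape)

p0 = [image.max() - image.min(), 32, 32, 5, 5, image.min()]
popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt
volume = 2 * np.pi * amp * sx * sy         # analytic integral of the fitted spot
print(f"fitted centre = ({x0:.1f}, {y0:.1f}), volume = {volume:.0f}")
```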

  18. Systems and methods for reconfiguring input devices

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.

  19. Rapid Quantification of Energy Absorption and Dissipation Metrics for PPE Padding Materials

    DTIC Science & Technology

    2010-01-22

    Padding materials for personal protective equipment are characterized with simple lumped-parameter elements: a spring, k (energy storage, i.e., a Hooke's Law spring), and a damper, b (energy dissipation, i.e., a viscous damper), to quantify energy-absorbing and energy-dissipating behavior. Input forces caused by blast pressures are determined from computational fluid dynamics (CFD) analysis and simulation.

  20. Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.

    2014-08-01

    In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and identify and rank the significance of the input parameters.
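
    The sampling-plus-response-surface workflow can be sketched as follows: draw quasi-Monte Carlo (Sobol') samples of two hypothetical inputs, run a toy stand-in for the forward model, and fit a quadratic response surface by least squares. The parameter names, ranges, and surrogate are assumptions for illustration only, not the STOMP-CO2e/eSTOMP setup.

```python
import numpy as np
from scipy.stats import qmc

# Sketch: quasi-Monte Carlo sampling of two hypothetical inputs and a quadratic
# response surface. The "forward model" is a toy analytic stand-in, not
# STOMP-CO2e/eSTOMP.
sampler = qmc.Sobol(d=2, scramble=True, seed=7)
unit = sampler.random_base2(m=7)                        # 128 quasi-random points
# Scale to hypothetical ranges: log10-permeability [-14, -12], porosity [0.1, 0.3]
X = qmc.scale(unit, [-14.0, 0.1], [-12.0, 0.3])

def forward_model(logk, phi):                           # toy plume-extent surrogate
    return 50.0 * 10 ** (logk + 13) * (1.2 - phi) + 5.0

y = forward_model(X[:, 0], X[:, 1])

# Quadratic response surface: y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
x1, x2 = X[:, 0], X[:, 1]
design = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print("response-surface coefficients:", np.round(beta, 3))
```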

  1. State-space estimation of the input stimulus function using the Kalman filter: a communication system model for fMRI experiments.

    PubMed

    Ward, B Douglas; Mazaheri, Yousef

    2006-12-15

    The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
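
    A toy discrete-time version of this idea is sketched below: the stimulus is modeled as a random walk, the measurement is the known IRF applied to the recent stimulus history, and a standard Kalman filter recovers the stimulus from noisy BOLD-like data. The IRF shape, noise levels, and block design are assumed for illustration and do not reproduce the authors' formulation.

```python
import numpy as np

# Toy Kalman-filter sketch (not the authors' exact formulation): random-walk
# stimulus state, measurement = known IRF applied to the recent stimulus history.
rng = np.random.default_rng(8)
n, L = 200, 12                                   # time points, IRF length (samples)
t_irf = np.arange(L)
irf = t_irf ** 2 * np.exp(-t_irf)                # crude gamma-like IRF (assumed known)
irf /= irf.sum()

stimulus = (np.sin(2 * np.pi * np.arange(n) / 40) > 0).astype(float)  # block design
bold = np.convolve(stimulus, irf)[:n] + rng.normal(0, 0.05, n)

# State = last L stimulus samples (newest first); shift dynamics + random walk.
F = np.zeros((L, L)); F[1:, :-1] = np.eye(L - 1); F[0, 0] = 1.0
H = irf[np.newaxis, :]                           # measurement: IRF dot history
Q = np.zeros((L, L)); Q[0, 0] = 0.1              # process noise on newest sample
R = np.array([[0.05 ** 2]])

x, P = np.zeros((L, 1)), np.eye(L)
estimate = np.zeros(n)
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K * (bold[k] - (H @ x).item())       # update with the new BOLD sample
    P = (np.eye(L) - K @ H) @ P
    estimate[k] = x[0, 0]

print("correlation with true stimulus:", np.corrcoef(estimate, stimulus)[0, 1])
```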

  2. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
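
    The one-compartment wash-in estimation referred to above can be illustrated with a small fit: the tissue curve is modeled as K1 times the blood input convolved with exp(-k2 t), and K1, k2 are estimated by nonlinear least squares. The curves below are synthetic, and the example ignores the spline and 4D ML-EM reconstruction steps described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative one-tissue-compartment fit: C_T(t) = K1 * [Ca(t) convolved with exp(-k2*t)].
# Blood input and tissue curves are synthetic; this is not the spline-based
# 4D ML-EM pipeline described in the abstract.
dt = 5.0 / 60.0                                    # 5 s frames, in minutes
t = np.arange(0, 10, dt)
ca = 20.0 * t * np.exp(-t / 0.5)                   # synthetic blood input curve

def tissue_tac(t, K1, k2):
    conv = np.convolve(ca, np.exp(-k2 * t))[:t.size] * dt
    return K1 * conv

rng = np.random.default_rng(9)
measured = tissue_tac(t, 0.8, 0.4) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(tissue_tac, t, measured, p0=[0.5, 0.2], bounds=(0, 5))
print(f"estimated K1 = {popt[0]:.2f} mL/min/g, k2 = {popt[1]:.2f} 1/min")
```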

  3. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    PubMed

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  4. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. 
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.

  5. Quantification of in vivo short echo-time proton magnetic resonance spectra at 14.1 T using two different approaches of modelling the macromolecule spectrum

    NASA Astrophysics Data System (ADS)

    Cudalbu, C.; Mlynárik, V.; Xin, L.; Gruetter, Rolf

    2009-10-01

    Reliable quantification of the macromolecule signals in short echo-time 1H MRS spectra is particularly important at high magnetic fields for an accurate quantification of metabolite concentrations (the neurochemical profile) due to effectively increased spectral resolution of the macromolecule components. The purpose of the present study was to assess two approaches of quantification, which take the contribution of macromolecules into account in the quantification step. 1H spectra were acquired on a 14.1 T/26 cm horizontal scanner on five rats using the ultra-short echo-time SPECIAL (spin echo full intensity acquired localization) spectroscopy sequence. Metabolite concentrations were estimated using LCModel, combined with a simulated basis set of metabolites using published spectral parameters and either the spectrum of macromolecules measured in vivo, using an inversion recovery technique, or baseline simulated by the built-in spline function. The fitted spline function resulted in a smooth approximation of the in vivo macromolecules, but in accordance with previous studies using Subtract-QUEST could not reproduce completely all features of the in vivo spectrum of macromolecules at 14.1 T. As a consequence, the measured macromolecular 'baseline' led to a more accurate and reliable quantification at higher field strengths.

  6. A global Fine-Root Ecology Database to address below-ground challenges in plant ecology

    DOE PAGES

    Iversen, Colleen M.; McCormack, M. Luke; Powell, A. Shafer; ...

    2017-02-28

    Variation and tradeoffs within and among plant traits are increasingly being harnessed by empiricists and modelers to understand and predict ecosystem processes under changing environmental conditions. Although fine roots play an important role in ecosystem functioning, fine-root traits are underrepresented in global trait databases. This has hindered efforts to analyze fine-root trait variation and link it with plant function and environmental conditions at a global scale. This Viewpoint addresses the need for a centralized fine-root trait database, and introduces the Fine-Root Ecology Database (FRED, http://roots.ornl.gov), which so far includes > 70 000 observations encompassing a broad range of root traits and also includes associated environmental data. FRED represents a critical step toward improving our understanding of below-ground plant ecology. For example, FRED facilitates the quantification of variation in fine-root traits across root orders, species, biomes, and environmental gradients while also providing a platform for assessments of covariation among root, leaf, and wood traits, the role of fine roots in ecosystem functioning, and the representation of fine roots in terrestrial biosphere models. Continued input of observations into FRED to fill gaps in trait coverage will improve our understanding of changes in fine-root traits across space and time.

  7. A global Fine-Root Ecology Database to address below-ground challenges in plant ecology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iversen, Colleen M.; McCormack, M. Luke; Powell, A. Shafer

    Variation and tradeoffs within and among plant traits are increasingly being harnessed by empiricists and modelers to understand and predict ecosystem processes under changing environmental conditions. Although fine roots play an important role in ecosystem functioning, fine-root traits are underrepresented in global trait databases. This has hindered efforts to analyze fine-root trait variation and link it with plant function and environmental conditions at a global scale. This Viewpoint addresses the need for a centralized fine-root trait database, and introduces the Fine-Root Ecology Database (FRED, http://roots.ornl.gov), which so far includes > 70 000 observations encompassing a broad range of root traits and also includes associated environmental data. FRED represents a critical step toward improving our understanding of below-ground plant ecology. For example, FRED facilitates the quantification of variation in fine-root traits across root orders, species, biomes, and environmental gradients while also providing a platform for assessments of covariation among root, leaf, and wood traits, the role of fine roots in ecosystem functioning, and the representation of fine roots in terrestrial biosphere models. Continued input of observations into FRED to fill gaps in trait coverage will improve our understanding of changes in fine-root traits across space and time.

  8. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    PubMed

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior in adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.
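    The pipeline described here (background subtraction followed by per-frame centroid extraction) can be sketched with OpenCV as below. This is not the published VideoHacking code; the file name, thresholds, and the path-length summary are illustrative placeholders.

```python
import cv2
import numpy as np

# Minimal locomotor-tracking sketch: background subtraction plus per-frame
# centroid extraction of the largest moving object. Not the published
# VideoHacking code; "video.mp4" and all thresholds are placeholders.
cap = cv2.VideoCapture("video.mp4")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

trajectory = []   # (frame index, x, y) of the tracked animal
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.medianBlur(mask, 5)                         # suppress speckle
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    # OpenCV >= 4 signature: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)             # largest blob
        m = cv2.moments(c)
        if m["m00"] > 0:
            trajectory.append((frame_idx, m["m10"] / m["m00"],
                               m["m01"] / m["m00"]))
    frame_idx += 1
cap.release()

# Total path length (in pixels) as a simple locomotor-activity summary.
xy = np.array([(x, y) for _, x, y in trajectory])
if len(xy) > 1:
    print("path length [px]:",
          np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))
```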

  9. Serotonin 2A receptor agonist binding in the human brain with [11C]Cimbi-36

    PubMed Central

    Ettrup, Anders; da Cunha-Bang, Sophie; McMahon, Brenda; Lehel, Szabolcs; Dyssegaard, Agnete; Skibsted, Anine W; Jørgensen, Louise M; Hansen, Martin; Baandrup, Anders O; Bache, Søren; Svarer, Claus; Kristensen, Jesper L; Gillings, Nic; Madsen, Jacob; Knudsen, Gitte M

    2014-01-01

    [11C]Cimbi-36 was recently developed as a selective serotonin 2A (5-HT2A) receptor agonist radioligand for positron emission tomography (PET) brain imaging. Such an agonist PET radioligand may provide a novel, and more functional, measure of the serotonergic system and agonist binding is more likely than antagonist binding to reflect 5-HT levels in vivo. Here, we show data from a first-in-human clinical trial with [11C]Cimbi-36. In 29 healthy volunteers, we found high brain uptake and distribution according to 5-HT2A receptors with [11C]Cimbi-36 PET. The two-tissue compartment model using arterial input measurements provided the most optimal quantification of cerebral [11C]Cimbi-36 binding. Reference tissue modeling was feasible as it induced a negative but predictable bias in [11C]Cimbi-36 PET outcome measures. In five subjects, pretreatment with the 5-HT2A receptor antagonist ketanserin before a second PET scan significantly decreased [11C]Cimbi-36 binding in all cortical regions with no effects in cerebellum. These results confirm that [11C]Cimbi-36 binding is selective for 5-HT2A receptors in the cerebral cortex and that cerebellum is an appropriate reference tissue for quantification of 5-HT2A receptors in the human brain. Thus, we here describe [11C]Cimbi-36 as the first agonist PET radioligand to successfully image and quantify 5-HT2A receptors in the human brain. PMID:24780897

  10. Development of Best practices document for Peptide Standards | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The Assay Development Working Group (ADWG) of the CPTAC Program is currently drafting a document to propose best practices for generation, quantification, storage, and handling of peptide standards used for mass spectrometry-based assays, as well as interpretation of quantitative proteomic data based on peptide standards. The ADWG is seeking input from commercial entities that provide peptide standards for mass spectrometry-based assays or that perform amino acid analysis.

  11. Modelling land cover change in the Ganga basin

    NASA Astrophysics Data System (ADS)

    Moulds, S.; Tsarouchi, G.; Mijic, A.; Buytaert, W.

    2013-12-01

    Over recent decades the green revolution in India has driven substantial environmental change. Modelling experiments have identified northern India as a 'hot spot' of land-atmosphere coupling strength during the boreal summer. However, there is a wide range of sensitivity of atmospheric variables to soil moisture between individual climate models. The lack of a comprehensive land cover change dataset to force climate models has been identified as a major contributor to model uncertainty. In this work a time series dataset of land cover change between 1970 and 2010 is constructed for northern India to improve the quantification of regional hydrometeorological feedbacks. The MODIS instrument on board the Aqua and Terra satellites provides near-continuous remotely sensed datasets from 2000 to the present day. However, the quality of satellite products before 2000 is poor. To complete the dataset MODIS images are extrapolated back in time using the Conversion of Land Use and its Effects at small regional extent (CLUE-s) modelling framework. Non-spatial estimates of land cover area from national agriculture and forest statistics, available on a state-wise, annual basis, are used as a direct model input. Land cover change is allocated spatially as a function of biophysical and socioeconomic drivers identified using logistic regression. This dataset will provide an essential input to a high resolution, physically based land surface model to generate the lower boundary condition to assess the impact of land cover change on regional climate.

  12. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic

    PubMed Central

    Guillas, S.; Georgiopoulou, A.; Dias, F.

    2017-01-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339
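    The two-step pattern described in this abstract, emulating an expensive simulator with a Gaussian process and then propagating input uncertainty through the cheap emulator, might be sketched as follows with scikit-learn. The stand-in simulator, design size, and input distribution are illustrative and do not correspond to the landslide-tsunami codes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Minimal GP-emulation sketch (not the landslide-tsunami codes): fit a GP to a
# handful of "expensive" runs, then propagate input uncertainty through it.
rng = np.random.default_rng(1)

def expensive_simulator(x):
    # Stand-in for a costly deterministic code: two inputs, one output.
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Design: a small number of simulator runs over the input space.
X_train = rng.uniform(0.0, 1.0, size=(30, 2))
y_train = expensive_simulator(X_train)

# 2. Fit the emulator.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# 3. Propagate uncertainty: sample the (calibrated) input distribution and
#    evaluate the cheap emulator instead of the simulator.
X_mc = rng.normal(loc=[0.5, 0.5], scale=[0.1, 0.1], size=(10000, 2))
y_mc, y_sd = gp.predict(X_mc, return_std=True)

print("emulated output mean:", y_mc.mean())
print("95% interval:", np.percentile(y_mc, [2.5, 97.5]))
print("mean emulator std (code uncertainty):", y_sd.mean())
```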

  13. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.

    PubMed

    Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F

    2017-04-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.

  14. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.

  15. Functional enzyme-based modeling approach for dynamic simulation of denitrification process in hyporheic zone sediments: Genetically structured microbial community model

    NASA Astrophysics Data System (ADS)

    Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.

    2016-12-01

    Modeling environmental microbial communities at individual organism level is currently intractable due to overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies in the literature along this line used biomass or functional genes as surrogate measures of enzymes due to the lack of analytical methods for quantifying enzymes in environmental samples. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data in denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.

  16. Bacterial gene abundances as indicators of greenhouse gas emission in soils.

    PubMed

    Morales, Sergio E; Cosart, Theodore; Holben, William E

    2010-06-01

    Nitrogen fixing and denitrifying bacteria, respectively, control bulk inputs and outputs of nitrogen in soils, thereby mediating nitrogen-based greenhouse gas emissions in an ecosystem. Molecular techniques were used to evaluate the relative abundances of nitrogen fixing, denitrifying and two numerically dominant ribotypes (based on the > or =97% sequence similarity at the 16S rRNA gene) of bacteria in plots representing 10 agricultural and other land-use practices at the Kellogg biological station long-term ecological research site. Quantification of nitrogen-related functional genes (nitrite reductase, nirS; nitrous oxide reductase, nosZ; and nitrogenase, nifH) as well as two dominant 16S ribotypes (belonging to the phyla Acidobacteria, Thermomicrobia) allowed us to evaluate the hypothesis that microbial community differences are linked to greenhouse gas emissions under different land management practices. Our results suggest that the successional stages of the ecosystem are strongly linked to bacterial functional group abundance, and that the legacy of agricultural practices can be sustained over decades. We also link greenhouse gas emissions with specific compositional responses in the soil bacterial community and assess the use of denitrifying gene abundances as proxies for determining nitrous oxide emissions from soils.

  17. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost of one particular evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.

  18. Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4

    NASA Astrophysics Data System (ADS)

    Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.

    2013-12-01

    Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
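    The sampling-plus-surrogate workflow outlined above can be sketched in a few lines of Python. The toy model and parameter ranges are illustrative, not the CCSM4 setup, and the sketch assumes SciPy's qmc module (SciPy 1.7 or later) for the Latin hypercube design.

```python
import numpy as np
from scipy.stats import qmc, gaussian_kde
from sklearn.svm import SVR

# Illustrative Latin-hypercube + SVR-surrogate workflow on a toy model
# (not CCSM4). Requires SciPy >= 1.7 for scipy.stats.qmc.
rng = np.random.default_rng(2)
n_params, n_runs = 6, 300

# 1. Latin hypercube sample over normalized parameter ranges [0, 1]^d.
X = qmc.LatinHypercube(d=n_params, seed=2).random(n=n_runs)

# 2. Stand-in "climate model": mostly linear in the parameters plus noise,
#    mimicking the near-linear response reported in the abstract.
weights = rng.uniform(0.1, 1.0, n_params)
y = 2.0 + X @ weights + 0.05 * rng.normal(size=n_runs)   # "warming" in K

# 3. Fit the SVR statistical model and validate it on held-out runs.
split = int(0.8 * n_runs)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
resid = y[split:] - svr.predict(X[split:])
print("validation residual std [K]:", resid.std())

# 4. Estimate the output PDF by densely resampling the cheap surrogate.
X_dense = qmc.LatinHypercube(d=n_params, seed=3).random(n=20000)
y_dense = svr.predict(X_dense)
kde = gaussian_kde(y_dense)
grid = np.linspace(y_dense.min(), y_dense.max(), 200)
print("PDF peak near [K]:", grid[np.argmax(kde(grid))])
print("10-90 percentile range [K]:", np.percentile(y_dense, [10, 90]))
```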

  19. Development of a 3D coupled physical-biogeochemical model for the Marseille coastal area (NW Mediterranean Sea): what complexity is required in the coastal zone?

    PubMed

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were able to capture the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced well the upwelling events and the intrusion of Rhone River water into the Bay of Marseille. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.

  20. Development of a 3D Coupled Physical-Biogeochemical Model for the Marseille Coastal Area (NW Mediterranean Sea): What Complexity Is Required in the Coastal Zone?

    PubMed Central

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007–2011). The model accuracy assessment showed that both versions of the model were able to capture the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced well the upwelling events and the intrusion of Rhone River water into the Bay of Marseille. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model. PMID:24324589

  1. Medial surface dynamics of an in vivo canine vocal fold during phonation

    NASA Astrophysics Data System (ADS)

    Döllinger, Michael; Berry, David A.; Berke, Gerald S.

    2005-05-01

    Quantitative measurement of the medial surface dynamics of the vocal folds is important for understanding how sound is generated within the larynx. Building upon previous excised hemilarynx studies, the present study extended the hemilarynx methodology to the in vivo canine larynx. Through use of an in vivo model, the medial surface dynamics of the vocal fold were examined as a function of active thyroarytenoid muscle contraction. Data were collected using high-speed digital imaging at a sampling frequency of 2000 Hz, and a spatial resolution of 1024×1024 pixels. Chest-like and fry-like vibrations were observed, but could not be distinguished based on the input stimulation current to the recurrent laryngeal nerve. The subglottal pressure did distinguish the registers, as did an estimate of the thyroarytenoid muscle activity. Upon quantification of the three-dimensional motion, the method of Empirical Eigenfunctions was used to extract the underlying modes of vibration, and to investigate mechanisms of sustained oscillation. Results were compared with previous findings from excised larynx experiments and theoretical models.
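    The empirical-eigenfunction step amounts to a proper orthogonal decomposition of a space-time snapshot matrix, which can be sketched with a single SVD; the synthetic displacement data below merely stand in for the tracked medial-surface positions.

```python
import numpy as np

# Minimal empirical-eigenfunction (proper orthogonal decomposition) sketch:
# extract dominant spatial modes from a synthetic space-time displacement
# matrix standing in for tracked medial-surface positions.
rng = np.random.default_rng(7)
n_points, n_frames = 60, 400                 # surface points x time samples
t = np.linspace(0.0, 0.2, n_frames)          # 0.2 s at 2000 Hz
space = np.linspace(0.0, 1.0, n_points)

# Two coherent vibration modes plus measurement noise.
mode1 = np.outer(np.sin(np.pi * space), np.sin(2 * np.pi * 125 * t))
mode2 = 0.3 * np.outer(np.sin(2 * np.pi * space), np.cos(2 * np.pi * 250 * t))
X = mode1 + mode2 + 0.02 * rng.normal(size=(n_points, n_frames))

# POD: remove the temporal mean, then SVD; the columns of U are the empirical
# eigenfunctions and s**2 gives the energy captured by each mode.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
print("energy captured by the first two modes:", energy[:2].sum())
```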

  2. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous, steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations, generally any magma input has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  3. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-07-26

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous, steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations, generally any magma input has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.

  4. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  5. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere common place, run times for large complex basin models can still be on the order of days to weeks, thus, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
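    As a minimal illustration of the non-intrusive construction, the sketch below fits a polynomial chaos expansion in probabilists' Hermite polynomials for a single standard-normal input and reads the mean and variance directly from the coefficients. The stand-in model and regression-based fit only demonstrate the general idea, not the HydroGeoSphere implementation with multiple parameters.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Minimal non-intrusive polynomial chaos sketch for one standard-normal input.
# The model is a stand-in; real applications involve several inputs and
# sparse or tensorized polynomial bases.
rng = np.random.default_rng(3)

def model(xi):
    # Placeholder "simulator": nonlinear function of the random input.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

degree, n_train = 6, 200

# 1. Evaluate the model at sampled values of the standardized input.
xi_train = rng.normal(size=n_train)
y_train = model(xi_train)

# 2. Regress onto probabilists' Hermite polynomials He_0..He_degree,
#    which are orthogonal under the standard normal weight.
Psi = hermevander(xi_train, degree)                 # design matrix
coef, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# 3. Statistics follow directly from the coefficients:
#    mean = c_0, variance = sum_{k>=1} c_k^2 * k!   (||He_k||^2 = k!).
norms = np.array([math.factorial(k) for k in range(degree + 1)], dtype=float)
print("PCE mean/var:", coef[0], np.sum(coef[1:] ** 2 * norms[1:]))

# 4. Check against plain Monte Carlo on the original model.
xi_mc = rng.normal(size=200000)
print("MC  mean/var:", model(xi_mc).mean(), model(xi_mc).var())
```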

  6. Parametrically defined cerebral blood vessels as non-invasive blood input functions for brain PET studies

    NASA Astrophysics Data System (ADS)

    Asselin, Marie-Claude; Cunningham, Vincent J.; Amano, Shigeko; Gunn, Roger N.; Nahmias, Claude

    2004-03-01

    A non-invasive alternative to arterial blood sampling for the generation of a blood input function for brain positron emission tomography (PET) studies is presented. The method aims to extract the dimensions of the blood vessel directly from PET images and to simultaneously correct the radioactivity concentration for partial volume and spillover. This involves simulation of the tomographic imaging process to generate images of different blood vessel and background geometries and selecting the one that best fits, in a least-squares sense, the acquired PET image. A phantom experiment was conducted to validate the method which was then applied to eight subjects injected with 6-[18F]fluoro-L-DOPA and one subject injected with [11C]CO-labelled red blood cells. In the phantom study, the diameter of syringes filled with an 11C solution and inserted into a water-filled cylinder were estimated with an accuracy of half a pixel (1 mm). The radioactivity concentration was recovered to 100 ± 4% in the 8.7 mm diameter syringe, the one that most closely approximated the superior sagittal sinus. In the human studies, the method systematically overestimated the calibre of the superior sagittal sinus by 2-3 mm compared to measurements made in magnetic resonance venograms on the same subjects. Sources of discrepancies related to the anatomy of the blood vessel were found not to be fundamental limitations to the applicability of the method to human subjects. This method has the potential to provide accurate quantification of blood radioactivity concentration from PET images without the need for blood samples, corrections for delay and dispersion, co-registered anatomical images, or manually defined regions of interest.

  7. Metallogeny, exploitation and environmental impact of the Mt. Amiata mercury ore district (Southern Tuscany, Italy)

    USGS Publications Warehouse

    Rimondi, V.; Chiarantini, L.; Lattanzi, P.; Benvenuti, M.; Beutel, M.; Colica, A.; Costagliola, P.; Di Benedetto, F.; Gabbani, G.; Gray, John E.; Pandeli, E.; Pattelli, G.; Paolieri, M.; Ruggieri, G.

    2015-01-01

    Results of our studies indicate that the Mt. Amiata region is at present a source of Hg of remarkable environmental concern at the local, regional (Tiber River), and Mediterranean scales. Ongoing studies aim at a more detailed quantification of the Hg mass load input to the Mediterranean Sea and at unravelling the processes governing Hg transport and fluid dynamics.

  8. Fish habitat characterization and quantification using lidar and conventional topographic information in river survey

    NASA Astrophysics Data System (ADS)

    Marchamalo, Miguel; Bejarano, María-Dolores; García de Jalón, Diego; Martínez Marín, Rubén

    2007-10-01

    This study presents the application of LIDAR data to the evaluation and quantification of fluvial habitat in river systems, coupling remote sensing techniques with hydrological modeling and ecohydraulics. Fish habitat studies depend on the quality and continuity of the input topographic data. Conventional fish habitat studies are limited by the feasibility of field survey in time and budget. This limitation results in differences between the level of river management and the level of models. In order to facilitate upscaling processes from modeling to management units, meso-scale methods were developed (Maddock & Bird, 1996; Parasiewicz, 2001). LIDAR data of regulated River Cinca (Ebro Basin, Spain) were acquired in the low flow season, maximizing the recorded instream area. DTM meshes obtained from LIDAR were used as the input for hydraulic simulation for a range of flows using GUAD2D software. Velocity and depth outputs were combined with gradient data to produce maps reflecting the availability of each mesohabitat unit type for each modeled flow. Fish habitat was then estimated and quantified according to the preferences of main target species as brown trout (Salmo trutta). LIDAR data combined with hydraulic modeling allowed the analysis of fluvial habitat in long fluvial segments which would be time-consuming with traditional survey. LIDAR habitat assessment at mesoscale level avoids the problems of time efficiency and upscaling and is a recommended approach for large river basin management.

  9. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify the influence we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 are attributable to the uncertainty in the E-ρ relationship, while ≈10% are caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial inferior sides of the neck exhibited the largest probabilities for tensile and compression failure, however all were very small (pf<0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met-Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance by the utilisation of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.

  11. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    NASA Astrophysics Data System (ADS)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated good performance but not without drawbacks, already discussed by the authors. On the other hand, a preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experiments on artificial MRS signals contaminated with noise, regarding its signal-fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.

  12. Recent advances in stable isotope labeling based techniques for proteome relative quantification.

    PubMed

    Zhou, Yuan; Shan, Yichu; Zhang, Lihua; Zhang, Yukui

    2014-10-24

    The large scale relative quantification of all proteins expressed in biological samples under different states is of great importance for discovering proteins with important biological functions, as well as screening disease related biomarkers and drug targets. Therefore, the accurate quantification of proteins at proteome level has become one of the key issues in protein science. Herein, the recent advances in stable isotope labeling based techniques for proteome relative quantification were reviewed, from the aspects of metabolic labeling, chemical labeling and enzyme-catalyzed labeling. Furthermore, the future research direction in this field was prospected. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. quantGenius: implementation of a decision support system for qPCR-based gene quantification.

    PubMed

    Baebler, Špela; Svalina, Miha; Petek, Marko; Stare, Katja; Rotter, Ana; Pompe-Novak, Maruša; Gruden, Kristina

    2017-05-25

    Quantitative molecular biology remains a challenge for researchers due to inconsistent approaches for control of errors in the final results. Due to several factors that can influence the final result, quantitative analysis and interpretation of qPCR data are still not trivial. Together with the development of high-throughput qPCR platforms, there is a need for a tool allowing for robust, reliable and fast nucleic acid quantification. We have developed "quantGenius" ( http://quantgenius.nib.si ), an open-access web application for a reliable qPCR-based quantification of nucleic acids. The quantGenius workflow interactively guides the user through data import, quality control (QC) and calculation steps. The input is machine- and chemistry-independent. Quantification is performed using the standard curve approach, with normalization to one or several reference genes. The special feature of the application is the implementation of user-guided QC-based decision support system, based on qPCR standards, that takes into account pipetting errors, assay amplification efficiencies, limits of detection and quantification of the assays as well as the control of PCR inhibition in individual samples. The intermediate calculations and final results are exportable in a data matrix suitable for further statistical analysis or visualization. We additionally compare the most important features of quantGenius with similar advanced software tools and illustrate the importance of proper QC system in the analysis of qPCR data in two use cases. To our knowledge, quantGenius is the only qPCR data analysis tool that integrates QC-based decision support and will help scientists to obtain reliable results which are the basis for biologically meaningful data interpretation.
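    The standard-curve approach at the core of the tool can be sketched as follows; the dilution series, Cq values, and the reuse of one curve for both the target and reference genes are illustrative simplifications, and none of the QC-based decision logic described above is reproduced here.

```python
import numpy as np

# Minimal standard-curve qPCR quantification sketch (not quantGenius itself).
# Cq values and copy numbers are made-up illustrative data.

# Standard dilution series for the assay: known copy numbers vs. measured Cq.
std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
std_cq = np.array([15.1, 18.5, 21.9, 25.4, 28.8])

# Fit Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0     # ideal assay: ~1.0 (100 %)
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

def quantify(cq):
    """Interpolate the copy number of an unknown sample from its Cq."""
    return 10.0 ** ((cq - intercept) / slope)

# Unknown sample: target gene normalized to a reference (housekeeping) gene.
# A real analysis would use a separate standard curve per assay; one curve is
# reused here purely for brevity.
target_cq, reference_cq = 24.2, 20.1
print("normalized expression:", quantify(target_cq) / quantify(reference_cq))
```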

  14. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
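    The building blocks of this model problem, a truncated Karhunen-Loève expansion of the log-conductivity and Monte Carlo estimation of the mean travel time, can be sketched as below for a 1-D domain. The covariance parameters and the travel-time formula are illustrative choices, and the quasi- and multilevel Monte Carlo variants studied in the paper are not shown.

```python
import numpy as np

# Sketch of the model problem's building blocks: a log-Gaussian conductivity
# field from a truncated Karhunen-Loeve expansion, and plain Monte Carlo
# estimation of the mean travel time across a 1-D domain. Correlation length,
# variance, and the travel-time formula are illustrative choices.
rng = np.random.default_rng(4)
n_cells, length = 200, 1.0
x = (np.arange(n_cells) + 0.5) * length / n_cells
dx = length / n_cells

# Exponential covariance of log-conductivity and its eigendecomposition (KL).
corr_len, sigma2 = 0.1, 1.0
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1][:50]       # keep the 50 dominant KL modes
eigval, eigvec = eigval[order], eigvec[:, order]

def sample_travel_time():
    # Draw KL coefficients, build log K, and integrate dx / v with v ~ K
    # (unit head gradient and unit porosity assumed).
    xi = rng.normal(size=eigval.size)
    log_k = eigvec @ (np.sqrt(eigval) * xi)
    return np.sum(dx / np.exp(log_k))

n_mc = 5000
samples = np.array([sample_travel_time() for _ in range(n_mc)])
print("mean travel time:", samples.mean())
print("MC standard error:", samples.std(ddof=1) / np.sqrt(n_mc))
```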

  15. Non-perturbative Quantification of Ionic Charge Transfer through Nm-Scale Protein Pores Using Graphene Microelectrodes

    NASA Astrophysics Data System (ADS)

    Ping, Jinglei; Johnson, A. T. Charlie; A. T. Charlie Johnson Team

    Conventional electrical methods for detecting charge transfer through protein pores perturb the electrostatic condition of the solution and the chemical reactivity of the pore, and are not suitable for complex biofluids. We developed a non-perturbative methodology (fW input power) for quantifying trans-pore electrical current and detecting the pore status (i.e., open vs. closed) via graphene microelectrodes. Ferritin was used as a model protein featuring a large interior compartment, well separated from the exterior solution, with discrete pores acting as charge-commuting channels. The charge flowing through the ferritin pores transfers into the graphene microelectrode and is recorded by an electrometer. In this example, our methodology enables the quantification of an inorganic nanoparticle-protein nanopore interaction in complex biofluids. The authors acknowledge the support from the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office under Grant Number W911NF1010093.

  16. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.

  17. Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis

    PubMed Central

    Großekathöfer, Ulf; Manyakov, Nikolay V.; Mihajlović, Vojkan; Pandina, Gahan; Skalkin, Andrew; Ness, Seth; Bangerter, Abigail; Goodwin, Matthew S.

    2017-01-01

    A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their position on the body still remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier. PMID:28261082
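    A bare-bones recurrence quantification sketch on an accelerometer-like signal is given below. Working with the acceleration magnitude provides the orientation invariance mentioned above, but the synthetic signal, recurrence threshold, and feature definitions are illustrative rather than the paper's exact feature set.

```python
import numpy as np

# Minimal recurrence-quantification sketch on an accelerometer-like signal.
# The acceleration magnitude makes the features orientation invariant; the
# synthetic signal, threshold, and features are illustrative only.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 500)
# Synthetic tri-axial trace with a rhythmic (SMM-like) component plus noise.
acc = np.stack([np.sin(2 * np.pi * 1.5 * t),
                0.3 * rng.normal(size=t.size),
                0.2 * np.cos(2 * np.pi * 1.5 * t)], axis=1)
signal = np.linalg.norm(acc, axis=1)        # orientation-invariant magnitude

# Recurrence matrix: pairs of samples closer than eps are "recurrent".
eps = 0.1 * signal.std()
R = np.abs(signal[:, None] - signal[None, :]) < eps
recurrence_rate = R.mean()

# Determinism: fraction of recurrent points lying on diagonal lines of
# length >= 2, a simple measure of repetitive, deterministic structure.
n = len(signal)
diag_points = 0
for k in range(1, n):
    d = np.diag(R, k).astype(np.int8)
    # Run-length encode the k-th diagonal and keep runs of length >= 2.
    edges = np.flatnonzero(np.diff(np.r_[0, d, 0]))
    runs = np.diff(edges)[::2]
    diag_points += 2 * runs[runs >= 2].sum()   # count both symmetric halves

determinism = diag_points / max(R.sum() - np.trace(R), 1)
print(f"recurrence rate = {recurrence_rate:.3f}, determinism = {determinism:.3f}")
```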

  18. Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis.

    PubMed

    Großekathöfer, Ulf; Manyakov, Nikolay V; Mihajlović, Vojkan; Pandina, Gahan; Skalkin, Andrew; Ness, Seth; Bangerter, Abigail; Goodwin, Matthew S

    2017-01-01

    A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their position on the body still remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier.

  19. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  20. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  1. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are inherently non-nested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to their nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
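    The underlying collocation idea, evaluating the model at Gaussian quadrature nodes and weighting the results by the corresponding probabilities, can be sketched with ordinary non-nested Gauss-Hermite nodes as below; the nested Kronrod-Patterson-Hermite refinement proposed in the abstract is not reproduced, and the two-input model is a placeholder.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Minimal stochastic-collocation sketch with ordinary (non-nested)
# Gauss-Hermite nodes for two independent standard-normal inputs, applied to
# a placeholder model. The nested Kronrod-Patterson-Hermite rule refines such
# grids hierarchically and is not reproduced here.

def model(xi1, xi2):
    # Placeholder flow-model output as a nonlinear function of two inputs.
    return np.exp(0.4 * xi1) * (1.0 + 0.2 * xi2 ** 2)

nodes_per_dim = 7
x, w = hermegauss(nodes_per_dim)            # weight function exp(-x^2 / 2)
w = w / np.sqrt(2.0 * np.pi)                # normalize to the N(0, 1) density

# Tensor-product collocation grid over the two inputs.
X1, X2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)
vals = model(X1, X2)

mean = np.sum(W * vals)
var = np.sum(W * (vals - mean) ** 2)
print("collocation mean/var:", mean, var)

# Monte Carlo reference.
rng = np.random.default_rng(6)
mc = model(rng.normal(size=200000), rng.normal(size=200000))
print("Monte Carlo  mean/var:", mc.mean(), mc.var())
```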

  2. A Race Against Time: Time Lags in Terrestrial-Aquatic Linkages

    NASA Astrophysics Data System (ADS)

    Basu, N. B.

    2017-12-01

    Unprecedented decreases in atmospheric nitrogen (N) deposition together with increases in agricultural N-use efficiency have led to decreases in net anthropogenic N inputs in many eastern U.S. and Canadian watersheds as well as in Europe. Despite such decreases, N concentrations in streams and rivers continue to increase, and problems of coastal eutrophication remain acute. Such a mismatch between N inputs and outputs can arise due to legacy N accumulation and subsequent lag times between implementation of conservation measures and improvements in water quality. In the present study, we quantified such lag times by pairing long-term N input trajectories with stream N concentration data in multiple watersheds in North America. Results show significant nonlinearity between N inputs and outputs, with a strong hysteresis effect indicative of decadal-scale lag times. Lag times were found to be negatively correlated with both tile drainage and watershed slope, with tile drainage being a dominant control in fall and watershed slope being significant during the spring snowmelt period. Quantification of such lags will be crucial to policy-makers as they struggle to set appropriate goals for water quality improvement in human-impacted watersheds.

  3. First-in-human PET quantification study of cerebral α4β2* nicotinic acetylcholine receptors using the novel specific radioligand (-)-[(18)F]Flubatine.

    PubMed

    Sabri, Osama; Becker, Georg-Alexander; Meyer, Philipp M; Hesse, Swen; Wilke, Stephan; Graef, Susanne; Patt, Marianne; Luthardt, Julia; Wagenknecht, Gudrun; Hoepping, Alexander; Smits, René; Franke, Annegret; Sattler, Bernhard; Habermann, Bernd; Neuhaus, Petra; Fischer, Steffen; Tiepolt, Solveig; Deuther-Conrad, Winnie; Barthel, Henryk; Schönknecht, Peter; Brust, Peter

    2015-09-01

    α4β2* nicotinic receptors (α4β2* nAChRs) could provide a biomarker in neuropsychiatric disorders (e.g., Alzheimer's and Parkinson's diseases, depressive disorders, and nicotine addiction). However, there is a lack of α4β2* nAChR specific PET radioligands with kinetics fast enough to enable quantification of nAChR within a reasonable time frame. Following on from promising preclinical results, the aim of the present study was to evaluate for the first time in humans the novel PET radioligand (-)-[(18)F]Flubatine, formerly known as (-)-[(18)F]NCFHEB, as a tool for α4β2* nAChR imaging and in vivo quantification. Dynamic PET emission recordings lasting 270min were acquired on an ECAT EXACT HR+ scanner in 12 healthy male non-smoking subjects (71.0±5.0years) following the intravenous injection of 353.7±9.4MBq of (-)-[(18)F]Flubatine. Individual magnetic resonance imaging (MRI) was performed for co-registration. PET frames were motion-corrected, before the kinetics in 29 brain regions were characterized using 1- and 2-tissue compartment models (1TCM, 2TCM). Given the low amounts of metabolite present in plasma, we tested arterial input functions with and without metabolite corrections. In addition, pixel-based graphical analysis (Logan plot) was used. The model's goodness of fit, with and without metabolite correction was assessed by Akaike's information criterion. Model parameters of interest were the total distribution volume VT (mL/cm(3)), and the binding potential BPND relative to the corpus callosum, which served as a reference region. The tracer proved to have high stability in vivo, with 90% of the plasma radioactivity remaining as untransformed parent compound at 90min, fast brain kinetics with rapid uptake and equilibration between free and receptor-bound tracer. Adequate fits of brain TACs were obtained with the 1TCM. VT could be reliably estimated within 90min for all regions investigated, and within 30min for low-binding regions such as the cerebral cortex. The rank order of VT by region corresponded well with the known distribution of α4β2* receptors (VT [thalamus] 27.4±3.8, VT [putamen] 12.7±0.9, VT [frontal cortex] 10.0±0.8, and VT [corpus callosum] 6.3±0.8). The BPND, which is a parameter of α4β2* nAChR availability, was 3.41±0.79 for the thalamus, 1.04±0.25 for the putamen and 0.61±0.23 for the frontal cortex, indicating high specific tracer binding. Use of the arterial input function without metabolite correction resulted in a 10% underestimation in VT, and was without important biasing effects on BPND. Altogether, kinetics and imaging properties of (-)-[(18)F]Flubatine appear favorable and suggest that (-)-[(18)F]Flubatine is a very suitable and clinically applicable PET tracer for in vivo imaging of α4β2* nAChRs in neuropsychiatric disorders. Copyright © 2015 Elsevier Inc. All rights reserved.
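
    For readers unfamiliar with the graphical analysis referenced above, the sketch below shows a generic Logan plot estimate of VT from a regional time-activity curve and a metabolite-corrected plasma input; the function name, time grid and the t* = 30 min linearity threshold are illustrative assumptions, not the settings used in the study.

```python
# Generic Logan graphical-analysis sketch: estimate VT from a regional
# time-activity curve ct and a plasma input cp on the same time grid t (min).
import numpy as np

def logan_vt(t, ct, cp, t_star=30.0):
    int_ct = np.concatenate(([0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))))
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    mask = (t >= t_star) & (ct > 0)
    x = int_cp[mask] / ct[mask]          # integral of C_p divided by C_T
    y = int_ct[mask] / ct[mask]          # integral of C_T divided by C_T
    slope, _intercept = np.polyfit(x, y, 1)
    return slope                          # slope approximates VT once the plot is linear
```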

  4. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  5. IDENTIFICATION AND QUANTIFICATION OF AEROSOL POLAR OXYGENATED COMPOUNDS BEARING CARBOXYLIC AND/OR HYDROXYL GROUPS. 1. METHOD DEVELOPMENT

    EPA Science Inventory

    In this study, a new analytical technique was developed for the identification and quantification of multi-functional compounds containing simultaneously at least one hydroxyl or one carboxylic group, or both. This technique is based on derivatizing first the carboxylic group(s) ...

  6. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scales and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained from different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability across the sampled sets of input data and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Further, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty for a regional N2O and NO emissions inventory for the state of Saxony, Germany.
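
    A toy illustration of the repeated sampling behind such an assessment is sketched below: model choice, parameters and climate drivers are all drawn at random and the spread of the simulated emission is summarised. The emission function and every distribution in the sketch are invented stand-ins, not LandscapeDNDC.

```python
# Toy sampling sketch: combine model (MU), parameter (PU) and input-data (DU)
# uncertainty into one predictive spread; everything here is invented.
import numpy as np

rng = np.random.default_rng(1)

def emission(module_id, params, drivers):
    base = (1.0, 1.3)[module_id]                       # two alternative modules (MU)
    return base * params[0] * drivers.mean() + params[1]

runs = np.array([
    emission(rng.integers(2),                          # model uncertainty
             rng.normal([0.8, 0.1], [0.1, 0.02]),      # parameter uncertainty (PU)
             rng.normal(1.0, 0.2, size=365))           # input/driver uncertainty (DU)
    for _ in range(2000)
])
print(np.percentile(runs, [2.5, 50, 97.5]))            # overall predictive interval
```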

  7. UQTk Version 3.0.3 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh

    2017-05-01

    The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sen- sitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
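
    The sketch below shows, in plain numpy rather than the UQTk API, the kind of non-intrusive propagation such a toolkit automates: fit a small Hermite polynomial-chaos surrogate to black-box model samples and read output moments off the coefficients. The model, sample size and polynomial order are illustrative assumptions.

```python
# Plain-numpy sketch (not the UQTk API) of non-intrusive polynomial-chaos
# surrogate construction by regression on black-box samples.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)
xi = rng.standard_normal(200)                  # samples of the Gaussian germ
y = np.sin(1.0 + 0.4 * xi)                     # black-box model evaluations
order = 3
A = hermevander(xi, order)                     # design matrix of He_0..He_3
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mean = coef[0]                                 # E[He_k He_j] = k! * delta_kj
var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
print(mean, var)
```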

  8. Detection and quantification of large-vessel inflammation with 11C-(R)-PK11195 PET/CT.

    PubMed

    Lamare, Frederic; Hinz, Rainer; Gaemperli, Oliver; Pugliese, Francesca; Mason, Justin C; Spinks, Terence; Camici, Paolo G; Rimoldi, Ornella E

    2011-01-01

    We investigated whether PET/CT angiography using 11C-(R)-PK11195, a selective ligand for the translocator protein (18 kDa) expressed in activated macrophages, could allow imaging and quantification of arterial wall inflammation in patients with large-vessel vasculitis. Seven patients with systemic inflammatory disorders (3 symptomatic patients with clinical suspicion of active vasculitis and 4 asymptomatic patients) underwent PET with 11C-(R)-PK11195 and CT angiography to colocalize arterial wall uptake of 11C-(R)-PK11195. Tissue regions of interest were defined in bone marrow, lung parenchyma, wall of the ascending aorta, aortic arch, and descending aorta. Blood-derived and image-derived input functions (IFs) were generated. A reversible 1-tissue compartment with 2 kinetic rate constants and a fractional blood volume term were used to fit the time-activity curves to calculate total volume of distribution (VT). The correlation between VT and standardized uptake values was assessed. VT was significantly higher in symptomatic than in asymptomatic patients using both image-derived total plasma IF (0.55±0.15 vs. 0.27±0.12, P=0.009) and image-derived parent plasma IF (1.40±0.50 vs. 0.58±0.25, P=0.018). A good correlation was observed between VT and standardized uptake value (R=0.79; P=0.03). 11C-(R)-PK11195 imaging allows visualization of macrophage infiltration in inflamed arterial walls. Tracer uptake can be quantified with image-derived IF without the need for metabolite corrections and evaluated semiquantitatively with standardized uptake values.
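
    A minimal sketch of the kinetic model named above (a reversible one-tissue compartment with rate constants K1 and k2 plus a fractional blood-volume term vb) is given below; the input curves, time grid and parameter values are invented for illustration.

```python
# Minimal sketch of a reversible one-tissue compartment model with a
# fractional blood-volume term; all curves and values are illustrative.
import numpy as np

def one_tissue_tac(t, cp, cb, K1, k2, vb):
    """Model TAC on a uniform grid t (min); cp = plasma input, cb = whole blood."""
    dt = t[1] - t[0]
    ct = K1 * np.convolve(cp, np.exp(-k2 * t))[:len(t)] * dt   # tissue compartment
    return (1 - vb) * ct + vb * cb                              # add blood-volume term

t = np.arange(0, 60.0, 0.25)                  # minutes
cp = t * np.exp(-t / 3.0)                     # toy metabolite-corrected plasma input
cb = 1.1 * cp                                 # toy whole-blood curve
K1, k2, vb = 0.12, 0.22, 0.05
tac = one_tissue_tac(t, cp, cb, K1, k2, vb)
print("VT =", K1 / k2)                        # VT of the 1-tissue model is K1/k2
```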

  9. Transforming the Way We Teach Function Transformations

    ERIC Educational Resources Information Center

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…

  10. A global Fine-Root Ecology Database to address below-ground challenges in plant ecology.

    PubMed

    Iversen, Colleen M; McCormack, M Luke; Powell, A Shafer; Blackwood, Christopher B; Freschet, Grégoire T; Kattge, Jens; Roumet, Catherine; Stover, Daniel B; Soudzilovskaia, Nadejda A; Valverde-Barrantes, Oscar J; van Bodegom, Peter M; Violle, Cyrille

    2017-07-01

    Variation and tradeoffs within and among plant traits are increasingly being harnessed by empiricists and modelers to understand and predict ecosystem processes under changing environmental conditions. While fine roots play an important role in ecosystem functioning, fine-root traits are underrepresented in global trait databases. This has hindered efforts to analyze fine-root trait variation and link it with plant function and environmental conditions at a global scale. This Viewpoint addresses the need for a centralized fine-root trait database, and introduces the Fine-Root Ecology Database (FRED, http://roots.ornl.gov) which so far includes > 70 000 observations encompassing a broad range of root traits and also includes associated environmental data. FRED represents a critical step toward improving our understanding of below-ground plant ecology. For example, FRED facilitates the quantification of variation in fine-root traits across root orders, species, biomes, and environmental gradients while also providing a platform for assessments of covariation among root, leaf, and wood traits, the role of fine roots in ecosystem functioning, and the representation of fine roots in terrestrial biosphere models. Continued input of observations into FRED to fill gaps in trait coverage will improve our understanding of changes in fine-root traits across space and time. © 2017 UT-Battelle LLC. New Phytologist © 2017 New Phytologist Trust.

  11. How best to assess right ventricular function by echocardiography*

    PubMed Central

    DiLorenzo, Michael P.; Bhatt, Shivani M.; Mercer-Rosa, Laura

    2016-01-01

    Right ventricular function is a crucial determinant of long-term outcomes of children with heart disease. Quantification of right ventricular systolic and diastolic performance by echocardiography is of paramount importance, given the prevalence of children with heart disease, particularly those with involvement of the right heart, such as single or systemic right ventricles, tetralogy of Fallot, and pulmonary arterial hypertension. Identification of poor right ventricular performance can provide an opportunity to intervene. In this review, we will go through the different systolic and diastolic indices, as well as their application in practice. Quantification of right ventricular function is possible and should be routinely performed using a combination of different measures, taking into account each disease state. Quantification is extremely useful for individual patient follow-up. Laboratories should continue to strive to optimise reproducibility through quality improvement and quality assurance efforts in addition to investing in technology and training for new, promising techniques, such as three-dimensional echocardiography. PMID:26675593

  12. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
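
    The final step described above, turning the set of (desired output, optimal input) pairs into a continuous inverse function by spline interpolation, can be sketched as follows; the sample points and the use of scipy's CubicSpline are illustrative assumptions.

```python
# Sketch: build a continuous inverse function from locally optimal points.
import numpy as np
from scipy.interpolate import CubicSpline

outputs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # outputs reached by agents
optimal_inputs = np.array([0.1, 0.35, 0.7, 1.2, 1.9])  # locally optimal inputs
inverse = CubicSpline(outputs, optimal_inputs)

setpoint = 1.25                                        # operator-chosen desired output
print("optimal input:", float(inverse(setpoint)))
```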

  13. Synthetic Genetic Arrays: Automation of Yeast Genetics.

    PubMed

    Kuzmin, Elena; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2016-04-01

    Genome-sequencing efforts have led to great strides in the annotation of protein-coding genes and other genomic elements. The current challenge is to understand the functional role of each gene and how genes work together to modulate cellular processes. Genetic interactions define phenotypic relationships between genes and reveal the functional organization of a cell. Synthetic genetic array (SGA) methodology automates yeast genetics and enables large-scale and systematic mapping of genetic interaction networks in the budding yeast,Saccharomyces cerevisiae SGA facilitates construction of an output array of double mutants from an input array of single mutants through a series of replica pinning steps. Subsequent analysis of genetic interactions from SGA-derived mutants relies on accurate quantification of colony size, which serves as a proxy for fitness. Since its development, SGA has given rise to a variety of other experimental approaches for functional profiling of the yeast genome and has been applied in a multitude of other contexts, such as genome-wide screens for synthetic dosage lethality and integration with high-content screening for systematic assessment of morphology defects. SGA-like strategies can also be implemented similarly in a number of other cell types and organisms, includingSchizosaccharomyces pombe,Escherichia coli, Caenorhabditis elegans, and human cancer cell lines. The genetic networks emerging from these studies not only generate functional wiring diagrams but may also play a key role in our understanding of the complex relationship between genotype and phenotype. © 2016 Cold Spring Harbor Laboratory Press.

  14. Quantification of Spatial Heterogeneity in Old Growth Forest of Korean Pine

    Treesearch

    Wang Zhengquan; Wang Qingcheng; Zhang Yandong

    1997-01-01

    Spatial heterogeneity is a very important issue in studying functions and processes of ecological systems at various scales. Semivariogram analysis is an effective technique for summarizing spatial data and quantifying spatial heterogeneity. In this paper, we propose some principles to use semivariograms to characterize and compare spatial heterogeneity of...
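
    For reference, an empirical semivariogram (the statistic used above to quantify spatial heterogeneity) can be computed as in the sketch below; the coordinates, values and lag bins are synthetic.

```python
# Sketch of an empirical semivariogram over synthetic point data.
import numpy as np

def empirical_semivariogram(coords, values, bins):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)              # unique point pairs
    d, g = d[iu], g[iu]
    idx = np.digitize(d, bins)
    return np.array([g[idx == b].mean() for b in range(1, len(bins))])

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(200)
print(empirical_semivariogram(coords, values, bins=np.arange(0, 60, 10)))
```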

  15. Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

    NASA Astrophysics Data System (ADS)

    Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

    2011-12-01

    Terrestrial ecosystem models are primary scientific tools to extrapolate our understanding of ecosystem functioning from point observations to global scales as well as from the past climatic conditions into the future. However, no model is nearly perfect and there are often considerable structural uncertainties existing between different models. Ensemble model experiments thus become a mainstream approach in evaluating the current status of global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and the outputs from the models. The principle of our approach is to assume the long-term (~30 years) average of the inputs/outputs as a quasi-equlibrium of the climate-vegetation system while treat the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are mostly important to global carbon cycle, namely the primary production, the biomass dynamics, and the ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC, LPJ as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.

  16. SU-G-BRA-06: Quantification of Tracking Performance of a Multi-Layer Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Y; Rottmann, J; Myronakis, M

    2016-06-15

    Purpose: The purpose of this study was to quantify the improvement in tumor tracking, with and without fiducial markers, afforded by employing a multi-layer (MLI) electronic portal imaging device (EPID) over the current state-of-the-art, single-layer, digital megavolt imager (DMI) architecture. Methods: An ideal observer signal-to-noise ratio (d’) approach was used to quantify the ability of an MLI EPID and a current, state-of-the-art DMI EPID to track lung tumors from the treatment beam’s-eye-view. Using each detector's modulation transfer function (MTF) and noise power spectrum (NPS) as inputs, a detection task was employed with object functions describing simple three-dimensional Cartesian shapes (spheres and cylinders). Marker-less tumor tracking algorithms often use texture discrimination to differentiate benign and malignant tissue. The performance of such algorithms is simulated by employing a discrimination task for the ideal observer, which measures the ability of a system to differentiate two image quantities. These were defined as the measured textures for benign and malignant lung tissue. Results: The NNPS of the MLI was ∼25% of that of the DMI, at the expense of decreased MTF at intermediate frequencies (0.25≤…
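
    The detectability index referred to above is commonly evaluated in the spatial-frequency domain from the detector MTF, the noise power spectrum and a task function; a one-dimensional toy version is sketched below with entirely made-up curves (not MLI/DMI data).

```python
# One-dimensional toy version of the frequency-domain detectability index d'.
import numpy as np

f = np.linspace(0.01, 1.0, 200)          # spatial frequency (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-2.0 * f)                   # toy modulation transfer function
nps = 1e-5 * (1.0 + 0.5 * f)             # toy noise power spectrum
task = np.exp(-(3.0 * f) ** 2)           # |W(f)|: toy detection-task function

# d'^2 = integral of |W(f)|^2 MTF(f)^2 / NPS(f) df  (1-D radial form)
d_prime = np.sqrt(np.sum(task ** 2 * mtf ** 2 / nps) * df)
print("d' =", d_prime)
```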

  17. A method for quantitative analysis of standard and high-throughput qPCR expression data based on input sample quantity.

    PubMed

    Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E

    2014-01-01

    Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reactions (qPCR) have become the gold standard for quantifying gene expression. Microfluidic next generation, high throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard polymerase chain reactions (qPCR) and high throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and with the use of a universal reference sample (commercial complementary DNA (cDNA)) permits the normalization of results between different batches and between different instruments--regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency corrected analysis. When compared to other commonly used algorithms the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes where cell counts are readily available.
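
    A minimal sketch of an input-quantity-normalised, efficiency-corrected calculation in the spirit of this method is shown below; the Cq values, the efficiency, the cell count and the function name are invented for illustration.

```python
# Sketch of efficiency-corrected expression per input cell, scaled to a
# universal reference cDNA so results are comparable between batches.
def expression_per_cell(cq_sample, efficiency, cells, cq_reference):
    """Relative transcript amount per input cell.

    efficiency: amplification efficiency E (2.0 = perfect doubling per cycle).
    cq_reference: Cq of the same assay run on the universal reference sample.
    """
    rel_to_reference = efficiency ** (cq_reference - cq_sample)
    return rel_to_reference / cells

print(expression_per_cell(cq_sample=24.1, efficiency=1.95,
                          cells=5000, cq_reference=26.0))
```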

  18. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameter, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  19. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  20. Environmental impacts and production performances of organic agriculture in China: A monetary valuation.

    PubMed

    Meng, Fanqiao; Qiao, Yuhui; Wu, Wenliang; Smith, Pete; Scott, Steffanie

    2017-03-01

    Organic agriculture has developed rapidly in China since the 1990s, driven by the increasing domestic and international demand for organic products. Quantification of the environmental benefits and production performances of organic agriculture on a national scale helps to develop sustainable, high-yielding agricultural production systems with minimum impacts on the environment. Data on organic production for 2013 were obtained from a national survey organized by the Certification and Accreditation Administration of China. Farming performance and environmental impact indicators were screened, and indicator values were defined based on an intensive literature review and were validated by national statistics. The economic (monetary) values of farming inputs, crop production and individual environmental benefits were then quantified and integrated to compare the overall performances of organic vs. conventional agriculture. In 2013, organically managed farmland accounted for approximately 0.97% of national arable land, covering 1.158 million ha. If organic crop yields were assumed to be 10%-15% lower than conventional yields, the environmental benefits of organic agriculture (i.e., a decrease in nitrate leaching, an increase in farmland biodiversity, an increase in carbon sequestration and a decrease in greenhouse gas emissions) were valued at 1921 million RMB (320.2 million USD), or 1659 RMB (276.5 USD) per ha. By reducing the farming inputs, the costs saved were 3110 million RMB (518.3 million USD), or 2686 RMB (447.7 USD) per ha. The economic loss associated with the decrease in crop yields from organic agriculture was valued at 6115 million RMB (1019.2 million USD), or 5280 RMB (880 USD) per ha. Although they were likely underestimated because of the complex relationships among farming operations, ecosystems and humans, the production costs saved and environmental benefits of organic agriculture that were quantified in our study compensated substantially for the economic losses associated with the decrease in crop production. This suggests that payment for the environmental benefits of organic agriculture should be incorporated into public policies. Most of the environmental impacts of organic farming were related to N fluxes within agroecosystems, which is a call for the better management of N fertilizer in regions or countries with low levels of N-use efficiency. Issues such as higher external inputs and the lack of integration of cropping with animal husbandry should be addressed during the quantification of the change from conventional to organic agriculture, and quantifying this change remains challenging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Quantification of isotopic turnover in agricultural systems

    NASA Astrophysics Data System (ADS)

    Braun, A.; Auerswald, K.; Schnyder, H.

    2012-04-01

    The isotopic turnover, which is a proxy for the metabolic rate, is gaining scientific importance. It is quantified for an increasing range of organisms, from microorganisms through plants to animals, including agricultural livestock. Additionally, the isotopic turnover is analyzed on different scales, from organs to organisms to ecosystems and even to the biosphere. In particular, the quantification of the isotopic turnover of specific tissues within the same organism, e.g. organs like liver and muscle and products like milk and faeces, has brought new insights to improve understanding of nutrient cycles and fluxes. Thus, the knowledge of isotopic turnover is important in many areas, including physiology, e.g. milk synthesis, ecology, e.g. soil retention time of water, and medical science, e.g. cancer diagnosis. So far, the isotopic turnover is quantified by applying time-, cost- and expertise-intensive tracer experiments. Usually, this comprises two isotopic equilibration periods. A first equilibration period with a constant isotopic input signal is followed by a second equilibration period with a distinct constant isotopic input signal. This yields a smooth signal change from the first to the second signal in the object under consideration. This approach has at least three major problems. (i) The input signals must be controlled isotopically, which is almost impossible in many realistic cases like free-ranging animals. (ii) Both equilibration periods may be very long, especially when the turnover rate of the object under consideration is very slow, which aggravates the first problem. (iii) The detection of small or slow pools is improved by large isotopic signal changes, but large isotopic changes also involve a considerable change in the input material; e.g. animal studies are usually carried out as diet-switch experiments, where the diet is switched between C3 and C4 plants, since C3 and C4 plants differ strongly in their isotopic signal. The additional change in nutrition induces changes in physiology that are likely to bias the estimation of the isotopic turnover. We designed an experiment with lactating cows which were successively exposed to the diet's natural isotopic variation and a diet-switch. We examined whether the same turnover information can be obtained from the natural (uncontrolled, short-term) isotopic variation as from the diet-switch experiment. Statistical methods to retrieve the turnover characteristics comprised multi-pool compartmental modeling for the diet-switch experiment as well as correlation analysis to perform wiggle-matching and quantification of autocorrelation (geostatistics) for the analysis of the natural variation. All three methods yielded similar results but differed in their strengths and weaknesses, which will be highlighted. Combining the strengths of the new methods can make this tool even more advantageous than diet-switch experiments in many cases. In particular, the new approach empowers studying isotope turnover under a wider range of husbandry conditions, wildlife settings and species, yielding turnover estimates that are not biased by changes in nutrition.
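
    The classical one-pool diet-switch description that the natural-variation approach is compared against can be sketched as a simple exponential relaxation between two equilibrium signals; the data, rate constant and delta values below are synthetic.

```python
# Sketch of a one-pool diet-switch model: the tissue isotopic signal relaxes
# exponentially from the old to the new equilibrium; data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def one_pool(t, delta_old, delta_new, k):
    return delta_new + (delta_old - delta_new) * np.exp(-k * t)

t = np.arange(0.0, 60.0, 3.0)                        # days after the diet switch
obs = one_pool(t, -27.0, -13.0, 0.08) \
      + 0.3 * np.random.default_rng(5).standard_normal(len(t))
popt, _ = curve_fit(one_pool, t, obs, p0=[-25.0, -14.0, 0.1])
print("turnover half-life [d]:", np.log(2) / popt[2])
```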

  2. Comparative Evaluation of Flow Quantification across the Atrioventricular Valve in Patients with Functional Univentricular Heart after Fontan's Surgery and Healthy Controls: Measurement by 4D Flow Magnetic Resonance Imaging and Streamline Visualization.

    PubMed

    She, Hoi Lam; Roest, Arno A W; Calkoen, Emmeline E; van den Boogaard, Pieter J; van der Geest, Rob J; Hazekamp, Mark G; de Roos, Albert; Westenberg, Jos J M

    2017-01-01

    To evaluate the inflow pattern and flow quantification in patients with functional univentricular heart after Fontan's operation using 4D flow magnetic resonance imaging (MRI) with streamline visualization when compared with the conventional 2D flow approach. Seven patients with functional univentricular heart after Fontan's operation and twenty-three healthy controls underwent 4D flow MRI. In two orthogonal two-chamber planes, streamline visualization was applied, and inflow angles with peak inflow velocity (PIV) were measured. Transatrioventricular flow quantification was assessed using conventional 2D multiplanar reformation (MPR) and 4D MPR tracking the annulus and perpendicular to the streamline inflow at PIV, and they were validated with net forward aortic flow. Inflow angles at PIV in the patient group demonstrated wide variation of angles and directions when compared with the control group (P < .01). The use of 4D flow MRI with streamline visualization in quantification of the transatrioventricular flow had smaller limits of agreement (2.2 ± 4.1 mL; 95% limits of agreement -5.9 to 10.3 mL) when compared with the static plane assessment from 2D flow MRI (-2.2 ± 18.5 mL; 95% limits of agreement -38.5 to 34.1 mL). Stronger correlation was present in the 4D flow between the aortic and trans-atrioventricular flow (R² correlation in 4D flow: 0.893; in 2D flow: 0.786). Streamline visualization in 4D flow MRI confirmed variable atrioventricular inflow directions in patients with functional univentricular heart with previous Fontan's procedure. 4D flow aided generation of measurement planes according to the blood flow dynamics and has proven to be more accurate than the fixed-plane 2D flow measurements when calculating flow quantifications. © 2016 Wiley Periodicals, Inc.

  3. Targeted quantification of functional enzyme dynamics in environmental samples for microbially mediated biogeochemical processes: Targeted quantification of functional enzyme dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Minjing; Gao, Yuqian; Qian, Wei-Jun

    Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in the environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes, using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different from that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in environments. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations as a result of faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.

  4. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  5. PET Quantification of the Norepinephrine Transporter in Human Brain with (S,S)-18F-FMeNER-D2.

    PubMed

    Moriguchi, Sho; Kimura, Yasuyuki; Ichise, Masanori; Arakawa, Ryosuke; Takano, Harumasa; Seki, Chie; Ikoma, Yoko; Takahata, Keisuke; Nagashima, Tomohisa; Yamada, Makiko; Mimura, Masaru; Suhara, Tetsuya

    2017-07-01

    Norepinephrine transporter (NET) in the brain plays important roles in human cognition and the pathophysiology of psychiatric disorders. Two radioligands, (S,S)-11C-MRB and (S,S)-18F-FMeNER-D2, have been used for imaging NETs in the thalamus and midbrain (including locus coeruleus) using PET in humans. However, NET density in the equally important cerebral cortex has not been well quantified because of unfavorable kinetics with (S,S)-11C-MRB and defluorination with (S,S)-18F-FMeNER-D2, which can complicate NET quantification in the cerebral cortex adjacent to the skull containing defluorinated 18F radioactivity. In this study, we have established analysis methods of quantification of NET density in the brain including the cerebral cortex using (S,S)-18F-FMeNER-D2 PET. Methods: We analyzed our previous (S,S)-18F-FMeNER-D2 PET data of 10 healthy volunteers dynamically acquired for 240 min with arterial blood sampling. The effects of defluorination on the NET quantification in the superficial cerebral cortex were evaluated by establishing a time stability of NET density estimations with an arterial input 2-tissue-compartment model, which guided the less-invasive reference tissue model and area under the time-activity curve methods to accurately quantify NET density in all brain regions including the cerebral cortex. Results: Defluorination of (S,S)-18F-FMeNER-D2 became prominent toward the latter half of the 240-min scan. Total distribution volumes in the superficial cerebral cortex increased with the scan duration beyond 120 min. We verified that 90-min dynamic scans provided a sufficient amount of data for quantification of NET density unaffected by defluorination. Reference tissue model binding potential values from the 90-min scan data and area under the time-activity curve ratios of 70- to 90-min data allowed for the accurate quantification of NET density in the cerebral cortex. Conclusion: We have established methods of quantification of NET densities in the brain including the cerebral cortex unaffected by defluorination using (S,S)-18F-FMeNER-D2. These results suggest that we can accurately quantify NET density with a 90-min (S,S)-18F-FMeNER-D2 scan in broad brain areas. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  6. Bayesian calibration of coarse-grained forces: Efficiently addressing transferability

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.

    2016-04-01

    Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.

  7. Quantification of Inlet Impedance Concept and a Study of the Rayleigh Formula for Noise Radiation from Ducted Fan Engines

    NASA Technical Reports Server (NTRS)

    Posey, Joe W.; Dunn, M. H.; Farassat, F.

    2004-01-01

    This paper addresses two aspects of duct propagation and radiation which can contribute to more efficient fan noise predictions. First, we assess the effectiveness of Rayleigh's formula as a ducted fan noise prediction tool. This classical result which predicts the sound produced by a piston in a flanged duct is expanded to include the uniform axial inflow case. Radiation patterns using Rayleigh's formula with single radial mode input are compared to those obtained from the more precise ducted fan noise prediction code TBIEM3D. Agreement between the two methods is excellent in the peak noise regions both forward and aft. Next, we use TBIEM3D to calculate generalized radiation impedances and power transmission coefficients. These quantities are computed for a wide range of operating parameters. Results were obtained for higher Mach numbers, frequencies, and circumferential mode orders than have been previously published. Viewed as functions of frequency, calculated trends in lower order inlet impedances and power transmission coefficients are in agreement with known results. The relationships are more oscillatory for higher order modes and higher Mach numbers.

  8. Tinnitus: causes and clinical management.

    PubMed

    Langguth, Berthold; Kreuzer, Peter M; Kleinjung, Tobias; De Ridder, Dirk

    2013-09-01

    Tinnitus is the perception of sound in the absence of a corresponding external acoustic stimulus. With prevalence ranging from 10% to 15%, tinnitus is a common disorder. Many people habituate to the phantom sound, but tinnitus severely impairs quality of life of about 1-2% of all people. Tinnitus has traditionally been regarded as an otological disorder, but advances in neuroimaging methods and development of animal models have increasingly shifted the perspective towards its neuronal correlates. Increased neuronal firing rate, enhanced neuronal synchrony, and changes in the tonotopic organisation are recorded in central auditory pathways in reaction to deprived auditory input and represent--together with changes in non-auditory brain areas--the neuronal correlate of tinnitus. Assessment of patients includes a detailed case history, measurement of hearing function, quantification of tinnitus severity, and identification of causal factors, associated symptoms, and comorbidities. Most widely used treatments for tinnitus involve counselling, and best evidence is available for cognitive behavioural therapy. New pathophysiological insights have prompted the development of innovative brain-based treatment approaches to directly target the neuronal correlates of tinnitus. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Fitting Curves by Fractal Interpolation: An Application to the Quantification of Cognitive Brain Processes

    NASA Astrophysics Data System (ADS)

    Navascues, M. A.; Sebastian, M. V.

    Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
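
    A minimal chaos-game sketch of Barnsley's affine fractal interpolation is given below; the interpolation points and the vertical scale factors d_i are arbitrary examples, and the map coefficients follow the standard affine construction.

```python
# Chaos-game sketch of Barnsley's affine fractal interpolation.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])        # interpolation nodes
y = np.array([0.0, 1.0, -0.5, 0.5])
d = np.array([0.3, -0.3, 0.3])            # vertical scale factors, |d_i| < 1

N = len(x) - 1
a = (x[1:] - x[:-1]) / (x[-1] - x[0])
e = (x[-1] * x[:-1] - x[0] * x[1:]) / (x[-1] - x[0])
c = (y[1:] - y[:-1] - d * (y[-1] - y[0])) / (x[-1] - x[0])
f = (x[-1] * y[:-1] - x[0] * y[1:] - d * (x[-1] * y[0] - x[0] * y[-1])) / (x[-1] - x[0])

rng = np.random.default_rng(6)
pt = np.array([x[0], y[0]])
points = []
for _ in range(20000):                     # iterate a randomly chosen affine map
    i = rng.integers(N)
    pt = np.array([a[i] * pt[0] + e[i],
                   c[i] * pt[0] + d[i] * pt[1] + f[i]])
    points.append(pt)
points = np.array(points)                  # samples on the graph of the fractal interpolant
```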

  10. Risk Quantification for Sustaining Coastal Military Installation Asset and Mission Capabilities (RC-1701)

    DTIC Science & Technology

    2014-06-06

    al. 2012, and references therein). The world’s oceans have an enormous capacity to store this heat, but the result is ocean warming and all the...TC96 wind model computes surface stress and average wind speed and direction in the PBL of a tropical cyclone. The model inputs are meteorological...is the effective earth elasticity factor; τs,winds and τs,waves are surface stresses due to winds and waves, respectively; τb is bottom stress; M

  11. Generation of sub-femtoliter droplet by T-junction splitting on microfluidic chips

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Jun; Feng, Xuan; Xu, Na; Pang, Dai-Wen; Zhang, Zhi-Ling

    2013-03-01

    In this paper, sub-femtoliter droplets were easily produced by droplet splitting at a simple T-junction with an orifice, which did not need expensive equipment, complex photolithography skills, or high energy input. The volume of the daughter droplet was not limited by the channel size but controlled by the channel geometry and fluidic characteristics. Moreover, single-bead sampling and bead quantification in different orders of magnitude of droplet volumes were investigated. The droplets split at our T-junction chip had small volumes and monodisperse sizes and could be produced efficiently, orderly, and controllably.

  12. Recent advances in parametric neuroreceptor mapping with dynamic PET: basic concepts and graphical analyses.

    PubMed

    Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung

    2014-10-01

    Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping that produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly-used compartment models and major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.

  13. The global gridded crop model intercomparison: Data and modeling protocols for Phase 1 (v1.0)

    DOE PAGES

    Elliott, J.; Müller, C.; Deryng, D.; ...

    2015-02-11

    We present protocols and input data for Phase 1 of the Global Gridded Crop Model Intercomparison, a project of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The project consists of global simulations of yields, phenologies, and many land-surface fluxes by 12–15 modeling groups for many crops, climate forcing data sets, and scenarios over the historical period from 1948 to 2012. The primary outcomes of the project include (1) a detailed comparison of the major differences and similarities among global models commonly used for large-scale climate impact assessment, (2) an evaluation of model and ensemble hindcasting skill, (3) quantification of key uncertainties from climate input data, model choice, and other sources, and (4) a multi-model analysis of the agricultural impacts of large-scale climate extremes from the historical record.

  14. State-space adjustment of radar rainfall and skill score evaluation of stochastic volume forecasts in urban drainage systems.

    PubMed

    Löwe, Roland; Mikkelsen, Peter Steen; Rasmussen, Michael R; Madsen, Henrik

    2013-01-01

    Merging of radar rainfall data with rain gauge measurements is a common approach to overcome problems in deriving rain intensities from radar measurements. We extend an existing approach for adjustment of C-band radar data using state-space models and use the resulting rainfall intensities as input for forecasting outflow from two catchments in the Copenhagen area. Stochastic grey-box models are applied to create the runoff forecasts, providing us with not only a point forecast but also a quantification of the forecast uncertainty. Evaluating the results, we can show that using the adjusted radar data improves runoff forecasts compared with using the original radar data and that rain gauge measurements as forecast input are also outperformed. Combining the data merging approach with short-term rainfall forecasting algorithms may result in further improved runoff forecasts that can be used in real time control.

  15. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  16. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
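
    A toy version of the estimator structure described above — triangular fuzzy membership functions spanning a single input, each feeding a linear combiner whose weights are adapted online — is sketched below; the target function, membership count and learning rate are illustrative assumptions.

```python
# Toy fuzzy-basis estimator: triangular memberships over one input plus an
# online LMS-style weight update; everything here is illustrative.
import numpy as np

centers = np.linspace(-1.0, 1.0, 7)        # membership centres over the input range
width = centers[1] - centers[0]
weights = np.zeros_like(centers)           # one combiner weight per membership

def memberships(u):
    m = np.maximum(0.0, 1.0 - np.abs(u - centers) / width)   # triangular MFs
    return m / m.sum()

def observed_output(u):                    # stand-in for measured system output
    return np.sin(2.5 * u)

rng = np.random.default_rng(7)
for _ in range(5000):                      # online weight update from input/output pairs
    u = rng.uniform(-1.0, 1.0)
    phi = memberships(u)
    error = observed_output(u) - weights @ phi
    weights += 0.2 * error * phi

print(weights @ memberships(0.3), observed_output(0.3))   # estimate vs. observed
```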

  17. Phylogenetic Quantification of Intra-tumour Heterogeneity

    PubMed Central

    Schwarz, Roland F.; Trinh, Anne; Sipos, Botond; Brenton, James D.; Goldman, Nick; Markowetz, Florian

    2014-01-01

    Intra-tumour genetic heterogeneity is the result of ongoing evolutionary change within each cancer. The expansion of genetically distinct sub-clonal populations may explain the emergence of drug resistance, and if so, would have prognostic and predictive utility. However, methods for objectively quantifying tumour heterogeneity have been missing and are particularly difficult to establish in cancers where predominant copy number variation prevents accurate phylogenetic reconstruction owing to horizontal dependencies caused by long and cascading genomic rearrangements. To address these challenges, we present MEDICC, a method for phylogenetic reconstruction and heterogeneity quantification based on a Minimum Event Distance for Intra-tumour Copy-number Comparisons. Using a transducer-based pairwise comparison function, we determine optimal phasing of major and minor alleles, as well as evolutionary distances between samples, and are able to reconstruct ancestral genomes. Rigorous simulations and an extensive clinical study show the power of our method, which outperforms state-of-the-art competitors in reconstruction accuracy, and additionally allows unbiased numerical quantification of tumour heterogeneity. Accurate quantification and evolutionary inference are essential to understand the functional consequences of tumour heterogeneity. The MEDICC algorithms are independent of the experimental techniques used and are applicable to both next-generation sequencing and array CGH data. PMID:24743184

  18. Test-retest reproducibility of quantitative binding measures of [11C]Ro15-4513, a PET ligand for GABAA receptors containing alpha5 subunits.

    PubMed

    McGinnity, Colm J; Riaño Barros, Daniela A; Rosso, Lula; Veronese, Mattia; Rizzo, Gaia; Bertoldo, Alessandra; Hinz, Rainer; Turkheimer, Federico E; Koepp, Matthias J; Hammers, Alexander

    2017-05-15

    Alteration of γ-aminobutyric acid "A" (GABAA) receptor-mediated neurotransmission has been associated with various neurological and psychiatric disorders. [11C]Ro15-4513 is a PET ligand with high affinity for α5-subunit-containing GABAA receptors, which are highly expressed in limbic regions of the human brain (Sur et al., 1998). We quantified the test-retest reproducibility of measures of [11C]Ro15-4513 binding derived from six different quantification methods (12 variants). Five healthy males (median age 40 years, range 38-49 years) had a 90-min PET scan on two occasions (median interval 12 days, range 11-30 days), after injection of a median dose of 441 MBq of [11C]Ro15-4513. Metabolite-corrected arterial plasma input functions (parent plasma input functions, ppIFs) were generated for all scans. We quantified regional binding using six methods (12 variants), some of which were region-based (applied to the average time-activity curve within a region) and others voxel-based: 1) models requiring arterial ppIFs - regional reversible compartmental models with one and two tissue compartments (2kbv and 4kbv); 2) regional and voxelwise Logan graphical analyses (Logan et al., 1990), which required arterial ppIFs; 3) model-free regional and voxelwise (exponential) spectral analyses (SA; Cunningham and Jones, 1993), which also required arterial ppIFs; 4) methods not requiring arterial ppIFs - voxelwise standardised uptake values (Kenney et al., 1941), and regional and voxelwise simplified reference tissue models (SRTM/SRTM2) using the brainstem or alternatively the cerebellum as pseudo-reference regions (Lammertsma and Hume, 1996; Gunn et al., 1997). To compare the variants, we sampled the mean values of the outcome parameters within six bilateral, non-reference grey matter regions-of-interest. Reliability was quantified in terms of median absolute percentage test-retest differences (MA-TDs; preferentially low) and between-subject coefficient of variation (BS-CV; preferentially high), both compounded by the intraclass correlation coefficient (ICC). These measures were compared between variants, with particular interest in the hippocampus. Two of the six methods (5/12 variants) yielded reproducible data (i.e. MA-TD <10%): regional SRTMs and voxelwise SRTM2s, both using either the brainstem or the cerebellum; and voxelwise SA. However, the SRTMs using the brainstem yielded a lower median BS-CV (7% for regional, 7% voxelwise) than the other variants (8-11%), resulting in lower ICCs. The median ICCs across six regions were 0.89 (interquartile range 0.75-0.90) for voxelwise SA, 0.71 (0.64-0.84) for regional SRTM-cerebellum and 0.83 (0.70-0.86) for voxelwise SRTM-cerebellum. The ICCs for the hippocampus were 0.89 for voxelwise SA, 0.95 for regional SRTM-cerebellum and 0.93 for voxelwise SRTM-cerebellum. Quantification of [11C]Ro15-4513 binding shows very good to excellent reproducibility with SRTM and with voxelwise SA, which, however, requires an arterial ppIF. Quantification in the α5-subunit-rich hippocampus is particularly reliable. The very low expression of the α5 subunit in the cerebellum (Fritschy and Mohler, 1995; Veronese et al., 2016) and the substantial α1 subunit density in this region may hamper the application of reference tissue methods. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
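
    The reproducibility measures used in the comparison (MA-TD, BS-CV, ICC) can be computed as in the following sketch with synthetic test/retest values; the one-way random-effects ICC shown here may differ in detail from the exact formulation used by the authors.

      # Synthetic illustration of test-retest reproducibility metrics:
      # median absolute percentage test-retest difference (MA-TD),
      # between-subject coefficient of variation (BS-CV), and a
      # one-way random-effects intraclass correlation coefficient (ICC).
      import numpy as np

      def reproducibility(test, retest):
          test, retest = np.asarray(test, float), np.asarray(retest, float)
          subj_mean = (test + retest) / 2.0
          ma_td = np.median(np.abs(test - retest) / subj_mean) * 100.0
          bs_cv = subj_mean.std(ddof=1) / subj_mean.mean() * 100.0
          # one-way random-effects ICC from between/within mean squares (k = 2 scans)
          grand = np.concatenate([test, retest]).mean()
          ms_between = 2.0 * np.sum((subj_mean - grand) ** 2) / (len(test) - 1)
          ms_within = np.sum((test - subj_mean) ** 2 + (retest - subj_mean) ** 2) / len(test)
          icc = (ms_between - ms_within) / (ms_between + ms_within)
          return ma_td, bs_cv, icc

      # hypothetical binding values for 5 subjects, scan 1 and scan 2
      test   = [1.10, 0.95, 1.30, 1.05, 1.22]
      retest = [1.05, 0.99, 1.25, 1.12, 1.18]
      print(reproducibility(test, retest))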

  19. Layer-specific input to distinct cell types in layer 6 of monkey primary visual cortex.

    PubMed

    Briggs, F; Callaway, E M

    2001-05-15

    Layer 6 of monkey V1 contains a physiologically and anatomically diverse population of excitatory pyramidal neurons. Distinctive arborization patterns of axons and dendrites within the functionally specialized cortical layers define eight types of layer 6 pyramidal neurons and suggest unique information processing roles for each cell type. To address how input sources contribute to cellular function, we examined the laminar sources of functional excitatory input onto individual layer 6 pyramidal neurons using scanning laser photostimulation. We find that excitatory input sources correlate with cell type. Class I neurons with axonal arbors selectively targeting magnocellular (M) recipient layer 4Calpha receive input from M-dominated layer 4B, whereas class I neurons whose axonal arbors target parvocellular (P) recipient layer 4Cbeta receive input from P-dominated layer 2/3. Surprisingly, these neuronal types do not differ significantly in the inputs they receive directly from layers 4Calpha or 4Cbeta. Class II cells, which lack dense axonal arbors within layer 4C, receive excitatory input from layers targeted by their local axons. Specifically, type IIA cells project axons to and receive input from the deep but not superficial layers. Type IIB neurons project to and receive input from the deepest and most superficial, but not middle layers. Type IIC neurons arborize throughout the cortical layers and tend to receive inputs from all cortical layers. These observations have implications for the functional roles of different layer 6 cell types in visual information processing.

  20. Poplar Interactome: Project Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaiswal, Pankaj

    The feedstock plant Poplar has many advantages over traditional crop plants. Poplar requires less energy input and off-season storage than feedstocks such as corn: in the winter season, Poplar biomass is stored on the stem/trunk, and Poplar plantations serve as a large carbon sink. A key constraint to the expansion of cellulosic bioenergy sources such as Poplar, however, is the negative consequence of converting land use from food crops to energy crops. Therefore, in order for Poplar to become a viable energy crop it needs to be grown mostly on marginal land unsuitable for agricultural crops. For this we need a better understanding of abiotic stress and adaptation responses in poplar. In the process we expected to find new and existing poplar genes, and their functions, that respond to sustained abiotic stress. We carried out an extensive gene expression study on control untreated and stress-treated (drought, salinity, cold and heat) poplar plants. The samples were collected from stem, leaf and root tissues. The RNAs of protein-coding genes and regulatory small RNA genes were sequenced, generating more than a billion reads. This is the first such known study in Poplar plants. These data were used for quantification and genomic analysis to identify stress-responsive genes in poplar. Based on the quantification and genomic analysis, a select set of genes was studied for gene-gene interactions to find their association with stress response. The data were also used to find novel stress-responsive genes in poplar that were previously not identified in the Poplar reference genome. The data are made available to the public through national and international genomic data archives.

  1. GPU-Accelerated Voxelwise Hepatic Perfusion Quantification

    PubMed Central

    Wang, H; Cao, Y

    2012-01-01

    Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation, while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations for different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those obtained by the CPU using simulated DCE data and experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626400 voxels in a patient's liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while the perfusion parameters from the two methods differ by less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
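
    A CPU-side sketch of the per-voxel computation that the paper parallelizes on the GPU is shown below; a generic dual-input (arterial plus portal-venous) single-compartment form with synthetic input curves is assumed and is not claimed to match the authors' exact model or CUDA kernels.

      # CPU sketch of the per-voxel fit that the paper parallelizes on a GPU:
      # a dual-input single-compartment model C(t) = (k1a*Ca + k1p*Cp) * exp(-k2 t),
      # fitted to each voxel's DCE time series by non-linear least squares.
      # Inputs and the model form are illustrative assumptions.
      import numpy as np
      from scipy.optimize import least_squares

      dt = 2.0                                   # s, frame duration
      t = np.arange(0, 120, dt)
      ca = np.exp(-((t - 20) / 8.0) ** 2)        # synthetic arterial input
      cp = np.exp(-((t - 35) / 12.0) ** 2)       # synthetic portal-venous input

      def model(params, ca, cp, t, dt):
          k1a, k1p, k2 = params
          inp = k1a * ca + k1p * cp
          impulse = np.exp(-k2 * t)              # impulse response of the compartment
          return np.convolve(inp, impulse)[:t.size] * dt

      def fit_voxel(tac):
          fun = lambda p: model(p, ca, cp, t, dt) - tac
          return least_squares(fun, x0=[0.1, 0.1, 0.1],
                               bounds=([0, 0, 0], [5, 5, 5])).x

      # simulate a tiny "liver" of 100 voxels and fit them one by one
      rng = np.random.default_rng(0)
      truth = np.array([0.3, 0.8, 0.05])
      tacs = model(truth, ca, cp, t, dt) + 0.01 * rng.normal(size=(100, t.size))
      params = np.array([fit_voxel(tac) for tac in tacs])
      print(params.mean(axis=0))                 # should be close to `truth`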

  2. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  3. CometQ: An automated tool for the detection and quantification of DNA damage using comet assay image analysis.

    PubMed

    Ganapathy, Sreelatha; Muraleedharan, Aparna; Sathidevi, Puthumangalathu Savithri; Chand, Parkash; Rajkumar, Ravi Philip

    2016-09-01

    DNA damage analysis plays an important role in determining the approaches for treatment and prevention of various diseases like cancer, schizophrenia and other heritable diseases. The comet assay is a sensitive and versatile method for DNA damage analysis. The main objective of this work is to implement a fully automated tool for the detection and quantification of DNA damage by analysing comet assay images. The comet assay image analysis consists of four stages: (1) classifier, (2) comet segmentation, (3) comet partitioning and (4) comet quantification. The main features of the proposed software are the design and development of four comet segmentation methods, and the automatic routing of the input comet assay image to the most suitable one among these methods depending on the type of the image (silver stained or fluorescent stained) as well as the level of DNA damage (heavily damaged or lightly/moderately damaged). A classifier stage, based on a support vector machine (SVM), is designed and implemented at the front end to categorise the input image into one of the above four groups to ensure proper routing. Comet segmentation is followed by comet partitioning, which is implemented using a novel technique coined modified fuzzy clustering. Comet parameters are calculated in the comet quantification stage and are saved in an Excel file. Our dataset consists of 600 silver stained images obtained from 40 schizophrenia patients with different levels of severity, admitted to a tertiary hospital in South India, and 56 fluorescent stained images obtained from different internet sources. The performance of "CometQ", the proposed standalone application for automated analysis of comet assay images, is evaluated by a clinical expert and is also compared with that of a recent and closely related software package, OpenComet. CometQ gave 90.26% positive predictive value (PPV) and 93.34% sensitivity, which are much higher than those of OpenComet, especially in the case of silver stained images. The results are validated using a confusion matrix and the Jaccard index (JI). Comet assay images obtained after DNA damage repair by incubation in the nutrient medium were also analysed, and CometQ showed a significant change in all the comet parameters in most of the cases. Results show that CometQ is an accurate and efficient tool with good sensitivity and PPV for DNA damage analysis using comet assay images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
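
    The front-end routing idea — an SVM assigning each image to one of four stain/damage groups so that the matching segmentation method can be applied — can be sketched as follows; the features, labels, and synthetic training images are placeholders, not CometQ's actual feature set.

      # Sketch of the front-end routing idea: an SVM classifies each comet-assay
      # image into one of four groups (stain type x damage level) so it can be
      # sent to the segmentation method suited to that group. Features, labels,
      # and training data below are synthetic placeholders.
      import numpy as np
      from sklearn.svm import SVC

      GROUPS = ["silver_heavy", "silver_light", "fluor_heavy", "fluor_light"]

      def image_features(img):
          # toy global features: mean intensity, intensity spread, bright-pixel fraction
          return [img.mean(), img.std(), np.mean(img > 0.7)]

      rng = np.random.default_rng(0)
      X, y = [], []
      for label in range(4):                          # synthetic training images per group
          for _ in range(50):
              img = rng.beta(2 + label, 5 - label, size=(64, 64))
              X.append(image_features(img))
              y.append(label)
      clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))

      def route(img):
          group = GROUPS[int(clf.predict([image_features(img)])[0])]
          return group                                # downstream: pick the matching segmentation method

      print(route(rng.beta(4, 3, size=(64, 64))))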

  4. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.

  5. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
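
    A minimal sketch of the underlying idea — time competing implementations over a range of input sizes, then generate a selector that dispatches to the fastest recorded implementation — is given below; the two summation implementations and the nearest-size dispatch rule are illustrative, not the patented mechanism for collective operations on a parallel machine.

      # Minimal sketch: time a set of interchangeable implementations over a grid
      # of input parameters, then build a selector that dispatches to whichever
      # implementation was fastest for the nearest recorded parameter point.
      import time
      import numpy as np

      def impl_python_sum(x):  return sum(x)
      def impl_numpy_sum(x):   return float(np.sum(x))
      implementations = {"python_sum": impl_python_sum, "numpy_sum": impl_numpy_sum}

      def collect_performance(sizes, repeats=5):
          table = {}                                # input size -> name of fastest impl
          for n in sizes:
              x = list(range(n))
              best, best_t = None, float("inf")
              for name, f in implementations.items():
                  t0 = time.perf_counter()
                  for _ in range(repeats):
                      f(x)
                  elapsed = time.perf_counter() - t0
                  if elapsed < best_t:
                      best, best_t = name, elapsed
              table[n] = best
          return table

      def make_selector(table):
          sizes = sorted(table)
          def selected(x):
              n = min(sizes, key=lambda s: abs(s - len(x)))   # nearest measured size
              return implementations[table[n]](x)
          return selected

      perf = collect_performance([10, 1_000, 100_000])
      fast_sum = make_selector(perf)
      print(perf, fast_sum(list(range(500))))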

  6. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.
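
    Extracting an image-derived input function from a VOI amounts to averaging the activity inside the VOI for every dynamic frame, as in the following sketch; the 4-D array, the synthetic blood curve, and the half-disc "posterior semicircle" mask are illustrative stand-ins.

      # Sketch: image-derived arterial input function as the mean activity inside
      # an aortic VOI for each dynamic frame. The 4-D array and mask are synthetic.
      import numpy as np

      def image_derived_input_function(dynamic_pet, voi_mask):
          """dynamic_pet: (n_frames, z, y, x); voi_mask: boolean (z, y, x)."""
          return np.array([frame[voi_mask].mean() for frame in dynamic_pet])

      # synthetic dynamic study: 30 frames of a 16x32x32 volume
      n_frames, shape = 30, (16, 32, 32)
      rng = np.random.default_rng(0)
      t = np.arange(n_frames)
      true_aif = 10.0 * t * np.exp(-t / 5.0)                 # synthetic blood curve
      pet = rng.normal(1.0, 0.1, size=(n_frames, *shape))
      zz, yy, xx = np.indices(shape)
      # crude "posterior semicircle" VOI in the aorta: a half-disc in the mid-slices
      aorta = ((yy - 16) ** 2 + (xx - 16) ** 2 < 9) & (yy >= 16) & (zz > 4) & (zz < 12)
      pet[:, aorta] += true_aif[:, None]                     # paint the blood signal in
      ca = image_derived_input_function(pet, aorta)
      print(np.round(ca[:5], 2))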

  7. Neurochemical and BOLD responses during neuronal activation measured in the human visual cortex at 7 Tesla.

    PubMed

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; DiNuzzo, Mauro; Deelchand, Dinesh K; Emir, Uzay E; Eberly, Lynn E; Mangia, Silvia

    2015-03-31

    Several laboratories have consistently reported small concentration changes in lactate, glutamate, aspartate, and glucose in the human cortex during prolonged stimuli. However, whether such changes correlate with blood oxygenation level-dependent functional magnetic resonance imaging (BOLD-fMRI) signals has not been determined. The present study aimed at characterizing the relationship between metabolite concentrations and BOLD-fMRI signals during a block-designed paradigm of visual stimulation. Functional magnetic resonance spectroscopy (fMRS) and fMRI data were acquired from 12 volunteers. A short echo-time semi-LASER localization sequence optimized for 7 Tesla was used to achieve full signal-intensity MRS data. The group analysis confirmed that during stimulation lactate and glutamate increased by 0.26 ± 0.06 μmol/g (~30%) and 0.28 ± 0.03 μmol/g (~3%), respectively, while aspartate and glucose decreased by 0.20 ± 0.04 μmol/g (~5%) and 0.19 ± 0.03 μmol/g (~16%), respectively. The single-subject analysis revealed that BOLD-fMRI signals were positively correlated with glutamate and lactate concentration changes. The results show a linear relationship between metabolic and BOLD responses in the presence of strong excitatory sensory inputs, and support the notion that increased functional energy demands are sustained by oxidative metabolism. In addition, BOLD signals were inversely correlated with baseline γ-aminobutyric acid concentration. Finally, we discuss the critical importance of taking into account linewidth effects on metabolite quantification in fMRS paradigms.

  8. Estimating leaf functional traits by inversion of PROSPECT: Assessing leaf dry matter content and specific leaf area in mixed mountainous forest

    NASA Astrophysics Data System (ADS)

    Ali, Abebe Mohammed; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Duren, Iris van; Heiden, Uta; Heurich, Marco

    2016-03-01

    Assessments of ecosystem functioning rely heavily on quantification of vegetation properties. The search is on for methods that produce reliable and accurate baseline information on plant functional traits. In this study, the inversion of the PROSPECT radiative transfer model was used to estimate two functional leaf traits: leaf dry matter content (LDMC) and specific leaf area (SLA). Inversion of PROSPECT usually aims at quantifying its direct input parameters. This is the first time the technique has been used to indirectly model LDMC and SLA. Biophysical parameters of 137 leaf samples were measured in July 2013 in the Bavarian Forest National Park, Germany. Spectra of the leaf samples were measured using an ASD FieldSpec3 equipped with an integrating sphere. PROSPECT was inverted using a look-up table (LUT) approach. The LUTs were generated with and without using prior information. The effect of incorporating prior information on the retrieval accuracy was studied before and after stratifying the samples into broadleaf and conifer categories. The estimated values were evaluated using R2 and normalized root mean square error (nRMSE). Among the retrieved variables the lowest nRMSE (0.0899) was observed for LDMC. For both traits higher R2 values (0.83 for LDMC and 0.89 for SLA) were discovered in the pooled samples. The use of prior information improved accuracy of the retrieved traits. The strong correlation between the estimated traits and the NIR/SWIR region of the electromagnetic spectrum suggests that these leaf traits could be assessed at canopy level by using remotely sensed data.
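
    The look-up-table inversion strategy can be sketched generically: simulate spectra for many sampled parameter combinations with a forward model, then assign to each measured spectrum the traits of its best-matching table entries. In the sketch below the forward model is a toy placeholder, not PROSPECT, and the prior ranges are illustrative.

      # Generic look-up-table (LUT) inversion sketch: simulate spectra for many
      # parameter combinations with a forward model, then retrieve the parameters
      # whose simulated spectra best match a measurement (smallest RMSE).
      # `toy_forward_model` is a placeholder, not the PROSPECT model.
      import numpy as np

      wavelengths = np.linspace(400, 2500, 211)              # nm

      def toy_forward_model(ldmc, sla):
          # entirely hypothetical spectral response to the two leaf traits
          return 0.5 * np.exp(-ldmc * wavelengths / 2500.0) + 0.001 * sla

      def build_lut(n=5000, seed=0):
          rng = np.random.default_rng(seed)
          ldmc = rng.uniform(0.1, 0.6, n)                     # g/g, assumed prior range
          sla = rng.uniform(5.0, 35.0, n)                     # m2/kg, assumed prior range
          spectra = np.array([toy_forward_model(a, b) for a, b in zip(ldmc, sla)])
          return spectra, np.column_stack([ldmc, sla])

      def invert(measured, spectra, params, n_best=20):
          rmse = np.sqrt(np.mean((spectra - measured) ** 2, axis=1))
          best = np.argsort(rmse)[:n_best]                    # average the best matches
          return params[best].mean(axis=0)

      spectra, params = build_lut()
      measured = toy_forward_model(0.35, 18.0) + 0.002 * np.random.default_rng(1).normal(size=wavelengths.size)
      print(invert(measured, spectra, params))               # estimated [LDMC, SLA]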

  9. Force-independent distribution of correlated neural inputs to hand muscles during three-digit grasping.

    PubMed

    Poston, Brach; Danna-Dos Santos, Alessander; Jesunathadas, Mark; Hamm, Thomas M; Santello, Marco

    2010-08-01

    The ability to modulate digit forces during grasping relies on the coordination of multiple hand muscles. Because many muscles innervate each digit, the CNS can potentially choose from a large number of muscle coordination patterns to generate a given digit force. Studies of single-digit force production tasks have revealed that the electromyographic (EMG) activity scales uniformly across all muscles as a function of digit force. However, the extent to which this finding applies to the coordination of forces across multiple digits is unknown. We addressed this question by asking subjects (n = 8) to exert isometric forces using a three-digit grip (thumb, index, and middle fingers) that allowed for the quantification of hand muscle coordination within and across digits as a function of grasp force (5, 20, 40, 60, and 80% maximal voluntary force). We recorded EMG from 12 muscles (6 extrinsic and 6 intrinsic) of the three digits. Hand muscle coordination patterns were quantified in the amplitude and frequency domains (EMG-EMG coherence). EMG amplitude scaled uniformly across all hand muscles as a function of grasp force (muscle x force interaction: P = 0.997; cosines of angle between muscle activation pattern vector pairs: 0.897-0.997). Similarly, EMG-EMG coherence was not significantly affected by force (P = 0.324). However, coherence was stronger across extrinsic than that across intrinsic muscle pairs (P = 0.0039). These findings indicate that the distribution of neural drive to multiple hand muscles is force independent and may reflect the anatomical properties or functional roles of hand muscle groups.

  10. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.

  11. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as low as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
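
    The proposed heteroscedasticity-based index itself is not reproduced here, but the first-order variance-based computation it is said to resemble can be sketched with a standard pick-and-freeze Monte Carlo estimator of Sobol' indices (test function and sample size are illustrative):

      # Pick-and-freeze Monte Carlo estimator of first-order Sobol' indices
      # (the variance-based computation the proposed interaction index is said
      # to resemble; the test function and sample size are illustrative).
      import numpy as np

      def first_order_sobol(f, n_inputs, n=100_000, seed=0):
          rng = np.random.default_rng(seed)
          A = rng.random((n, n_inputs))
          B = rng.random((n, n_inputs))
          fA, fB = f(A), f(B)
          var = fA.var()
          indices = []
          for i in range(n_inputs):
              ABi = B.copy()
              ABi[:, i] = A[:, i]              # all inputs resampled except input i
              indices.append(np.mean(fA * (f(ABi) - fB)) / var)
          return np.array(indices)

      # Ishigami-like test: strong x0/x1 effects, x2 acts only through interaction
      def test_function(x):
          x = 2 * np.pi * (x - 0.5)
          return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      print(np.round(first_order_sobol(test_function, 3), 3))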

  12. Microwave- and ultrasound-assisted extraction of vanillin and its quantification by high-performance liquid chromatography in Vanilla planifolia.

    PubMed

    Sharma, Anuj; Verma, Subash Chandra; Saxena, Nisha; Chadda, Neetu; Singh, Narendra Pratap; Sinha, Arun Kumar

    2006-03-01

    Microwave-assisted extraction (MAE), ultrasound-assisted extraction (UAE) and conventional extraction of vanillin and its quantification by HPLC in pods of Vanilla planifolia is described. A range of nonpolar to polar solvents were used for the extraction of vanillin employing MAE, UAE and conventional methods. Various extraction parameters such as nature of the solvent, solvent volume, time of irradiation, microwave and ultrasound energy inputs were optimized. HPLC was performed on RP ODS column (4.6 mm ID x 250 mm, 5 microm, Waters), a photodiode array detector (Waters 2996) using gradient solvent system of ACN and ortho-phosphoric acid in water (0.001:99.999 v/v) at 25 degrees C. Regression equation revealed a linear relationship (r2 > 0.9998) between the mass of vanillin injected and the peak areas. The detection limit (S/N = 3) and limit of quantification (S/N = 10) were 0.65 and 1.2 microg/g, respectively. Recovery was achieved in the range 98.5-99.6% for vanillin. Maximum yield of vanilla extract (29.81, 29.068 and 14.31% by conventional extraction, MAE and UAE, respectively) was found in a mixture of ethanol/water (40:60 v/v). Dehydrated ethanolic extract showed the highest amount of vanillin (1.8, 1.25 and 0.99% by MAE, conventional extraction and UAE, respectively).

  13. Composition of complex numbers: Delineating the computational role of the left anterior temporal lobe.

    PubMed

    Blanco-Elorrieta, Esti; Pylkkänen, Liina

    2016-01-01

    What is the neurobiological basis of our ability to create complex messages with language? Results from multiple methodologies have converged on a set of brain regions as relevant for this general process, but the computational details of these areas remain to be characterized. The left anterior temporal lobe (LATL) has been a consistent node within this network, with results suggesting that although it rather systematically shows increased activation for semantically complex structured stimuli, this effect does not extend to number phrases such as 'three books.' In the present work we used magnetoencephalography to investigate whether numbers in general are an invalid input to the combinatory operations housed in the LATL or whether the lack of LATL engagement for stimuli such as 'three books' is due to the quantificational nature of such phrases. As a relevant test case, we employed complex number terms such as 'twenty-three', where one number term is not a quantifier of the other but rather, the two terms form a type of complex concept. In a number naming paradigm, participants viewed rows of numbers and depending on task instruction, named them as complex number terms ('twenty-three'), numerical quantifications ('two threes'), adjectival modifications ('blue threes') or non-combinatory lists (e.g., 'two, three'). While quantificational phrases failed to engage the LATL as compared to non-combinatory controls, both complex number terms and adjectival modifications elicited a reliable activity increase in the LATL. Our results show that while the LATL does not participate in the enumeration of tokens within a set, exemplified by the quantificational phrases, it does support conceptual combination, including the composition of complex number concepts. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).

  15. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

    Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL), performed on magnetic resonance (MR) images, is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map and appends radiological findings to the original images according to the current DICOM standard. The CAD system is also capable of displaying and tracking changes and making comparisons between a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from quantities of LL in the collected exams show a good correlation of CAD-derived results versus those incorporated from the expert's reading. Combining the CAD approach with expert interaction may impact the diagnostic work-up of MS patients because of improved reproducibility in LL assessment and reduced time for reading single MR or comparative exams.
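
    The similarity-coefficient comparison between CAD-derived and expert-derived outlines can be sketched with standard overlap measures on binary masks (Dice and Jaccard); the circular masks below are synthetic, and the specific coefficient used by the authors is not assumed here.

      # Overlap similarity coefficients between CAD-generated and expert lesion
      # masks (binary arrays of the same shape); masks below are synthetic.
      import numpy as np

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def jaccard(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

      yy, xx = np.indices((128, 128))
      cad_mask    = (yy - 60) ** 2 + (xx - 60) ** 2 < 15 ** 2
      expert_mask = (yy - 63) ** 2 + (xx - 58) ** 2 < 14 ** 2
      print(f"Dice={dice(cad_mask, expert_mask):.3f}, Jaccard={jaccard(cad_mask, expert_mask):.3f}")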

  16. Toward computer-aided emphysema quantification on ultralow-dose CT: reproducibility of ventrodorsal gravity effect measurement and correction

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Opfer, Roland; Bülow, Thomas; Rogalla, Patrik; Steinberg, Amnon; Dharaiya, Ekta; Subramanyan, Krishna

    2007-03-01

    Computer-aided quantification of emphysema in high resolution CT data is based on identifying low attenuation areas below clinically determined Hounsfield thresholds. However, emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to +/- 50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that measurement of the ventrodorsal gravity effect is patient specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
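
    A minimal sketch of the measure-and-correct idea: fit a robust linear trend of parenchymal attenuation against normalized ventrodorsal position and subtract it before low-attenuation thresholding. The synthetic data and the soft-L1 robust loss below are assumptions standing in for the authors' robust fitting procedure.

      # Sketch: measure the ventrodorsal attenuation gradient of lung parenchyma
      # with a robust linear fit and remove it before low-attenuation thresholding.
      # Synthetic data; the robust loss is an assumption, not the paper's method.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      y_pos = rng.uniform(0.0, 1.0, 5000)                 # 0 = ventral, 1 = dorsal
      hu = -870.0 + 50.0 * y_pos + rng.normal(0.0, 25.0, y_pos.size)   # gravity gradient
      hu[:100] -= 120.0                                   # a few emphysematous outliers

      def line_residuals(p, y, v):
          return p[0] + p[1] * y - v

      fit = least_squares(line_residuals, x0=[-850.0, 0.0],
                          args=(y_pos, hu), loss='soft_l1', f_scale=25.0)
      intercept, slope = fit.x
      print(f"ventrodorsal gradient ~ {slope:.1f} HU per normalized position")

      hu_corrected = hu - slope * (y_pos - y_pos.mean())  # flatten the gradient
      laa_raw = np.mean(hu < -950)                        # low-attenuation fraction
      laa_corr = np.mean(hu_corrected < -950)
      print(f"low-attenuation fraction before {laa_raw:.2%}, after correction {laa_corr:.2%}")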

  17. High-throughput monitoring of major cell functions by means of lensfree video microscopy

    PubMed Central

    Kesavan, S. Vinjimore; Momey, F.; Cioni, O.; David-Watine, B.; Dubrulle, N.; Shorte, S.; Sulpice, E.; Freida, D.; Chalmond, B.; Dinten, J. M.; Gidrol, X.; Allier, C.

    2014-01-01

    Quantification of basic cell functions is a preliminary step to understand complex cellular mechanisms, e.g., to test compatibility of biomaterials, to assess the effectiveness of drugs and siRNAs, and to control cell behavior. However, commonly used quantification methods are label-dependent, end-point assays. As an alternative, using our lensfree video microscopy platform to perform high-throughput real-time monitoring of cell culture, we introduce specifically devised metrics that are capable of non-invasive quantification of cell functions such as cell-substrate adhesion, cell spreading, cell division, cell division orientation and cell death. Unlike existing methods, our platform and associated metrics embrace an entire population of thousands of cells whilst monitoring the fate of every single cell within the population. This results in a high-content description of cell functions that typically contains 25,000 – 900,000 measurements per experiment depending on cell density and period of observation. As proof of concept, we monitored cell-substrate adhesion and spreading kinetics of human Mesenchymal Stem Cells (hMSCs) and primary human fibroblasts, we determined the cell division orientation of hMSCs, and we observed the effect of transfection of siCellDeath (an siRNA known to induce cell death) on hMSCs and human Osteo Sarcoma (U2OS) cells. PMID:25096726

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  19. Design and implementation of a risk assessment module in a spatial decision support system

    NASA Astrophysics Data System (ADS)

    Zhang, Kaixi; van Westen, Cees; Bakker, Wim

    2014-05-01

    The spatial decision support system named 'Changes SDSS' is currently under development. The goal of this system is to analyze changing hydro-meteorological hazards and the effect of risk reduction alternatives to support decision makers in choosing the best alternatives. The risk assessment module within the system is to assess the current risk, analyze the risk after implementations of risk reduction alternatives, and analyze the risk in different future years when considering scenarios such as climate change, land use change and population growth. The objective of this work is to present the detailed design and implementation plan of the risk assessment module. The main challenges faced consist of how to shift the risk assessment from traditional desktop software to an open source web-based platform, the availability of input data and the inclusion of uncertainties in the risk analysis. The risk assessment module is developed using Ext JS library for the implementation of user interface on the client side, using Python for scripting, as well as PostGIS spatial functions for complex computations on the server side. The comprehensive consideration of the underlying uncertainties in input data can lead to a better quantification of risk assessment and a more reliable Changes SDSS, since the outputs of risk assessment module are the basis for decision making module within the system. The implementation of this module will contribute to the development of open source web-based modules for multi-hazard risk assessment in the future. This work is part of the "CHANGES SDSS" project, funded by the European Community's 7th Framework Program.

  20. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
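
    A compact sketch of genetic-algorithm input selection for a simple function approximator follows; the synthetic data, the linear least-squares approximator used as the fitness model, and the GA settings are illustrative stand-ins for the SSME neural-network application.

      # Sketch: genetic algorithm that evolves a bitstring selecting which inputs
      # feed a simple function approximator; fitness is validation error. The data,
      # approximator, and GA settings are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_inputs = 400, 12
      X = rng.normal(size=(n_samples, n_inputs))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=n_samples)

      def fitness(mask):
          if not mask.any():
              return -np.inf
          Xs = X[:, mask]
          Xtr, Xva, ytr, yva = Xs[:300], Xs[300:], y[:300], y[300:]
          coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)    # linear approximator
          err = np.mean((Xva @ coef - yva) ** 2)
          return -err - 0.01 * mask.sum()                     # penalize extra inputs

      def evolve(pop_size=40, n_gen=60, p_mut=0.05):
          pop = rng.random((pop_size, n_inputs)) < 0.5
          for _ in range(n_gen):
              scores = np.array([fitness(ind) for ind in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
              children = []
              while len(children) < pop_size:
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n_inputs)                      # one-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  child ^= rng.random(n_inputs) < p_mut                # bit-flip mutation
                  children.append(child)
              pop = np.array(children)
          scores = np.array([fitness(ind) for ind in pop])
          return pop[np.argmax(scores)]

      print(np.flatnonzero(evolve()))   # indices of the selected inputs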

  1. Computed tomographic-based quantification of emphysema and correlation to pulmonary function and mechanics.

    PubMed

    Washko, George R; Criner, Gerald J; Mohsenifar, Zab; Sciurba, Frank C; Sharafkhaneh, Amir; Make, Barry J; Hoffman, Eric A; Reilly, John J

    2008-06-01

    Computed tomographic based indices of emphysematous lung destruction may highlight differences in disease pathogenesis and further enable the classification of subjects with Chronic Obstructive Pulmonary Disease. While there are multiple techniques that can be utilized for such radiographic analysis, there is very little published information comparing the performance of these methods in a clinical case series. Our objective was to examine several quantitative and semi-quantitative methods for the assessment of the burden of emphysema apparent on computed tomographic scans and compare their ability to predict lung mechanics and function. Automated densitometric analysis was performed on 1094 computed tomographic scans collected upon enrollment into the National Emphysema Treatment Trial. Trained radiologists performed an additional visual grading of emphysema on high resolution CT scans. Full pulmonary function test results were available for correlation, with a subset of subjects having additional measurements of lung static recoil. There was a wide range of emphysematous lung destruction apparent on the CT scans and univariate correlations to measures of lung function were of modest strength. No single method of CT scan analysis clearly outperformed the rest of the group. Quantification of the burden of emphysematous lung destruction apparent on CT scan is a weak predictor of lung function and mechanics in severe COPD with no uniformly superior method found to perform this analysis. The CT based quantification of emphysema may augment pulmonary function testing in the characterization of COPD by providing complementary phenotypic information.

  2. Advancing the Food-Energy-Water Nexus: Closing Nutrient Loops in Arid River Corridors.

    PubMed

    Mortensen, Jacob G; González-Pinzón, Ricardo; Dahm, Clifford N; Wang, Jingjing; Zeglin, Lydia H; Van Horn, David J

    2016-08-16

    Closing nutrient loops in terrestrial and aquatic ecosystems is integral to achieve resource security in the food-energy-water (FEW) nexus. We performed multiyear (2005-2008), monthly sampling of instream dissolved inorganic nutrient concentrations (NH4-N, NO3-N, soluble reactive phosphorus-SRP) along a ∼ 300-km arid-land river (Rio Grande, NM) and generated nutrient budgets to investigate how the net source/sink behavior of wastewater and irrigated agriculture can be holistically managed to improve water quality and close nutrient loops. Treated wastewater on average contributed over 90% of the instream dissolved inorganic nutrients (101 kg/day NH4-N, 1097 kg/day NO3-N, 656 kg/day SRP). During growing seasons, the irrigation network downstream of wastewater outfalls retained on average 37% of NO3-N and 45% of SRP inputs, with maximum retention exceeding 60% and 80% of NO3-N and SRP inputs, respectively. Accurate quantification of NH4-N retention was hindered by low loading and high variability. Nutrient retention in the irrigation network and instream processes together limited downstream export during growing seasons, with total retention of 33-99% of NO3-N inputs and 45-99% of SRP inputs. From our synoptic analysis, we identify trade-offs associated with wastewater reuse for agriculture within the scope of the FEW nexus and propose strategies for closing nutrient loops in arid-land rivers.

  3. Compact universal logic gates realized using quantization of current in nanodevices.

    PubMed

    Zhang, Wancheng; Wu, Nan-Jian; Yang, Fuhua

    2007-12-12

    This paper proposes novel universal logic gates using the current quantization characteristics of nanodevices. In nanodevices like the electron waveguide (EW) and single-electron (SE) turnstile, the channel current is a staircase quantized function of its control voltage. We use this unique characteristic to compactly realize Boolean functions. First we present the concept of the periodic-threshold threshold logic gate (PTTG), and we build a compact PTTG using EW and SE turnstiles. We show that an arbitrary three-input Boolean function can be realized with a single PTTG, and an arbitrary four-input Boolean function can be realized by using two PTTGs. We then use one PTTG to build a universal programmable two-input logic gate which can be used to realize all two-input Boolean functions. We also build a programmable three-input logic gate by using one PTTG. Compared with linear threshold logic gates, with the PTTG one can build digital circuits more compactly. The proposed PTTGs are promising for future smart nanoscale digital system use.

  4. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
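
    A simplified contrast between a Gaussian i.i.d. log-likelihood and a generalized log-likelihood that accommodates heteroscedastic, AR(1)-correlated residuals is sketched below; the full formal likelihood of Schoups and Vrugt additionally models skew and kurtosis, which is omitted here, and all numbers are synthetic.

      # Simplified contrast between a Gaussian i.i.d. log-likelihood and a
      # generalized log-likelihood with heteroscedastic, AR(1)-correlated residuals
      # (skew/kurtosis terms of the full Schoups & Vrugt formulation are omitted).
      import numpy as np

      def gaussian_loglik(obs, sim, sigma):
          r = obs - sim
          return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (r / sigma) ** 2)

      def generalized_loglik(obs, sim, sigma0, sigma1, phi):
          r = obs - sim
          a = r[1:] - phi * r[:-1]                   # AR(1)-decorrelated innovations
          s = sigma0 + sigma1 * sim[1:]              # error scale grows with simulated value
          return -0.5 * np.sum(np.log(2 * np.pi * s**2) + (a / s) ** 2)

      # synthetic "reactive transport" output with correlated, heteroscedastic errors
      rng = np.random.default_rng(0)
      t = np.linspace(0, 50, 200)
      sim = 5.0 * np.exp(-t / 20.0)
      noise = np.zeros_like(sim)
      for i in range(1, sim.size):                   # AR(1) noise whose scale tracks sim
          noise[i] = 0.8 * noise[i - 1] + rng.normal(0, 0.02 + 0.05 * sim[i])
      obs = sim + noise
      print(gaussian_loglik(obs, sim, sigma=obs.std()),
            generalized_loglik(obs, sim, sigma0=0.02, sigma1=0.05, phi=0.8))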

  5. Differential inputs to striatal cholinergic and parvalbumin interneurons imply functional distinctions

    PubMed Central

    Klug, Jason R; Engelhardt, Max D; Cadman, Cara N; Li, Hao; Smith, Jared B; Ayala, Sarah; Williams, Elora W; Hoffman, Hilary

    2018-01-01

    Striatal cholinergic (ChAT) and parvalbumin (PV) interneurons exert powerful influences on striatal function in health and disease, yet little is known about the organization of their inputs. Here using rabies tracing, electrophysiology and genetic tools, we compare the whole-brain inputs to these two types of striatal interneurons and dissect their functional connectivity in mice. ChAT interneurons receive a substantial cortical input from associative regions of cortex, such as the orbitofrontal cortex. Amongst subcortical inputs, a previously unknown inhibitory thalamic reticular nucleus input to striatal PV interneurons is identified. Additionally, the external segment of the globus pallidus targets striatal ChAT interneurons, which is sufficient to inhibit tonic ChAT interneuron firing. Finally, we describe a novel excitatory pathway from the pedunculopontine nucleus that innervates ChAT interneurons. These results establish the brain-wide direct inputs of two major types of striatal interneurons and allude to distinct roles in regulating striatal activity and controlling behavior. PMID:29714166

  6. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach in which (1) the input function is well sampled, either predicted from pre-scan timing bolus data or measured from dynamic thin-slice 'bolus tracking' acquisitions, and (2) the whole-heart tissue response data are limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol under both rest and vasodilator stress conditions. Using the measured input function plus a single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisition yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error of 26.0% compared to the measured input function, which led to MBF estimation errors greater than threefold higher than using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
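
    The lookup-table idea can be sketched with a toy one-compartment perfusion model: tabulate the myocardial enhancement expected at the static acquisition time for a grid of MBF values, given the measured input function, then map a measured enhancement back to MBF. Model form, timing, and numbers below are illustrative assumptions, not the authors' model.

      # Sketch of the static-cardiac/dynamic-arterial idea: use a well-sampled
      # arterial input function to tabulate myocardial enhancement at the static
      # acquisition time as a function of MBF, then invert the table for a
      # measured enhancement value. Model form and numbers are illustrative.
      import numpy as np

      dt = 0.5                                          # s
      t = np.arange(0, 60, dt)
      aif = 400.0 * np.exp(-((t - 15) / 5.0) ** 2)      # synthetic arterial input (HU)

      def tissue_enhancement(aif, mbf_ml_min_g, t, dt):
          flow = mbf_ml_min_g * 1.05 / 60.0             # ml/min/g -> 1/s (tissue density ~1.05 g/ml)
          impulse = np.exp(-flow * t / 0.5)             # one-compartment residue, Vd ~ 0.5 ml/g
          return flow * np.convolve(aif, impulse)[:t.size] * dt

      def build_lut(t_static, mbf_grid):
          idx = int(round(t_static / dt))
          return np.array([tissue_enhancement(aif, m, t, dt)[idx] for m in mbf_grid])

      mbf_grid = np.linspace(0.2, 5.0, 200)             # ml/min/g
      t_static = 20.0                                   # s, time of the single whole-heart acquisition
      lut = build_lut(t_static, mbf_grid)

      measured = 60.0                                   # HU enhancement measured in the myocardium
      mbf_estimate = mbf_grid[np.argmin(np.abs(lut - measured))]
      print(round(float(mbf_estimate), 2), "ml/min/g")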

  7. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and in the publications cited there. Hardware requirements: multi-platform; related/auxiliary software: PVM (if running in parallel).

  8. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Applications in Quantitative Proteomics.

    PubMed

    Chahrour, Osama; Malone, John

    2017-01-01

    Recent advances in inductively coupled plasma mass spectrometry (ICP-MS) hyphenated to different separation techniques have promoted it as a valuable tool in protein/peptide quantification. These emerging ICP-MS applications allow absolute quantification by measuring specific elemental responses. One approach quantifies elements already present in the structure of the target peptide (e.g. phosphorus and sulphur) as natural tags. Quantification of these natural tags allows the elucidation of the degree of protein phosphorylation in addition to absolute protein quantification. A separate approach is based on utilising bi-functional labelling substances (those containing ICP-MS detectable elements), that form a covalent chemical bond with the protein thus creating analogs which are detectable by ICP-MS. Based on the previously established stoichiometries of the labelling reagents, quantification can be achieved. This technique is very useful for the design of precise multiplexed quantitation schemes to address the challenges of biomarker screening and discovery. This review discusses the capabilities and different strategies to implement ICP-MS in the field of quantitative proteomics. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. Transfer Function Control for Biometric Monitoring System

    NASA Technical Reports Server (NTRS)

    Chmiel, Alan J. (Inventor); Grodinsky, Carlos M. (Inventor); Humphreys, Bradley T. (Inventor)

    2015-01-01

    A modular apparatus for acquiring biometric data may include circuitry operative to receive an input signal indicative of a biometric condition, the circuitry being configured to process the input signal according to a transfer function thereof and to provide a corresponding processed input signal. A controller is configured to provide at least one control signal to the circuitry to programmatically modify the transfer function of the modular system to facilitate acquisition of the biometric data.

  10. Mechanisms Controlling the Plant Diversity Effect on Soil Microbial Community Composition and Soil Microbial Diversity

    NASA Astrophysics Data System (ADS)

    Mellado Vázquez, P. G.; Lange, M.; Griffiths, R.; Malik, A.; Ravenek, J.; Strecker, T.; Eisenhauer, N.; Gleixner, G.

    2015-12-01

    Soil microorganisms are the main drivers of soil organic matter cycling. Organic matter input by living plants is the major energy and matter source for soil microorganisms, and higher organic matter inputs are found in highly diverse plant communities. It is therefore relevant to understand how plant diversity alters the soil microbial community and soil organic matter. In general, microbial biomass and microbial diversity increase with increasing plant diversity; however, the mechanisms driving these interactions are not fully explored. Working with soils from a long-term biodiversity experiment (The Jena Experiment), we investigated how changes in soil microbial dynamics related to plant diversity were explained by biotic and abiotic factors. Microbial biomass quantification and differentiation of bacterial and fungal groups were done by phospholipid fatty acid (PLFA) analysis; terminal-restriction fragment length polymorphism was used to determine bacterial diversity. Gram-negative (G-) bacteria predominated at high plant diversity, Gram-positive (G+) bacteria were more abundant at low plant diversity, and saprotrophic fungi were independent of plant diversity. The separation between G- and G+ bacteria in relation to plant diversity was governed by a difference in carbon-input-related factors (e.g. root biomass and soil moisture) between plant diversity levels. Moreover, bacterial diversity increased with plant diversity while the evenness of the PLFA markers decreased. Our results showed that higher plant diversity favors carbon-input-related factors, and this in turn favors the development of microbial communities specialized in utilizing new carbon inputs (i.e. G- bacteria), which contribute to the export of new C from plants to soils.

  11. Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Massey, Steven J.

    2017-01-01

    A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
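
    A minimal illustration of the failure-probability idea (flutter occurring inside the flight envelope), using plain Monte Carlo on a toy surrogate rather than the paper's point-collocation scheme; all inputs, the surrogate, and the envelope limit are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical samples of uncertain inputs (structural stiffness scaling,
# lumped-mass scaling), propagated through a stand-in "flutter solver"
n = 20_000
stiffness = rng.normal(1.0, 0.05, n)
mass = rng.normal(1.0, 0.08, n)

# toy surrogate: flutter dynamic pressure grows with stiffness, drops with mass
q_flutter = 55.0 * stiffness / np.sqrt(mass)     # kPa, illustrative
q_envelope = 50.0                                # max dynamic pressure in the envelope

# failure = flutter occurs inside the flight envelope
p_fail = np.mean(q_flutter < q_envelope)
print(f"estimated failure probability: {p_fail:.4f}")
```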

  12. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and thus it can lead to a GMM with a smaller spread than the input. Second, the algorithm has a singularity problem, as it sometimes stops the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into anisotropic Gaussian functions. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
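
    One plausible reading of the Gaussian-input idea, sketched in 1-D: each input is treated as a Gaussian rather than a point, its variance is added to the component variance in the E-step density and is added back in the M-step update, so the fitted mixture cannot become narrower than the input. This is an illustrative sketch under those assumptions, not the author's code.

```python
import numpy as np

def gaussian_input_gmm_1d(x, s2, k=2, n_iter=100, seed=0):
    """EM for a 1-D mixture where each input i is a Gaussian N(x[i], s2[i])
    rather than a point; the input variance enters both the E-step density
    and the M-step variance update."""
    rng = np.random.default_rng(seed)
    n = x.size
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities use the convolved variance var_k + s2_i
        tot = var[None, :] + s2[:, None]                      # shape (n, k)
        logp = (np.log(w)[None, :]
                - 0.5 * np.log(2.0 * np.pi * tot)
                - 0.5 * (x[:, None] - mu[None, :]) ** 2 / tot)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: each input contributes its own variance s2_i back,
        # so the mixture keeps at least the spread of the input
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * ((x[:, None] - mu[None, :]) ** 2 + s2[:, None])).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.5, 300)])
s2 = np.full(x.size, 0.25)        # assumed per-input ("voxel width") variance
print(gaussian_input_gmm_1d(x, s2))
```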

  13. All-Electronic Quantification of Neuropeptide-Receptor Interaction Using a Bias-Free Functionalized Graphene Microelectrode.

    PubMed

    Ping, Jinglei; Vishnubhotla, Ramya; Xi, Jin; Ducos, Pedro; Saven, Jeffery G; Liu, Renyu; Johnson, Alan T Charlie

    2018-05-22

    Opioid neuropeptides play a significant role in pain perception, appetite regulation, sleep, memory, and learning. Advances in understanding of opioid peptide physiology are held back by the lack of methodologies for real-time quantification of affinities and kinetics of the opioid neuropeptide-receptor interaction at levels typical of endogenous secretion (<50 pM) in biosolutions with physiological ionic strength. To address this challenge, we developed all-electronic opioid-neuropeptide biosensors based on graphene microelectrodes functionalized with a computationally redesigned water-soluble μ-opioid receptor. We used the functionalized microelectrode in a bias-free charge measurement configuration to measure the binding kinetics and equilibrium binding properties of the engineered receptor with [d-Ala 2 , N-MePhe 4 , Gly-ol]-enkephalin and β-endorphin at picomolar levels in real time.

  14. Lesion Quantification in Dual-Modality Mammotomography

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zheng, Yibin; More, Mitali J.; Goodale, Patricia J.; Williams, Mark B.

    2007-02-01

    This paper describes a novel x-ray/SPECT dual-modality breast imaging system that provides 3D structural and functional information. While only a limited number of views on one side of the breast can be acquired due to mechanical and time constraints, we developed a technique to compensate for the limited-angle artifact in reconstructed images and accurately estimate both the lesion size and radioactivity concentration. Various angular sampling strategies were evaluated using both simulated and experimental data. It was demonstrated that quantification of lesion size to an accuracy of 10% and quantification of radioactivity to an accuracy of 20% are feasible from limited-angle data acquired with clinically practical dosage and acquisition time.

  15. Existence conditions for unknown input functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, T.; MacDougall, S.; Sreeram, V.; Trinh, H.

    2013-01-01

    This article presents necessary and sufficient conditions for the existence and design of an unknown input functional observer. The existence of the observer can be verified by computing a nullspace of a known matrix and testing some matrix rank conditions. The existence of the observer does not require the satisfaction of the observer matching condition (i.e. Equation (16) in Hou and Muller 1992, 'Design of Observers for Linear Systems with Unknown Inputs', IEEE Transactions on Automatic Control, 37, 871-875), is not limited to estimating scalar functionals and allows for arbitrary pole placement. The proposed observer always exists when a state observer exists for the unknown input system, and furthermore, the proposed observer can exist even in some instances when an unknown input state observer does not exist.

  16. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive sampling methods such as Monte Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
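
    The basic idea of a polynomial chaos representation can be shown non-intrusively for a single Gaussian input: project a toy output onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and read the mean and variance off the coefficients. The model function and truncation order below are arbitrary illustrations, not part of the intrusive solver described above.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def model(xi):
    """Toy output depending on one standard normal input (illustrative)."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 5
nodes, weights = He.hermegauss(30)            # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)      # renormalize to the standard normal pdf

# spectral projection: c_k = E[model(xi) * He_k(xi)] / k!
coeffs = np.array([
    np.sum(weights * model(nodes) * He.hermeval(nodes, [0.0] * k + [1.0])) / factorial(k)
    for k in range(order + 1)
])

mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("PC coefficients:", np.round(coeffs, 4))
print("mean:", mean, "variance:", variance)
```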

  17. Development and application of damage assessment modeling: example assessment for the North Cape oil spill.

    PubMed

    McCay, Deborah French

    2003-01-01

    Natural resource damage assessment (NRDA) models for oil spills have been under development since 1984. Generally applicable (simplified) versions with built-in data sets are included in US government regulations for NRDAs in US waters. The most recent version of these models is SIMAP (Spill Impact Model Application Package), which contains oil fates and effects models that may be applied to any spill event and location in marine or freshwater environments. It is often not cost-effective or even possible to quantify spill impacts using field data collections. Modeling allows quantification of spill impacts using as much site-specific data as available, either as input or as validation of model results. SIMAP was used for the North Cape oil spill in Rhode Island (USA) in January 1996, for injury quantification in the first and largest NRDA case to be performed under the 1996 Oil Pollution Act NRDA regulations. The case was successfully settled in 1999. This paper, which contains a description of the model and application to the North Cape spill, delineates and demonstrates the approach.

  18. On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza

    2015-12-01

    In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
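
    What a first-order Sobol' index measures can be illustrated with a pick-freeze Monte Carlo estimator on a toy capacity-like output; the paper instead computes the indices from a sparse PCE, and the stand-in model and input ranges below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def capacity(porosity, radius, diffusivity):
    """Toy stand-in for a cell-capacity output with three uncertain inputs."""
    return 2.0 - 3.0 * porosity - 2.0e5 * radius + 0.05 * np.log(diffusivity)

def sample_inputs(n):
    return np.column_stack([
        rng.uniform(0.30, 0.50, n),       # electrode porosity [-]
        rng.uniform(1e-6, 5e-6, n),       # particle radius [m]
        rng.uniform(1e-14, 1e-13, n),     # solid diffusivity [m^2/s]
    ])

n = 100_000
A, B = sample_inputs(n), sample_inputs(n)
yA, yB = capacity(*A.T), capacity(*B.T)
var_y = yA.var()

# pick-freeze (Saltelli) estimator of first-order Sobol' indices
for i, name in enumerate(["porosity", "radius", "diffusivity"]):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S_i = np.mean(yA * (capacity(*ABi.T) - yB)) / var_y
    print(f"first-order Sobol index, {name}: {S_i:.2f}")
```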

  19. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those obtained when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  20. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
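
    A decomposition in the same spirit (though not identical to the factor analysis used here) can be sketched with non-negative matrix factorization, since both the time-activity curves and the voxel loadings are non-negative; the synthetic blood and tissue curves below are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# synthetic dynamic study: voxels-by-frames matrix mixing two underlying factors
t = np.linspace(0.5, 60.0, 30)                    # frame mid-times [min]
blood_tac = 10.0 * np.exp(-t / 3.0)               # fast-clearing blood-pool factor
tissue_tac = 4.0 * (1.0 - np.exp(-t / 15.0))      # slowly accumulating tissue factor

n_vox = 500
frac = rng.uniform(0.0, 1.0, n_vox)[:, None]      # per-voxel blood fraction
Y = frac * blood_tac + (1.0 - frac) * tissue_tac
Y = np.clip(Y + rng.normal(0.0, 0.2, Y.shape), 0.0, None)

# two-factor non-negative decomposition Y ~ W @ H; rows of H play the role of
# the blood and tissue time-activity curves, columns of W the voxel loadings
nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
W = nmf.fit_transform(Y)
H = nmf.components_
print("factor curve matrix H has shape", H.shape)  # (2, 30)
```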

  1. Significance of Input Correlations in Striatal Function

    PubMed Central

    Yim, Man Yi; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480

  2. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    PubMed

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  3. BLISS is a versatile and quantitative method for genome-wide profiling of DNA double-strand breaks.

    PubMed

    Yan, Winston X; Mirzazadeh, Reza; Garnerone, Silvano; Scott, David; Schneider, Martin W; Kallas, Tomasz; Custodio, Joaquin; Wernersson, Erik; Li, Yinqing; Gao, Linyi; Federova, Yana; Zetsche, Bernd; Zhang, Feng; Bienko, Magda; Crosetto, Nicola

    2017-05-12

    Precisely measuring the location and frequency of DNA double-strand breaks (DSBs) along the genome is instrumental to understanding genomic fragility, but current methods are limited in versatility, sensitivity or practicality. Here we present Breaks Labeling In Situ and Sequencing (BLISS), featuring the following: (1) direct labelling of DSBs in fixed cells or tissue sections on a solid surface; (2) low-input requirement by linear amplification of tagged DSBs by in vitro transcription; (3) quantification of DSBs through unique molecular identifiers; and (4) easy scalability and multiplexing. We apply BLISS to profile endogenous and exogenous DSBs in low-input samples of cancer cells, embryonic stem cells and liver tissue. We demonstrate the sensitivity of BLISS by assessing the genome-wide off-target activity of two CRISPR-associated RNA-guided endonucleases, Cas9 and Cpf1, observing that Cpf1 has higher specificity than Cas9. Our results establish BLISS as a versatile, sensitive and efficient method for genome-wide DSB mapping in many applications.

  4. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.

  5. An overview of particulate emissions from residential biomass combustion

    NASA Astrophysics Data System (ADS)

    Vicente, E. D.; Alves, C. A.

    2018-01-01

    Residential biomass burning has been pointed out as one of the largest sources of fine particles in the global troposphere with serious impacts on air quality, climate and human health. Quantitative estimations of the contribution of this source to the atmospheric particulate matter levels are hard to obtain, because emission factors vary greatly with wood type, combustion equipment and operating conditions. Updated information should improve not only regional and global biomass burning emission inventories, but also the input for atmospheric models. In this work, an extensive tabulation of particulate matter emission factors obtained worldwide is presented and critically evaluated. Existing quantifications and the suitability of specific organic markers to assign the input of residential biomass combustion to the ambient carbonaceous aerosol are also discussed. Based on these organic markers or other tracers, estimates of the contribution of this sector to observed particulate levels by receptor models for different regions around the world are compiled. Key areas requiring future research are highlighted and briefly discussed.

  6. Quantifying the vitamin D economy.

    PubMed

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. © The Author(s) 2014. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    NASA Astrophysics Data System (ADS)

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  8. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    PubMed

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging as Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive the continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we have proposed a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules when the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.

  9. Semi-quantitative assessment of pulmonary perfusion in children using dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Thong, William E.; Ou, Phalla

    2013-03-01

    This paper addresses the semi-quantitative assessment of pulmonary perfusion from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a study population mainly composed of children with pulmonary malformations. The automatic analysis approach proposed is based on the indicator-dilution theory introduced in 1954. First, a robust method is developed to segment the pulmonary artery and the lungs from anatomical MRI data, exploiting 2D and 3D mathematical morphology operators. Second, the time-dependent contrast signal of the lung regions is deconvolved by the arterial input function for the assessment of the local hemodynamic system parameters, i.e. mean transit time, pulmonary blood volume and pulmonary blood flow. The discrete deconvolution method implemented here is a truncated singular value decomposition (tSVD) method. Parametric images for the entire lungs are generated as additional elements for diagnosis and quantitative follow-up. The preliminary results attest to the feasibility of perfusion quantification in pulmonary DCE-MRI and open an interesting alternative to scintigraphy for this type of evaluation, to be considered at least as a preliminary step in the diagnostic work-up, given the wide availability of the technique and its non-invasive nature.
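
    The deconvolution step can be sketched compactly: build the convolution (Toeplitz) matrix from the arterial input function, invert it with a truncated SVD, and read blood flow and mean transit time off the recovered scaled residue function. The curves, truncation threshold, and parameter values below are illustrative, not those of the study.

```python
import numpy as np
from scipy.linalg import toeplitz

dt = 1.0                                     # frame spacing [s]
t = np.arange(0.0, 60.0, dt)

# synthetic curves: arterial input function, and a tissue curve generated from
# a known flow F and residue function R(t) = exp(-t/MTT)
aif = 8.0 * (t / 6.0) ** 2 * np.exp(-t / 6.0)
F_true, mtt_true = 0.02, 8.0                 # [1/s], [s]
tissue = dt * np.convolve(aif, F_true * np.exp(-t / mtt_true))[:t.size]

# convolution matrix A such that tissue = A @ (F * R)
A = dt * toeplitz(aif, np.zeros_like(aif))

# truncated SVD pseudo-inverse: drop singular values below 10% of the largest
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.10 * s.max(), 1.0 / s, 0.0)
FR = Vt.T @ (s_inv * (U.T @ tissue))         # recovered scaled residue F*R(t)

F_est = FR.max()                             # flow = peak of the scaled residue
mtt_est = dt * FR.sum() / F_est              # MTT = area / peak (central volume theorem)
print(f"F ~ {F_est:.3f} 1/s (true {F_true}), MTT ~ {mtt_est:.1f} s (true {mtt_true})")
```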

  10. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    NASA Astrophysics Data System (ADS)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain, dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2>0.83 and α>0.91 between modelled and measured PM10 mass), with a good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would solve the limitations of each of the models, by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources, and to obtain a first quantification of their contributions to the PM mass, and the subsequent application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.

  11. Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Day, S. J.; Joakim, B.

    2016-12-01

    We present new high-resolution tsunami wave propagation and coastal inundation simulations for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out not to have an influence on the resulting tsunami waves at the coast. We model the impact of another parameter linearly as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator, in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.

  12. Inhibition of [11C]mirtazapine binding by alpha2-adrenoceptor antagonists studied by positron emission tomography in living porcine brain.

    PubMed

    Smith, Donald F; Dyve, Suzan; Minuzzi, Luciano; Jakobsen, Steen; Munk, Ole L; Marthi, Katalin; Cumming, Paul

    2006-06-15

    We have developed [(11)C]mirtazapine as a ligand for PET studies of antidepressant binding in living brain. However, previous studies have determined neither optimal methods for quantification of [(11)C]mirtazapine binding nor the pharmacological identity of this binding. To obtain that information, we have now mapped the distribution volume (V(d)) of [(11)C]mirtazapine relative to the arterial input in the brain of three pigs, in a baseline condition and after pretreatment with excess cold mirtazapine (3 mg/kg). Baseline V(d) ranged from 6 ml/ml in cerebellum to 18 ml/ml in frontal cortex, with some evidence for a small self-displaceable binding component in the cerebellum. Regional binding potentials (pBs) obtained by a constrained two-compartment model, using the V(d) observation in cerebellum, were consistently higher than pBs obtained by other arterial input or reference tissue methods. We found that adequate quantification of pB was obtained using the simplified reference tissue method. Concomitant PET studies with [(15)O]-water indicated that mirtazapine challenge increased CBF uniformly in cerebellum and other brain regions, supporting the use of this reference tissue for calculation of [(11)C]mirtazapine pB. Displacement by mirtazapine was complete in the cerebral cortex, but only 50% in diencephalon, suggesting the presence of multiple binding sites of differing affinities in that tissue. Competition studies with yohimbine and RX 821002 showed decreases in [(11)C]mirtazapine pB throughout the forebrain; use of the multireceptor version of the Michaelis-Menten equation indicated that 42% of [(11)C]mirtazapine binding in cortical regions is displaceable by yohimbine. Thus, PET studies confirm that [(11)C]mirtazapine affects alpha(2)-adrenoceptor binding sites in living brain. (c) 2006 Wiley-Liss, Inc.

  13. Uncertainty Quantification of Evapotranspiration and Infiltration from Modeling and Historic Time Series at the Savannah River F-Area

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Flach, G. P.

    2012-12-01

    The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning probability distributions to the input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Bair-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
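
    As an illustration of propagating input uncertainty through one of the listed PET models, the sketch below applies Monte Carlo sampling to the Hargreaves-Samani equation; the coefficient 0.0023 and the functional form follow the standard published formula, while the input distributions are invented and are not SRS data.

```python
import numpy as np

rng = np.random.default_rng(0)

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference PET [mm/day]; ra is extraterrestrial
    radiation expressed as equivalent evaporation [mm/day]."""
    return 0.0023 * ra * (t_mean + 17.8) * np.sqrt(np.clip(t_max - t_min, 0.0, None))

# assumed monthly-mean input distributions (illustrative only)
n = 50_000
t_mean = rng.normal(18.0, 1.0, n)      # mean air temperature [deg C]
t_range = rng.normal(10.0, 1.5, n)     # daily temperature range [deg C]
ra = rng.normal(12.5, 0.5, n)          # extraterrestrial radiation [mm/day]

pet = hargreaves_samani(t_mean, t_mean + t_range / 2, t_mean - t_range / 2, ra)
print(f"PET mean = {pet.mean():.2f} mm/day, "
      f"95% interval = [{np.percentile(pet, 2.5):.2f}, {np.percentile(pet, 97.5):.2f}]")
```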

  14. Quantification of Dynamic 11C-Phenytoin PET Studies.

    PubMed

    Mansor, Syahir; Boellaard, Ronald; Froklage, Femke E; Bakker, Esther D M; Yaqub, Maqsood; Voskuyl, Rob A; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan

    2015-09-01

    The overexpression of P-glycoprotein (Pgp) is thought to be an important mechanism of pharmacoresistance in epilepsy. Recently, (11)C-phenytoin has been evaluated preclinically as a tracer for Pgp. The aim of the present study was to assess the optimal plasma kinetic model for quantification of (11)C-phenytoin studies in humans. Dynamic (11)C-phenytoin PET scans of 6 healthy volunteers with arterial sampling were acquired twice on the same day and analyzed using single- and 2-tissue-compartment models with and without a blood volume parameter. Global and regional test-retest (TRT) variability was determined for both plasma to tissue rate constant (K1) and volume of distribution (VT). According to the Akaike information criterion, the reversible single-tissue-compartment model with blood volume parameter was the preferred plasma input model. Mean TRT variability ranged from 1.5% to 16.9% for K1 and from 0.5% to 5.8% for VT. Larger volumes of interest showed better repeatabilities than smaller regions. A 45-min scan provided essentially the same K1 and VT values as a 60-min scan. A reversible single-tissue-compartment model with blood volume seems to be a good candidate model for quantification of dynamic (11)C-phenytoin studies. Scan duration may be reduced to 45 min without notable loss of accuracy and precision of both K1 and VT, although this still needs to be confirmed under pathologic conditions. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
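
    A sketch of the preferred model — a reversible single-tissue compartment with a fractional blood volume term — fitted to a synthetic tissue curve is shown below; the plasma input, the whole-blood-equals-plasma simplification, and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 5.0 / 60.0                               # 5-second frames, in minutes
t = np.arange(0.0, 60.0, dt)
cp = 120.0 * t * np.exp(-t / 0.8)             # synthetic plasma input [kBq/mL]
cb = cp                                       # whole blood ~ plasma (assumption)

def one_tissue(t, K1, k2, Vb):
    """Reversible single-tissue compartment model with blood volume fraction:
    C_PET(t) = (1 - Vb) * K1 * (Cp conv exp(-k2 t)) + Vb * Cb(t)."""
    ct = dt * np.convolve(cp, K1 * np.exp(-k2 * t))[:t.size]
    return (1.0 - Vb) * ct + Vb * cb

true = (0.12, 0.04, 0.05)                     # K1 [mL/cm3/min], k2 [1/min], Vb [-]
rng = np.random.default_rng(0)
tac = one_tissue(t, *true) + rng.normal(0.0, 0.3, t.size)

popt, _ = curve_fit(one_tissue, t, tac, p0=(0.05, 0.02, 0.03),
                    bounds=([0, 0, 0], [1, 1, 0.3]))
K1, k2, Vb = popt
print(f"K1 = {K1:.3f}, VT = K1/k2 = {K1 / k2:.1f}, Vb = {Vb:.3f}")
```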

  15. Uptake Index of 123I-metaiodobenzylguanidine Myocardial Scintigraphy for Diagnosing Lewy Body Disease

    PubMed Central

    Kamiya, Yoshito; Ota, Satoru; Okumiya, Shintaro; Yamashita, Kosuke; Takaki, Akihiro; Ito, Shigeki

    2017-01-01

    Objective(s): Iodine-123 metaiodobenzylguanidine (123I-MIBG) myocardial scintigraphy has been used to evaluate cardiac sympathetic denervation in Lewy body disease (LBD), including Parkinson’s disease (PD) and dementia with Lewy bodies (DLB). The heart-to-mediastinum ratio (H/M) in PD and DLB is significantly lower than that in Parkinson’s plus syndromes and Alzheimer’s disease. Although this ratio is useful for distinguishing LBD from non-LBD, it fluctuates depending on the system performance of the gamma cameras. Therefore, a new, simple quantification method using 123I-MIBG uptake analysis is required for clinical study. The purpose of this study was to develop a new uptake index with a simple protocol to determine 123I-MIBG uptake on planar images. Methods: The 123I-MIBG input function was obtained from the input counts of the pulmonary artery (PA), which were assessed by analyzing the PA time-activity curves. The heart region of interest used for determining the H/M was used for calculating the uptake index, which was obtained by dividing the heart count by the input count. Results: Forty-eight patients underwent 123I-MIBG chest angiography and planar imaging, after clinical feature assessment and tracer injection. The H/M and 123I-MIBG uptake index were calculated and correlated with clinical features. Values for LBD were significantly lower than those for non-LBD in all analyses (P<0.001). The overlapping ranges between non-LBD and LBD were 2.15 to 2.49 in the H/M method, and 1.04 to 1.22% in the uptake index method. The diagnostic accuracy of the uptake index (area under the curve (AUC), 0.98; sensitivity, 96%; specificity, 91%; positive predictive value (PPV), 90%; negative predictive value (NPV), 93%; and accuracy, 92%) was approximately equal to that of the H/M (AUC, 0.95; sensitivity, 93%; specificity, 91%; PPV, 90%; NPV, 93%; and accuracy, 92%) for discriminating patients with LBD and non-LBD. Conclusion: A simple uptake index method was developed using 123I-MIBG planar imaging and the input counts determined by analyzing chest radioisotope angiography images of the PA. The diagnostic accuracy of the uptake index was approximately equal to that of the H/M for discriminating patients with LBD and non-LBD. PMID:28840137

  16. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    NASA Astrophysics Data System (ADS)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation is critical. The uncertainty in defining the impact of an input on performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for quantification of epistemic uncertainty on a large-scale problem, a combined-cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and an improvement in predicted system performance.

  17. Qualitative and Quantitative Detection of Botulinum Neurotoxins from Complex Matrices: Results of the First International Proficiency Test

    PubMed Central

    Worbs, Sylvia; Fiebig, Uwe; Zeleny, Reinhard; Schimmel, Heinz; Rummel, Andreas; Luginbühl, Werner; Dorner, Brigitte G.

    2015-01-01

    In the framework of the EU project EQuATox, a first international proficiency test (PT) on the detection and quantification of botulinum neurotoxins (BoNT) was conducted. Sample materials included BoNT serotypes A, B and E spiked into buffer, milk, meat extract and serum. Different methods were applied by the participants combining different principles of detection, identification and quantification. Based on qualitative assays, 95% of all results reported were correct. Successful strategies for BoNT detection were based on a combination of complementary immunological, MS-based and functional methods or on suitable functional in vivo/in vitro approaches (mouse bioassay, hemidiaphragm assay and Endopep-MS assay). Quantification of BoNT/A, BoNT/B and BoNT/E was performed by 48% of participating laboratories. It turned out that precise quantification of BoNT was difficult, resulting in a substantial scatter of quantitative data. This was especially true for results obtained by the mouse bioassay which is currently considered as “gold standard” for BoNT detection. The results clearly demonstrate the urgent need for certified BoNT reference materials and the development of methods replacing animal testing. In this context, the BoNT PT provided the valuable information that both the Endopep-MS assay and the hemidiaphragm assay delivered quantitative results superior to the mouse bioassay. PMID:26703724

  18. Quantification of anthropogenic impact on groundwater dependent terrestrial ecosystem using geochemical and isotope tools combined with 3-D flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Zurek, A. J.; Witczak, S.; Dulinski, M.; Wachniew, P.; Rozanski, K.; Kania, J.; Postawa, A.; Karczewski, J.; Moscicki, W. J.

    2014-08-01

    A dedicated study was launched in 2010 with the main aim to better understand the functioning of groundwater dependent terrestrial ecosystem (GDTE) located in southern Poland. The GDTE consists of a valuable forest stand (Niepolomice Forest) and associated wetland (Wielkie Bloto fen). A wide range of tools (environmental tracers, geochemistry, geophysics, 3-D flow and transport modeling) was used. The research was conducted along three major directions: (i) quantification of the dynamics of groundwater flow in various parts of the aquifer associated with GDTE, (ii) quantification of the degree of interaction between the GDTE and the aquifer, and (iii) 3-D modeling of groundwater flow in the vicinity of the studied GDTE and quantification of possible impact of enhanced exploitation of the aquifer on the status of GDTE. Environmental tracer data (tritium, stable isotopes of water) strongly suggest that upward leakage of the aquifer contributes significantly to the present water balance of the studied wetland and associated forest. Physico-chemical parameters of water (pH, conductivity, Na / Cl ratio) confirm this notion. Model runs indicate that prolonged groundwater abstraction through the newly-established network of water supply wells, conducted at maximum permitted capacity (ca. 10 000 m3 d-1), may trigger drastic changes in the ecosystem functioning, eventually leading to its degradation.

  19. Resolution and quantification accuracy enhancement of functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; Shen, Linbang

    2017-05-01

    Functional delay and sum (FDAS) is a novel beamforming algorithm introduced for three-dimensional (3D) acoustic source identification with solid spherical microphone arrays. Because it offers significantly attenuated sidelobes at high computational speed, the algorithm promises to play an important role in interior acoustic source identification. However, it presents some intrinsic imperfections, specifically poor spatial resolution and low quantification accuracy. This paper focuses on conquering these imperfections by ridge detection (RD) and the deconvolution approach for the mapping of acoustic sources (DAMAS). The suggested methods are referred to as FDAS+RD and FDAS+RD+DAMAS. Both computer simulations and experiments are utilized to validate their effects. Several interesting conclusions have emerged: (1) FDAS+RD and FDAS+RD+DAMAS can both dramatically ameliorate FDAS's spatial resolution and at the same time inherit its advantages. (2) Compared to the conventional DAMAS, FDAS+RD+DAMAS enjoys the same super spatial resolution, stronger sidelobe attenuation capability, and a speed more than two hundred times faster. (3) FDAS+RD+DAMAS can effectively conquer FDAS's low quantification accuracy. Whether the focus distance is equal to the distance from the source to the array center or not, it can quantify the source average pressure contribution accurately. This study will be of great significance to the accurate and quick localization and quantification of acoustic sources in cabin environments.
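
    As a baseline for comparison, plain frequency-domain delay-and-sum beamforming on a toy line array is sketched below (FDAS itself post-processes the cross-spectral matrix of a solid spherical array differently); the geometry, frequency, and single monopole source are invented.

```python
import numpy as np

c = 343.0                         # speed of sound [m/s]
f = 4000.0                        # analysis frequency [Hz]
k = 2.0 * np.pi * f / c

# toy line array and a 1-D scan line one metre away (illustrative geometry)
mics = np.column_stack([np.linspace(-0.15, 0.15, 16), np.zeros(16), np.zeros(16)])
scan = np.column_stack([np.linspace(-0.5, 0.5, 101), np.zeros(101), np.full(101, 1.0)])
src = np.array([0.2, 0.0, 1.0])

# simulated microphone pressures from a unit monopole at `src`
r_src = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r_src) / r_src
C = np.outer(p, p.conj())                         # cross-spectral matrix (rank one)

# delay-and-sum: steer a unit-norm vector to each scan point and evaluate w^H C w
das = np.empty(scan.shape[0])
for i, g in enumerate(scan):
    r = np.linalg.norm(mics - g, axis=1)
    w = np.exp(-1j * k * r) / r
    w /= np.linalg.norm(w)
    das[i] = np.real(w.conj() @ C @ w)

print("map peak at x =", scan[np.argmax(das), 0], "m (source at x = 0.2 m)")
```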

  20. MRI-based methods for quantification of the cerebral metabolic rate of oxygen

    PubMed Central

    Rodgers, Zachary B; Detre, John A

    2016-01-01

    The brain depends almost entirely on oxidative metabolism to meet its significant energy requirements. As such, the cerebral metabolic rate of oxygen (CMRO2) represents a key measure of brain function. Quantification of CMRO2 has helped elucidate brain functional physiology and holds potential as a clinical tool for evaluating neurological disorders including stroke, brain tumors, Alzheimer’s disease, and obstructive sleep apnea. In recent years, a variety of magnetic resonance imaging (MRI)-based CMRO2 quantification methods have emerged. Unlike positron emission tomography – the current “gold standard” for measurement and mapping of CMRO2 – MRI is non-invasive, relatively inexpensive, and ubiquitously available in modern medical centers. All MRI-based CMRO2 methods are based on modeling the effect of paramagnetic deoxyhemoglobin on the magnetic resonance signal. The various methods can be classified in terms of the MRI contrast mechanism used to quantify CMRO2: T2*, T2′, T2, or magnetic susceptibility. This review article provides an overview of MRI-based CMRO2 quantification techniques. After a brief historical discussion motivating the need for improved CMRO2 methodology, current state-of-the-art MRI-based methods are critically appraised in terms of their respective tradeoffs between spatial resolution, temporal resolution, and robustness, all of critical importance given the spatially heterogeneous and temporally dynamic nature of brain energy requirements. PMID:27089912

  1. Neurochemical and BOLD responses during neuronal activation measured in the human visual cortex at 7 Tesla

    PubMed Central

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; DiNuzzo, Mauro; Deelchand, Dinesh K; Emir, Uzay E; Eberly, Lynn E; Mangia, Silvia

    2015-01-01

    Several laboratories have consistently reported small concentration changes in lactate, glutamate, aspartate, and glucose in the human cortex during prolonged stimuli. However, whether such changes correlate with blood oxygenation level–dependent functional magnetic resonance imaging (BOLD-fMRI) signals has not been determined. The present study aimed at characterizing the relationship between metabolite concentrations and BOLD-fMRI signals during a block-designed paradigm of visual stimulation. Functional magnetic resonance spectroscopy (fMRS) and fMRI data were acquired from 12 volunteers. A short echo-time semi-LASER localization sequence optimized for 7 Tesla was used to achieve full signal-intensity MRS data. The group analysis confirmed that during stimulation lactate and glutamate increased by 0.26±0.06 μmol/g (~30%) and 0.28±0.03 μmol/g (~3%), respectively, while aspartate and glucose decreased by 0.20±0.04 μmol/g (~5%) and 0.19±0.03 μmol/g (~16%), respectively. The single-subject analysis revealed that BOLD-fMRI signals were positively correlated with glutamate and lactate concentration changes. The results show a linear relationship between metabolic and BOLD responses in the presence of strong excitatory sensory inputs, and support the notion that increased functional energy demands are sustained by oxidative metabolism. In addition, BOLD signals were inversely correlated with baseline γ-aminobutyric acid concentration. Finally, we discussed the critical importance of taking into account linewidth effects on metabolite quantification in fMRS paradigms. PMID:25564236

  2. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analyses focus on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of number, type and location of data.

  3. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
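
    For the Gaussian model, the closed-form distances mentioned above are compact enough to state directly. The Python sketch below (illustrative, not the authors' implementation) evaluates the Hellinger distance and the Kullback-Leibler divergence between two normal intensity distributions parameterized by mean and standard deviation.

      import numpy as np

      def hellinger_gauss(mu1, s1, mu2, s2):
          # Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2), closed form.
          prefactor = np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2))
          h2 = 1.0 - prefactor * np.exp(-0.25 * (mu1 - mu2)**2 / (s1**2 + s2**2))
          return np.sqrt(h2)

      def kl_gauss(mu1, s1, mu2, s2):
          # Kullback-Leibler divergence D( N(mu1, s1^2) || N(mu2, s2^2) ), closed form.
          return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5

      # Example: two fluorescence intensity distributions with a modest mean shift.
      print(hellinger_gauss(100.0, 15.0, 110.0, 15.0))
      print(kl_gauss(100.0, 15.0, 110.0, 15.0))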

  4. Modelling land use change in the Ganga basin

    NASA Astrophysics Data System (ADS)

    Moulds, Simon; Mijic, Ana; Buytaert, Wouter

    2014-05-01

    Over recent decades the green revolution in India has driven substantial environmental change. Modelling experiments have identified northern India as a "hot spot" of land-atmosphere coupling strength during the boreal summer. However, there is a wide range of sensitivity of atmospheric variables to soil moisture between individual climate models. The lack of a comprehensive land use change dataset to force climate models has been identified as a major contributor to model uncertainty. This work aims to construct a monthly time series dataset of land use change for the period 1966 to 2007 for northern India to improve the quantification of regional hydrometeorological feedbacks. The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the Aqua and Terra satellites provides near-continuous remotely sensed datasets from 2000 to the present day. However, the quality and availability of satellite products before 2000 is poor. To complete the dataset MODIS images are extrapolated back in time using the Conversion of Land Use and its Effects at Small regional extent (CLUE-S) modelling framework, recoded in the R programming language to overcome limitations of the original interface. Non-spatial estimates of land use area published by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) for the study period, available on an annual, district-wise basis, are used as a direct model input. Land use change is allocated spatially as a function of biophysical and socioeconomic drivers identified using logistic regression. The dataset will provide an essential input to a high-resolution, physically-based land-surface model to generate the lower boundary condition to assess the impact of land use change on regional climate.

  5. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  6. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction suffers strongly from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate built from very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
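
    The core idea, recovering a sparse set of polynomial coefficients from far fewer model runs than candidate basis terms, can be sketched with an ordinary L1 solver standing in for the Bayesian compressive-sensing step. In the Python example below, the toy model, dimensions, and the use of scikit-learn's Lasso are assumptions for illustration, not the WIBCS algorithm itself.

      import numpy as np
      from itertools import combinations_with_replacement
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_dim, n_runs = 20, 60                        # many inputs, few model evaluations
      X = rng.uniform(-1.0, 1.0, size=(n_runs, n_dim))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 0] * X[:, 3]   # sparse "true" model

      # Candidate basis: constant, linear, and degree-2 monomials (231 terms in total).
      terms = [()] + [(i,) for i in range(n_dim)] + \
              list(combinations_with_replacement(range(n_dim), 2))
      Psi = np.column_stack([np.prod(X[:, list(t)], axis=1) for t in terms])

      fit = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(Psi, y)
      active = [(terms[k], round(c, 3)) for k, c in enumerate(fit.coef_) if abs(c) > 1e-3]
      print(active)   # the handful of truly important terms should dominate this set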

  7. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes variance-based adaptive strategies aimed at building a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the final sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  8. Aircraft signal definition for flight safety system monitoring system

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)

    2003-01-01

    A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.

  9. Genetic inhibition of neurotransmission reveals role of glutamatergic input to dopamine neurons in high-effort behavior.

    PubMed

    Hutchison, M A; Gu, X; Adrover, M F; Lee, M R; Hnasko, T S; Alvarez, V A; Lu, W

    2018-05-01

    Midbrain dopamine neurons are crucial for many behavioral and cognitive functions. As the major excitatory input, glutamatergic afferents are important for control of the activity and plasticity of dopamine neurons. However, the role of glutamatergic input as a whole onto dopamine neurons remains unclear. Here we developed a mouse line in which glutamatergic inputs onto dopamine neurons are specifically impaired, and utilized this genetic model to directly test the role of glutamatergic inputs in dopamine-related functions. We found that while motor coordination and reward learning were largely unchanged, these animals showed prominent deficits in effort-related behavioral tasks. These results provide genetic evidence that glutamatergic transmission onto dopaminergic neurons underlies incentive motivation, a willingness to exert high levels of effort to obtain reinforcers, and have important implications for understanding the normal function of the midbrain dopamine system.

  10. On the complex quantification of risk: systems-based perspective on terrorism.

    PubMed

    Haimes, Yacov Y

    2011-08-01

    This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.

  11. Performance Testing of a Prototypic Annular Linear Induction Pump for Fission Surface Power

    NASA Technical Reports Server (NTRS)

    Polzin, K. A.; Pearson, J. B.; Schoenfeld, M. P.; Webster, K.; Houts, M. G.; Godfroy, T. J.; Bossard, J. A.

    2010-01-01

    Results of performance testing of an annular linear induction pump are presented. The pump electromagnetically pumps liquid metal (NaK) through a circuit specially designed to allow for quantification of the performance. Testing was conducted over a range of conditions, including frequencies of 33, 36, 39, and 60 Hz, liquid metal temperatures from 25 to 525 C, and input voltages from 5 to 120 V. Pump performance spanned a range of flow rates from roughly 0.16 to 5.7 L/s (2.5 to 90 gpm), and pressure head <1 to 90 kPa (<0.145 to 13 psi). The maximum efficiency measured during testing was slightly greater than 6%. The efficiency was fairly insensitive to input frequency from 33 to 39 Hz, and was markedly lower at 60 Hz. In addition, the efficiency decreased as the NaK temperature was raised. While the pump was powered, the fluid responded immediately to changes in the input power level, but when power was removed altogether, there was a brief slow-down period before the fluid would come to rest. The performance of the pump operating on a variable frequency drive providing 60 Hz power compared favorably with the same pump operating on 60 Hz power drawn directly from the electrical grid.
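
    The efficiency figure quoted above is simply the ratio of hydraulic power delivered to the NaK to the electrical input power. A short Python check with illustrative operating-point values (not measured data from this test series):

      flow_rate = 4.0e-3        # m^3/s of NaK (roughly 63 gpm), illustrative
      pressure_rise = 60.0e3    # Pa of developed pressure head, illustrative
      electrical_power = 4.0e3  # W of input power to the pump, illustrative

      hydraulic_power = flow_rate * pressure_rise       # W delivered to the fluid
      efficiency = hydraulic_power / electrical_power
      print(f"Pump efficiency: {100 * efficiency:.1f}%")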

  12. Study on embodied CO2 transfer between the Jing-Jin-Ji region and other regions in China: a quantification using an interregional input-output model.

    PubMed

    Chen, Mengmeng; Wu, Sanmang; Lei, Yalin; Li, Shantong

    2018-05-01

    The Jing-Jin-Ji region (i.e., Beijing, Tianjin, and Hebei) is China's key development region, but it also suffers the most serious air pollution in China. High fossil fuel consumption is the major source of both carbon dioxide (CO2) emissions and air pollutants. Therefore, it is important to reveal the sources of CO2 emissions in order to control air pollution in the Jing-Jin-Ji region. In this study, an interregional input-output model was applied to quantitatively estimate the embodied CO2 transfer between the Jing-Jin-Ji region and other regions in China using China's interregional input-output data in 2010. The results indicated that there was a significant difference in production-based CO2 emissions across China, and furthermore, that the Jing-Jin-Ji region and its surrounding regions were the main sources of production-based CO2 emissions in China. Hebei Province exported a large amount of embodied CO2 to meet the investment, consumption, and export demands of Beijing and Tianjin. The Jing-Jin-Ji region exported a great deal of embodied CO2 to the coastal provinces of southeast China and imported it from neighboring provinces.
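
    The interregional input-output calculation behind such embodied-CO2 estimates can be pictured with a toy Leontief model. In the Python sketch below, the technical-coefficient matrix A, final demand y, and emission intensities e are made-up values for two aggregate regions, not figures from the 2010 Chinese tables.

      import numpy as np

      A = np.array([[0.20, 0.05],    # intermediate use of region-1 output per unit output
                    [0.10, 0.25]])   # intermediate use of region-2 output per unit output
      y = np.array([100.0, 80.0])    # final demand met by each region
      e = np.array([0.8, 1.2])       # direct CO2 intensity of each region's output

      # Total output required to satisfy final demand: x = (I - A)^(-1) y
      L = np.linalg.inv(np.eye(2) - A)
      x = L @ y

      # flows[i, j]: CO2 emitted in region i to satisfy region j's final demand,
      # i.e. the embodied-CO2 transfer matrix diag(e) @ L @ diag(y).
      flows = np.diag(e) @ L @ np.diag(y)
      print("Total output by region:", x)
      print("Embodied CO2 flows (origin x demand):", flows, sep="\n")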

  13. Estimation of arterial input by a noninvasive image derived method in brain H2 15O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

    A noninvasive method to estimate the input function directly from H2 15O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part in the later phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUCIDIF/AUCAIF ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference, and had high correlation coefficients.
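
    One common way to repair the blunted early portion of an image-derived curve is a single-exponential dispersion model, in which the measured curve is the true input convolved with (1/tau)*exp(-t/tau), so the true curve can be recovered as C_true(t) = C_meas(t) + tau * dC_meas/dt. The Python sketch below applies this textbook correction to a toy curve; it illustrates the general idea, not the authors' specific two-constant fit.

      import numpy as np

      def correct_dispersion(t, c_meas, tau):
          # Undo single-exponential dispersion with time constant tau (seconds).
          return c_meas + tau * np.gradient(c_meas, t)

      t = np.arange(0.0, 120.0, 2.0)                        # 2 s frames over 2 minutes
      c_idif = 50.0 * (t / 30.0) * np.exp(1.0 - t / 30.0)   # toy image-derived curve (kBq/mL)
      c_corrected = correct_dispersion(t, c_idif, tau=8.0)  # sharper, earlier-peaking input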

  14. Quantification of frequency-components contributions to the discharge of a karst spring

    NASA Astrophysics Data System (ADS)

    Taver, V.; Johannet, A.; Vinches, M.; Borrell, V.; Pistre, S.; Bertin, D.

    2013-12-01

    Karst aquifers represent important underground resources for water supplies, providing water to 25% of the population. Nevertheless, such systems are currently underexploited because of their heterogeneity and complexity, which make field work and physical measurements expensive and frequently not representative of the whole aquifer. The systemic paradigm thus appears as a complementary approach to study and model karst aquifers within the framework of non-linear system analysis. The system's input and output signals, namely rainfall and discharge, contain information about the function performed by the physical process. Knowledge about the karst system can therefore be improved using time series analysis, for example Fourier analysis or orthogonal decomposition [1]. Another level of analysis consists in building non-linear models to identify the rainfall/discharge relation, component by component [2]. In this context, this communication proposes to use neural networks first to model the rainfall-runoff relation using frequency components, and second to analyze the models, using the KnoX method [3], in order to quantify the importance of each component. Two different neural models were designed: (i) a recurrent model, which implements a non-linear recurrent model fed by rainfall, ETP and previous estimated discharge; and (ii) a feed-forward model, which implements a non-linear static model fed by rainfall, ETP and previous observed discharges. The first model is known to better represent the rainfall-runoff relation; the second to better predict the discharge based on previous discharge observations. The KnoX method is based on a variable selection method, which simply considers the values of parameters after training without taking into account the non-linear behavior of the model during operation. An improvement of the KnoX method is thus proposed in order to overcome this inadequacy. The proposed method leads to both a ranking and a quantification of the contributions of the input variables, here the frequency components, to the output signal. Applied to the Lez karst aquifer, the combination of frequency decomposition and knowledge extraction improves understanding of the hydrological behavior. Both models and both extraction methods were applied and assessed using a fictitious reference model. A discussion is proposed in order to analyze the efficiency of the methods compared to in situ measurements and tracing. [1] D. Labat et al., 'Rainfall-runoff relations for karst springs. Part II: continuous wavelet and discrete orthogonal multiresolution', Journal of Hydrology, Vol. 238, 2000, pp. 149-178. [2] A. Johannet et al., 'Prediction of Lez Spring Discharge (Southern France) by Neural Networks using Orthogonal Wavelet Decomposition', IJCNN Proceedings, Brisbane, 2012. [3] L. Kong A Siou et al., 'Modélisation hydrodynamique des karsts par réseaux de neurones : Comment dépasser la boîte noire. (Karst hydrodynamic modelling using artificial neural networks: how to surpass the black box?)', Proceedings of the 9th Conference on Limestone Hydrogeology, Besançon, France, 2011.

  15. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
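
    The control-function idea can be sketched in a few lines: a spline through a handful of control points maps a uniform computational coordinate onto a stretched physical coordinate. The Python example below uses an interpolating cubic spline as a stand-in for the smoothed splines described above, with illustrative control points that cluster grid lines near one boundary.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # Control points: pack grid points near x = 0 (e.g., near a wall).
      s_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # computational coordinate
      x_ctrl = np.array([0.0, 0.05, 0.2, 0.55, 1.0])   # desired physical coordinate
      control = CubicSpline(s_ctrl, x_ctrl)

      s_uniform = np.linspace(0.0, 1.0, 41)    # uniformly spaced computational grid
      x_physical = control(s_uniform)          # non-uniformly spaced physical grid
      print(np.diff(x_physical)[:5])           # small spacings near x = 0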

  16. Gaussian entanglement generation from coherence using beam-splitters

    PubMed Central

    Wang, Zhong-Xiao; Wang, Shuhao; Ma, Teng; Wang, Tie-Jun; Wang, Chuan

    2016-01-01

    The generation and quantification of quantum entanglement is crucial for quantum information processing. Here we study the transition of Gaussian correlation under the effect of linear optical beam-splitters. We find the single-mode Gaussian coherence acts as the resource in generating Gaussian entanglement for two squeezed states as the input states. With the help of consecutive beam-splitters, single-mode coherence and quantum entanglement can be converted to each other. Our results reveal that by using finite number of beam-splitters, it is possible to extract all the entanglement from the single-mode coherence even if the entanglement is wiped out before each beam-splitter. PMID:27892537
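
    A standard textbook instance of this conversion is two single-mode squeezed vacua, squeezed along orthogonal quadratures, mixed on a 50:50 beam-splitter. The Python sketch below (an illustration, not the authors' calculation) propagates the Gaussian covariance matrix through the beam-splitter and quantifies the output entanglement by the logarithmic negativity, using the convention that the vacuum covariance matrix is the identity.

      import numpy as np

      r = 0.5                                              # squeezing parameter of each input
      sigma_in = np.diag([np.exp(-2 * r), np.exp(2 * r),   # mode 1 squeezed in x
                          np.exp(2 * r), np.exp(-2 * r)])  # mode 2 squeezed in p

      t = 1.0 / np.sqrt(2.0)                               # 50:50 beam-splitter
      S = np.block([[ t * np.eye(2), t * np.eye(2)],
                    [-t * np.eye(2), t * np.eye(2)]])
      sigma_out = S @ sigma_in @ S.T

      # Logarithmic negativity from the smaller symplectic eigenvalue of the
      # partially transposed state (standard two-mode Gaussian formulas).
      A, B, C = sigma_out[:2, :2], sigma_out[2:, 2:], sigma_out[:2, 2:]
      delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
      nu_minus = np.sqrt((delta - np.sqrt(delta**2 - 4.0 * np.linalg.det(sigma_out))) / 2.0)
      log_neg = max(0.0, -np.log(nu_minus))
      print(log_neg)    # equals 2*r for this configuration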

  17. A model-constrained Monte Carlo method for blind arterial input function estimation in dynamic contrast-enhanced MRI: II. In vivo results

    NASA Astrophysics Data System (ADS)

    Schabel, Matthias C.; DiBella, Edward V. R.; Jensen, Randy L.; Salzman, Karen L.

    2010-08-01

    Accurate quantification of pharmacokinetic model parameters in tracer kinetic imaging experiments requires correspondingly accurate determination of the arterial input function (AIF). Despite significant effort expended on methods of directly measuring patient-specific AIFs in modalities as diverse as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), dynamic positron emission tomography (PET), and perfusion computed tomography (CT), fundamental and technical difficulties have made consistent and reliable achievement of that goal elusive. Here, we validate a new algorithm for AIF determination, the Monte Carlo blind estimation (MCBE) method (which is described in detail and characterized by extensive simulations in a companion paper), by comparing AIFs measured in DCE-MRI studies of eight brain tumor patients with results of blind estimation. Blind AIFs calculated with the MCBE method using a pool of concentration-time curves from a region of normal brain tissue were found to be quite similar to the measured AIFs, with statistically significant decreases in fit residuals observed in six of eight patients. Biases between the blind and measured pharmacokinetic parameters were the dominant source of error. Averaged over all eight patients, the mean biases were +7% in K trans, 0% in kep, -11% in vp and +10% in ve. Corresponding uncertainties (median absolute deviation from the best fit line) were 0.0043 min-1 in K trans, 0.0491 min-1 in kep, 0.29% in vp and 0.45% in ve. The use of a published population-averaged AIF resulted in larger mean biases in three of the four parameters (-23% in K trans, -22% in kep, -63% in vp), with the bias in ve unchanged, and led to larger uncertainties in all four parameters (0.0083 min-1 in K trans, 0.1038 min-1 in kep, 0.31% in vp and 0.95% in ve). When blind AIFs were calculated from a region of tumor tissue, statistically significant decreases in fit residuals were observed in all eight patients despite larger deviations of these blind AIFs from the measured AIFs. The observed decrease in root-mean-square fit residuals between the normal brain and tumor tissue blind AIFs suggests that the local blood supply in tumors is measurably different from that in normal brain tissue and that the proposed method is able to discriminate between the two. We have shown the feasibility of applying the MCBE algorithm to DCE-MRI data acquired in brain, finding generally good agreement with measured AIFs and decreased biases and uncertainties relative to the use of a population-averaged AIF. These results demonstrate that the MCBE algorithm is a useful alternative to direct AIF measurement in cases where acquisition of high-quality arterial input function data is difficult or impossible.
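
    For context, the pharmacokinetic model whose parameters (K trans, kep, vp, ve) are compared above is typically the extended Tofts model, which maps an AIF Cp(t) to a tissue concentration curve. The Python sketch below evaluates that forward model for a toy AIF and illustrative parameter values; it is not the MCBE estimation procedure itself.

      import numpy as np

      def extended_tofts(t, cp, ktrans, kep, vp):
          # Ct(t) = vp*Cp(t) + Ktrans * integral of Cp(u) * exp(-kep*(t - u)) du
          dt = t[1] - t[0]
          conv = np.convolve(cp, np.exp(-kep * t))[:len(t)] * dt
          return vp * cp + ktrans * conv

      t = np.arange(0.0, 5.0, 0.05)          # minutes
      cp = 5.0 * t * np.exp(-2.0 * t)        # toy arterial input function (mM)
      ct = extended_tofts(t, cp, ktrans=0.10, kep=0.50, vp=0.02)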

  18. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion which converts a CT image of a sharp kernel to that of a standard kernel and evaluates its impact on variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and 2 reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of the sharp kernel to the standard kernel with a criterion of minimizing the mean squared error between the input and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI even when the patient CT scans were performed with different kernels.
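
    A minimal version of such a kernel-conversion network can be written in a few lines of Keras. The sketch below is a toy under stated assumptions: the layer count, filter sizes, and the random arrays standing in for paired B50f/B30f slices are all illustrative, and only the overall structure (paired sharp- and standard-kernel images, a small fully convolutional stack, and a mean-squared-error criterion) mirrors the description above.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      # Fully convolutional, so the spatial size is left unspecified; real slices
      # would be full-resolution 512 x 512 CT images in a common HU scaling.
      model = keras.Sequential([
          keras.Input(shape=(None, None, 1)),
          layers.Conv2D(32, 3, padding="same", activation="relu"),
          layers.Conv2D(32, 3, padding="same", activation="relu"),
          layers.Conv2D(1, 3, padding="same"),
      ])
      model.compile(optimizer="adam", loss="mse")

      # x: sharp-kernel (B50f) patches, y: paired standard-kernel (B30f) patches.
      x = np.random.rand(4, 64, 64, 1).astype("float32")
      y = np.random.rand(4, 64, 64, 1).astype("float32")
      model.fit(x, y, epochs=1, batch_size=2, verbose=0)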

  19. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
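
    Empirically, the quantity being modeled above is the spike-train cross-correlation function. The Python sketch below builds two toy binary spike trains that share a strong common input and estimates their cross-correlogram over a range of lags; the rates, bin size, and shared-input construction are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, duration = 1e-3, 200.0                  # 1 ms bins, 200 s of toy data
      n = int(duration / dt)
      shared = rng.random(n) < 0.02               # strong common input events
      spk1 = (rng.random(n) < 0.005) | (shared & (rng.random(n) < 0.5))
      spk2 = (rng.random(n) < 0.005) | (shared & (rng.random(n) < 0.5))

      x1 = spk1.astype(float) - spk1.mean()       # mean-subtracted spike counts
      x2 = spk2.astype(float) - spk2.mean()
      max_lag = 50                                # lags from -50 ms to +50 ms
      lags = np.arange(-max_lag, max_lag + 1)
      ccf = np.array([np.mean(x1[max(0, -l):n - max(0, l)] *
                              x2[max(0, l):n - max(0, -l)]) for l in lags])
      # ccf peaks at zero lag because both trains are driven by the same shared input.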

  20. Linking Ecosystem Services Benefit Transfer Databases and Ecosystem Services Production Function Libraries

    EPA Science Inventory

    The quantification or estimation of the economic and non-economic values of ecosystem services can be done from a number of distinct approaches. For example, practitioners may use ecosystem services production function models (ESPFMs) for a particular location, or alternatively, ...

  1. FUNCTIONAL RECOVERY FOLLOWING MOTOR CORTEX LESIONS IN NON-HUMAN PRIMATES: EXPERIMENTAL IMPLICATIONS FOR HUMAN STROKE PATIENTS

    PubMed Central

    Darling, Warren G.; Pizzimenti, Marc A.; Morecraft, Robert J.

    2013-01-01

    This review discusses selected classical works and contemporary research on recovery of contralesional fine hand motor function following lesions to motor areas of the cerebral cortex in non-human primates. Findings from both the classical literature and contemporary studies show that lesions of cortical motor areas induce paresis initially, but are followed by remarkable recovery of fine hand/digit motor function that depends on lesion size and post-lesion training. Indeed, in recent work where considerable quantification of fine digit function associated with grasping and manipulating small objects has been observed, very favorable recovery is possible with minimal forced use of the contralesional limb. Studies of the mechanisms underlying recovery have shown that following small lesions of the digit areas of primary motor cortex (M1), there is expansion of the digit motor representations into areas of M1 that did not produce digit movements prior to the lesion. However, after larger lesions involving the elbow, wrist and digit areas of M1, no such expansion of the motor representation was observed, suggesting that recovery was due to other cortical or subcortical areas taking over control of hand/digit movements. Recently, we showed that one possible mechanism of recovery after lesion to the arm areas of M1 and lateral premotor cortex is enhancement of corticospinal projections from the medially located supplementary motor area (M2) to spinal cord laminae containing neurons which have lost substantial input from the lateral motor areas and play a critical role in reaching and digit movements. Because human stroke and brain injury patients show variable, and usually poorer, recovery of hand motor function than that of nonhuman primates after motor cortex damage, we conclude with a discussion of implications of this work for further experimentation to improve recovery of hand function in human stroke patients. PMID:21960307

  2. Negative dielectrophoresis spectroscopy for rare analyte quantification in biological samples

    NASA Astrophysics Data System (ADS)

    Kirmani, Syed Abdul Mannan; Gudagunti, Fleming Dackson; Velmanickam, Logeeshan; Nawarathna, Dharmakeerthi; Lima, Ivan T., Jr.

    2017-03-01

    We propose the use of negative dielectrophoresis (DEP) spectroscopy as a technique to improve the detection limit of rare analytes in biological samples. We observe a significant dependence of the negative DEP force on functionalized polystyrene beads at the edges of interdigitated electrodes with respect to the frequency of the electric field. We measured this velocity of repulsion for 0% and 0.8% conjugation of avidin with biotin functionalized polystyrene beads with our automated software through real-time image processing that monitors the Rayleigh scattering from the beads. A significant difference in the velocity of the beads was observed in the presence of as little as 80 molecules of avidin per biotin functionalized bead. This technology can be applied in the detection and quantification of rare analytes that can be useful in the diagnosis and the treatment of diseases, such as cancer and myocardial infarction, with the use of polystyrene beads functionalized with antibodies for the target biomarkers.

  3. Quantification of atrial dynamics using cardiovascular magnetic resonance: inter-study reproducibility.

    PubMed

    Kowallick, Johannes T; Morton, Geraint; Lamata, Pablo; Jogiya, Roy; Kutty, Shelby; Hasenfuß, Gerd; Lotz, Joachim; Nagel, Eike; Chiribiri, Amedeo; Schuster, Andreas

    2015-05-17

    Cardiovascular magnetic resonance (CMR) offers quantification of phasic atrial functions based on volumetric assessment and more recently, on CMR feature tracking (CMR-FT) quantitative strain and strain rate (SR) deformation imaging. Inter-study reproducibility is a key requirement for longitudinal studies but has not been defined for CMR-based quantification of left atrial (LA) and right atrial (RA) dynamics. Long-axis 2- and 4-chamber cine images were acquired at 9:00 (Exam A), 9:30 (Exam B) and 14:00 (Exam C) in 16 healthy volunteers. LA and RA reservoir, conduit and contractile booster pump functions were quantified by volumetric indexes as derived from fractional volume changes and by strain and SR as derived from CMR-FT. Exam A and B were compared to assess the inter-study reproducibility. Morning and afternoon scans were compared to address possible diurnal variation of atrial function. Inter-study reproducibility was within acceptable limits for all LA and RA volumetric, strain and SR parameters. Inter-study reproducibility was better for volumetric indexes and strain than for SR parameters and better for LA than for RA dynamics. For the LA, reservoir function showed the best reproducibility (intraclass correlation coefficient (ICC) 0.94-0.97, coefficient of variation (CoV) 4.5-8.2%), followed by conduit (ICC 0.78-0.97, CoV 8.2-18.5%) and booster pump function (ICC 0.71-0.95, CoV 18.3-22.7). Similarly, for the RA, reproducibility was best for reservoir function (ICC 0.76-0.96, CoV 7.5-24.0%) followed by conduit (ICC 0.67-0.91, CoV 13.9-35.9) and booster pump function (ICC 0.73-0.90, CoV 19.4-32.3). Atrial dynamics were not measurably affected by diurnal variation between morning and afternoon scans. Inter-study reproducibility for CMR-based derivation of LA and RA functions is acceptable using either volumetric, strain or SR parameters with LA function showing higher reproducibility than RA function assessment. Amongst the different functional components, reservoir function is most reproducibly assessed by either technique followed by conduit and booster pump function, which needs to be considered in future longitudinal research studies.

  4. Development of economic consequence methodology for process risk analysis.

    PubMed

    Zadakbar, Omid; Khan, Faisal; Imtiaz, Syed

    2015-04-01

    A comprehensive methodology for economic consequence analysis with appropriate models for risk analysis of process systems is proposed. This methodology uses loss functions to relate process deviations in a given scenario to economic losses. It consists of four steps: definition of a scenario, identification of losses, quantification of losses, and integration of losses. In this methodology, the process deviations that contribute to a given accident scenario are identified and mapped to assess potential consequences. Losses are assessed with an appropriate loss function (revised Taguchi, modified inverted normal) for each type of loss. The total loss is quantified by integrating different loss functions. The proposed methodology has been examined on two industrial case studies. Implementation of this new economic consequence methodology in quantitative risk assessment will provide better understanding and quantification of risk. This will improve design, decision making, and risk management strategies. © 2014 Society for Risk Analysis.
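
    The two loss-function shapes named above differ mainly in whether the loss keeps growing or saturates as the process deviation increases. The Python sketch below writes both in their common textbook forms with illustrative constants; it is not the calibrated functions used in the case studies.

      import numpy as np

      def taguchi_loss(x, target, k):
          # Quadratic (Taguchi) loss: grows without bound with the deviation.
          return k * (x - target) ** 2

      def inverted_normal_loss(x, target, max_loss, gamma):
          # Inverted normal loss: saturates at max_loss for large deviations.
          return max_loss * (1.0 - np.exp(-((x - target) ** 2) / (2.0 * gamma ** 2)))

      x = np.linspace(80.0, 120.0, 5)            # e.g., a process pressure reading (bar)
      print(taguchi_loss(x, target=100.0, k=50.0))
      print(inverted_normal_loss(x, target=100.0, max_loss=1.0e6, gamma=10.0))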

  5. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
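
    The distinction above is easy to state in code: a Gaussian RBF network lets every unit carry its own width, while a Gaussian kernel network fixes a single width and varies only the centers. The Python sketch below evaluates both input-output functions for a scalar input; the centers, widths, and output weights are illustrative.

      import numpy as np

      def rbf_network(x, centers, widths, weights):
          # Gaussian RBF network: each unit has its own center and width.
          phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * widths ** 2))
          return phi @ weights

      def kernel_network(x, centers, width, weights):
          # Gaussian kernel network: shared fixed width, varying centers only.
          return rbf_network(x, centers, np.full_like(centers, width), weights)

      x = np.linspace(-3.0, 3.0, 7)
      centers = np.array([-1.0, 0.0, 1.0])
      weights = np.array([1.0, -0.5, 0.3])
      print(rbf_network(x, centers, np.array([0.5, 1.0, 2.0]), weights))
      print(kernel_network(x, centers, 1.0, weights))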

  6. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs.

    PubMed

    Ledoux, Erwan; Brunel, Nicolas

    2011-01-01

    We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan type firing rate model; and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons, in a strong noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function, as a function of the connectivity. We find that the transfer function can be either low-pass, or with a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, that can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, due to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change qualitatively the network transfer function. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.

  7. Functional recovery of odor representations in regenerated sensory inputs to the olfactory bulb

    PubMed Central

    Cheung, Man C.; Jang, Woochan; Schwob, James E.; Wachowiak, Matt

    2014-01-01

    The olfactory system has a unique capacity for recovery from peripheral damage. After injury to the olfactory epithelium (OE), olfactory sensory neurons (OSNs) regenerate and re-converge on target glomeruli of the olfactory bulb (OB). Thus far, this process has been described anatomically for only a few defined populations of OSNs. Here we characterize this regeneration at a functional level by assessing how odor representations carried by OSN inputs to the OB recover after massive loss and regeneration of the sensory neuron population. We used chronic imaging of mice expressing synaptopHluorin in OSNs to monitor odor representations in the dorsal OB before lesion by the olfactotoxin methyl bromide and after a 12 week recovery period. Methyl bromide eliminated functional inputs to the OB, and these inputs recovered to near-normal levels of response magnitude within 12 weeks. We also found that the functional topography of odor representations recovered after lesion, with odorants evoking OSN input to glomerular foci within the same functional domains as before lesion. At a finer spatial scale, however, we found evidence for mistargeting of regenerated OSN axons onto OB targets, with odorants evoking synaptopHluorin signals in small foci that did not conform to a typical glomerular structure but whose distribution was nonetheless odorant-specific. These results indicate that OSNs have a robust ability to reestablish functional inputs to the OB and that the mechanisms underlying the topography of bulbar reinnervation during development persist in the adult and allow primary sensory representations to be largely restored after massive sensory neuron loss. PMID:24431990

  8. Refinements to the structure of graphite oxide: absolute quantification of functional groups via selective labelling

    NASA Astrophysics Data System (ADS)

    Eng, Alex Yong Sheng; Chua, Chun Kiang; Pumera, Martin

    2015-11-01

    Chemical modification and functionalization of inherent functional groups within graphite oxide (GO) are essential aspects of graphene-based nano-materials used in wide-ranging applications. Despite extensive research, there remains some discrepancy in its structure, with current knowledge limited primarily to spectroscopic data from XPS, NMR and vibrational spectroscopies. We report herein an innovative electrochemistry-based approach. Four electroactive labels are chosen to selectively functionalize groups in GO, and quantification of each group is achieved by voltammetric analysis. This allows for the first time quantification of absolute amounts of each group, with a further advantage of distinguishing various carbonyl species: namely ortho- and para-quinones from aliphatic ketones. Intrinsic variations in the compositions of permanganate versus chlorate-oxidized GOs were thus observed. Principal differences include permanganate-GO exhibiting substantial quinonyl content, in comparison to chlorate-GO with the vast majority of its carbonyls as isolated ketones. The results confirm that carboxylic groups are rare in actuality, and are in fact entirely absent from chlorate-GO. These observations refine and advance our understanding of GO structure by addressing certain disparities in past models resulting from employment of different oxidation routes, with the vital implication that GO production methods cannot be used interchangeably in the manufacture of graphene-based devices.

  9. Quantification of endocytosis using a folate functionalized silica hollow nanoshell platform

    PubMed Central

    Sandoval, Sergio; Mendez, Natalie; Alfaro, Jesus G.; Yang, Jian; Aschemeyer, Sharraya; Liberman, Alex; Trogler, William C.; Kummel, Andrew C.

    2015-01-01

    A quantification method to measure endocytosis was designed to assess cellular uptake and specificity of a targeting nanoparticle platform. A simple N-hydroxysuccinimide ester conjugation technique to functionalize 100-nm hollow silica nanoshell particles with fluorescent reporter fluorescein isothiocyanate and folate or polyethylene glycol (PEG) was developed. Functionalized nanoshells were characterized using scanning electron microscopy and transmission electron microscopy and the maximum amount of folate functionalized on nanoshell surfaces was quantified with UV-Vis spectroscopy. The extent of endocytosis by HeLa cervical cancer cells and human foreskin fibroblast (HFF-1) cells was investigated in vitro using fluorescence and confocal microscopy. A simple fluorescence ratio analysis was developed to quantify endocytosis versus surface adhesion. Nanoshells functionalized with folate showed enhanced endocytosis by cancer cells when compared to PEG functionalized nanoshells. Fluorescence ratio analyses showed that 95% of folate functionalized silica nanoshells which adhered to cancer cells were endocytosed, while only 27% of PEG functionalized nanoshells adhered to the cell surface and underwent endocytosis when functionalized with 200 and 900 μg, respectively. Additionally, the endocytosis of folate functionalized nanoshells proved to be cancer cell selective while sparing normal cells. The developed fluorescence ratio analysis is a simple and rapid verification/validation method to quantify cellular uptake between datasets by using an internal control for normalization. PMID:26315280

  10. Anthropogenic modification of the nitrogen cycling within the Greater Hangzhou Area system, China.

    PubMed

    Gu, Baojing; Chang, Jie; Ge, Ying; Ge, Hanliang; Yuan, Chi; Peng, Changhui; Jiang, Hong

    2009-06-01

    Based on the mass balance approach, a detailed quantification of nitrogen (N) cycling was constructed for an urban-rural complex system, named the Greater Hangzhou Area (GHA) system, for this paper. The GHA is located in the humid climatic region on the southeastern coast of China, one of the earliest regions in the Yangtze Delta to experience economic development. Total N input into the GHA was calculated at 274.66 Gg/yr (1 Gg = 10^9 g), and total output was calculated at 227.33 Gg/yr, while N accumulation was assessed at 47.33 Gg/yr (17.2% of the total N input). Human activity resulted in 73% of N input by means of synthetic fertilizers, human food, animal feed, imported N-containing chemicals, fossil fuel combustion, and other items. More than 69.3% of N was released into the atmosphere, and riverine N export accounted for 22.2% of total N output. N input and output to and from the GHA in 1980 were estimated at 119.53 Gg/yr and 98.30 Gg/yr, respectively, with an increase of 130% and 131%, respectively, during a 24-year period (from 1980 to 2004). The N input increase was influenced by synthetic fertilizers (138%), animal feed (225%), N-containing chemicals (371%), riverine input (311%), and N deposition (441%). Compared to the N balance seen in the arid Central Arizona-Phoenix (CAP) system in the United States, the proportion of N transferred to water bodies in the humid GHA system was found to be 36 times higher than in the CAP system. Anthropogenic activity, as it typically does, enhanced the flux of N biogeochemistry in the GHA; however, a lack of an N remover (N pollutant treatment facilities) causes excess reactive N (Nr; such as NH3, N2O, NOx), polluting water bodies and the atmosphere within the GHA. Therefore many challenges remain ahead in order to achieve sustainable development in the rapidly developing GHA system.
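
    The headline accumulation figure follows directly from the mass balance; a short Python check using the totals reported above:

      total_n_input = 274.66                         # Gg N per year into the GHA
      total_n_output = 227.33                        # Gg N per year out of the GHA
      accumulation = total_n_input - total_n_output  # 47.33 Gg N per year retained
      print(accumulation, 100.0 * accumulation / total_n_input)   # ~17.2% of the input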

  11. Reproducibility of Lobar Perfusion and Ventilation Quantification Using SPECT/CT Segmentation Software in Lung Cancer Patients.

    PubMed

    Provost, Karine; Leblond, Antoine; Gauthier-Lemire, Annie; Filion, Édith; Bahig, Houda; Lord, Martin

    2017-09-01

    Planar perfusion scintigraphy with 99m Tc-labeled macroaggregated albumin is often used for pretherapy quantification of regional lung perfusion in lung cancer patients, particularly those with poor respiratory function. However, subdividing lung parenchyma into rectangular regions of interest, as done on planar images, is a poor reflection of true lobar anatomy. New tridimensional methods using SPECT and SPECT/CT have been introduced, including semiautomatic lung segmentation software. The present study evaluated inter- and intraobserver agreement on quantification using SPECT/CT software and compared the results for regional lung contribution obtained with SPECT/CT and planar scintigraphy. Methods: Thirty lung cancer patients underwent ventilation-perfusion scintigraphy with 99m Tc-macroaggregated albumin and 99m Tc-Technegas. The regional lung contribution to perfusion and ventilation was measured on both planar scintigraphy and SPECT/CT using semiautomatic lung segmentation software by 2 observers. Interobserver and intraobserver agreement for the SPECT/CT software was assessed using the intraclass correlation coefficient, Bland-Altman plots, and absolute differences in measurements. Measurements from planar and tridimensional methods were compared using the paired-sample t test and mean absolute differences. Results: Intraclass correlation coefficients were in the excellent range (above 0.9) for both interobserver and intraobserver agreement using the SPECT/CT software. Bland-Altman analyses showed very narrow limits of agreement. Absolute differences were below 2.0% in 96% of both interobserver and intraobserver measurements. There was a statistically significant difference between planar and SPECT/CT methods ( P < 0.001) for quantification of perfusion and ventilation for all right lung lobes, with a maximal mean absolute difference of 20.7% for the right middle lobe. There was no statistically significant difference in quantification of perfusion and ventilation for the left lung lobes using either method; however, absolute differences reached 12.0%. The total right and left lung contributions were similar for the two methods, with a mean difference of 1.2% for perfusion and 2.0% for ventilation. Conclusion: Quantification of regional lung perfusion and ventilation using SPECT/CT-based lung segmentation software is highly reproducible. This tridimensional method yields statistically significant differences in measurements for right lung lobes when compared with planar scintigraphy. We recommend that SPECT/CT-based quantification be used for all lung cancer patients undergoing pretherapy evaluation of regional lung function. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  12. Nitric Oxide Analyzer Quantification of Plant S-Nitrosothiols.

    PubMed

    Hussain, Adil; Yun, Byung-Wook; Loake, Gary J

    2018-01-01

    Nitric oxide (NO) is a small diatomic molecule that regulates multiple physiological processes in animals, plants, and microorganisms. In animals, it is involved in vasodilation and neurotransmission and is present in exhaled breath. In plants, it regulates both plant immune function and numerous developmental programs. The high reactivity and short half-life of NO and cross-reactivity of its various derivatives make its quantification difficult. Different methods based on calorimetric, fluorometric, and chemiluminescent detection of NO and its derivatives are available, but all of them have significant limitations. Here we describe a method for the chemiluminescence-based quantification of NO using ozone-chemiluminescence technology in plants. This approach provides a sensitive, robust, and flexible approach for determining the levels of NO and its signaling products, protein S-nitrosothiols.

  13. Visualization and quantification of magnetic nanoparticles into vesicular systems by combined atomic and magnetic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, C.; Department of Physics, SAPIENZA University of Rome, Piazzale A. Moro 5, 00185, Rome; Corsetti, S.

    2015-06-23

    We report a phenomenological approach for the quantification of the diameter of magnetic nanoparticles (MNPs) incorporated in non-ionic surfactant vesicles (niosomes) using magnetic force microscopy (MFM). After a simple specimen preparation, i.e., by putting a drop of solution containing MNP-loaded niosomes on flat substrates, topography and MFM phase images are collected. To attempt the quantification of the diameter of entrapped MNPs, the method is calibrated on bare MNPs deposited on the same substrates by analyzing the MFM signal as a function of the MNP diameter (at fixed tip-sample distance) and of the tip-sample distance (for selected MNPs). After calibration, the effective diameter of the MNPs entrapped in some niosomes is quantitatively deduced from MFM images.

  14. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
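
    In practice, the ordinary and multiple coherences are computed from Welch-type estimates of the cross-spectral density matrix. The Python sketch below illustrates the standard definitions for a toy two-input, one-output system; it is not the Cholesky/SVD formulation of the paper, and the signals are synthetic.

      import numpy as np
      from scipy import signal

      fs = 1000.0
      rng = np.random.default_rng(1)
      n = 10 * int(fs)
      x1 = rng.standard_normal(n)                                   # input 1
      x2 = rng.standard_normal(n)                                   # input 2
      y = 0.8 * x1 + 0.4 * x2 + 0.3 * rng.standard_normal(n)        # output

      nperseg = 256
      f, Gx1x1 = signal.welch(x1, fs, nperseg=nperseg)
      _, Gx2x2 = signal.welch(x2, fs, nperseg=nperseg)
      _, Gyy = signal.welch(y, fs, nperseg=nperseg)
      _, Gx1y = signal.csd(x1, y, fs, nperseg=nperseg)
      _, Gx2y = signal.csd(x2, y, fs, nperseg=nperseg)
      _, Gx1x2 = signal.csd(x1, x2, fs, nperseg=nperseg)

      # Ordinary coherence between each input and the output.
      coh1 = np.abs(Gx1y) ** 2 / (Gx1x1 * Gyy)
      coh2 = np.abs(Gx2y) ** 2 / (Gx2x2 * Gyy)

      # Multiple coherence: fraction of output power explained jointly by both
      # inputs, g^H Gxx^{-1} g / Gyy, evaluated frequency by frequency.
      mult_coh = np.empty_like(Gyy)
      for k in range(f.size):
          Gxx = np.array([[Gx1x1[k], Gx1x2[k]],
                          [np.conj(Gx1x2[k]), Gx2x2[k]]])
          g = np.array([Gx1y[k], Gx2y[k]])
          mult_coh[k] = np.real(np.conj(g) @ np.linalg.solve(Gxx, g)) / Gyy[k]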

  15. The Relationships Between Microstructure, Tensile Properties and Fatigue Life in Ti-5Al-5V-5Mo-3Cr-0.4Fe (Ti-5553)

    NASA Astrophysics Data System (ADS)

    Foltz, John W., IV

    Beta-titanium alloys are being increasingly used in airframes as a way to decrease the weight of the aircraft. As a result of this trend, Ti-5Al-5V-5Mo-3Cr-0.4Fe (Timetal 555), a high-strength beta titanium alloy, is being used on the current generation of landing gear. This alloy features good combinations of strength, ductility, toughness and fatigue life in alpha+beta processed conditions, but little is known about beta-processed conditions. Recent work by the Center for the Accelerated Maturation of Materials (CAMM) research group at The Ohio State University has improved the tensile property knowledge base for beta-processed conditions in this alloy, and this thesis augments that development with a description of how microstructure affects fatigue life. In this work, beta-processed microstructures have been produced in a Gleeble(TM) thermomechanical simulator and subsequently characterized with a combination of electron and optical microscopy techniques. Four-point bending fatigue tests have been carried out on the material to characterize fatigue life. All the microstructural conditions have been fatigue tested with the maximum test stress equal to 90% of the measured yield strength. The subsequent results from tensile tests, fatigue tests, and microstructural quantification have been analyzed using Bayesian neural networks in an attempt to predict fatigue life from microstructural and tensile inputs. Good correlation has been obtained between lifetime predictions and experimental results using microstructure and tensile inputs. Trained Bayesian neural networks have also been used in a predictive fashion to explore functional dependencies between these inputs and fatigue life. In this work, one section discusses the thermal treatments that led to the observed microstructures, and the possible sequence of precipitation that led to these microstructures. The thesis then describes the implications of microstructure for fatigue life and the implications of tensile properties for fatigue life. Several additional experiments are then described that highlight possible causes for the observed dependence of fatigue life on microstructure, including fractographic evidence supporting the microstructural dependencies.

  16. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by ... distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic ... Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  17. Thermal noise limit for ultra-high vacuum noncontact atomic force microscopy

    PubMed Central

    Lübbe, Jannis; Temmen, Matthias; Rode, Sebastian; Rahe, Philipp; Kühnle, Angelika

    2013-01-01

    The noise of the frequency-shift signal Δf in noncontact atomic force microscopy (NC-AFM) consists of cantilever thermal noise, tip–surface-interaction noise and instrumental noise from the detection and signal processing systems. We investigate how the displacement-noise spectral density d_z at the input of the frequency demodulator propagates to the frequency-shift-noise spectral density d_Δf at the demodulator output in dependence of cantilever properties and settings of the signal processing electronics in the limit of a negligible tip–surface interaction and a measurement under ultrahigh-vacuum conditions. For a quantification of the noise figures, we calibrate the cantilever displacement signal and determine the transfer function of the signal-processing electronics. From the transfer function and the measured d_z, we predict d_Δf for specific filter settings, a given level of detection-system noise spectral density d_z^ds and the cantilever-thermal-noise spectral density d_z^th. We find an excellent agreement between the calculated and measured values for d_Δf. Furthermore, we demonstrate that thermal noise in d_Δf, defining the ultimate limit in NC-AFM signal detection, can be kept low by a proper choice of the cantilever whereby its Q-factor should be given most attention. A system with a low-noise signal detection and a suitable cantilever, operated with appropriate filter and feedback-loop settings allows room temperature NC-AFM measurements at a low thermal-noise limit with a significant bandwidth. PMID:23400758

  18. Thermal noise limit for ultra-high vacuum noncontact atomic force microscopy.

    PubMed

    Lübbe, Jannis; Temmen, Matthias; Rode, Sebastian; Rahe, Philipp; Kühnle, Angelika; Reichling, Michael

    2013-01-01

    The noise of the frequency-shift signal Δf in noncontact atomic force microscopy (NC-AFM) consists of cantilever thermal noise, tip-surface-interaction noise and instrumental noise from the detection and signal processing systems. We investigate how the displacement-noise spectral density d_z at the input of the frequency demodulator propagates to the frequency-shift-noise spectral density d_Δf at the demodulator output in dependence of cantilever properties and settings of the signal processing electronics in the limit of a negligible tip-surface interaction and a measurement under ultrahigh-vacuum conditions. For a quantification of the noise figures, we calibrate the cantilever displacement signal and determine the transfer function of the signal-processing electronics. From the transfer function and the measured d_z, we predict d_Δf for specific filter settings, a given level of detection-system noise spectral density d_z^ds and the cantilever-thermal-noise spectral density d_z^th. We find an excellent agreement between the calculated and measured values for d_Δf. Furthermore, we demonstrate that thermal noise in d_Δf, defining the ultimate limit in NC-AFM signal detection, can be kept low by a proper choice of the cantilever whereby its Q-factor should be given most attention. A system with a low-noise signal detection and a suitable cantilever, operated with appropriate filter and feedback-loop settings allows room temperature NC-AFM measurements at a low thermal-noise limit with a significant bandwidth.

  19. Synaptic control of the shape of the motoneuron pool input-output function

    PubMed Central

    Heckman, Charles J.

    2017-01-01

    Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245

  20. Impact Response Characteristics of Polymeric Materials

    DTIC Science & Technology

    1981-11-01

    amplitude-frequency domain. In the language of signal communications, an input signal given by some time dependence F(t) is introduced into a "channel" ... fixed and not altered by the signal. The channel can be characterized by its own function H(t), called the transfer function. This concept can be ... represented schematically as follows: Input Signal F(t) -> [Channel H(t)] -> Output Signal G(t). In our case the input signal is the impact event, the output
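
    In discrete time this channel picture is simply a convolution of the input with the channel's impulse response, G(t) = (h * F)(t); a generic sketch follows, with a hypothetical impulse response and impact-like input that are not taken from the report.

      import numpy as np

      dt = 1e-3                                  # sample interval, s (assumed)
      t = np.arange(0.0, 0.2, dt)
      F = np.where(t < 0.01, 1.0, 0.0)           # hypothetical impact-like input F(t)
      h = np.exp(-t / 0.02) / 0.02               # hypothetical channel impulse response h(t)
      G = np.convolve(F, h)[: t.size] * dt       # output G(t) = (h * F)(t)

      # Frequency-domain view of the same relation, G(f) = H(f) F(f)
      # (circular convolution here; zero-pad both signals in practice).
      Gf = np.fft.rfft(h) * np.fft.rfft(F) * dt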

  1. Data-Independent MS/MS Quantification of Neuropeptides for Determination of Putative Feeding-Related Neurohormones in Microdialysate

    PubMed Central

    2015-01-01

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MSE quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (<2 kDa, 137 peptides from 18 families) was possible in microdialysates from 8 replicate feeding experiments. Of these NPs, 55 were detected with an average mass error below 10 ppm. The time-resolved profiles of relative concentration changes for 6 are shown, and there is great potential for the use of this method in future experiments to aid in correlation of NP changes with behavior. This work presents an unbiased approach to winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MSE quantification method using the open source software Skyline. PMID:25552291

  2. Data-independent MS/MS quantification of neuropeptides for determination of putative feeding-related neurohormones in microdialysate.

    PubMed

    Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun

    2015-01-21

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (<2 kDa, 137 peptides from 18 families) was possible in microdialysates from 8 replicate feeding experiments. Of these NPs, 55 were detected with an average mass error below 10 ppm. The time-resolved profiles of relative concentration changes for 6 are shown, and there is great potential for the use of this method in future experiments to aid in correlation of NP changes with behavior. This work presents an unbiased approach to winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MS(E) quantification method using the open source software Skyline.

  3. Gap Size Uncertainty Quantification in Advanced Gas Reactor TRISO Fuel Irradiation Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, Binh T.; Einerson, Jeffrey J.; Hawkes, Grant L.

    The Advanced Gas Reactor (AGR)-3/4 experiment is the combination of the third and fourth tests conducted within the tristructural isotropic fuel development and qualification research program. The AGR-3/4 test consists of twelve independent capsules containing a fuel stack in the center surrounded by three graphite cylinders and shrouded by a stainless steel shell. This capsule design enables temperature control of both the fuel and the graphite rings by varying the neon/helium gas mixture flowing through the four resulting gaps. Knowledge of fuel and graphite temperatures is crucial for establishing the functional relationship between fission product release and irradiation thermal conditions. These temperatures are predicted for each capsule using the commercial finite-element heat transfer code ABAQUS. Uncertainty quantification reveals that the gap size uncertainties are among the dominant factors contributing to predicted temperature uncertainty due to high input sensitivity and uncertainty. Gap size uncertainty originates from the fact that all gap sizes vary with time due to dimensional changes of the fuel compacts and three graphite rings caused by extended exposure to high temperatures and fast neutron irradiation. Gap sizes are estimated using as-fabricated dimensional measurements at the start of irradiation and post irradiation examination dimensional measurements at the end of irradiation. Uncertainties in these measurements provide a basis for quantifying gap size uncertainty. However, lack of gap size measurements during irradiation and lack of knowledge about the dimension change rates lead to gap size modeling assumptions, which could increase gap size uncertainty. In addition, the dimensional measurements are performed at room temperature, and must be corrected to account for thermal expansion of the materials at high irradiation temperatures. Uncertainty in the thermal expansion coefficients for the graphite materials used in the AGR-3/4 capsules also increases gap size uncertainty. This study focuses on analysis of modeling assumptions and uncertainty sources to evaluate their impacts on the gap size uncertainty.
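
    As a toy illustration of the thermal-expansion correction and uncertainty propagation described above, the sketch below perturbs a hypothetical cold-gap geometry by Monte Carlo sampling; every number and the simple linear-expansion form are illustrative assumptions, not AGR-3/4 values.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Hypothetical room-temperature radii and 1-sigma measurement uncertainties, mm.
      r_ring = rng.normal(6.350, 0.005, n)    # inner radius of surrounding graphite ring
      r_fuel = rng.normal(6.250, 0.005, n)    # outer radius of fuel compact

      # Hypothetical linear expansion up to irradiation temperature.
      dT = 900.0                              # temperature rise above room temperature, K
      a_ring = rng.normal(4.5e-6, 0.5e-6, n)  # graphite expansion coefficient, 1/K (assumed)
      a_fuel = rng.normal(5.0e-6, 0.5e-6, n)  # compact expansion coefficient, 1/K (assumed)

      gap_hot = r_ring * (1 + a_ring * dT) - r_fuel * (1 + a_fuel * dT)
      print(f"hot gap = {gap_hot.mean() * 1e3:.1f} +/- {gap_hot.std() * 1e3:.1f} um")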

  4. Nuclemeter: A Reaction-Diffusion Column for Quantifying Nucleic Acids Undergoing Enzymatic Amplification

    NASA Astrophysics Data System (ADS)

    Bau, Haim; Liu, Changchun; Killawala, Chitvan; Sadik, Mohamed; Mauk, Michael

    2014-11-01

    Real-time amplification and quantification of specific nucleic acid sequences plays a major role in many medical and biotechnological applications. In the case of infectious diseases, quantification of the pathogen load in patient specimens is critical to assessing disease progression, effectiveness of drug therapy, and emergence of drug resistance. Typically, nucleic acid quantification requires sophisticated and expensive instruments, such as real-time PCR machines, which are not appropriate for on-site use and for low resource settings. We describe a simple, low-cost, reaction-diffusion-based method for end-point quantification of target nucleic acids undergoing enzymatic amplification. The number of target molecules is inferred from the position of the reaction-diffusion front, analogous to reading temperature in a mercury thermometer. We model the process with the Fisher-Kolmogoroff-Petrovskii-Piscounoff (FKPP) equation and compare theoretical predictions with experimental observations. The proposed method is suitable for nucleic acid quantification at the point of care, compatible with multiplexing and high-throughput processing, and can function instrument-free. C.L. was supported by NIH/NIAID K25AI099160; M.S. was supported by the Pennsylvania Ben Franklin Technology Development Authority; C.K. and H.B. were funded, in part, by NIH/NIAID 1R41AI104418-01A1.
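
    For orientation, the textbook one-dimensional form of the FKPP equation and the asymptotic front speed it implies are shown below; how the front position is mapped back to the initial number of target molecules is specific to the study and not reproduced here.

      \frac{\partial u}{\partial t} \;=\; D\,\frac{\partial^{2} u}{\partial x^{2}} \;+\; r\,u\,(1-u),
      \qquad c_{\min} = 2\sqrt{rD}, \qquad x_{\mathrm{front}}(t) \approx c_{\min}\,t,

    where u is the normalized concentration of amplified product, D its diffusivity, and r the local amplification rate, so the front advances essentially linearly in time once the traveling wave is established.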

  5. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    PubMed

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

    This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits set by the national authority for waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Unconventional barometry and rheometry: new quantification approaches for mechanically-controlled microstructures

    NASA Astrophysics Data System (ADS)

    Tajcmanova, L.; Moulas, E.; Vrijmoed, J.; Podladchikov, Y.

    2016-12-01

    Estimation of pressure-temperature (P-T) from petrographic observations in metamorphic rocks has become a common practice in petrology studies during the last 50 years. These data often serve as a key input in geodynamic reconstructions and thus directly influence our understanding of lithospheric processes. Such an approach might have led the metamorphic geology field to a certain level of quiescence. In the classical view of metamorphic quantification approaches, fast viscous relaxation (and therefore constant pressure across the rock microstructure) is assumed, with chemical diffusion being the limiting factor in equilibration. Recently, we have focused on the other possible scenario - fast chemical diffusion and slow viscous relaxation - which brings an alternative interpretation of chemical zoning found in high-grade rocks. The aim has been to provide insight into the role of mechanically maintained pressure variations on multi-component chemical zoning in minerals. Furthermore, we used the pressure information from the mechanically-controlled microstructure for rheological constraints. We show an unconventional way of relating the direct microstructural observations in rocks to the nonlinearity of rheology at time scales unattainable by laboratory measurements. Our analysis documents that mechanically controlled microstructures that have been preserved over geological times can be used to deduce flow-law parameters and in turn estimate stress levels of minerals in their natural environment. The development of the new quantification approaches has opened new horizons in understanding the phase transformations in the Earth's lithosphere. Furthermore, the new data generated can serve as food for thought for the next generation of fully coupled numerical codes that involve reacting materials while respecting conservation of mass, momentum and energy.

  7. Uncertainty quantification of overpressure buildup through inverse modeling of compaction processes in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto

    2017-03-01

    This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous literature works, i.e., the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the vertical distribution of pore pressure, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
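
    The surrogate step can be pictured with a one-input toy problem: sample a standardized input, evaluate a placeholder forward model, and fit a Legendre (generalized polynomial chaos) expansion by least squares; the basis degree, sample size, and test function below are assumptions, not the compaction model.

      import numpy as np
      from numpy.polynomial import legendre

      def forward_model(xi):
          """Placeholder for the expensive 1D compaction/flow model (single input)."""
          return np.exp(0.5 * xi) + 0.1 * xi ** 3

      rng = np.random.default_rng(2)
      xi = rng.uniform(-1.0, 1.0, 200)      # standardized uncertain input on [-1, 1]
      y = forward_model(xi)

      # Degree-6 Legendre (gPC) expansion fitted by least squares; legvander builds
      # the basis matrix, so coeffs[k] multiplies the k-th Legendre polynomial.
      deg = 6
      V = legendre.legvander(xi, deg)
      coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

      # The cheap surrogate can now stand in for the forward model in inverse/UQ loops.
      xi_test = np.linspace(-1.0, 1.0, 5)
      print(legendre.legval(xi_test, coeffs) - forward_model(xi_test))  # small residuals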

  8. Testing of The Harp Guidelines On A Small Watershed In Finland

    NASA Astrophysics Data System (ADS)

    Granlund, K.; Rekolainen, S.

    Watersheds have emerged as environmental units for assessing, controlling and reducing non-point-source pollution. Within the framework of international conventions such as OSPARCOM and HELCOM, and in the implementation of the EU Water Framework Directive, the criteria for model selection are of key importance. Harmonized Quantification and Reporting Procedures for Nutrients (HARP) aims at helping the implementation of OSPAR's (Convention for the Protection of the Marine Environment of the North-East Atlantic) strategy of controlling eutrophication and reducing nutrient input to marine ecosystems by 50%, covering nitrogen and phosphorus losses from both point and nonpoint sources, and at helping to assess the effectiveness of the pollution reduction strategy. The HARP guidelines related respectively to the "Quantification of Nitrogen and Phosphorus Losses from Diffuse Anthropogenic Sources and Natural Background Losses" and to the "Quantification and Reporting of the Retention of Nitrogen and Phosphorus in River Catchments" were tested on a small, well instrumented agricultural watershed in Finland. The project was coordinated by the Environment Institute of the Joint Research Centre. Three types of methodologies for estimating nutrient losses to watercourses were evaluated during the project. Simple methods based on regression equations or loading functions provide a quick way of estimating nutrient losses; through these methods the pollutant load can be related to parameters such as slope, soil type, land-use, management practices etc. Relevant nutrient loading functions for the study catchment were collected during the project. One mid-range model was applied to simulate the nitrogen cycle in a simplified manner in relation to climate, soil properties, land-use and management practices. Physically based models describe in detail the water and nutrient cycle within the watershed; the ICECREAM and SWAT models were applied to the study watershed. ICECREAM is a management model based on the CREAMS model for predicting field-scale runoff and erosion, with nitrogen and phosphorus submodels based on the GLEAMS model. SWAT is a continuous-time, spatially distributed model that includes hydrological, sediment and chemical processes in river basins. The simple methods and the mid-range model for nitrogen proved to be fast and easy to apply, but due to limited information on crop-specific loading functions and nitrogen process rates (e.g. mineralisation in soil), only order-of-magnitude estimates for nutrient loads could be calculated. The ICECREAM model was used to estimate crop-specific nutrient losses from the agricultural area. The potential annual nutrient loads for the whole catchment were then calculated by including estimates for nutrient loads from other land-use classes (forested area and scattered settlement). Finally, calibration of the SWAT model was started to study in detail the effects of catchment characteristics on nutrient losses. The preliminary results of model testing are presented and the suitability of the different methodologies for estimating nutrient losses in Finnish catchments is discussed.

  9. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    PubMed

    Scheler, Gabriele

    2013-01-01

    We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
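
    A minimal numerical sketch of the delay/strength idea under stated assumptions: a single mass-action activation step with made-up rate constants, where the source level is stepped, the target is integrated to its new equilibrium, the transmission strength is its concentration change, and the delay is the time needed to come within a tolerance of that equilibrium.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Hypothetical one-step activation: source S converts inactive target T to active Ta.
      k_on, k_off = 0.5, 0.2            # illustrative rate constants
      T_total, S_level = 1.0, 2.0       # total target and the stepped source input

      def rhs(t, y):
          Ta = y[0]
          return [k_on * S_level * (T_total - Ta) - k_off * Ta]

      sol = solve_ivp(rhs, (0.0, 100.0), [0.0], dense_output=True, max_step=0.1)
      Ta_eq = k_on * S_level * T_total / (k_on * S_level + k_off)   # analytic equilibrium

      strength = Ta_eq - 0.0            # transmission strength: concentration change of target
      t_grid = np.linspace(0.0, 100.0, 5001)
      Ta = sol.sol(t_grid)[0]
      delay = t_grid[np.argmax(np.abs(Ta - Ta_eq) < 0.01 * Ta_eq)]  # first entry within 1%
      print(f"strength = {strength:.3f}, delay ~ {delay:.2f} time units")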

  10. Development of an analytical method for the determination of anthracyclines in hospital effluents.

    PubMed

    Mahnik, Susanne N; Rizovski, Blanka; Fuerhacker, Maria; Mader, Robert M

    2006-11-01

    Little is known about the fate of cytostatics after their elimination from humans into the environment. Being often very toxic compounds, their quantification in hospital effluents may be necessary to individualise the putative magnitude of pollution problems. We therefore developed a method for the determination of the very important group of anthracyclines (doxorubicin, epirubicin, and daunorubicin) in hospital effluents. Waste water samples were enriched by solid phase extraction (concentration factor 100), analysed by reversed-phase high performance liquid chromatography (RP-HPLC), and monitored by fluorescence detection. This method is reproducible and accurate within a range of 0.1-5 µg/l for all compounds (limits of quantification: 0.26-0.29 µg/l; recoveries >80%). The applicability of the method was proven by chemical analysis of hospital sewage samples (range: 0.1-1.4 µg/l epirubicin and 0.1-0.5 µg/l doxorubicin). Obtained over a time period of one month, the results were in line with those calculated by an input-output model. These investigations show that the examined cytostatics are easily detectable and that the presented method is suitable to estimate the dimension of pharmaceutical contamination originating from hospital effluents.

  11. How well do we know the incoming solar infrared radiation?

    NASA Astrophysics Data System (ADS)

    Elsey, Jonathan; Coleman, Marc; Gardiner, Tom; Shine, Keith

    2017-04-01

    The solar spectral irradiance (SSI) has been identified as a key climate variable by the Global Climate Observing System (Bojinski et al. 2014, Bull. Amer. Meteor. Soc.). It is of importance in the modelling of atmospheric radiative transfer, and the quantification of the global energy budget. However, in the near-infrared spectral region (between 2000-10000 cm-1) there exists a discrepancy of 7% between spectra measured from the space-based SOLSPEC instrument (Thuillier et al. 2015, Solar Physics) and those from a ground-based Langley technique (Bolseé et al. 2014, Solar Physics). This same difference is also present between different analyses of the SOLSPEC data. This work aims to reconcile some of these differences by presenting an estimate of the near-infrared SSI obtained from ground-based measurements taken using an absolutely calibrated Fourier transform spectrometer. Spectra are obtained both using the Langley technique and by direct comparison with a radiative transfer model, with appropriate handling of both aerosol scattering and molecular continuum absorption. Particular focus is dedicated to the quantification of uncertainty in these spectra, from both the inherent uncertainty in the measurement setup and that from the use of the radiative transfer code and its inputs.
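
    The Langley technique referred to above reduces, at a single wavenumber, to a regression of the log signal against airmass; a synthetic sketch under a Beer-Lambert assumption with constant optical depth (all numbers illustrative):

      import numpy as np

      # Beer-Lambert at one wavenumber: I = I0 * exp(-tau * m), so ln(I) is linear in airmass m.
      rng = np.random.default_rng(3)
      m = np.linspace(1.5, 6.0, 25)          # airmass values over a clear half-day (assumed)
      tau_true, I0_true = 0.12, 1.00         # illustrative optical depth and TOA signal
      I = I0_true * np.exp(-tau_true * m) * (1 + 0.005 * rng.standard_normal(m.size))

      # Langley extrapolation: the intercept of ln(I) vs m at m = 0 gives the TOA signal.
      slope, intercept = np.polyfit(m, np.log(I), 1)
      I0_est, tau_est = np.exp(intercept), -slope
      print(f"I0 ~ {I0_est:.4f} (true 1.00), tau ~ {tau_est:.4f} (true 0.12)")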

  12. Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers

    NASA Astrophysics Data System (ADS)

    Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.

    2011-12-01

    The complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains aquifer is a major aquifer over much of the central United States where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. Sensitivity analysis suggests that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume and the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time, with a variance of 2 orders of magnitude.

  13. Somatotyping using 3D anthropometry: a cluster analysis.

    PubMed

    Olds, Tim; Daniell, Nathan; Petkov, John; David Stewart, Arthur

    2013-01-01

    Somatotyping is the quantification of human body shape, independent of body size. Hitherto, somatotyping (including the most popular method, the Heath-Carter system) has been based on subjective visual ratings, sometimes supported by surface anthropometry. This study used data derived from three-dimensional (3D) whole-body scans as inputs for cluster analysis to objectively derive clusters of similar body shapes. Twenty-nine dimensions normalised for body size were measured on a purposive sample of 301 adults aged 17-56 years who had been scanned using a Vitus Smart laser scanner. K-means cluster analysis with v-fold cross-validation was used to determine shape clusters. Three male and three female clusters emerged, and were visualised using those scans closest to the cluster centroid and a caricature defined by doubling the difference between the average scan and the cluster centroid. The male clusters were decidedly endomorphic (high fatness), ectomorphic (high linearity), and endo-mesomorphic (a mixture of fatness and muscularity). The female clusters were clearly endomorphic, ectomorphic, and ecto-mesomorphic (a mixture of linearity and muscularity). An objective shape quantification procedure combining 3D scanning and cluster analysis yielded shape clusters strikingly similar to traditional somatotyping.
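
    A compact sketch of the clustering step, assuming the 29 size-normalised dimensions are already assembled into a feature matrix; scikit-learn's KMeans and a silhouette criterion stand in here for the v-fold validated K-means used in the study, and the data are random placeholders.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      # Placeholder feature matrix: 301 subjects x 29 size-normalised scan dimensions.
      rng = np.random.default_rng(4)
      X = rng.standard_normal((301, 29))

      # Fit K-means over a small range of k and keep the clustering with the best silhouette.
      k_opt, model = max(
          ((k, KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)) for k in range(2, 7)),
          key=lambda km: silhouette_score(X, km[1].labels_),
      )
      labels = model.labels_

      # Caricature of each cluster: double the offset of its centroid from the overall mean.
      caricatures = X.mean(axis=0) + 2.0 * (model.cluster_centers_ - X.mean(axis=0))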

  14. Population-based input function and image-derived input function for [¹¹C](R)-rolipram PET imaging: methodology, validation and application to the study of major depressive disorder.

    PubMed

    Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2012-11-15

    Quantitative PET studies of neuroreceptor tracers typically require that the arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [11C](R)-rolipram kinetic analysis, with the goal of reducing - and possibly eliminating - the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [11C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan V_T values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [11C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V_T ratio 1.02±0.05; mean±SD) and Group 3 (V_T ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V_T ratio 1.07±0.04 and 0.99±0.04, respectively). Results obtained via PBIF were equivalent to those obtained via IDIF (V_T ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [11C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [11C](R)-rolipram binding as compared to control (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to full arterial input function for [11C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.
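
    For context, the Logan V_T referred to above is the late-time slope of the Logan plot built from the tissue curve and the (metabolite-corrected) input function; a generic sketch with synthetic one-tissue-compartment curves, not the rolipram data:

      import numpy as np

      # Synthetic, illustrative time-activity curves on a common grid (minutes).
      t = np.linspace(0.1, 90.0, 400)
      Cp = 50 * t * np.exp(-t / 4) + 2 * np.exp(-t / 60)          # plasma input (arbitrary units)
      K1, k2 = 0.5, 0.1                                           # one-tissue model, VT = K1/k2 = 5
      dt = t[1] - t[0]
      Ct = K1 * dt * np.convolve(Cp, np.exp(-k2 * t))[: t.size]   # tissue curve

      # Logan transform: y = int(Ct)/Ct vs x = int(Cp)/Ct; late-time slope approximates VT.
      x = np.cumsum(Cp) * dt / Ct
      y = np.cumsum(Ct) * dt / Ct
      late = t > 30.0                                             # t* chosen by eye here
      VT = np.polyfit(x[late], y[late], 1)[0]
      print(f"Logan VT ~ {VT:.2f} (ground truth 5.0)")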

  15. Epoxide hydrolases: structure, function, mechanism, and assay.

    PubMed

    Arand, Michael; Cronin, Annette; Adamska, Magdalena; Oesch, Franz

    2005-01-01

    Epoxide hydrolases are a class of enzymes important in the detoxification of genotoxic compounds, as well as in the control of physiological signaling molecules. This chapter gives an overview on the function, structure, and enzymatic mechanism of structurally characterized epoxide hydrolases and describes selected assays for the quantification of epoxide hydrolase activity.

  16. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are thus identified. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.

  17. In Vivo Quantification of Human Serotonin 1A Receptor Using 11C-CUMI-101, an Agonist PET Radiotracer

    PubMed Central

    Milak, Matthew S.; DeLorenzo, Christine; Zanderigo, Francesca; Prabhakaran, Jaya; Kumar, J.S. Dileep; Majo, Vattoly J.; Mann, J. John; Parsey, Ramin V.

    2013-01-01

    The serotonin (5-hydroxytryptamine, or 5-HT) type 1A receptor (5-HT1AR) is implicated in the pathophysiology of numerous neuropsychiatric disorders. We have published the initial evaluation and reproducibility in vivo of [O-methyl-11C]2-(4-(4-(2-methoxyphenyl)piperazin-1-yl)butyl)-4-methyl-1,2,4-triazine-3,5 (2H,4H)dione (11C-CUMI-101), a novel 5-HT1A agonist radiotracer, in Papio anubis. Here, we report the optimal modeling parameters of 11C-CUMI-101 for human PET studies. Methods: PET scans were obtained for 7 adult human volunteers. 11C-CUMI-101 was injected as an intravenous bolus, and emission data were collected for 120 min in 3-dimensional mode. We evaluated 10 different models using metabolite-corrected arterial input functions or reference region approaches and several outcome measures. Results: When using binding potential (BP_F = B_avail/K_D [total available receptor concentration divided by the equilibrium dissociation constant]) as the outcome measure, the likelihood estimation in the graphical analysis (LEGA) model performed slightly better than the other methods evaluated at full scan duration. The average test–retest percentage difference was 9.90% ± 5.60%. When using BP_ND (BP_ND = f_ND × B_avail/K_D; BP_ND equals the product of BP_F and f_ND [the free fraction in the nondisplaceable compartment]), the simplified reference tissue method (SRTM) achieved the lowest percentage difference and smallest bias when compared with nondisplaceable binding potential obtained from LEGA using the metabolite-corrected plasma input function (r2 = 0.99; slope = 0.92). The time–stability analysis indicates that a 120-min scan is sufficient for the stable estimation of outcome measures. Voxel results were comparable to region-of-interest–based analysis, with higher spatial resolution. Conclusion: On the basis of its measurable and stable free fraction, high affinity and selectivity, good blood–brain barrier permeability, and plasma and brain kinetics, 11C-CUMI-101 is suitable for the imaging of high-affinity 5-HT1A binding in humans. PMID:21098796

  18. [¹⁸F]fluorothymidine-positron emission tomography in patients with locally advanced breast cancer under bevacizumab treatment: usefulness of different quantitative methods of tumor proliferation.

    PubMed

    Marti-Climent, J M; Dominguez-Prado, I; Garcia-Velloso, M J; Boni, V; Peñuelas, I; Toledo, I; Richter, J A

    2014-01-01

    To investigate quantitative methods of tumor proliferation using 3'-[18F]fluoro-3'-deoxythymidine ([18F]FLT) PET in patients with breast cancer (BC), studied before and after one bevacizumab administration, and to correlate the [18F]FLT-PET uptake with the Ki67 index. Thirty patients with newly diagnosed, untreated BC underwent a [18F]FLT-PET before and 14 days after bevacizumab treatment. A dynamic scan centered over the tumor began simultaneously with the injection of [18F]FLT (385 ± 56 MBq). Image-derived input functions were obtained using regions of interest drawn on the left ventricle (LV) and descending aorta (DA). Metabolite-corrected blood curves were used as input functions to obtain the kinetic Ki constant using the Patlak graphical analysis (time interval 10-60 min after injection). Maximum SUV values were derived for the intervals 40-60 min (SUV40) and 50-60 min (SUV50). PET parameters were correlated with the Ki67 index obtained by staining tumor biopsies. [18F]FLT uptake parameters decreased significantly (p<0.001) after treatment: SUV50 = 3.09 ± 1.21 vs 2.22 ± 0.96; SUV40 = 3.00 ± 1.18 vs 2.14 ± 0.95, Ki_LV (×10-3) = 52 [22-116] vs 38 [13-80] and Ki_DA (×10-3) = 49 [15-129] vs 33 [11-98]. Consistency intraclass correlation coefficients within SUV and within Ki were high. Changes of SUV50 and Ki_DA between the baseline PET and the PET after one bevacizumab dose correlated with changes in the Ki67 index (r-Pearson = 0.35 and 0.26, p = 0.06 and 0.16, respectively). [18F]FLT-PET is useful to demonstrate proliferative changes after a dose of bevacizumab in patients with BC. Quantification of tumor proliferation by means of SUV and Ki has shown similar results, but SUV50 obtained better results. A correlation between [18F]FLT changes and the Ki67 index was observed. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
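
    The Patlak Ki quoted above is the slope of the standard Patlak plot over the fitting interval; a generic numerical sketch in which the tissue curve is built directly from the Patlak relation (synthetic curves and parameters, not patient data):

      import numpy as np

      # Synthetic plasma input and an irreversibly trapping tissue curve (illustrative only).
      t = np.linspace(0.2, 60.0, 300)                    # minutes
      Cp = 40 * t * np.exp(-t / 3) + 1.5 * np.exp(-t / 80)
      dt = t[1] - t[0]
      Ki_true, V0 = 0.03, 0.4                            # net influx (1/min) and initial volume
      Ct = Ki_true * np.cumsum(Cp) * dt + V0 * Cp

      # Patlak transform: Ct/Cp vs int(Cp)/Cp; the slope over 10-60 min is Ki.
      x = np.cumsum(Cp) * dt / Cp
      y = Ct / Cp
      win = (t >= 10.0) & (t <= 60.0)
      Ki_est = np.polyfit(x[win], y[win], 1)[0]
      print(f"Patlak Ki ~ {Ki_est:.4f} 1/min (ground truth 0.030)")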

  19. Three-Dimensional Echocardiographic Assessment of Left Heart Chamber Size and Function with Fully Automated Quantification Software in Patients with Atrial Fibrillation.

    PubMed

    Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki

    2016-10-01

    Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  20. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed for implementing the proposed importance analysis with the proposed estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
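
    A brute-force toy version of the variance ratio function idea, assuming a simple nonlinear test model: the output variance is re-estimated as one input's variance is scaled and divided by the nominal output variance (this version re-samples for every parameter value, which is exactly what the single-sample estimators in the article avoid).

      import numpy as np

      def model(x1, x2):
          """Illustrative nonlinear model, not the article's test case."""
          return np.sin(x1) + 0.5 * x2 ** 2 + 0.2 * x1 * x2

      rng = np.random.default_rng(5)
      n = 200_000
      sigma1, sigma2 = 1.0, 0.5

      var_nom = np.var(model(rng.normal(0, sigma1, n), rng.normal(0, sigma2, n)))

      # Variance ratio function of x1's variance: Var(Y | scaled sigma1^2) / Var(Y | nominal).
      for scale in (0.25, 0.5, 1.0):
          y = model(rng.normal(0, np.sqrt(scale) * sigma1, n), rng.normal(0, sigma2, n))
          print(f"sigma1^2 x {scale:4.2f}: variance ratio = {np.var(y) / var_nom:.3f}")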

  1. Evaluating the Evidence Surrounding Pontine Cholinergic Involvement in REM Sleep Generation

    PubMed Central

    Grace, Kevin P.; Horner, Richard L.

    2015-01-01

    Rapid eye movement (REM) sleep – characterized by vivid dreaming, motor paralysis, and heightened neural activity – is one of the fundamental states of the mammalian central nervous system. Initial theories of REM sleep generation posited that induction of the state required activation of the “pontine REM sleep generator” by cholinergic inputs. Here, we review and evaluate the evidence surrounding cholinergic involvement in REM sleep generation. We submit that: (i) the capacity of pontine cholinergic neurotransmission to generate REM sleep has been firmly established by gain-of-function experiments, (ii) the function of endogenous cholinergic input to REM sleep generating sites cannot be determined by gain-of-function experiments; rather, loss-of-function studies are required, (iii) loss-of-function studies show that endogenous cholinergic input to the PTF is not required for REM sleep generation, and (iv) cholinergic input to the pontine REM sleep generating sites serve an accessory role in REM sleep generation: reinforcing non-REM-to-REM sleep transitions making them quicker and less likely to fail. PMID:26388832

  2. Observations of the directional distribution of the wind energy input function over swell waves

    NASA Astrophysics Data System (ADS)

    Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.

    2016-02-01

    Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β ∝ cos^3.6 θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with the ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.

  3. LC-MS/MS quantification of next-generation biotherapeutics: a case study for an IgE binding Nanobody in cynomolgus monkey plasma.

    PubMed

    Sandra, Koen; Mortier, Kjell; Jorge, Lucie; Perez, Luis C; Sandra, Pat; Priem, Sofie; Poelmans, Sofie; Bouche, Marie-Paule

    2014-05-01

    Nanobodies(®) are therapeutic proteins derived from the smallest functional fragments of heavy chain-only antibodies. The development and validation of an LC-MS/MS-based method for the quantification of an IgE binding Nanobody in cynomolgus monkey plasma is presented. Nanobody quantification was performed making use of a proteotypic tryptic peptide chromatographically enriched prior to LC-MS/MS analysis. The validated LLOQ at 36 ng/ml was measured with an intra- and inter-assay precision and accuracy <20%. The required sensitivity could be obtained based on the selectivity of 2D LC combined with MS/MS. No analyte specific tools for affinity purification were used. Plasma samples originating from a PK/PD study were analyzed and compared with the results obtained with a traditional ligand-binding assay. Excellent correlations between the two techniques were obtained, and similar PK parameters were estimated. A 2D LC-MS/MS method was successfully developed and validated for the quantification of a next generation biotherapeutic.

  4. Standardless quantification by parameter optimization in electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-11-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
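
    A stripped-down illustration of the parameter-optimization idea (not POEMA itself): a synthetic EDS-like spectrum is fitted by minimizing the quadratic differences between the data and an analytical model of background plus characteristic peaks, with the fitted peak amplitudes standing in for the quantities that ultimately drive the concentrations; the line positions are real Si/Fe K-alpha energies, everything else is assumed.

      import numpy as np
      from scipy.optimize import curve_fit

      def spectrum_model(E, b0, b1, a1, a2):
          """Analytical prediction: linear background plus two Gaussian characteristic lines."""
          si_k = a1 * np.exp(-0.5 * ((E - 1.74) / 0.06) ** 2)  # Si K-alpha ~1.74 keV (width assumed)
          fe_k = a2 * np.exp(-0.5 * ((E - 6.40) / 0.08) ** 2)  # Fe K-alpha ~6.40 keV (width assumed)
          return b0 + b1 * E + si_k + fe_k

      # Synthetic "experimental" spectrum (counts vs energy in keV), purely illustrative.
      E = np.linspace(0.5, 10.0, 1000)
      rng = np.random.default_rng(6)
      truth = spectrum_model(E, 50.0, -3.0, 400.0, 250.0)
      counts = rng.poisson(np.clip(truth, 1, None)).astype(float)

      # Minimize the quadratic difference between experiment and model over the parameters.
      popt, pcov = curve_fit(spectrum_model, E, counts, p0=[40.0, -2.0, 300.0, 200.0],
                             sigma=np.sqrt(counts + 1.0))
      amplitudes = popt[2:]                     # fitted line intensities -> quantification inputs
      uncertainties = np.sqrt(np.diag(pcov))[2:]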

  5. Observer-Based Adaptive NN Control for a Class of Uncertain Nonlinear Systems With Nonsymmetric Input Saturation.

    PubMed

    Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang

    2017-07-01

    This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.

  6. How the type of input function affects the dynamic response of conducting polymer actuators

    NASA Astrophysics Data System (ADS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators' command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X Xiang, R Mutlu, G Alici and W Li, 2014, 'Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization', Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
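
    One concrete way to generate a smooth input of the kind contrasted with a step above is a minimum-jerk (quintic) rise, whose low-order derivatives are continuous; the sketch compares its peak rate of change with that of a step of equal amplitude (generic signal construction, not the actuator model, and the rate of change is only a crude proxy for charging-current demand).

      import numpy as np

      def minimum_jerk(t, t_start, t_rise, amplitude):
          """Quintic rise with zero velocity and acceleration at both ends, then hold."""
          s = np.clip((t - t_start) / t_rise, 0.0, 1.0)
          return amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)

      t = np.linspace(0.0, 5.0, 2001)
      u_step = np.where(t >= 1.0, 1.0, 0.0)                  # conventional step at t = 1 s, 1 V (assumed)
      u_smooth = minimum_jerk(t, t_start=1.0, t_rise=2.0, amplitude=1.0)

      # Peak rate of change of each command (the step's is limited only by the sample step).
      du_step = np.gradient(u_step, t)
      du_smooth = np.gradient(u_smooth, t)
      print(f"peak |du/dt|: step = {du_step.max():.1f} V/s, smooth = {du_smooth.max():.2f} V/s")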

  7. Automated manual transmission clutch controller

    DOEpatents

    Lawrie, Robert E.; Reed, Jr., Richard G.; Rausen, David J.

    1999-11-30

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  8. Automated manual transmission shift sequence controller

    DOEpatents

    Lawrie, Robert E.; Reed, Richard G.; Rausen, David J.

    2000-02-01

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  9. Automated manual transmission mode selection controller

    DOEpatents

    Lawrie, Robert E.

    1999-11-09

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  10. Automated manual transmission controller

    DOEpatents

    Lawrie, Robert E.; Reed, Jr., Richard G.; Bernier, David R.

    1999-12-28

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  11. Operational quantification of continuous-variable correlations.

    PubMed

    Rodó, Carles; Adesso, Gerardo; Sanpera, Anna

    2008-03-21

    We quantify correlations (quantum and/or classical) between two continuous-variable modes as the maximal number of correlated bits extracted via local quadrature measurements. On Gaussian states, such "bit quadrature correlations" majorize entanglement, reducing to an entanglement monotone for pure states. For non-Gaussian states, such as photonic Bell states, photon-subtracted states, and mixtures of Gaussian states, the bit correlations are shown to be a monotonic function of the negativity. This quantification yields a feasible, operational way to measure non-Gaussian entanglement in current experiments by means of direct homodyne detection, without a complete state tomography.

  12. Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners

    Treesearch

    David H. Newman; David N. Wear

    1993-01-01

    This paper compares the production behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs (sawtimber and pulpwood), one variable input (regeneration effort), and two quasi-fixed inputs (land and growing stock). Although an identical profit function is...

  13. Multiple kernel learning using single stage function approximation for binary classification problems

    NASA Astrophysics Data System (ADS)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and by searching for that function in the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.

  14. The Effects of a Change in the Variability of Irrigation Water

    NASA Astrophysics Data System (ADS)

    Lyon, Kenneth S.

    1983-10-01

    This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."

  15. The determination and quantification of photosynthetic pigments by reverse phase high-performance liquid chromatography, thin-layer chromatography, and spectrophotometry.

    PubMed

    Pocock, Tessa; Król, Marianna; Huner, Norman P A

    2004-01-01

    Chlorophylls and carotenoids are functionally important pigment molecules in photosynthetic organisms. Methods for the determination of chlorophylls a and b, beta-carotene, neoxanthin, and the pigments that are involved in photoprotective cycles such as the xanthophylls are discussed. These cycles involve the reversible de-epoxidation of violaxanthin into antheraxanthin and zeaxanthin, as well as the reversible de-epoxidation of lutein-5,6-epoxide into lutein. This chapter describes pigment extraction procedures from higher plants and green algae. Methods for the determination and quantification using high-performance liquid chromatography (HPLC) are described, as well as methods for the separation and purification of pigments for use as standards using thin-layer chromatography (TLC). In addition, several spectrophotometric methods for the quantification of chlorophylls a and b are described.

  16. Sinusoidal input describing function for hysteresis followed by elementary backlash

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.

    1976-01-01

    The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
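
    The describing-function idea can also be evaluated numerically, as in the hedged sketch below: drive an elementary backlash element (only one of the two nonlinearities combined in the paper) with a sinusoid, take the steady-state first harmonic of the output, and form the complex gain N(A). The analytical forms derived in the paper are not reproduced; the sampling choices are arbitrary.

      import numpy as np

      def backlash(x, a):
          # Elementary backlash (free play of half-width a): the output holds until
          # the gap is traversed, then follows the input offset by +/- a.
          y = np.empty_like(x)
          y[0] = 0.0
          for k in range(1, len(x)):
              if x[k] - y[k - 1] > a:
                  y[k] = x[k] - a
              elif x[k] - y[k - 1] < -a:
                  y[k] = x[k] + a
              else:
                  y[k] = y[k - 1]
          return y

      def describing_function(A, a, n_per_cycle=4096, n_cycles=3):
          # Drive with A*sin(theta), discard the transient cycles, and take the
          # first harmonic of the steady-state output relative to the input.
          theta = np.linspace(0.0, 2 * np.pi * n_cycles, n_per_cycle * n_cycles, endpoint=False)
          y = backlash(A * np.sin(theta), a)
          th, ys = theta[-n_per_cycle:], y[-n_per_cycle:]
          b1 = (2.0 / n_per_cycle) * np.sum(ys * np.sin(th))   # in-phase coefficient
          a1 = (2.0 / n_per_cycle) * np.sum(ys * np.cos(th))   # quadrature coefficient
          return (b1 + 1j * a1) / A

      for ratio in (0.1, 0.5, 0.9):
          N = describing_function(A=1.0, a=ratio)
          print(f"a/A = {ratio}: |N| = {abs(N):.3f}, phase = {np.degrees(np.angle(N)):.1f} deg")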

  17. Dynamics and fates of trace metals chronically input in a Mediterranean coastal zone impacted by a large urban area.

    PubMed

    Oursel, B; Garnier, C; Durrieu, G; Mounier, S; Omanović, D; Lucas, Y

    2013-04-15

    Quantification and characterization of chronic inputs of trace metals and organic carbon in a coastal Mediterranean area (the city of Marseille) during the dry season was carried out. The 625 km² watershed includes two small coastal rivers whose waters are mixed with treated wastewater (TWW) just before their outlet into the sea. Dissolved and particulate Cu, Pb, Cd, Zn, Co, Ni and organic carbon concentrations in the rivers were comparable to those in other Mediterranean coastal areas, whereas at the outlet, 2- to 18-fold higher concentrations reflected the impact of the TWW. A non-conservative behavior observed for most of the studied metals in the mixing zone was validated by a remobilization experiment performed in the laboratory. The results showed that sorption/desorption processes could occur with slow kinetics with respect to the mixing time in the plume, indicating non-equilibrium in the dissolved/particulate metal distribution. Thus, a sample filtration immediately after sampling is strictly required.

  18. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  19. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically-derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.

  20. Quantitative system drift compensates for altered maternal inputs to the gap gene network of the scuttle fly Megaselia abdita

    PubMed Central

    Wotton, Karl R; Jiménez-Guri, Eva; Crombach, Anton; Janssens, Hilde; Alcaine-Colet, Anna; Lemke, Steffen; Schmidt-Ott, Urs; Jaeger, Johannes

    2015-01-01

    The segmentation gene network in insects can produce equivalent phenotypic outputs despite differences in upstream regulatory inputs between species. We investigate the mechanistic basis of this phenomenon through a systems-level analysis of the gap gene network in the scuttle fly Megaselia abdita (Phoridae). It combines quantification of gene expression at high spatio-temporal resolution with systematic knock-downs by RNA interference (RNAi). Initiation and dynamics of gap gene expression differ markedly between M. abdita and Drosophila melanogaster, while the output of the system converges to equivalent patterns at the end of the blastoderm stage. Although the qualitative structure of the gap gene network is conserved, there are differences in the strength of regulatory interactions between species. We term such network rewiring ‘quantitative system drift’. It provides a mechanistic explanation for the developmental hourglass model in the dipteran lineage. Quantitative system drift is likely to be a widespread mechanism for developmental evolution. DOI: http://dx.doi.org/10.7554/eLife.04785.001 PMID:25560971

  1. Performance of an Annular Linear Induction Pump with Applications to Space Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Schoenfeld, Michael; Pearson, J. Boise; Webster, Kenneth; Godfroy, Thomas; Adkins, Harold E., Jr.; Werner, James E.

    2010-01-01

    Results of performance testing of an annular linear induction pump are presented. The pump electromagnetically pumps liquid metal through a circuit specially designed to allow for quantification of the performance. Testing was conducted over a range of conditions, including frequencies of 33, 36, 39, and 60 Hz, liquid metal temperatures from 125 to 525 C, and input voltages from 5 to 120 V. Pump performance spanned a range of flow rates from roughly 0.16 to 5.7 L/s (2.5 to 90 gpm), and developed pressure heads from less than 1 kPa to 90 kPa (less than 0.145 to 13 psi). The maximum efficiency measured during testing was slightly greater than 6%. The efficiency was fairly insensitive to input frequency from 33 to 39 Hz, and was markedly lower at 60 Hz. In addition, the efficiency decreased as the NaK temperature was raised. The performance of the pump operating on a variable frequency drive providing 60 Hz power compared favorably with the same pump operating on 60 Hz power drawn directly from the electrical grid.
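
    The quoted efficiency can be sanity-checked from hydraulic power = flow rate x developed pressure. The operating point and electrical input power below are illustrative values chosen within the ranges quoted above, not actual measured data points from the test campaign.

      # Back-of-envelope check of pump efficiency: hydraulic power = Q * dP.
      Q = 4.0e-3            # flow rate, m^3/s (4 L/s), illustrative
      dP = 60.0e3           # developed pressure, Pa (60 kPa), illustrative
      P_electrical = 4.0e3  # electrical input power, W (assumed)

      P_hydraulic = Q * dP                       # = 240 W
      efficiency = P_hydraulic / P_electrical
      print(f"hydraulic power = {P_hydraulic:.0f} W, efficiency = {100 * efficiency:.1f} %")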

  2. Measuring and explaining eco-efficiencies of wastewater treatment plants in China: An uncertainty analysis perspective.

    PubMed

    Dong, Xin; Zhang, Xinyi; Zeng, Siyu

    2017-04-01

    In the context of sustainable development, there has been an increasing requirement for eco-efficiency assessment of wastewater treatment plants (WWTPs). Data envelopment analysis (DEA), a technique that is widely applied for relative efficiency assessment, is used in combination with the tolerances approach to handle WWTPs' multiple inputs and outputs as well as their uncertainty. The economic cost, energy consumption, contaminant removal, and global warming effect during the treatment processes are integrated to interpret the eco-efficiency of WWTPs. A total of 736 sample plants from across China are assessed, and large sensitivities to variations in inputs and outputs are observed for most samples, with only three WWTPs identified as being stably efficient. Size of plant, overcapacity, climate type, and influent characteristics are proven to have a significant influence on both the mean efficiency and performance sensitivity of WWTPs, while no clear relationships were found between eco-efficiency and technology under the framework of uncertainty analysis. The incorporation of uncertainty quantification and environmental impact considerations has improved the reliability and applicability of the assessment.

  3. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
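
    A Monte Carlo sketch of the kind of system considered, a single-generator swing equation driven by a time-correlated (Ornstein-Uhlenbeck) power input, is given below; the histogram at the end is the brute-force PDF estimate against which a PDF-method solution would be compared. All parameters are illustrative assumptions, and the closed-form PDE of the PDF method itself is not implemented here.

      import numpy as np

      rng = np.random.default_rng(1)
      M, D = 10.0, 1.0                       # inertia and damping (assumed)
      P_mean, tau, sigma = 1.0, 2.0, 0.2     # OU mean, correlation time, stationary std (assumed)
      dt, T, n_paths = 1e-2, 20.0, 5000
      steps = int(T / dt)

      omega = np.zeros(n_paths)              # frequency deviation
      p = np.full(n_paths, P_mean)           # time-correlated power input
      for _ in range(steps):
          # Euler-Maruyama update of the OU input, then the generator state
          # (net accelerating power = fluctuating input - constant load - damping).
          p += (-(p - P_mean) / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_paths)
          omega += ((p - P_mean - D * omega) / M) * dt

      hist, edges = np.histogram(omega, bins=60, density=True)   # Monte Carlo PDF estimate
      centers = 0.5 * (edges[:-1] + edges[1:])
      print("PDF mode near omega =", centers[np.argmax(hist)], "; std =", omega.std())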

  4. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    PubMed

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality to efficiently deliver information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components using a subsequent wavelength-selective component, the number of XOR gates and the lights participating in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operations for the obtained XOR results are achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.

  5. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
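
    A much-simplified sketch of the peak-seeking idea follows: estimate the local gradient of the measured performance function from a short window of recent (coordinate, value) samples via a local linear least-squares fit, then step along that estimate. The paper's linear time-varying Kalman filter and Hessian estimate are not reproduced, and the two-input, one-output performance map below is a hypothetical stand-in.

      import numpy as np

      rng = np.random.default_rng(0)

      def performance(x):
          # Hypothetical noisy performance map with a peak at (1, -2).
          return -(x[0] - 1.0) ** 2 - 0.5 * (x[1] + 2.0) ** 2 + 0.01 * rng.standard_normal()

      x = np.array([4.0, 3.0])
      window, step = [], 0.2
      for k in range(200):
          window.append((x.copy(), performance(x)))
          window = window[-6:]                        # keep only recent samples
          if len(window) >= 4:
              # Local linear model f ~ g1*x1 + g2*x2 + c fitted by least squares.
              X = np.array([np.append(xi, 1.0) for xi, _ in window])
              y = np.array([fi for _, fi in window])
              coeff, *_ = np.linalg.lstsq(X, y, rcond=None)
              x = x + step * coeff[:2]                # move along the estimated gradient
          else:
              x = x + 0.05 * rng.standard_normal(2)   # small dither to gather data
      print("final operating point (true peak at [1, -2]):", x)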

  6. ENHANCED RECOVERY METHODS FOR 85KR AGE-DATING GROUNDWATER: ROYAL WATERSHED, MAINE

    EPA Science Inventory

    Potential widespread use of 85Kr, having a constant input function in the northern hemisphere, for groundwater age-dating would advance watershed investigations. The current input function of tritium is not sufficient to estimate young modern recharge waters. While tri...

  7. Experimental quantification of the true efficiency of carbon nanotube thin-film thermophones.

    PubMed

    Bouman, Troy M; Barnard, Andrew R; Asgarisabet, Mahsa

    2016-03-01

    Carbon nanotube thermophones can create acoustic waves from 1 Hz to 100 kHz. The thermoacoustic effect that allows for this non-vibrating sound source is naturally inefficient. Prior efforts have not explored their true efficiency (i.e., the ratio of the total acoustic power to the electrical input power). All previous works have used the ratio of sound pressure to input electrical power. A method for true power efficiency measurement is shown using a fully anechoic technique. True efficiency data are presented for three different drive signal processing techniques: standard alternating current (AC), direct current added to alternating current (DCAC), and amplitude modulation of an alternating current (AMAC) signal. These signal processing techniques are needed to limit the frequency-doubling non-linear effects inherent to carbon nanotube thermophones. Each type of processing affects the true efficiency differently. Using a 72 W (rms) input signal, the measured efficiency ranges were 4.3 × 10⁻⁶ - 319 × 10⁻⁶, 1.7 × 10⁻⁶ - 308 × 10⁻⁶, and 1.2 × 10⁻⁶ - 228 × 10⁻⁶% for AC, DCAC, and AMAC, respectively. These data were measured in the frequency range of 100 Hz to 10 kHz. In addition, the effects of these processing techniques relative to sound quality are presented in terms of total harmonic distortion.
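
    The frequency-doubling effect that motivates the DCAC and AMAC drive schemes can be illustrated directly: heating power goes as i(t)^2, so a pure AC drive at frequency f heats (and radiates) at 2f, while adding a DC bias restores a strong component at f. Amplitudes and frequencies below are arbitrary illustrative values, not the drive levels used in the study.

      import numpy as np

      fs, f = 100_000, 1_000                            # sample rate and drive frequency, Hz
      t = np.arange(0.0, 0.1, 1.0 / fs)
      i_ac = np.sin(2 * np.pi * f * t)                  # pure AC drive
      i_dcac = 1.0 + 0.5 * np.sin(2 * np.pi * f * t)    # DC bias plus AC (DCAC scheme)

      for name, i in [("AC", i_ac), ("DCAC", i_dcac)]:
          p = i ** 2                                    # instantaneous heating power
          spec = np.abs(np.fft.rfft(p - p.mean()))
          freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
          print(f"{name}: dominant heating-power component at {freqs[np.argmax(spec)]:.0f} Hz")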

  8. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
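
    For contrast, the conventional mitigation that the paper argues becomes unreliable at low input, collapsing random-mer barcodes (UMIs) that lie within Hamming distance 1 of a more abundant barcode before counting molecules, can be sketched as follows; this is not the transposable error-correcting design described above, and the read data are synthetic.

      from collections import Counter

      def hamming1(a, b):
          # True when two equal-length barcodes differ at exactly one position.
          return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

      def collapse_umis(umis):
          counts = Counter(umis)
          kept = []
          for umi, n in counts.most_common():          # most abundant first
              if any(hamming1(umi, k) for k in kept):
                  continue                              # treat as an error of a kept UMI
              kept.append(umi)
          return kept

      reads = ["ACGTAC"] * 50 + ["ACGTAA"] * 2 + ["TTTGCA"] * 30 + ["TTAGCA"] * 1
      print("naive molecule count:      ", len(set(reads)))         # 4
      print("error-aware molecule count:", len(collapse_umis(reads)))  # 2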

  9. The economic value of remote sensing of earth resources from space: An ERTS overview and the value of continuity of service. Volume 9: Oceans

    NASA Technical Reports Server (NTRS)

    Lietzke, K. R.

    1974-01-01

    The impact of remote sensing upon marine activities and oceanography is presented. The present capabilities of the current Earth Resources Technology Satellite (ERTS-1), as demonstrated by the principal investigators, are discussed. Cost savings benefits are quantified in the area of nautical and hydrographic mapping and charting. Benefits are found in aiding coastal zone management and in the fields of marine weather prediction, fishery harvesting and management, and potential uses for ocean vegetation. Difficulties in quantification are explained, the primary factor being that remotely sensed information will be of greater benefit as input to forecasting models which have not yet been constructed.

  10. Tributyltin--critical pollutant in whole water samples--development of traceable measurement methods for monitoring under the European Water Framework Directive (WFD) 2000/60/EC.

    PubMed

    Richter, Janine; Fettig, Ina; Philipp, Rosemarie; Jakubowski, Norbert

    2015-07-01

    Tributyltin is listed as one of the priority substances in the European Water Framework Directive (WFD). Despite its decreasing input into the environment, it is still present and has to be monitored. In the European Metrology Research Programme project ENV08, a sensitive and reliable analytical method according to the WFD was developed to quantify this environmental pollutant at a very low limit of quantification. With the development of such a primary reference method for tributyltin, the project helped to improve the quality and comparability of monitoring data. An overview of project aims and potential analytical tools is given.

  11. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which normally require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula to express the input from a tissue curve and its rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences between the inputs derived from the individual tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in the CBF, OEF, and CMRO2 values calculated by the two methods were small (<10%) relative to the invasive method, and the values showed tight correlations (r = 0.97). Simulation showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.
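
    For a one-tissue-compartment tracer, the principle of expressing the input from a tissue curve and its rate constants can be written compactly. The relations below are a simplified illustration of that idea, not the exact formulation or compartment structure used in the paper:

      \frac{dC_T(t)}{dt} = K_1\,C_a(t) - k_2\,C_T(t)
      \quad\Rightarrow\quad
      C_a(t) = \frac{1}{K_1}\left[\frac{dC_T(t)}{dt} + k_2\,C_T(t)\right]

      \{\hat{K}_1^{(i)},\hat{k}_2^{(i)}\} = \arg\min \sum_{i<j}\int \left[C_a^{(i)}(t) - C_a^{(j)}(t)\right]^2 dt,
      \qquad \mathrm{IDIF}(t) = \frac{1}{N}\sum_{i=1}^{N}\hat{C}_a^{(i)}(t)

    Here the C_T^{(i)} are tissue curves, the minimization enforces agreement between the inputs implied by the individual curves, and their mean serves as the image-derived input function.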

  12. Creation of Synthetic Surface Temperature and Precipitation Ensembles Through A Computationally Efficient, Mixed Method Approach

    NASA Astrophysics Data System (ADS)

    Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.

    2017-12-01

    Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn't explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
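
    The last two steps described above, an EOF decomposition of the residual variability followed by a Fourier-domain treatment of the coefficient time series, can be sketched with a phase-randomization surrogate (one common way to preserve the spectrum, and hence the autocorrelation, of each coefficient series; the paper's exact Fourier procedure may differ). The residual field and its dimensions are synthetic stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)
      n_time, n_space = 240, 500
      residual = rng.standard_normal((n_time, n_space))    # stand-in for the model-minus-emulator field

      # EOF decomposition: rows of Vt are spatial patterns, U*S are coefficient time series.
      U, S, Vt = np.linalg.svd(residual - residual.mean(axis=0), full_matrices=False)
      n_modes = 10
      coeffs = U[:, :n_modes] * S[:n_modes]

      def phase_randomize(series, rng):
          # Keep the Fourier amplitudes, randomize the phases -> same spectrum, new realization.
          spec = np.fft.rfft(series)
          phases = rng.uniform(0.0, 2 * np.pi, len(spec))
          phases[0] = 0.0                                   # leave the mean untouched
          return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(series))

      surrogate_coeffs = np.column_stack([phase_randomize(coeffs[:, m], rng) for m in range(n_modes)])
      surrogate_field = surrogate_coeffs @ Vt[:n_modes]     # new spatially/temporally coherent realization
      print(surrogate_field.shape)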

  13. Quantification of the contribution of nitrogen from septic tanks to ground water in Spanish Springs Valley, Nevada

    USGS Publications Warehouse

    Rosen, Michael R.; Kropf, Christian; Thomas, Karen A.

    2006-01-01

    Analysis of total dissolved nitrogen concentrations from soil water samples collected within the soil zone under septic tank leach fields in Spanish Springs Valley, Nevada, shows a median concentration of approximately 44 milligrams per liter (mg/L) from more than 300 measurements taken from four septic tank systems. Using two simple mass balance calculations, the concentration of total dissolved nitrogen potentially reaching the ground-water table ranges from 25 to 29 mg/L. This indicates that approximately 29 to 32 metric tons of nitrogen enters the aquifer every year from natural recharge and from the 2,070 houses that use septic tanks in the densely populated portion of Spanish Springs Valley. Natural recharge contributes only 0.25 metric tons because the total dissolved nitrogen concentration of natural recharge was estimated to be low (0.8 mg/L). Although there are many uncertainties in this estimate, the sensitivity of these uncertainties to the calculated load is relatively small, indicating that these values likely are accurate to within an order of magnitude. The nitrogen load calculation will be used as an input function for a ground-water flow and transport model that will be used to test management options for controlling nitrogen contamination in the basin.

  14. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.

  15. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    NASA Astrophysics Data System (ADS)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies, and the population need efficient indicators of the possible effects of a change in decisions, strategies, or habits. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.

  16. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE PAGES

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia; ...

    2017-07-27

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.

  17. A Non-Invasive Assessment of Cardiopulmonary Hemodynamics with MRI in Pulmonary Hypertension

    PubMed Central

    Bane, Octavia; Shah, Sanjiv J.; Cuttica, Michael J.; Collins, Jeremy D.; Selvaraj, Senthil; Chatterjee, Neil R.; Guetter, Christoph; Carr, James C.; Carroll, Timothy J.

    2015-01-01

    Purpose We propose a method for non-invasive quantification of hemodynamic changes in the pulmonary arteries resulting from pulmonary hypertension (PH). Methods Using a two-element windkessel model, and input parameters derived from standard MRI evaluation of flow, cardiac function and valvular motion, we derive: pulmonary artery compliance (C), mean pulmonary artery pressure (mPAP), pulmonary vascular resistance (PVR), pulmonary capillary wedge pressure (PCWP), time-averaged intra-pulmonary pressure waveforms and pulmonary artery pressures (systolic (sPAP) and diastolic (dPAP)). MRI results were compared directly to reference standard values from right heart catheterization (RHC) obtained in a series of patients with suspected pulmonary hypertension (PH). Results In 7 patients with suspected PH undergoing RHC, MRI and echocardiography, there was no statistically significant difference (p<0.05) between parameters measured by MRI and RHC. Using standard clinical cutoffs to define PH (mPAP ≥ 25 mmHg), MRI was able to correctly identify all patients as having pulmonary hypertension, and to correctly distinguish between pulmonary arterial (mPAP≥ 25 mmHg, PCWP<15 mmHg) and venous hypertension (mPAP ≥ 25 mmHg, PCWP ≥ 15 mmHg) in 5 of 7 cases. Conclusions We have developed a mathematical model capable of quantifying physiological parameters that reflect the severity of PH. PMID:26283577
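
    A minimal sketch of a two-element windkessel of the kind referenced above, driven by an assumed pulmonary flow waveform, is given below: dP/dt = Q(t)/C - P/(R*C). The flow shape, resistance, and compliance are illustrative assumptions, not patient-derived values or the paper's fitted parameters.

      import numpy as np

      R = 3.0e7              # distal resistance, Pa*s/m^3 (assumed, roughly 3.7 Wood units)
      C = 2.0e-8             # arterial compliance, m^3/Pa (assumed)
      T = 60.0 / 75.0        # cardiac period for 75 beats/min
      dt = 1.0e-3
      t = np.arange(0.0, 10 * T, dt)

      # Crude systolic-ejection flow waveform: half-sine over the first third of each beat.
      phase = (t % T) / T
      Q = np.where(phase < 1.0 / 3.0, 4.0e-4 * np.sin(3.0 * np.pi * phase), 0.0)   # m^3/s

      P = np.zeros_like(t)
      P[0] = 2000.0          # initial pressure, Pa (~15 mmHg)
      for k in range(1, len(t)):
          P[k] = P[k - 1] + dt * (Q[k - 1] / C - P[k - 1] / (R * C))

      last_beat = t >= t[-1] - T
      print("mean PA pressure over the last beat: %.1f mmHg" % (P[last_beat].mean() / 133.322))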

  18. Abundances of iron-binding photosynthetic and nitrogen-fixing proteins of Trichodesmium both in culture and in situ from the North Atlantic.

    PubMed

    Richier, Sophie; Macey, Anna I; Pratt, Nicola J; Honey, David J; Moore, C Mark; Bibby, Thomas S

    2012-01-01

    Marine cyanobacteria of the genus Trichodesmium occur throughout the oligotrophic tropical and subtropical oceans, where they can dominate the diazotrophic community in regions with high inputs of the trace metal iron (Fe). Iron is necessary for the functionality of enzymes involved in the processes of both photosynthesis and nitrogen fixation. We combined laboratory and field-based quantifications of the absolute concentrations of key enzymes involved in both photosynthesis and nitrogen fixation to determine how Trichodesmium allocates resources to these processes. We determined that protein level responses of Trichodesmium to iron-starvation involve down-regulation of the nitrogen fixation apparatus. In contrast, the photosynthetic apparatus is largely maintained, although re-arrangements do occur, including accumulation of the iron-stress-induced chlorophyll-binding protein IsiA. Data from natural populations of Trichodesmium spp. collected in the North Atlantic demonstrated a protein profile similar to iron-starved Trichodesmium in culture, suggestive of acclimation towards a minimal iron requirement even within an oceanic region receiving a high iron-flux. Estimates of cellular metabolic iron requirements are consistent with the availability of this trace metal playing a major role in restricting the biomass and activity of Trichodesmium throughout much of the subtropical ocean.

  19. Quantification of Skeletal Blood Flow and Fluoride Metabolism in Rats using PET in a Pre-Clinical Stress Fracture Model

    PubMed Central

    Tomlinson, Ryan E.; Silva, Matthew J.; Shoghi, Kooresh I.

    2013-01-01

    Purpose Blood flow is an important factor in bone production and repair, but its role in osteogenesis induced by mechanical loading is unknown. Here, we present techniques for evaluating blood flow and fluoride metabolism in a pre-clinical stress fracture model of osteogenesis in rats. Procedures Bone formation was induced by forelimb compression in adult rats. 15O water and 18F fluoride PET imaging were used to evaluate blood flow and fluoride kinetics 7 days after loading. 15O water was modeled using a one-compartment, two-parameter model, while a two-compartment, three-parameter model was used to model 18F fluoride. Input functions were created from the heart, and a stochastic search algorithm was implemented to provide initial parameter values in conjunction with a Levenberg–Marquardt optimization algorithm. Results Loaded limbs are shown to have a 26% increase in blood flow rate, 113% increase in fluoride flow rate, 133% increase in fluoride flux, and 13% increase in fluoride incorporation into bone as compared to non-loaded limbs (p < 0.05 for all results). Conclusions The results shown here are consistent with previous studies, confirming this technique is suitable for evaluating the vascular response and mineral kinetics of osteogenic mechanical loading. PMID:21785919
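
    The one-compartment, two-parameter fit mentioned for the 15O-water data can be sketched as a convolution model C_T(t) = K1 * Ca(t) convolved with exp(-k2*t), fitted with a Levenberg-Marquardt optimizer given an input function. The input function, frame timing, and parameter values below are synthetic stand-ins rather than the study's data, and the stochastic initialization step is omitted.

      import numpy as np
      from scipy.optimize import least_squares

      dt = 1.0                                     # s, uniform frame duration (assumed)
      t = np.arange(0.0, 180.0, dt)
      Ca = (t / 20.0) * np.exp(-t / 20.0)          # synthetic gamma-variate-like input function

      def tissue_curve(params, Ca, dt):
          # One-compartment model: C_T = K1 * (Ca convolved with exp(-k2 * t)).
          K1, k2 = params
          kernel = np.exp(-k2 * np.arange(len(Ca)) * dt)
          return K1 * np.convolve(Ca, kernel)[: len(Ca)] * dt

      rng = np.random.default_rng(0)
      true = (0.5 / 60.0, 0.6 / 60.0)              # K1, k2 in 1/s (illustrative)
      data = tissue_curve(true, Ca, dt) + 0.002 * rng.standard_normal(len(t))

      fit = least_squares(lambda p: tissue_curve(p, Ca, dt) - data,
                          x0=(0.01, 0.01), method="lm")   # Levenberg-Marquardt
      print("estimated K1, k2 (1/s):", fit.x)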

  20. LocExpress: a web server for efficiently estimating expression of novel transcripts.

    PubMed

    Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge

    2016-12-22

    The temporal and spatial-specific expression pattern of a transcript in multiple tissues and cell types can indicate key clues about its function. While several gene atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e., novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating the expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper around an RNA-Seq quantification algorithm, LocExpress efficiently reduces the time cost by making abundance estimation calls increasingly within the minimum spanning bundle region of the input transcripts. For a given novel gene model, such a local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn .

  1. Short-Term Memory in Mathematics-Proficient and Mathematics-Disabled Students as a Function of Input-Modality/Output-Modality Pairings.

    ERIC Educational Resources Information Center

    Webster, Raymond E.

    1980-01-01

    A significant two-way input modality by output modality interaction suggested that short term memory capacity among the groups differed as a function of the modality used to present the items in combination with the output response required. (Author/CL)

  2. Functional Differences between Statistical Learning with and without Explicit Training

    ERIC Educational Resources Information Center

    Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.

    2015-01-01

    Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…

  3. Myocardial perfusion cardiovascular magnetic resonance: optimized dual sequence and reconstruction for quantification.

    PubMed

    Kellman, Peter; Hansen, Michael S; Nielles-Vallespin, Sonia; Nickander, Jannike; Themudo, Raquel; Ugander, Martin; Xue, Hui

    2017-04-07

    Quantification of myocardial blood flow requires knowledge of the amount of contrast agent in the myocardial tissue and the arterial input function (AIF) driving the delivery of this contrast agent. Accurate quantification is challenged by the lack of linearity between the measured signal and contrast agent concentration. This work characterizes sources of non-linearity and presents a systematic approach to accurate measurements of contrast agent concentration in both blood and myocardium. A dual sequence approach with separate pulse sequences for AIF and myocardial tissue allowed separate optimization of parameters for blood and myocardium. A systems approach to the overall design was taken to achieve linearity between signal and contrast agent concentration. Conversion of signal intensity values to contrast agent concentration was achieved through a combination of surface coil sensitivity correction, Bloch simulation based look-up table correction, and in the case of the AIF measurement, correction of T2* losses. Validation of signal correction was performed in phantoms, and values for peak AIF concentration and myocardial flow are provided for 29 normal subjects for rest and adenosine stress. For phantoms, the measured fits were within 5% for both AIF and myocardium. In healthy volunteers the peak [Gd] was 3.5 ± 1.2 for stress and 4.4 ± 1.2 mmol/L for rest. The T2* in the left ventricle blood pool at peak AIF was approximately 10 ms. The peak-to-valley ratio was 5.6 for the raw signal intensities without correction, and was 8.3 for the look-up-table (LUT) corrected AIF which represents approximately 48% correction. Without T2* correction the myocardial blood flow estimates are overestimated by approximately 10%. The signal-to-noise ratio of the myocardial signal at peak enhancement (1.5 T) was 17.7 ± 6.6 at stress and the peak [Gd] was 0.49 ± 0.15 mmol/L. The estimated perfusion flow was 3.9 ± 0.38 and 1.03 ± 0.19 ml/min/g using the BTEX model and 3.4 ± 0.39 and 0.95 ± 0.16 using a Fermi model, for stress and rest, respectively. A dual sequence for myocardial perfusion cardiovascular magnetic resonance and AIF measurement has been optimized for quantification of myocardial blood flow. A validation in phantoms was performed to confirm that the signal conversion to gadolinium concentration was linear. The proposed sequence was integrated with a fully automatic in-line solution for pixel-wise mapping of myocardial blood flow and evaluated in adenosine stress and rest studies on N = 29 normal healthy subjects. Reliable perfusion mapping was demonstrated and produced estimates with low variability.
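
    The look-up-table step of the signal-to-concentration conversion can be sketched as a simple 1-D inversion of a monotonic signal-versus-[Gd] curve. The saturating curve below is an arbitrary stand-in for the Bloch-simulation-derived table described in the paper, and the surface-coil and T2* corrections are omitted.

      import numpy as np

      # Hypothetical LUT: signal saturates as [Gd] increases (arbitrary units).
      gd_grid = np.linspace(0.0, 6.0, 601)             # mmol/L
      signal_lut = 1.0 - np.exp(-0.8 * gd_grid)        # monotonic, saturating response

      def signal_to_concentration(signal, signal_lut, gd_grid):
          # The LUT is monotonic, so inversion is a 1-D interpolation of [Gd]
          # as a function of signal (np.interp requires increasing x values).
          return np.interp(signal, signal_lut, gd_grid)

      measured = np.array([0.20, 0.55, 0.85])
      print("estimated [Gd] (mmol/L):", signal_to_concentration(measured, signal_lut, gd_grid))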

  4. Mass spectrometry–based relative quantification of proteins in precatalytic and catalytically active spliceosomes by metabolic labeling (SILAC), chemical labeling (iTRAQ), and label-free spectral count

    PubMed Central

    Schmidt, Carla; Grønborg, Mads; Deckert, Jochen; Bessonov, Sergey; Conrad, Thomas; Lührmann, Reinhard; Urlaub, Henning

    2014-01-01

    The spliceosome undergoes major changes in protein and RNA composition during pre-mRNA splicing. Knowing the proteins—and their respective quantities—at each spliceosomal assembly stage is critical for understanding the molecular mechanisms and regulation of splicing. Here, we applied three independent mass spectrometry (MS)–based approaches for quantification of these proteins: (1) metabolic labeling by SILAC, (2) chemical labeling by iTRAQ, and (3) label-free spectral count for quantification of the protein composition of the human spliceosomal precatalytic B and catalytic C complexes. In total we were able to quantify 157 proteins by at least two of the three approaches. Our quantification shows that only a very small subset of spliceosomal proteins (the U5 and U2 Sm proteins, a subset of U5 snRNP-specific proteins, and the U2 snRNP-specific proteins U2A′ and U2B′′) remains unaltered upon transition from the B to the C complex. The MS-based quantification approaches classify the majority of proteins as dynamically associated specifically with the B or the C complex. In terms of experimental procedure and the methodical aspect of this work, we show that metabolically labeled spliceosomes are functionally active in terms of their assembly and splicing kinetics and can be utilized for quantitative studies. Moreover, we obtain consistent quantification results from all three methods, including the relatively straightforward and inexpensive label-free spectral count technique. PMID:24448447

  5. Novel Methods of Automated Quantification of Gap Junction Distribution and Interstitial Collagen Quantity from Animal and Human Atrial Tissue Sections

    PubMed Central

    Yan, Jiajie; Thomson, Justin K.; Wu, Xiaomin; Zhao, Weiwei; Pollard, Andrew E.; Ai, Xun

    2014-01-01

    Background Gap junctions (GJs) are the principal membrane structures that conduct electrical impulses between cardiac myocytes while interstitial collagen (IC) can physically separate adjacent myocytes and limit cell-cell communication. Emerging evidence suggests that both GJ and interstitial structural remodeling are linked to cardiac arrhythmia development. However, automated quantitative identification of GJ distribution and IC deposition from microscopic histological images has proven to be challenging. Such quantification is required to improve the understanding of functional consequences of GJ and structural remodeling in cardiac electrophysiology studies. Methods and Results Separate approaches were employed for GJ and IC identification in images from histologically stained tissue sections obtained from rabbit and human atria. For GJ identification, we recognized N-Cadherin (N-Cad) as part of the gap junction connexin 43 (Cx43) molecular complex. Because N-Cad anchors Cx43 on intercalated discs (ID) to form functional GJ channels on cell membranes, we computationally dilated N-Cad pixels to create N-Cad units that covered all ID-associated Cx43 pixels on Cx43/N-Cad double immunostained confocal images. This approach allowed segmentation between ID-associated and non-ID-associated Cx43. Additionally, use of N-Cad as a unique internal reference with Z-stack layer-by-layer confocal images potentially limits sample processing related artifacts in Cx43 quantification. For IC quantification, color map thresholding of Masson's Trichrome blue stained sections allowed straightforward and automated segmentation of collagen from non-collagen pixels. Our results strongly demonstrate that the two novel image-processing approaches can minimize potential overestimation or underestimation of gap junction and structural remodeling in healthy and pathological hearts. The results of using the two novel methods will significantly improve our understanding of the molecular and structural remodeling associated functional changes in cardiac arrhythmia development in aged and diseased hearts. PMID:25105669
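
    The color-map-thresholding idea for the collagen quantification can be sketched as a per-pixel rule on a Masson's trichrome RGB image: classify pixels whose blue channel dominates red as collagen and report the area fraction over tissue pixels. The synthetic image and threshold values below are illustrative, not the thresholds used in the paper.

      import numpy as np

      def collagen_fraction(rgb, blue_over_red=1.2, tissue_min_intensity=30):
          rgb = rgb.astype(float)
          r, b = rgb[..., 0], rgb[..., 2]
          tissue = rgb.max(axis=-1) > tissue_min_intensity      # ignore dark/empty background
          collagen = tissue & (b > blue_over_red * r)            # blue-dominant pixels
          return collagen.sum() / max(tissue.sum(), 1)

      # Tiny synthetic "image": left half reddish myocytes, right half bluish collagen.
      img = np.zeros((4, 8, 3), dtype=np.uint8)
      img[:, :4] = (180, 60, 90)       # reddish
      img[:, 4:] = (70, 90, 200)       # bluish
      print("collagen area fraction:", collagen_fraction(img))   # expected 0.5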

  6. Smart mobility solution with multiple input-output interface.

    PubMed

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for giving input, a lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control synced with facial expressions in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that decent accuracy is obtained for the overall system.

  7. Synaptology of physiologically identified ganglion cells in the cat retina: a comparison of retinal X- and Y-cells.

    PubMed

    Weber, A J; Stanford, L R

    1994-05-15

    It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.

  8. Reshaping Plant Biology: Qualitative and Quantitative Descriptors for Plant Morphology

    PubMed Central

    Balduzzi, Mathilde; Binder, Brad M.; Bucksch, Alexander; Chang, Cynthia; Hong, Lilan; Iyer-Pascuzzi, Anjali S.; Pradal, Christophe; Sparks, Erin E.

    2017-01-01

    An emerging challenge in plant biology is to develop qualitative and quantitative measures to describe the appearance of plants through the integration of mathematics and biology. A major hurdle in developing these metrics is finding common terminology across fields. In this review, we define approaches for analyzing plant geometry, topology, and shape, and provide examples for how these terms have been and can be applied to plants. In leaf morphological quantifications both geometry and shape have been used to gain insight into leaf function and evolution. For the analysis of cell growth and expansion, we highlight the utility of geometric descriptors for understanding sepal and hypocotyl development. For branched structures, we describe how topology has been applied to quantify root system architecture to lend insight into root function. Lastly, we discuss the importance of using morphological descriptors in ecology to assess how communities interact, function, and respond within different environments. This review aims to provide a basic description of the mathematical principles underlying morphological quantifications. PMID:28217137

  9. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions

    PubMed Central

    Fee, Michale S.

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current “time” in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources. PMID:22754501

  11. Altered functional connectivity of the amygdaloid input nuclei in adolescents and young adults with autism spectrum disorder: a resting state fMRI study.

    PubMed

    Rausch, Annika; Zhang, Wei; Haak, Koen V; Mennes, Maarten; Hermans, Erno J; van Oort, Erik; van Wingen, Guido; Beckmann, Christian F; Buitelaar, Jan K; Groen, Wouter B

    2016-01-01

    Amygdala dysfunction is hypothesized to underlie the social deficits observed in autism spectrum disorders (ASD). However, the neurobiological basis of this hypothesis is underspecified because it is unknown whether ASD relates to abnormalities of the amygdaloid input or output nuclei. Here, we investigated the functional connectivity of the amygdaloid social-perceptual input nuclei and emotion-regulation output nuclei in ASD versus controls. We collected resting state functional magnetic resonance imaging (fMRI) data, tailored to provide optimal sensitivity in the amygdala as well as the neocortex, in 20 adolescents and young adults with ASD and 25 matched controls. We performed a regular correlation analysis between the entire amygdala (EA) and the whole brain and used a partial correlation analysis to investigate whole-brain functional connectivity uniquely related to each of the amygdaloid subregions. Between-group comparison of regular EA correlations showed significantly reduced connectivity in visuospatial and superior parietal areas in ASD compared to controls. Partial correlation analysis revealed that this effect was driven by the left superficial and right laterobasal input subregions, but not the centromedial output nuclei. These results indicate reduced connectivity of specifically the amygdaloid sensory input channels in ASD, suggesting that abnormal amygdalo-cortical connectivity can be traced down to the socio-perceptual pathways.

  12. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
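    The final step described above, combining the position and correlation contributions through an uncertainty propagation equation, follows standard first-order (Taylor series) propagation. The sketch below shows such a combination for a single velocity component; the sensitivity coefficients are placeholders, not the ones derived in the cited framework.

```python
# First-order uncertainty propagation sketch: combine particle-position and
# cross-correlation uncertainty contributions in quadrature. The sensitivity
# coefficients (dudx, dudc) are placeholders, not those of the cited framework.
import numpy as np

def combined_uncertainty(sigma_position, sigma_correlation, dudx=1.0, dudc=1.0):
    """Standard uncertainty of a velocity component u = f(particle position, correlation peak)."""
    return np.sqrt((dudx * sigma_position) ** 2 + (dudc * sigma_correlation) ** 2)

# Example: combined_uncertainty(0.05, 0.08) ~= 0.094 (same units as the inputs)
```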

  13. Modeling qRT-PCR dynamics with application to cancer biomarker quantification.

    PubMed

    Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A

    2017-01-01

    Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.

  14. Factorizing the motion sensitivity function into equivalent input noise and calculation efficiency.

    PubMed

    Allard, Rémy; Arleo, Angelo

    2017-01-01

    The photopic motion sensitivity function of the energy-based motion system is band-pass peaking around 8 Hz. Using an external noise paradigm to factorize the sensitivity into equivalent input noise and calculation efficiency, the present study investigated if the variation in photopic motion sensitivity as a function of the temporal frequency is due to a variation of equivalent input noise (e.g., early temporal filtering) or calculation efficiency (ability to select and integrate motion). For various temporal frequencies, contrast thresholds for a direction discrimination task were measured in presence and absence of noise. Up to 15 Hz, the sensitivity variation was mainly due to a variation of equivalent input noise and little variation in calculation efficiency was observed. The sensitivity fall-off at very high temporal frequencies (from 15 to 30 Hz) was due to a combination of a drop of calculation efficiency and a rise of equivalent input noise. A control experiment in which an artificial temporal integration was applied to the stimulus showed that an early temporal filter (generally assumed to affect equivalent input noise, not calculation efficiency) could impair both the calculation efficiency and equivalent input noise at very high temporal frequencies. We conclude that at the photopic luminance intensity tested, the variation of motion sensitivity as a function of the temporal frequency was mainly due to early temporal filtering, not to the ability to select and integrate motion. More specifically, we conclude that photopic motion sensitivity at high temporal frequencies is limited by internal noise occurring after the transduction process (i.e., neural noise), not by quantal noise resulting from the probabilistic absorption of photons by the photoreceptors as previously suggested.

  15. Quantification of functional genes from procaryotes in soil by PCR.

    PubMed

    Sharma, Shilpi; Radl, Viviane; Hai, Brigitte; Kloos, Karin; Fuka, Mirna Mrkonjic; Engel, Marion; Schauss, Kristina; Schloter, Michael

    2007-03-01

    Controlling turnover processes and fluxes in soils and other environments requires information about the gene pool and the possibilities for its in situ induction. Therefore, in recent years there has been growing interest in genes and transcripts coding for metabolic enzymes. Besides questions addressing redundancy and diversity, increasing attention is being given to the abundance of specific DNA and mRNA in different habitats. This review describes several PCR techniques that are suitable for the quantification of functional genes and transcripts, such as MPN-PCR, competitive PCR and real-time PCR. The advantages and disadvantages of these methods are discussed. In addition, the problems of quantitative nucleic acid extraction and of substances that inhibit the polymerase are described. Finally, some examples from recent papers are given to demonstrate the applicability and usefulness of the different approaches.

  16. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

    Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical genome information. The program provides an "SNP-ePCR" function to generate the full sequence from primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP FASTA sequence. The "SNP search" and "SNP fasta" functions accept SNP information within a cytogenetic band, a contig position, or a keyword as input. Finally, the SNP ID neighboring environment for each input is visualized in contig-position order and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  17. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

    The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron’s orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events supporting a prominent role for functional clustering of synaptic inputs in dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  18. MetaQuant: a tool for the automatic quantification of GC/MS-based metabolome data.

    PubMed

    Bunk, Boyke; Kucklick, Martin; Jonas, Rochus; Münch, Richard; Schobert, Max; Jahn, Dieter; Hiller, Karsten

    2006-12-01

    MetaQuant is a Java-based program for the automatic and accurate quantification of GC/MS-based metabolome data. In contrast to other programs, MetaQuant is able to quantify hundreds of substances simultaneously with minimal manual intervention. An integrated self-acting calibration function allows fast, parallel calibration for several metabolites. Finally, MetaQuant is able to import GC/MS data in the common NetCDF format and to export the results of the quantification into Systems Biology Markup Language (SBML), Comma Separated Values (CSV) or Microsoft Excel (XLS) format. MetaQuant is written in Java and is available under an open source license. Precompiled packages for installation on Windows or Linux operating systems are freely available for download. The source code as well as the installation packages are available at http://bioinformatics.org/metaquant

  19. Group refractive index quantification using a Fourier domain short coherence Sagnac interferometer.

    PubMed

    Montonen, Risto; Kassamakov, Ivan; Lehmann, Peter; Österberg, Kenneth; Hæggström, Edward

    2018-02-15

    The group refractive index is important in length calibration of Fourier domain interferometers by transparent transfer standards. We demonstrate accurate group refractive index quantification using a Fourier domain short coherence Sagnac interferometer. Because of a justified linear length calibration function, the calibration constants cancel out in the evaluation of the group refractive index, which is then obtained accurately from two uncalibrated lengths. Measurements of two standard thickness coverslips revealed group indices of 1.5426±0.0042 and 1.5434±0.0046, with accuracies quoted at the 95% confidence level. This agreed with the dispersion data of the coverslip manufacturer and therefore validates our method. Our method provides a sample specific and accurate group refractive index quantification using the same Fourier domain interferometer that is to be calibrated for the length. This reduces significantly the requirements of the calibration transfer standard.

  20. Quantification of cardiolipin by liquid chromatography-electrospray ionization mass spectrometry.

    PubMed

    Garrett, Teresa A; Kordestani, Reza; Raetz, Christian R H

    2007-01-01

    Cardiolipin (CL), a tetra-acylated glycerophospholipid composed of two phosphatidyl moieties linked by a bridging glycerol, plays an important role in mitochondrial function in eukaryotic cells. Alterations to the content and acylation state of CL cause mitochondrial dysfunction and may be associated with pathologies such as ischemia, hypothyroidism, aging, and heart failure. The structure of CL is very complex because of microheterogeneity among its four acyl chains. Here we have developed a method for the quantification of CL molecular species by liquid chromatography-electrospray ionization mass spectrometry. We quantify the [M-2H](2-) ion of a CL of a given molecular formula and identify the CLs by their total number of carbons and unsaturations in the acyl chains. This method, developed using mouse macrophage RAW 264.7 tumor cells, is broadly applicable to other cell lines, tissues, bacteria and yeast. Furthermore, this method could be used for the quantification of lyso-CLs and bis-lyso-CLs.

  1. Fast dictionary generation and searching for magnetic resonance fingerprinting.

    PubMed

    Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang

    2017-07-01

    A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan, but it has a multiplicative computational complexity, resulting in a heavy burden of dictionary generation, storage, and retrieval that can easily become intractable for state-of-the-art computers. Based on retrospective analysis of the dictionary-matching objective function, a multi-scale, ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF-ZOOM is quasi-parameter-separable, so the multiplicative computational complexity is reduced to an additive one. Evaluations showed that MRF-ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF-ZOOM provides a super-fast solution for MR parameter quantification.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL functions.
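    The core of the ZMNL idea is a monotone, memoryless mapping that sends a Gaussian sample to a sample with the desired marginal distribution, x -> F_target^{-1}(Phi(x)). A minimal sketch of that mapping is shown below; the target distribution and the record length are illustrative assumptions, and the additional adjustment that makes the transform approximately spectrum-preserving is not reproduced.

```python
# Sketch of a zero-memory nonlinear (ZMNL) transform: map a standard normal
# time history to a specified marginal distribution, x -> F_target^{-1}(Phi(x)).
# This illustrates the idea only; the cited procedure additionally adjusts the
# transform so the auto-spectral density is approximately preserved.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian_history = rng.standard_normal(10_000)              # stand-in for a generated record

u = stats.norm.cdf(gaussian_history)                        # uniform(0, 1) samples
transformed_history = stats.uniform(loc=-1, scale=2).ppf(u) # example non-Gaussian target marginal

# The ranks (and hence much of the temporal structure) of the original record
# are preserved because the mapping is monotone and memoryless.
```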

  3. Universal Approximation by Using the Correntropy Objective Function.

    PubMed

    Nayyeri, Mojtaba; Sadoghi Yazdi, Hadi; Maskooki, Alaleh; Rouhani, Modjtaba

    2017-10-16

    Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space. Thus, the convergence is guaranteed. The performance of our method was compared with that of eight different objective functions, as well as with an existing one hidden layer feedforward network on several real regression data sets with and without impulsive noise. The experimental results indicate the benefits of using a correntropy measure in reducing the root mean square error and increasing the robustness to noise.
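    As a rough illustration of the objective in question, the sketch below evaluates a correntropy-style measure with a sigmoid (tanh) kernel between desired outputs and network outputs. The kernel parameters and the pairing of arguments are assumptions made for illustration; the cited brief's exact formulation and its update rule for the new node's input parameters are not reproduced.

```python
# Sketch of a correntropy-style objective with a sigmoid (tanh) kernel,
# V = mean(tanh(a * y_i * yhat_i + b)); parameter values are illustrative.
import numpy as np

def sigmoid_kernel_correntropy(y, y_hat, a=1.0, b=0.0):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean(np.tanh(a * y * y_hat + b))  # maximized w.r.t. the node's parameters

# Unlike a squared-error loss, the bounded kernel limits the influence of
# outliers, which is the usual motivation for correntropy-based objectives.
```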

  4. Three-input majority function as the unique optimal function for the bias amplification using nonlocal boxes

    NASA Astrophysics Data System (ADS)

    Mori, Ryuhei

    2016-11-01

    Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3+√6)/6 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2+√2)/4 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for the trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious if another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3+√6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to be nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.
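    For context, the bias-amplification map of the three-input majority function itself is easy to state: if each input bit is correct independently with probability p, the majority is correct with probability 3p² - 2p³. The sketch below iterates that map and prints the quoted threshold; it illustrates the amplification step only, not the full nonlocal-box protocol analysed in the paper.

```python
# Bias amplification by the three-input majority function: if each input bit is
# correct independently with probability p, the majority is correct with
# probability M(p) = 3p^2 - 2p^3. Iterating M drives p toward 1 whenever p
# exceeds the unstable fixed point 1/2. The CHSH threshold quoted in the paper,
# (3 + sqrt(6)) / 6, concerns the full protocol, not this map alone.
from math import sqrt

def majority_amplification(p, rounds=5):
    for _ in range(rounds):
        p = 3 * p**2 - 2 * p**3
    return p

print((3 + sqrt(6)) / 6)            # ~0.9082
print(majority_amplification(0.6))  # bias amplified toward 1
```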

  5. PyXRD v0.6.7: a free and open-source program to quantify disordered phyllosilicates using multi-specimen X-ray diffraction profile fitting

    NASA Astrophysics Data System (ADS)

    Dumon, M.; Van Ranst, E.

    2016-01-01

    This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.

  6. An Integrated Modeling System for Estimating Glacier and Snow Melt Driven Streamflow from Remote Sensing and Earth System Data Products in the Himalayas

    NASA Technical Reports Server (NTRS)

    Brown, M. E.; Racoviteanu, A. E.; Tarboton, D. G.; Sen Gupta, A.; Nigro, J.; Policelli, F.; Habib, S.; Tokay, M.; Shrestha, M. S.; Bajracharya, S.

    2014-01-01

    Quantification of the contribution of the hydrologic components (snow, ice and rain) to river discharge in the Hindu Kush Himalayan (HKH) region is important for decision-making in water sensitive sectors, and for water resources management and flood risk reduction. In this area, monitoring of the glaciers and their melt outflow is challenging because of difficult access, so modeling based on remote sensing offers the potential for providing information to improve water resources management and decision making. This paper describes an integrated modeling system developed using downscaled NASA satellite based and earth system data products coupled with in-situ hydrologic data to assess the contribution of snow and glaciers to the flows of the rivers in the HKH region. Snow and glacier melt was estimated using the Utah Energy Balance (UEB) model, further enhanced to accommodate glacier ice melt over clean and debris-covered tongues; the meltwater was then input into the USGS Geospatial Stream Flow Model (GeoSFM). The two model components were integrated into the Better Assessment Science Integrating point and Nonpoint Sources modeling framework (BASINS) as a user-friendly open source system and made available to countries in high Asia. Here we present a case study from the Langtang Khola watershed in the monsoon-influenced Nepal Himalaya, used to validate our energy balance approach and to test the applicability of our modeling system. The snow and glacier melt model predicts that for the eight years used for model evaluation (October 2003-September 2010), the total surface water input over the basin was 9.43 m, originating as 62% from glacier melt, 30% from snowmelt and 8% from rainfall. Measured streamflow for those years was 5.02 m, reflecting a runoff coefficient of 0.53. GeoSFM-simulated streamflow was 5.31 m, indicating reasonable correspondence between measured and modeled streamflow and confirming the capability of the integrated system to provide a quantification of water availability.

  7. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
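    To make the finite-control-set step concrete, the sketch below predicts the converter state for each admissible switch position, evaluates a weighted cost, and applies the minimizer. The one-step Euler prediction model, the component values, and the way the Takagi-Sugeno rules would adjust the weights are simplified assumptions for illustration, not the design in the cited paper.

```python
# Minimal finite-control-set MPC step for a boost converter: predict the next
# state for each admissible switch position, evaluate a weighted cost, and pick
# the minimizer. The prediction model and the fuzzy weight update are
# simplified placeholders, not the design in the cited paper.
def predict(state, u, dt=1e-5):
    # placeholder one-step Euler prediction of (inductor current, output voltage)
    i_L, v_out = state
    v_in, L, C, R = 12.0, 1e-3, 470e-6, 10.0
    di = (v_in - (1 - u) * v_out) / L
    dv = ((1 - u) * i_L - v_out / R) / C
    return (i_L + dt * di, v_out + dt * dv)

def fcs_mpc_step(state, v_ref, i_max, w_v, w_i):
    costs = {}
    for u in (0, 1):                                # the two switch states
        i_L, v_out = predict(state, u)
        costs[u] = w_v * (v_ref - v_out) ** 2 + w_i * max(0.0, i_L - i_max) ** 2
    return min(costs, key=costs.get)

# w_v and w_i would be updated each sample by Takagi-Sugeno fuzzy rules that
# increase the weight on whichever variable currently has excessive magnitude.
u_next = fcs_mpc_step(state=(2.0, 20.0), v_ref=24.0, i_max=8.0, w_v=1.0, w_i=0.5)
```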

  8. Rail-to-rail differential input amplification stage with main and surrogate differential pairs

    DOEpatents

    Britton, Jr., Charles Lanier; Smith, Stephen Fulton

    2007-03-06

    An operational amplifier input stage provides a symmetrical rail-to-rail input common-mode voltage without turning off either pair of complementary differential input transistors. Secondary, or surrogate, transistor pairs assume the function of the complementary differential transistors. The circuit also maintains essentially constant transconductance, constant slew rate, and constant signal-path supply current as it provides rail-to-rail operation.

  9. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  10. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  11. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    PubMed

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify expression of fluorescent signal of biomarkers in each nucleus and cytoplasm of lens epithelial cells in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was thereafter optimized by comparison with visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. Time consumed by the automatic algorithm and by visual classification of cells was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of expression of fluorescent signal with an accuracy comparable with the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
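    The delineation step described above (thresholding followed by watershed segmentation) can be sketched in a few lines. The example below uses Python/scikit-image rather than the Matlab implementation of the cited study, and the smoothing sigma, minimum peak distance and choice of a nuclear-stain channel are illustrative assumptions.

```python
# Minimal sketch of nucleus delineation by thresholding + marker-based
# watershed (Python/scikit-image analogue of the Matlab approach described
# in the paper; parameter values are illustrative).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_nuclei(nuclear_channel, min_distance=5):
    smoothed = gaussian(nuclear_channel, sigma=2)
    mask = smoothed > threshold_otsu(smoothed)           # foreground nuclei
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask.astype(int))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)      # labelled nuclei

# labels = segment_nuclei(image[..., 2])  # e.g. a nuclear-stain channel
```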

  12. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper resolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  13. Plastics in soil: Analytical methods and possible sources.

    PubMed

    Bläsing, Melanie; Amelung, Wulf

    2018-01-15

    At least 300 million tonnes of plastic are produced annually, large parts of which end up in the environment, where the plastic persists over decades, harms biota and enters the food chain. Yet, almost nothing is known about plastic pollution of soil; hence, the aims of this work are to review current knowledge on i) available methods for the quantification and identification of plastic in soil, ii) the quantity and possible input pathways of plastic into soil (including a first preliminary screening of plastic in compost), and iii) its fate in soil. Methods for plastic analyses in sediments can potentially be adjusted for application to soil; yet, the applicability of these methods for soil needs to be tested. Consequently, the current data base on soil pollution with plastic is still poor. Soils may receive plastic inputs via plastic mulching or the application of plastic-containing soil amendments. In compost, up to 2.38-1200 mg plastic kg−1 has been found so far; the plastic concentration of sewage sludge varies between 1000 and 24,000 plastic items kg−1. Also irrigation with untreated and treated wastewater (1000-627,000 and 0-125,000 plastic items m−3, respectively) as well as flooding with lake water (0.82-4.42 plastic items m−3) or river water (0-13,751 items km−2) can provide major input pathways for plastic into soil. Additional sources comprise littering along roads and trails, illegal waste dumping, road runoff as well as atmospheric input. With these input pathways, plastic concentrations in soil might reach the per mill range of soil organic carbon. Most plastic (especially >1 μm) will presumably be retained in soil, where it persists for decades or longer. Accordingly, further research on the prevalence and fate of such synthetic polymers in soils is urgently warranted. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. The biological function of consciousness

    PubMed Central

    Earl, Brian

    2014-01-01

    This research is an investigation of whether consciousness—one's ongoing experience—influences one's behavior and, if so, how. Analysis of the components, structure, properties, and temporal sequences of consciousness has established that, (1) contrary to one's intuitive understanding, consciousness does not have an active, executive role in determining behavior; (2) consciousness does have a biological function; and (3) consciousness is solely information in various forms. Consciousness is associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. The FRM generates responses by manipulating information and, to function effectively, its data input must be restricted to task-relevant information. The properties of consciousness correspond to the various input requirements of the FRM; and when important information is missing from consciousness, functions of the FRM are adversely affected; both of which indicate that consciousness is the input data to the FRM. Qualitative and quantitative information (shape, size, location, etc.) are incorporated into the input data by a qualia array of colors, sounds, and so on, which makes the input conscious. This view of the biological function of consciousness provides an explanation why we have experiences; why we have emotional and other feelings, and why their loss is associated with poor decision-making; why blindsight patients do not spontaneously initiate responses to events in their blind field; why counter-habitual actions are only possible when the intended action is in mind; and the reason for inattentional blindness. PMID:25140159

  15. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
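    The reconstruction described above amounts to deconvolving the observed image with a model of the speckle transfer function (STF). The sketch below shows a generic, Wiener-like frequency-domain deconvolution step under that reading; the STF model itself, the AO-derived input parameters, and the study's exact reconstruction procedure are not reproduced.

```python
# Generic frequency-domain deconvolution sketch: divide the observed image
# spectrum by a model speckle transfer function (STF) with a small
# regularisation term. The STF model and its AO-derived parameters are
# assumed to be provided; they are not reproduced here.
import numpy as np

def deconvolve(observed, stf, eps=1e-3):
    """observed: 2-D image; stf: transfer function sampled on the same FFT grid."""
    obs_ft = np.fft.fft2(observed)
    restored_ft = obs_ft * np.conj(stf) / (np.abs(stf) ** 2 + eps)  # Wiener-like division
    return np.real(np.fft.ifft2(restored_ft))

# Photometric precision can then be assessed by comparing intensities of the
# restored image with the original (simulated) source image.
```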

  16. Quantification of the effect of cross-shear and applied nominal contact pressure on the wear of moderately cross-linked polyethylene.

    PubMed

    Abdelgaied, Abdellatif; Brockett, Claire L; Liu, Feng; Jennings, Louise M; Fisher, John; Jin, Zhongmin

    2013-01-01

    Polyethylene wear is a great concern in total joint replacement. It is now considered a major limiting factor to the long life of such prostheses. Cross-linking has been introduced to reduce the wear of ultra-high-molecular-weight polyethylene (UHMWPE). Computational models have been used extensively for wear prediction and optimization of artificial knee designs. However, in order to be independent and have general applicability and predictability, computational wear models should be based on inputs from independent experimentally determined wear parameters (wear factors or wear coefficients). The objective of this study was to investigate moderately cross-linked UHMWPE, using a multidirectional pin-on-plate wear test machine, under a wide range of applied nominal contact pressure (from 1 to 11 MPa) and under five different kinematic inputs, varying from a purely linear track to a maximum rotation of +/- 55 degrees. A computational model, based on a direct simulation of the multidirectional pin-on-plate wear tester, was developed to quantify the degree of cross-shear (CS) of the polyethylene pins articulating against the metallic plates. The moderately cross-linked UHMWPE showed wear factors less than half of those reported in the literature for conventional UHMWPE under the same loading and kinematic inputs. In addition, under high applied nominal contact stress, the moderately cross-linked UHMWPE wear showed lower dependence on the degree of CS compared to that under low applied nominal contact stress. The calculated wear coefficients were found to be independent of the applied nominal contact stress, in contrast to the wear factors, which were shown to be highly pressure dependent. This study provided independent wear data as inputs into computational models for moderately cross-linked polyethylene and supported the application of wear coefficient-based computational wear models.

  17. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  18. Application of Targeted Mass Spectrometry for the Quantification of Sirtuins in the Central Nervous System

    NASA Astrophysics Data System (ADS)

    Jayasena, T.; Poljak, A.; Braidy, N.; Zhong, L.; Rowlands, B.; Muenchhoff, J.; Grant, R.; Smythe, G.; Teo, C.; Raftery, M.; Sachdev, P.

    2016-10-01

    Sirtuin proteins have a variety of intracellular targets, thereby regulating multiple biological pathways including neurodegeneration. However, relatively little is currently known about the role or expression of the 7 mammalian sirtuins in the central nervous system. Western blotting, PCR and ELISA are the main techniques currently used to measure sirtuin levels. To achieve sufficient sensitivity and selectivity in a multiplex-format, a targeted mass spectrometric assay was developed and validated for the quantification of all seven mammalian sirtuins (SIRT1-7). Quantification of all peptides was by multiple reaction monitoring (MRM) using three mass transitions per protein-specific peptide, two specific peptides for each sirtuin and a stable isotope labelled internal standard. The assay was applied to a variety of samples including cultured brain cells, mammalian brain tissue, CSF and plasma. All sirtuin peptides were detected in the human brain, with SIRT2 being the most abundant. Sirtuins were also detected in human CSF and plasma, and guinea pig and mouse tissues. In conclusion, we have successfully applied MRM mass spectrometry for the detection and quantification of sirtuin proteins in the central nervous system, paving the way for more quantitative and functional studies.

  19. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
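    The edge-detection approach to fiber alignment can be illustrated with standard gradient filters: per-pixel edge orientations are computed from Sobel gradients and summarised in a magnitude-weighted angle histogram. The sketch below is a Python analogue under that reading; the cited study's actual MATLAB implementation, filters and alignment statistic are not reproduced.

```python
# Sketch of gradient-based fiber-orientation measurement (Python analogue of a
# MATLAB edge-detection approach): Sobel gradients give a per-pixel edge angle,
# and a magnitude-weighted angle histogram summarises alignment. The cited
# study's exact filters and alignment statistic are not reproduced.
import numpy as np
from scipy import ndimage as ndi

def orientation_histogram(image, bins=36):
    gx = ndi.sobel(image.astype(float), axis=1)
    gy = ndi.sobel(image.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0       # edge angle folded to 0-180 degrees
    hist, edges = np.histogram(angle, bins=bins, range=(0, 180), weights=magnitude)
    return hist / hist.sum(), edges

# A narrow, peaked histogram indicates aligned fibers; a flat one, random fibers.
```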

  20. The relative degree enhancement problem for MIMO nonlinear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenwald, D.A.; Oezguener, Ue.

    1995-07-01

    The authors present a result for linearizing a nonlinear MIMO system by employing partial feedback - feedback at all but one input-output channel such that the SISO feedback linearization problem is solvable at the remaining input-output channel. The partial feedback effectively enhances the relative degree at the open input-output channel provided the feedback functions are chosen to satisfy relative degree requirements. The method is useful for nonlinear systems that are not feedback linearizable in a MIMO sense. Several examples are presented to show how these feedback functions can be computed. This strategy can be combined with decentralized observers for a completely decentralized feedback linearization result for at least one input-output channel.

  1. Production Function Geometry with "Knightian" Total Product

    ERIC Educational Resources Information Center

    Truett, Dale B.; Truett, Lila J.

    2007-01-01

    Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…

  2. AESOP: An interactive computer program for the design of linear quadratic regulators and Kalman filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L. C.

    1984-01-01

    AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
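    The central design step that AESOP automates, obtaining a linear quadratic regulator gain from user-supplied weighting matrices, can be sketched with modern tooling as below. This is not AESOP itself (which is the FORTRAN/TSS program described above); the system matrices and weights are arbitrary placeholders.

```python
# Linear-quadratic-regulator design step sketched with SciPy (the matrices
# below are arbitrary placeholders, not AESOP inputs).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # plant dynamics x' = Ax + Bu
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                    # state weighting (design parameter)
R = np.array([[0.1]])                       # control weighting (design parameter)

P = solve_continuous_are(A, B, Q, R)        # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)             # optimal feedback gain, u = -Kx

closed_loop_poles = np.linalg.eigvals(A - B @ K)  # should all have negative real part
```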

  3. Macular pigment optical density measurements: evaluation of a device using heterochromatic flicker photometry

    PubMed Central

    de Kinkelder, R; van der Veen, R L P; Verbaak, F D; Faber, D J; van Leeuwen, T G; Berendschot, T T J M

    2011-01-01

    Purpose Accurate assessment of the amount of macular pigment (MPOD) is necessary to investigate the role of carotenoids and their assumed protective functions. High repeatability and reliability are important to monitor patients in studies investigating the influence of diet and supplements on MPOD. We evaluated the Macuscope (Macuvision Europe Ltd., Lapworth, Solihull, UK), a recently introduced device for measuring MPOD using the technique of heterochromatic flicker photometry (HFP). We determined agreement with another HFP device (QuantifEye; MPS 9000 series: Tinsley Precision Instruments Ltd., Croydon, Essex, UK) and a fundus reflectance method. Methods The right eyes of 23 healthy subjects (mean age 33.9±15.1 years) were measured. We determined agreement with QuantifEye and correlation with a fundus reflectance method. Repeatability of QuantifEye was assessed in 20 other healthy subjects (mean age 32.1±7.3 years). Repeatability was also compared with measurements by a fundus reflectance method in 10 subjects. Results We found low agreement between test and retest measurements with Macuscope. The average difference and the limits of agreement were −0.041±0.32. We found high agreement between test and retest measurements of QuantifEye (−0.02±0.18) and the fundus reflectance method (−0.04±0.18). MPOD data obtained by Macuscope and QuantifEye showed poor agreement: −0.017±0.44. For Macuscope and the fundus reflectance method, the correlation coefficient was r=0.05 (P=0.83). A significant correlation of r=0.87 (P<0.001) was found between QuantifEye and the fundus reflectance method. Conclusions Because repeatability of Macuscope measurements was low (ie, wide limits of agreement) and MPOD values correlated poorly with the fundus reflectance method, and agreed poorly with QuantifEye, the tested Macuscope protocol seems less suitable for studying MPOD. PMID:21057522

  4. A caveat regarding diatom-inferred nitrogen concentrations in oligotrophic lakes

    USGS Publications Warehouse

    Arnett, Heather A.; Saros, Jasmine E.; Mast, M. Alisa

    2012-01-01

    Atmospheric deposition of reactive nitrogen (Nr) has enriched oligotrophic lakes with nitrogen (N) in many regions of the world and elicited dramatic changes in diatom community structure. The lakewater concentrations of nitrate that cause these community changes remain unclear, raising interest in the development of diatom-based transfer functions to infer nitrate. We developed a diatom calibration set using surface sediment samples from 46 high-elevation lakes across the Rocky Mountains of the western US, a region spanning an N deposition gradient from very low to moderate levels (<1 to 3.2 kg Nr ha−1 year−1 in wet deposition). Out of the fourteen measured environmental variables for these 46 lakes, ordination analysis identified that nitrate, specific conductance, total phosphorus, and hypolimnetic water temperature were related to diatom distributions. A transfer function was developed for nitrate and applied to a sedimentary diatom profile from Heart Lake in the central Rockies. The model coefficient of determination (bootstrapping validation) of 0.61 suggested potential for diatom-inferred reconstructions of lakewater nitrate concentrations over time, but a comparison of observed versus diatom-inferred nitrate values revealed the poor performance of this model at low nitrate concentrations. Resource physiology experiments revealed that nitrogen requirements of two key taxa were opposite to nitrate optima defined in the transfer function. Our data set reveals two underlying ecological constraints that impede the development of nitrate transfer functions in oligotrophic lakes: (1) even in lakes with nitrate concentrations below quantification (<1 μg L−1), diatom assemblages were already dominated by species indicative of moderate N enrichment; (2) N-limited oligotrophic lakes switch to P limitation after receiving only modest inputs of reactive N, shifting the controls on diatom species changes along the length of the nitrate gradient. These constraints suggest that quantitative inferences of nitrate from diatom assemblages will likely require experimental approaches.
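
    The record does not state which regression model underlies the transfer function; weighted averaging (WA) with inverse deshrinking is one common choice for diatom calibration sets, and the sketch below illustrates that approach under this assumption. Array shapes and the deshrinking step are illustrative only.

        import numpy as np

        def wa_transfer_function(abundance, env):
            """Weighted-averaging (WA) transfer function sketch.

            abundance : (n_sites, n_taxa) relative abundances from the calibration set
            env       : (n_sites,) measured environmental values (e.g. lakewater nitrate)
            Returns taxon optima and a simple inverse-deshrinking inference function.
            """
            # Taxon optima: abundance-weighted mean of the environmental variable.
            optima = (abundance.T @ env) / abundance.sum(axis=0)

            # Initial inferences for the calibration sites themselves.
            inferred = (abundance @ optima) / abundance.sum(axis=1)

            # Classical inverse deshrinking: regress observed on inferred values.
            b1, b0 = np.polyfit(inferred, env, 1)

            def infer(sample_abundance):
                raw = (sample_abundance @ optima) / sample_abundance.sum()
                return b0 + b1 * raw

            return optima, infer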

  5. Ranking Hearing Aid Input-Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble

    ERIC Educational Resources Information Center

    Chung, King; Killion, Mead C.; Christensen, Laurel A.

    2007-01-01

    Purpose: To determine the rankings of 6 input-output functions for understanding low-level, conversational, and high-level speech in multitalker babble without manipulating volume control for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss. Method: Peak clipping, compression limiting,…

  6. Econometric analysis of fire suppression production functions for large wildland fires

    Treesearch

    Thomas P. Holmes; David E. Calkin

    2013-01-01

    In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...

  7. Models for forecasting energy use in the US farm sector

    NASA Astrophysics Data System (ADS)

    Christensen, L. R.

    1981-07-01

    Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand down into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.
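
    The abstract does not reproduce the three functional forms; the simplest structural cost function of this kind is commonly a log-linear (Cobb-Douglas) specification, and the hedged sketch below fits one by ordinary least squares on synthetic stand-in data rather than the actual farm-sector series.

        import numpy as np

        # Synthetic stand-in data: cost C, output Y, and three factor prices.
        rng = np.random.default_rng(0)
        n = 200
        logY = rng.normal(size=n)
        logP = rng.normal(size=(n, 3))
        logC = 1.0 + 0.8 * logY + logP @ np.array([0.3, 0.2, 0.5]) + 0.05 * rng.normal(size=n)

        # Cobb-Douglas cost function: log C = a0 + aY*log Y + sum_i ai*log p_i.
        X = np.column_stack([np.ones(n), logY, logP])
        coef, *_ = np.linalg.lstsq(X, logC, rcond=None)

        # By Shephard's lemma the price coefficients are the factor cost shares,
        # from which derived demands for each purchased input can be obtained.
        print("output elasticity:", coef[1])
        print("estimated cost shares:", coef[2:])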

  8. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    PubMed

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
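
    The sketch below is not the BATMAN R package itself; it is a minimal Python illustration of the underlying idea of fitting known metabolite peak templates to a spectrum and leaving the unexplained signal as a residual. The template peak lists and the Lorentzian line shape are assumptions made for the example.

        import numpy as np
        from scipy.optimize import nnls

        def lorentzian(ppm, center, width=0.002):
            return width**2 / ((ppm - center)**2 + width**2)

        # Hypothetical templates: each metabolite is a set of (chemical shift, relative intensity) peaks.
        templates = {
            "metab_A": [(1.32, 1.0), (4.10, 0.3)],
            "metab_B": [(3.02, 1.0), (3.93, 0.6)],
        }

        ppm = np.linspace(0.5, 5.0, 4000)
        design = np.column_stack([
            sum(h * lorentzian(ppm, c) for c, h in peaks) for peaks in templates.values()
        ])

        # Simulated spectrum = 2*A + 0.5*B plus noise; in practice this is the measured 1D spectrum.
        spectrum = design @ np.array([2.0, 0.5]) + 0.01 * np.random.default_rng(1).normal(size=ppm.size)

        # Non-negative least squares gives relative concentration estimates per template;
        # the residual corresponds to signal that BATMAN would model non-parametrically with wavelets.
        conc, _ = nnls(design, spectrum)
        residual = spectrum - design @ conc
        print(dict(zip(templates, conc.round(2))))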

  9. Uncertainty quantification for evaluating the impacts of fracture zone on pressure build-up and ground surface uplift during geological CO₂ sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Jie; Hou, Zhangshuan; Fang, Yilin

    2015-06-01

    A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO₂ injection. Numerical test cases were conducted using a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample a high-dimensional input parameter space to explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts on geomechanical response from the pre-existing faults mainly depend on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault can be considered as an impermeable block that resists fluid transport in the reservoir, which causes pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10⁻¹⁵ m² in this study, the fault can be considered as a conduit that penetrates the caprock, connecting the fluid flow between the reservoir and the upper rock.
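
    As an illustration of the quasi-Monte Carlo sampling step described above, the sketch below draws a scrambled Sobol design over a small parameter space with SciPy. The parameter names and bounds are placeholders, not the eSTOMP input ranges used in the study.

        import numpy as np
        from scipy.stats import qmc

        # Illustrative parameter bounds (log10 permeabilities in m^2, porosity, modulus in GPa);
        # the actual eSTOMP input ranges are not reproduced here.
        names    = ["log10_k_reservoir", "log10_k_fault", "porosity", "youngs_modulus_GPa"]
        l_bounds = [-14.0, -18.0, 0.05, 5.0]
        u_bounds = [-12.0, -13.0, 0.35, 50.0]

        # Scrambled Sobol sequence: a quasi-Monte Carlo design that fills the
        # high-dimensional parameter space far more evenly than random sampling.
        sampler = qmc.Sobol(d=len(names), scramble=True, seed=42)
        unit_samples = sampler.random_base2(m=8)            # 2**8 = 256 simulator runs
        samples = qmc.scale(unit_samples, l_bounds, u_bounds)

        print(samples.shape)      # (256, 4) -> one simulator input set per row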

  10. A comprehensive study of the delay vector variance method for quantification of nonlinearity in dynamical systems

    PubMed Central

    Mandic, D. P.; Ryan, K.; Basu, B.; Pakrashi, V.

    2016-01-01

    Although vibration monitoring is a popular method to monitor and assess dynamic structures, quantification of linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a Laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structural dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by similar inputs. PMID:26909175
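
    A compact sketch of the delay vector variance computation for a single time series is given below, following the generally published definition (target variance of delay-vector neighbourhoods as a function of neighbourhood size). Embedding parameters and the neighbour threshold are illustrative, and the surrogate-data comparison used to judge nonlinearity is omitted.

        import numpy as np

        def dvv(x, m=3, tau=1, n_spans=25, nd=2.0, min_neighbours=30):
            """Minimal delay vector variance (DVV) sketch for one time series."""
            x = np.asarray(x, dtype=float)
            x = (x - x.mean()) / x.std()
            n = len(x) - m * tau
            dvs = np.array([x[i:i + m * tau:tau] for i in range(n)])   # delay vectors
            targets = x[m * tau: m * tau + n]                          # sample following each vector

            # Pairwise distances between delay vectors define the neighbourhoods.
            d = np.linalg.norm(dvs[:, None, :] - dvs[None, :, :], axis=-1)
            iu = np.triu_indices(n, k=1)
            spans = np.linspace(d[iu].mean() - nd * d[iu].std(),
                                d[iu].mean() + nd * d[iu].std(), n_spans)

            sigma2 = targets.var()
            target_variance = []
            for r in spans:
                local = [targets[d[i] <= r].var()
                         for i in range(n) if np.count_nonzero(d[i] <= r) >= min_neighbours]
                target_variance.append(np.mean(local) / sigma2 if local else np.nan)
            return spans, np.array(target_variance)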

  11. High-Throughput Quantification of Bacterial-Cell Interactions Using Virtual Colony Counts

    PubMed Central

    Hoffmann, Stefanie; Walter, Steffi; Blume, Anne-Kathrin; Fuchs, Stephan; Schmidt, Christiane; Scholz, Annemarie; Gerlach, Roman G.

    2018-01-01

    The quantification of bacteria in cell culture infection models is of paramount importance for the characterization of host-pathogen interactions and pathogenicity factors involved. The standard to enumerate bacteria in these assays is plating of a dilution series on solid agar and counting of the resulting colony forming units (CFU). In contrast, the virtual colony count (VCC) method is a high-throughput compatible alternative with minimized manual input. Based on the recording of quantitative growth kinetics, VCC relates the time to reach a given absorbance threshold to the initial cell count using a series of calibration curves. Here, we adapted the VCC method using the model organism Salmonella enterica sv. Typhimurium (S. Typhimurium) in combination with established cell culture-based infection models. For HeLa infections, a direct side-by-side comparison showed a good correlation of VCC with CFU counting after plating. For MDCK cells and RAW macrophages we found that VCC reproduced the expected phenotypes of different S. Typhimurium mutants. Furthermore, we demonstrated the use of VCC to test the inhibition of Salmonella invasion by the probiotic E. coli strain Nissle 1917. Taken together, VCC provides a flexible, label-free, automation-compatible methodology to quantify bacteria in in vitro infection assays. PMID:29497603
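
    The essence of the VCC calculation is a calibration curve relating time-to-threshold to the logarithm of the initial cell count; the sketch below illustrates that relation with made-up calibration values rather than data from the study.

        import numpy as np

        # Calibration: times (h) at which cultures of known input CFU reached the
        # absorbance threshold.  Values are illustrative, not measured data.
        log10_cfu      = np.array([3, 4, 5, 6, 7], dtype=float)
        time_to_thresh = np.array([9.1, 7.8, 6.4, 5.1, 3.9])   # hours

        # Exponential growth gives an approximately linear relation between
        # time-to-threshold and log10 of the initial cell count.
        slope, intercept = np.polyfit(time_to_thresh, log10_cfu, 1)

        def virtual_colony_count(t_threshold_hours):
            """Infer the initial bacterial load from a measured time-to-threshold."""
            return 10 ** (intercept + slope * t_threshold_hours)

        print(f"{virtual_colony_count(6.0):.2e} CFU")   # sample inferred from its growth curve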

  12. Quantitative telomerase enzyme activity determination using droplet digital PCR with single cell resolution

    PubMed Central

    Ludlow, Andrew T.; Robin, Jerome D.; Sayed, Mohammed; Litterst, Claudia M.; Shelton, Dawne N.; Shay, Jerry W.; Wright, Woodring E.

    2014-01-01

    The telomere repeat amplification protocol (TRAP) for the human reverse transcriptase, telomerase, is a PCR-based assay developed two decades ago and is still used for routine determination of telomerase activity. The TRAP assay can only reproducibly detect ∼2-fold differences and is only quantitative when compared to internal standards and reference cell lines. The method generally involves laborious radioactive gel electrophoresis and is not conducive to high-throughput analyses. Recently droplet digital PCR (ddPCR) technologies have become available that allow for absolute quantification of input deoxyribonucleic acid molecules following PCR. We describe the reproducibility and provide several examples of a droplet digital TRAP (ddTRAP) assay for telomerase activity, including quantitation of telomerase activity in single cells, telomerase activity across several common telomerase positive cancer cell lines and in human primary peripheral blood mononuclear cells following mitogen stimulation. Adaptation of the TRAP assay to digital format allows accurate and reproducible quantification of the number of telomerase-extended products (i.e. telomerase activity; 57.8 ± 7.5) in a single HeLa cell. The tools developed in this study allow changes in telomerase enzyme activity to be monitored on a single cell basis and may have utility in designing novel therapeutic approaches that target telomerase. PMID:24861623

  13. ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-05-01

    Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in bright-field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest from background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to discriminating among five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments.

  14. Differential Signalling and Kinetics of Neutrophil Extracellular Trap Release Revealed by Quantitative Live Imaging.

    PubMed

    van der Linden, Maarten; Westerlaken, Geertje H A; van der Vlist, Michiel; van Montfrans, Joris; Meyaard, Linde

    2017-07-26

    A wide variety of microbial and inflammatory factors induce DNA release from neutrophils as neutrophil extracellular traps (NETs). Consensus on the kinetics and mechanism of NET release has been hindered by the lack of distinctive methods to specifically quantify NET release in time. Here, we validate and refine a semi-automatic live imaging approach for quantification of NET release. Importantly, our approach is able to correct for neutrophil input and distinguishes NET release from neutrophil death by other means, aspects that are lacking in many NET quantification methods. Real time visualization shows that opsonized S. aureus rapidly induces cell death by toxins, while actual NET formation occurs after 90 minutes, similar to the kinetics of NET release by immune complexes and PMA. Inhibition of SYK, PI3K and mTORC2 attenuates NET release upon challenge with physiological stimuli but not with PMA. In contrast, neutrophils from chronic granulomatous disease patients show decreased NET release only in response to PMA. With this refined method, we conclude that NET release in primary human neutrophils is dependent on the SYK-PI3K-mTORC2 pathway and that PMA stimulation should be regarded as mechanistically distinct from NET formation induced by natural triggers.

  15. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

    In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and a disturbance-attenuation condition, linear matrix inequality conditions are deduced which are solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input used in the linear case is adopted. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered, and it is shown that the proposed conditions correspond to the ones presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for the landing of a quadrotor aerial robot and for a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
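
    The observer synthesis rests on solving linear matrix inequalities for feasibility; the sketch below is not the paper's observer conditions but a minimal illustration of that machinery, checking a standard Lyapunov LMI for a single polytope vertex with CVXPY (an SDP-capable solver such as SCS is assumed).

        import numpy as np
        import cvxpy as cp

        # One vertex (local linear model) of a Takagi-Sugeno polytopic representation;
        # the matrix is illustrative, not taken from the paper's examples.
        A = np.array([[-1.0,  0.5],
                      [ 0.2, -2.0]])
        n = A.shape[0]
        eps = 1e-6

        # Feasibility of the Lyapunov LMI  A^T P + P A < 0,  P > 0.
        # Observer-design conditions of this kind stack analogous LMIs over all vertices.
        P = cp.Variable((n, n), symmetric=True)
        constraints = [P >> eps * np.eye(n),
                       A.T @ P + P @ A << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve(solver=cp.SCS)

        print(prob.status)        # 'optimal' => the LMI is feasible
        print(P.value)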

  16. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF) proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
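
    The sketch below is a simplified stand-in for the proposed XEF/JCE machinery: it scores candidate input lags by mutual information (a nonlinear dependence measure) and contrasts them with the cross-correlation function on a toy quadratic system. It is illustrative only and does not reproduce the authors' genetic-algorithm search.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def score_lags(u, y, max_lag=20):
            """Rank candidate lags of input u for predicting output y."""
            mi, xcf = [], []
            for lag in range(1, max_lag + 1):
                u_lagged, y_target = u[:-lag], y[lag:]
                mi.append(mutual_info_regression(u_lagged.reshape(-1, 1), y_target)[0])
                xcf.append(abs(np.corrcoef(u_lagged, y_target)[0, 1]))
            return np.array(mi), np.array(xcf)

        # Toy nonlinear system: y depends on the square of u delayed by 5 samples.
        rng = np.random.default_rng(0)
        u = rng.normal(size=2000)
        y = np.roll(u, 5) ** 2 + 0.1 * rng.normal(size=2000)

        mi, xcf = score_lags(u, y)
        print("best lag by mutual information:", np.argmax(mi) + 1)    # expected: 5
        print("best lag by cross-correlation :", np.argmax(xcf) + 1)   # linear measure may miss it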

  17. Versatile tunable current-mode universal biquadratic filter using MO-DVCCs and MOSFET-based electronic resistors.

    PubMed

    Chen, Hua-Pin

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four-input and three-output employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high-output impedance which is important for easy cascading in the current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly without using additional stages. In the operation of three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design.

  18. Versatile Tunable Current-Mode Universal Biquadratic Filter Using MO-DVCCs and MOSFET-Based Electronic Resistors

    PubMed Central

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four-input and three-output employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high-output impedance which is important for easy cascading in the current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly without using additional stages. In the operation of three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design. PMID:24982963

  19. Trigeminal, Visceral and Vestibular Inputs May Improve Cognitive Functions by Acting through the Locus Coeruleus and the Ascending Reticular Activating System: A New Hypothesis

    PubMed Central

    De Cicco, Vincenzo; Tramonti Fantozzi, Maria P.; Cataldo, Enrico; Barresi, Massimo; Bruschini, Luca; Faraguna, Ugo; Manzoni, Diego

    2018-01-01

    It is known that sensory signals sustain the background discharge of the ascending reticular activating system (ARAS) which includes the noradrenergic locus coeruleus (LC) neurons and controls the level of attention and alertness. Moreover, LC neurons influence brain metabolic activity, gene expression and brain inflammatory processes. As a consequence of the sensory control of ARAS/LC, stimulation of a sensory channel may potentially influence neuronal activity and trophic state all over the brain, supporting cognitive functions and exerting a neuroprotective action. On the other hand, an imbalance of the same input on the two sides may produce an asymmetric hemispheric excitability, leading to an impairment in cognitive functions. Among the inputs that may drive LC neurons and ARAS, those arising from the trigeminal region, from visceral organs and, possibly, from the vestibular system seem to be particularly relevant in regulating their activity. The trigeminal, visceral and vestibular control of ARAS/LC activity may explain why these input signals: (1) affect sensorimotor and cognitive functions which are not directly related to their specific informational content; and (2) are effective in relieving the symptoms of some brain pathologies, thus prompting peripheral activation of these input systems as a complementary approach for the treatment of cognitive impairments and neurodegenerative disorders. PMID:29358907

  20. Quantifying circular RNA expression from RNA-seq data using model-based framework.

    PubMed

    Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun

    2017-07-15

    Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, the cell type- and tissue-specific circRNA expression has implicated their crucial functions in many biological processes. Hence, quantification of circRNA expression from high-throughput RNA-seq data is becoming increasingly important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we proposed a novel strategy that transforms circular transcripts to pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, had impacts on quantification performance of circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, the consideration of circular transcripts in expression quantification from rRNA-depleted RNA-seq data showed substantially increased accuracy of linear transcript expression. Our proposed strategy was implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir.
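
    Sailfish-cir's exact reference construction is not reproduced here; a common way to make back-splice-junction reads mappable by a linear quantifier is to append the first read_length - 1 bases of the circular sequence to its end, as in the hypothetical sketch below.

        def to_pseudo_linear(circ_seq: str, read_length: int) -> str:
            """Turn a circular transcript sequence into a pseudo-linear reference.

            Appending the first (read_length - 1) bases to the end lets reads that
            span the back-splice junction align contiguously, so a linear quantifier
            such as Sailfish can estimate circRNA abundance.  Illustrative sketch of
            the general idea, not Sailfish-cir's exact implementation.
            """
            if len(circ_seq) == 0:
                raise ValueError("empty transcript")
            overhang = circ_seq[:read_length - 1]
            return circ_seq + overhang

        # Toy example: a 60 nt circRNA with 20 nt reads.
        circ = "ACGT" * 15
        pseudo = to_pseudo_linear(circ, read_length=20)
        print(len(circ), len(pseudo))   # 60 -> 79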

  1. The role of PET quantification in cardiovascular imaging.

    PubMed

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

    Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz may allow further improvements in the disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography. In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries has been demonstrated.

  2. Proteomic Identification and Quantification of S-glutathionylation in Mouse Macrophages Using Resin-Assisted Enrichment and Isobaric Labeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Dian; Gaffrey, Matthew J.; Guo, Jia

    2014-02-11

    Protein S-glutathionylation (SSG) is an important regulatory posttranslational modification of protein cysteine (Cys) thiol redox switches, yet the role of specific cysteine residues as targets of modification is poorly understood. We report a novel quantitative mass spectrometry (MS)-based proteomic method for site-specific identification and quantification of S-glutathionylation across different conditions. Briefly, this approach consists of initial blocking of free thiols by alkylation, selective reduction of glutathionylated thiols and enrichment using thiol affinity resins, followed by on-resin tryptic digestion and isobaric labeling with iTRAQ (isobaric tags for relative and absolute quantitation) for MS-based identification and quantification. The overall approach was validated by application to RAW 264.7 mouse macrophages treated with different doses of diamide to induce glutathionylation. A total of 1071 Cys-sites from 690 proteins were identified in response to diamide treatment, with ~90% of the sites displaying >2-fold increases in SSG-modification compared to controls. This approach was extended to identify potential SSG modified Cys-sites in response to H2O2, an endogenous oxidant produced by activated macrophages and many pathophysiological stimuli. The results revealed 364 Cys-sites from 265 proteins that were sensitive to S-glutathionylation in response to H2O2 treatment. These proteins covered a range of molecular types and molecular functions, with free radical scavenging and cell death and survival among the most significantly enriched functional categories. Overall the results demonstrate that our approach is effective for site-specific identification and quantification of S-glutathionylated proteins. The analytical strategy also provides a unique approach to determining the major pathways and cell processes most susceptible to glutathionylation at a proteome-wide scale.

  3. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  4. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    Understanding ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to control of the ultrasonic motor. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a distributed spring-rigid body contact model between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The steady rotation speed and stall torque are then deduced. Using MATLAB and an iterative algorithm, we estimate rotation speed and stall torque as functions of the input parameters. The same experiments are completed with the optoelectronic tachometer and stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.

  5. Cell type-specific long-range connections of basal forebrain circuit.

    PubMed

    Do, Johnny Phong; Xu, Min; Lee, Seung-Hee; Chang, Wei-Cheng; Zhang, Siyu; Chung, Shinjae; Yung, Tyler J; Fan, Jiang Lan; Miyamichi, Kazunari; Luo, Liqun; Dan, Yang

    2016-09-19

    The basal forebrain (BF) plays key roles in multiple brain functions, including sleep-wake regulation, attention, and learning/memory, but the long-range connections mediating these functions remain poorly characterized. Here we performed whole-brain mapping of both inputs and outputs of four BF cell types - cholinergic, glutamatergic, and parvalbumin-positive (PV+) and somatostatin-positive (SOM+) GABAergic neurons - in the mouse brain. Using rabies virus-mediated monosynaptic retrograde tracing to label the inputs and adeno-associated virus to trace axonal projections, we identified numerous brain areas connected to the BF. The inputs to different cell types were qualitatively similar, but the output projections showed marked differences. The connections to glutamatergic and SOM+ neurons were strongly reciprocal, while those to cholinergic and PV+ neurons were more unidirectional. These results reveal the long-range wiring diagram of the BF circuit with highly convergent inputs and divergent outputs and point to both functional commonality and specialization of different BF cell types.

  6. Analysis of nystagmus response to a pseudorandom velocity input

    NASA Technical Reports Server (NTRS)

    Lessard, C. S.

    1986-01-01

    Space motion sickness was not reported during the first Apollo missions; however, from Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. Space motion sickness, renamed space adaptation syndrome, occurs primarily during the initial period of a mission until habituation takes place. One of NASA's efforts to resolve the space adaptation syndrome is to model the individual's vestibular response for basic knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyse the vestibular system when subjected to a pseudorandom angular velocity input. A sum of sinusoids (pseudorandom) input lends itself to analysis by linear frequency methods. Resultant horizontal ocular movements were digitized, filtered and transformed into the frequency domain. Programs were developed and evaluated to obtain the (1) auto spectra of input stimulus and resultant ocular response, (2) cross spectra, (3) the estimated vestibular-ocular system transfer function gain and phase, and (4) coherence function between stimulus and response functions.
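
    The four analysis products listed above (auto spectra, cross spectrum, transfer-function gain and phase, and coherence) can be illustrated with standard spectral estimators; the sketch below does so on a simulated sum-of-sinusoids stimulus and a surrogate first-order response, which stand in for the actual stimulus and eye-movement recordings.

        import numpy as np
        from scipy import signal

        # Toy stand-in for the experiment: a sum-of-sinusoids (pseudorandom) velocity
        # stimulus and a simulated ocular-velocity response from a first-order system.
        fs = 100.0                                    # Hz
        t = np.arange(0, 120, 1 / fs)
        freqs = [0.05, 0.11, 0.23, 0.47, 0.95]        # illustrative, non-harmonic frequencies
        stimulus = sum(np.sin(2 * np.pi * f * t + i) for i, f in enumerate(freqs))
        b, a = signal.butter(1, 0.5, fs=fs)           # surrogate vestibulo-ocular dynamics
        response = signal.lfilter(b, a, stimulus) + 0.05 * np.random.default_rng(0).normal(size=t.size)

        # (1) auto spectra, (2) cross spectrum, (3) transfer-function gain/phase, (4) coherence.
        f, Pxx = signal.welch(stimulus, fs=fs, nperseg=4096)
        _, Pyy = signal.welch(response, fs=fs, nperseg=4096)
        _, Pxy = signal.csd(stimulus, response, fs=fs, nperseg=4096)
        H = Pxy / Pxx                                 # estimated frequency response
        gain, phase = np.abs(H), np.angle(H, deg=True)
        _, coh = signal.coherence(stimulus, response, fs=fs, nperseg=4096)

        idx = np.argmin(np.abs(f - 0.23))
        print(f"gain {gain[idx]:.2f}, phase {phase[idx]:.1f} deg, coherence {coh[idx]:.2f} at 0.23 Hz")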

  7. Improved prescribed performance control for air-breathing hypersonic vehicles with unknown deadzone input nonlinearity.

    PubMed

    Wang, Yingyang; Hu, Jianbo

    2018-05-19

    An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Different from the traditional non-affine model requiring non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model with non-affine functions being locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass complex deductions caused by conventional error constraint approaches and circumvent high frequency chattering in control inputs. On the basis of backstepping technique, the improved prescribed performance controller with low structural and computational complexity is designed. The methodology guarantees the altitude and velocity tracking error within transient and steady state performance envelopes and presents excellent robustness against uncertain dynamics and deadzone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method.

  8. Design of High Quality Chemical XOR Gates with Noise Reduction.

    PubMed

    Wood, Mackenna L; Domanskyi, Sergii; Privman, Vladimir

    2017-07-05

    We describe a chemical XOR gate design that realizes a gate-response function with filtering properties. This gate-response function is flat (has small gradients) at and in the vicinity of all four binary-input logic points, resulting in analog noise suppression. The gate functioning involves cross-reaction of the inputs represented by pairs of chemicals to produce a practically zero output when both are present and nearly equal. This cross-reaction processing step is also designed to result in filtering at low output intensities by canceling out the inputs if one of the latter has low intensity compared with the other. The remaining inputs, which were not reacted away, are processed to produce the output XOR signal by chemical steps that result in filtering at large output signal intensities. We analyze the tradeoff resulting from filtering, which involves loss of signal intensity. We also discuss practical aspects of realizations of such XOR gates.

  9. A reversible fluorescent probe for real-time live-cell imaging and quantification of endogenous hydropolysulfides.

    PubMed

    Umezawa, Keitaro; Kamiya, Mako; Urano, Yasuteru

    2018-05-23

    The chemical biology of reactive sulfur species, including hydropolysulfides, has been a subject undergoing intense study in recent years, but further understanding of their 'intact' function in living cells has been limited due to a lack of appropriate analytical tools. In order to overcome this limitation, we developed a new type of fluorescent probe which reversibly and selectively reacts to hydropolysulfides. The probe enables live-cell visualization and quantification of endogenous hydropolysulfides without interference from intrinsic thiol species such as glutathione. Additionally, real-time reversible monitoring of oxidative-stress-induced fluctuation of intrinsic hydropolysulfides has been achieved with a temporal resolution in the order of seconds, a result which has not yet been realized using conventional methods. These results reveal the probe's versatility as a new fluorescence imaging tool to understand the function of intracellular hydropolysulfides.

  10. Covalent functionalization of single-walled carbon nanotubes with polytyrosine: Characterization and analytical applications for the sensitive quantification of polyphenols.

    PubMed

    Eguílaz, Marcos; Gutiérrez, Alejandro; Gutierrez, Fabiana; González-Domínguez, Jose Miguel; Ansón-Casaos, Alejandro; Hernández-Ferrer, Javier; Ferreyra, Nancy F; Martínez, María T; Rivas, Gustavo

    2016-02-25

    This work reports the synthesis and characterization of single-walled carbon nanotubes (SWCNT) covalently functionalized with polytyrosine (Polytyr); the critical analysis of the experimental conditions to obtain the efficient dispersion of the modified carbon nanotubes; and the analytical performance of glassy carbon electrodes (GCE) modified with the dispersion (GCE/SWCNT-Polytyr) for the highly sensitive quantification of polyphenols. Under the optimal conditions, the calibration plot for the amperometric response of gallic acid (GA) shows a linear range between 5.0 × 10(-7) and 1.7 × 10(-4) M, with a sensitivity of (518 ± 5) mA M(-1) cm(-2), and a detection limit of 8.8 nM. The proposed sensor was successfully used for the determination of total polyphenolic content in tea extracts.

  11. Skeletal Muscle Ultrasound in Critical Care: A Tool in Need of Translation.

    PubMed

    Mourtzakis, Marina; Parry, Selina; Connolly, Bronwen; Puthucheary, Zudin

    2017-10-01

    With the emerging interest in documenting and understanding muscle atrophy and function in critically ill patients and survivors, ultrasonography has transformational potential for measurement of muscle quantity and quality. We discuss the importance of quantifying skeletal muscle in the intensive care unit setting. We also identify the merits and limitations of various modalities that are capable of accurately and precisely measuring muscularity. Ultrasound is emerging as a potentially powerful tool for skeletal muscle quantification; however, there are key challenges that need to be addressed in future work to ensure useful interpretation and comparability of results across diverse observational and interventional studies. Ultrasound presents several methodological challenges, and ultimately muscle quantification combined with metabolic, nutritional, and functional markers will allow optimal patient assessment and prognosis. Moving forward, we recommend that publications include greater detail on landmarking, repeated measures, identification of muscle that was not assessable, and reproducible protocols to more effectively compare results across different studies.

  12. Functional transformations of odor inputs in the mouse olfactory bulb.

    PubMed

    Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi

    2014-01-01

    Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large-scale functional mapping of distinct neuronal populations in the mouse OB, at single cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). Mitral cell population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways forming two distinct and parallel information streams.

  13. Principal Components of Recurrence Quantification Analysis of EMG

    DTIC Science & Technology

    2001-10-25

    [Fragmentary DTIC record; only part of the abstract is recoverable.] The delay T was taken as the lag corresponding to the first minimum of the auto mutual information function, which has been argued to be more appropriate than a choice based on the autocorrelation function of s(n) (cf. Fraser and Swinney, "Independent coordinates for strange attractors from mutual information," Phys. Rev. A).

  14. Design Sensitivity Method for Sampling-Based RBDO with Fixed COV

    DTIC Science & Technology

    2015-04-29

    [Fragmentary DTIC record; only part of the abstract is recoverable.] Contours of the input model at the initial design d0 and the RBDO optimum design dopt are shown. As the limit state functions are not linear and some input ...

  15. Quantification of pelvic floor muscle strength in female urinary incontinence: A systematic review and comparison of contemporary methodologies.

    PubMed

    Deegan, Emily G; Stothers, Lynn; Kavanagh, Alex; Macnab, Andrew J

    2018-01-01

    There remains no gold standard for quantification of voluntary pelvic floor muscle (PFM) strength, despite international guidelines that recommend PFM assessment in females with urinary incontinence (UI). Methods currently reported for quantification of skeletal muscle strength across disciplines are systematically reviewed and their relevance for clinical and academic use related to the pelvic floor is described. A systematic review was conducted via Medline, PubMed, CINAHL, and the Cochrane database; key terms for pelvic floor anatomy and function were cross-referenced with skeletal muscle strength quantification from 1946 to 2016. Full-text peer-reviewed articles in English having female subjects with incontinence were identified. Each study was analyzed for use of controls, type of methodology as direct or indirect measures, benefits, and limitations of the technique. A total of 1586 articles were identified, of which 50 met the inclusion criteria. Nine methodologies of determining PFM strength were described, including: digital palpation, perineometer, dynamometry, EMG, vaginal cones, ultrasonography, magnetic resonance imaging, urine stream interruption test, and the Colpexin pull test. Thirty-two percent lacked a control group. Technical refinements in both direct and indirect instrumentation for PFM strength measurement are allowing for greater sensitivity. However, the most common methods of quantification remain digital palpation and perineometry; techniques that pose limitations and yield subjective or indirect measures of muscular strength. Dynamometry has potential as an accurate and sensitive tool, but is limited by inability to assess PFM strength during dynamic movements.

  16. Development and validation of an open source quantification tool for DSC-MRI studies.

    PubMed

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R(2)>0.8 and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license.
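
    The plugin's internals are not reproduced here; as an illustration of the gamma-variate fitting mentioned above, the sketch below fits a gamma-variate bolus model to a synthetic concentration-time curve and integrates it for a relative CBV value. All numbers are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, t0, A, alpha, beta):
            """Gamma-variate model of the first-pass contrast bolus."""
            dt = np.clip(t - t0, 0, None)
            return A * dt**alpha * np.exp(-dt / beta)

        # Synthetic concentration-time curve standing in for a DSC voxel.
        t = np.arange(0, 60, 1.5)                        # seconds
        true = gamma_variate(t, 8.0, 1.0, 3.0, 1.5)
        noisy = true + 0.02 * np.random.default_rng(0).normal(size=t.size)

        popt, _ = curve_fit(gamma_variate, t, noisy, p0=[5.0, 1.0, 2.0, 2.0],
                            bounds=([0.0, 0.0, 0.5, 0.1], [30.0, 10.0, 10.0, 10.0]))
        fitted = gamma_variate(t, *popt)

        # Relative CBV is proportional to the area under the fitted first-pass curve.
        rcbv = np.trapz(fitted, t)
        print(popt, rcbv)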

  17. Exploiting multicompartment effects in triple-echo steady-state T2 mapping for fat fraction quantification.

    PubMed

    Liu, Dian; Steingoetter, Andreas; Curcic, Jelena; Kozerke, Sebastian

    2018-01-01

    To investigate and exploit the effect of intravoxel off-resonance compartments in the triple-echo steady-state (TESS) sequence without fat suppression for T2 mapping and to leverage the results for fat fraction quantification. In multicompartment tissue, where at least one compartment is excited off-resonance, the total signal exhibits periodic modulations as a function of echo time (TE). Simulated multicompartment TESS signals were synthesized at various TEs. Fat emulsion phantoms were prepared and scanned at the same TE combinations using TESS. In vivo knee data were obtained with TESS to validate the simulations. The multicompartment effect was exploited for fat fraction quantification in the stomach by acquiring TESS signals at two TE combinations. Simulated and measured multicompartment signal intensities were in good agreement. Multicompartment effects caused erroneous T2 offsets, even at low water-fat ratios. The choice of TE caused T2 variations of as much as 28% in cartilage. The feasibility of fat fraction quantification to monitor the decrease of fat content in the stomach during digestion is demonstrated. Intravoxel off-resonance compartments are a confounding factor for T2 quantification using TESS, causing errors that are dependent on the TE. At the same time, off-resonance effects may allow for efficient fat fraction mapping using steady-state imaging.
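
    The TE-periodic modulation caused by an off-resonance compartment can be reproduced with a simple two-compartment water-fat model; the sketch below is such an illustration (not the full TESS signal model) and includes a two-point, Dixon-style fat-fraction estimate. The chemical-shift frequency and proton densities are assumed values.

        import numpy as np

        # Two-compartment (water/fat) voxel: fat resonates about 3.4 ppm off water,
        # i.e. roughly -434 Hz at 3 T.  Values are illustrative.
        delta_f = -434.0          # Hz chemical shift
        water, fat = 0.7, 0.3     # relative proton densities (fat fraction = 0.3)

        def voxel_signal(te_s):
            """Magnitude signal of a water+fat voxel as a function of echo time (s)."""
            return np.abs(water + fat * np.exp(2j * np.pi * delta_f * te_s))

        te = np.linspace(0, 8e-3, 200)
        s = voxel_signal(te)      # periodic modulation with period 1/|delta_f| ~ 2.3 ms

        # Two-point Dixon-style estimate from in-phase and opposed-phase echo times.
        te_ip = 1.0 / abs(delta_f)          # in-phase
        te_op = 0.5 / abs(delta_f)          # opposed-phase
        ip, op = voxel_signal(te_ip), voxel_signal(te_op)
        fat_fraction = (ip - op) / (2 * ip)
        print(round(fat_fraction, 3))       # ~0.3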

  18. Inverse optimal design of input-to-state stabilisation for affine nonlinear systems with input delays

    NASA Astrophysics Data System (ADS)

    Cai, Xiushan; Meng, Lingxin; Zhang, Wei; Liu, Leipo

    2018-03-01

    We establish robustness of the predictor feedback control law to perturbations appearing at the system input for affine nonlinear systems with time-varying input delay and additive disturbances. Furthermore, it is shown that it is inverse optimal with respect to a differential game problem. All of the stability and inverse optimality proofs are based on the infinite-dimensional backstepping transformation and an appropriate Lyapunov functional. A single-link manipulator subject to input delays and disturbances is given to illustrate the validity of the proposed method.

  19. Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.

    PubMed

    Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi

    2013-05-01

    Identification and quantification of dissolved oxygen (DO) profiles of rivers is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate the DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammoniacal nitrogen (NH4-N). The ANN structure with 14 hidden neurons performed best. Comparison between the ANN results and the measured data, on the basis of the correlation coefficient (r) and the root mean square error (RMSE), showed a good fit to the DO values and indicated the effectiveness of the neural network model. The correlation coefficient (r) values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, respectively, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570. Sensitivity analysis was used to determine the influence of the input variables on the dependent variable. The most effective inputs were pH, NO3-N, NH4-N, and Ca(2+), while Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate the water quality parameters.
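
    The study used a three-layer back-propagation network trained with Bayesian regularization; the sketch below is an analogous but not identical setup using scikit-learn's MLPRegressor (which offers L2 weight decay rather than Bayesian regularization), with the eight inputs listed above, 14 hidden neurons, and synthetic stand-in data.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        features = ["pH", "EC", "Cl", "Ca", "total_alkalinity",
                    "total_hardness", "NO3_N", "NH4_N"]

        # Synthetic stand-in data; the Heihe River measurements are not reproduced here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, len(features)))
        y = 8 + 0.6 * X[:, 0] - 0.4 * X[:, 6] - 0.3 * X[:, 7] + 0.1 * rng.normal(size=500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # 8 inputs -> 14 hidden neurons -> 1 output, matching the reported best structure.
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(14,), alpha=1e-3, max_iter=5000, random_state=0),
        )
        model.fit(X_train, y_train)
        print("test R^2:", round(model.score(X_test, y_test), 3))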

  20. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
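
    Variance-based (Sobol) sensitivity indices of the kind described above can be computed with the SALib package; the sketch below runs a Saltelli design on a toy stand-in for the groundwater model and, as a loose analogue of the hierarchical grouping idea, assigns inputs to groups. Parameter names, bounds, and the toy model are assumptions.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy model standing in for the groundwater flow/transport simulator.
        def model(x):
            perm, bc_head, porosity, dispersivity = x.T
            return np.sin(perm) + 2.0 * bc_head**2 + 0.3 * porosity * dispersivity

        problem = {
            "num_vars": 4,
            "names": ["log_permeability", "boundary_head", "porosity", "dispersivity"],
            "bounds": [[-14, -11], [100, 110], [0.05, 0.35], [1, 50]],
            # Optional: grouping inputs reduces the dimensionality of the analysis,
            # in the spirit of the hierarchical framework described above.
            "groups": ["aquifer", "boundary", "aquifer", "transport"],
        }

        X = saltelli.sample(problem, 1024)     # quasi-random Saltelli design
        Y = model(X)
        Si = sobol.analyze(problem, Y)
        print(Si["S1"])    # first-order indices (per group)
        print(Si["ST"])    # total-order indices (per group)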

  1. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  2. Designable DNA-binding domains enable construction of logic circuits in mammalian cells.

    PubMed

    Gaber, Rok; Lebar, Tina; Majerle, Andreja; Šter, Branko; Dobnikar, Andrej; Benčina, Mojca; Jerala, Roman

    2014-03-01

    Electronic computer circuits consisting of a large number of connected logic gates of the same type, such as NOR, can be easily fabricated and can implement any logic function. In contrast, designed genetic circuits must employ orthogonal information mediators owing to free diffusion within the cell. Combinatorial diversity and orthogonality can be provided by designable DNA-binding domains. Here, we employed transcription activator-like repressors to optimize the construction of orthogonal, functionally complete NOR gates for building logic circuits. We used transient transfection to implement all 16 two-input logic functions from combinations of NOR gates of the same type within mammalian cells. Additionally, we present a genetic logic circuit in which one input selects whether the same circuit applies an AND or an OR function to the data inputs. This demonstrates the potential of designable modular transcription factors for the construction of complex biological information-processing devices.
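
    The functional completeness of NOR that such circuit designs rely on is easy to check in ordinary Boolean logic; the sketch below builds NOT, OR, AND and XOR purely from a NOR primitive and prints their truth tables (a software illustration only, not the genetic implementation).

```python
def NOR(a: int, b: int) -> int:
    """Primitive NOR gate on 0/1 values."""
    return int(not (a or b))

# Derived gates built exclusively from NOR (functional completeness):
def NOT(a):      return NOR(a, a)
def OR(a, b):    return NOR(NOR(a, b), NOR(a, b))
def AND(a, b):   return NOR(NOR(a, a), NOR(b, b))
def XOR(a, b):   return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| NOT a =", NOT(a), " OR =", OR(a, b), " AND =", AND(a, b), " XOR =", XOR(a, b))
```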

  3. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
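
    The underlying idea of identifying model parameters from input-output measurements can be sketched with a modern least-squares example; everything below (the first-order plant, the forcing signal, the noise level) is a hypothetical stand-in, not the FORTRAN program described in the manual.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, u, dt=0.05, y0=0.0):
    """Hypothetical first-order plant dy/dt = -a*y + b*u, simulated with forward Euler."""
    a, b = params
    y = np.empty_like(u)
    yk = y0
    for k, uk in enumerate(u):
        yk = yk + dt * (-a * yk + b * uk)
        y[k] = yk
    return y

rng = np.random.default_rng(1)
u = np.sin(0.3 * np.arange(400))                                     # fixed input forcing function
y_meas = simulate([2.0, 1.5], u) + 0.01 * rng.standard_normal(400)   # noisy "measurements"

# Output-error identification: adjust parameters so the simulated output matches measurements.
fit = least_squares(lambda p: simulate(p, u) - y_meas, x0=[1.0, 1.0])
print("identified parameters:", fit.x)   # should be close to the true values [2.0, 1.5]
```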

  4. Input and output constraints-based stabilisation of switched nonlinear systems with unstable subsystems and its application

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Liu, Qian; Zhao, Jun

    2018-01-01

    This paper studies the problem of stabilisation of switched nonlinear systems with output and input constraints. We propose a recursive approach to solve this issue. None of the subsystems is assumed to be stabilisable; instead, the switched system is stabilised by the dual design of controllers for the subsystems and a switching law. When dealing only with bounded input, we provide nested switching controllers using an extended backstepping procedure. When both input and output constraints are taken into consideration, a Barrier Lyapunov Function is employed to construct multiple Lyapunov functions for the switched nonlinear system in the backstepping procedure. As a practical example, the control design for an equilibrium manifold expansion model of an aero-engine is given to demonstrate the effectiveness of the proposed design method.
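
    For the output-constraint step, a commonly used log-type Barrier Lyapunov Function from the backstepping literature (not necessarily the exact function used in this paper) for a tracking error e constrained by |e| < k_b is:

```latex
% Log-type Barrier Lyapunov Function for an output constraint |e| < k_b (a standard choice):
V_b(e) = \frac{1}{2}\,\ln\!\frac{k_b^{2}}{k_b^{2}-e^{2}}, \qquad |e| < k_b .
% V_b grows without bound as |e| \to k_b, so keeping V_b bounded along closed-loop
% trajectories keeps the constrained output error strictly inside the barrier.
```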

  5. Hydrological models as web services: Experiences from the Environmental Virtual Observatory project

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Vitolo, C.; Reaney, S. M.; Beven, K.

    2012-12-01

    Data availability in environmental sciences is expanding at a rapid pace. From the constant stream of high-resolution satellite images to the local efforts of citizen scientists, there is an increasing need to process the growing stream of heterogeneous data and turn it into useful information for decision-making. Environmental models, ranging from simple rainfall-runoff relations to complex climate models, can be very useful tools to process data, identify patterns, and help predict the potential impact of management scenarios. Recent technological innovations in networking, computing and standardization may bring a new generation of interactive models, plugged into virtual environments, closer to the end-user. They are the drivers of major funding initiatives such as the UK's Virtual Observatory program and the U.S. National Science Foundation's EarthCube. In this study we explore how hydrological models, an important subset of environmental models, have to be adapted in order to function within a broader environment of web services and user interactions. Historically, hydrological models have been developed for very different purposes. Typically they have a rigid model structure requiring a very specific set of input data and parameters. As such, the process of implementing a model for a specific catchment requires careful collection and preparation of the input data, extensive calibration and subsequent validation. This procedure seems incompatible with a web environment, where data availability is highly variable, heterogeneous and constantly changing in time, and where the requirements of end-users may not necessarily align with the original intention of the model developer. We present prototypes of models that are web-enabled using the web standards of the Open Geospatial Consortium and implemented in online decision-support systems. We identify issues related to (1) optimal use of available data; (2) the need for flexible and adaptive model structures; (3) quantification and communication of uncertainties. Lastly, we present some road maps to address these issues and discuss them in the broader context of web-based data processing and "big data" science.

  6. DEM-based analysis of landscape organization: 2) Application to catchment comparison

    NASA Astrophysics Data System (ADS)

    Seibert, J.; McGlynn, B.

    2003-04-01

    The delineation of homogeneous landscape elements (or "hydrologic response units") is often a prerequisite in field investigations and the application of semi-distributed hydrologic (or coupled hydrologic and biogeochemical) models. Delineation and quantification of dominant landscape elements requires methods to extract the features from digital elevation data or other readily available information. It is often assumed that hillslope and riparian areas constitute the two most important and identifiable landscape units contributing to catchment runoff in upland humid catchments. In addition, we have found that the degree of hillslope water expression in stormflow is partially a function of riparian to hillslope reservoir ratios and landscape organization. Therefore, we developed a simple approach for quantifying landscape organization and distributed riparian to hillslope area ratios (riparian buffer ratios), as described in the accompanying contribution. Here we use this method as a framework for comparing and classifying diverse catchments located in Europe, the U.S., and New Zealand. Based on the three catchments Maimai (New Zealand), Panola (Georgia) and Sleepers (Vermont), we obtained the following preliminary results: (1) Local area entering the stream channels was most variable at Maimai and consistently diffuse at Sleepers and Panola. Also, the median local area entering the channel network was largest at Maimai and smallest at Sleepers and Panola. This demonstrates the degree of landscape dissection (highest for Maimai) and the concentration of hillslope inputs along the stream network. (2) Riparian areas were smallest at Maimai, larger at Sleepers, and largest at Panola. The combination of riparian zone extent and focused (Maimai) versus diffuse (Sleepers and Panola) hillslope inputs to riparian zones controls local riparian to hillslope area ratios (riparian buffer capacities). (3) Area accumulated to a large extent at the channel heads in all catchments. At Sleepers about 75 percent of all area originated from sub-catchments of less than 5 ha, whereas this proportion was 50 and 40 percent at Panola and Maimai, respectively.

  7. The Influence of Prosodic Input in the Second Language Classroom: Does It Stimulate Child Acquisition of Word Order and Function Words?

    ERIC Educational Resources Information Center

    Campfield, Dorota E.; Murphy, Victoria A.

    2017-01-01

    This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…

  8. Full wave modulator-demodulator amplifier apparatus. [for generating rectified output signal

    NASA Technical Reports Server (NTRS)

    Black, J. M. (Inventor)

    1974-01-01

    A full-wave modulator-demodulator apparatus is described, including an operational amplifier having a first input terminal coupled to a circuit input terminal and a second input terminal alternately coupled to the circuit input terminal and to circuit ground by a switching circuit responsive to a phase reference signal, so that the operational amplifier is alternately switched between a non-inverting mode and an inverting mode. The switching circuit includes three field-effect transistors operatively associated to provide the desired switching function in response to an alternating reference signal of the same frequency as the AC input signal applied to the circuit input terminal.

  9. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained with respect to performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
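
    A one-dimensional sketch of the kernel idea: if triangular membership functions on a knot grid are used as interpolation kernels and the rule outputs are the sampled values, the weighted Takagi-Sugeno output reproduces piecewise-linear interpolation. The function names and data below are hypothetical; FLHI itself generalizes this to N-dimensional hypercubes and other kernels.

```python
import numpy as np

def triangular(x, left, center, right):
    """Triangular membership function used as an interpolation kernel."""
    return np.clip(np.minimum((x - left) / (center - left + 1e-12),
                              (right - x) / (right - center + 1e-12)), 0.0, 1.0)

def fuzzy_interp_1d(x, knots, values):
    """Zero-order Takagi-Sugeno style output: membership-weighted sum of rule outputs."""
    pad = np.concatenate([[knots[0] - 1.0], knots, [knots[-1] + 1.0]])
    w = np.array([triangular(x, pad[i], pad[i + 1], pad[i + 2]) for i in range(len(knots))])
    return (w * values[:, None]).sum(axis=0) / w.sum(axis=0)

knots = np.array([0.0, 1.0, 2.0, 3.0])
values = np.array([0.0, 1.0, 0.0, 2.0])        # hypothetical samples of a static nonlinearity
xq = np.linspace(0.0, 3.0, 7)
print(fuzzy_interp_1d(xq, knots, values))      # matches piecewise-linear interpolation here
```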

  10. Multiclassifier information fusion methods for microarray pattern recognition

    NASA Astrophysics Data System (ADS)

    Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel

    2004-04-01

    This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input space partitioning approach based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces and their use in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
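
    A minimal sketch of fitness-weighted decision fusion over input subspaces, assuming synthetic data, disjoint feature subsets as "subspaces", cross-validated accuracy as the fitness measure, and simple weighted averaging of SVM probabilities (the paper's two-level fitness and Dempster-Shafer fusion are more elaborate):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Hypothetical input-space partition: disjoint feature subsets play the role of subspaces.
subspaces = [np.arange(0, 14), np.arange(14, 27), np.arange(27, 40)]

clfs, fitness = [], []
for cols in subspaces:
    clf = SVC(probability=True, gamma="scale").fit(Xtr[:, cols], ytr)
    # A priori "fitness" of the subspace: cross-validated accuracy on training data.
    fitness.append(cross_val_score(SVC(gamma="scale"), Xtr[:, cols], ytr, cv=5).mean())
    clfs.append(clf)

w = np.array(fitness) / np.sum(fitness)
proba = sum(wi * clf.predict_proba(Xte[:, cols])
            for wi, clf, cols in zip(w, clfs, subspaces))   # weighted probability fusion
print("fused accuracy:", np.mean(proba.argmax(axis=1) == yte))
```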

  11. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Owing to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
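
    As a reminder of why such ANOVA-style expansions give the Sobol' indices almost for free, consider a dimensional decomposition with orthonormal basis functions; the notation below is generic and not taken from this paper:

```latex
% Generic PDD/ANOVA expansion with orthonormal basis functions \psi_{u,j}:
Y \;\approx\; y_0 \;+\; \sum_{\emptyset \ne u \subseteq \{1,\dots,N\}} \sum_{j} C_{u,j}\,\psi_{u,j}(X_u),
\qquad
\operatorname{Var}(Y) \;=\; \sum_{u,j} C_{u,j}^{2},
\qquad
S_u \;=\; \frac{\sum_{j} C_{u,j}^{2}}{\operatorname{Var}(Y)} .
% Each Sobol' index is a ratio of sums of squared expansion coefficients, so a sparse,
% regression-estimated set of coefficients directly yields the sensitivity indices.
```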

  12. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we compare different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
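
    The difference between the two probabilistic descriptions can be sketched in a few lines: a random-variable model draws one thickness for the whole wall, while a random-field model draws a spatially correlated thickness profile. The 1-D geometry, covariance choice and numerical values below are illustrative assumptions, not the authors' aneurysm model.

```python
import numpy as np

rng = np.random.default_rng(2)
s = np.linspace(0.0, 1.0, 200)          # normalized position along the wall
mean_t, sd_t, corr_len = 1.5, 0.3, 0.2  # mm; hypothetical values, not from the paper

# (a) Random-variable model: one thickness value for the entire wall per sample.
t_rv = mean_t + sd_t * rng.standard_normal()

# (b) Random-field model: spatially correlated thickness (squared-exponential covariance).
cov = sd_t**2 * np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * corr_len**2))
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(s)))   # jitter for numerical stability
t_rf = mean_t + L @ rng.standard_normal(len(s))

print("random-variable sample (uniform):", round(t_rv, 3))
print("random-field sample range       :", round(t_rf.min(), 3), "to", round(t_rf.max(), 3))
```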

  13. Quantum non-Gaussianity and quantification of nonclassicality

    NASA Astrophysics Data System (ADS)

    Kühn, B.; Vogel, W.

    2018-05-01

    The algebraic quantification of nonclassicality, which naturally arises from the quantum superposition principle, is related to properties of regular nonclassicality quasiprobabilities. The latter are obtained by non-Gaussian filtering of the Glauber-Sudarshan P function. They yield lower bounds for the degree of nonclassicality. We also derive bounds for convex combinations of Gaussian states for certifying quantum non-Gaussianity directly from the experimentally accessible nonclassicality quasiprobabilities. Other quantum-state representations, such as s -parametrized quasiprobabilities, insufficiently indicate or even fail to directly uncover detailed information on the properties of quantum states. As an example, our approach is applied to multi-photon-added squeezed vacuum states.
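
    For context, the s-parametrized quasiprobabilities mentioned here are Gaussian-smoothed versions of the Glauber-Sudarshan P function; the standard Cahill-Glauber relation is quoted below as background rather than taken from this paper:

```latex
% Cahill-Glauber relation: s-parametrized quasiprobability as a Gaussian convolution of P
P(\alpha; s) \;=\; \frac{2}{\pi(1-s)} \int d^{2}\beta \; P(\beta)\,
\exp\!\left(-\frac{2\,|\alpha-\beta|^{2}}{1-s}\right), \qquad s < 1 ,
% with s = 0 giving the Wigner function and s = -1 the Husimi Q function.
% The Gaussian smoothing can wash out nonclassical structure, which motivates the
% non-Gaussian filtering of P used to obtain regular nonclassicality quasiprobabilities.
```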

  14. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: a feasibility study.

    PubMed

    Ding, Huanjun; Molloi, Sabee

    2012-08-07

    A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio of the dual energy image with respect to the square root of mean glandular dose, was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations: thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function was proposed, which introduced the tube voltage used in the imaging task as a third variable in the dual energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
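
    In symbols, the figure-of-merit defined in this abstract is the dual-energy signal-to-noise ratio normalized by dose:

```latex
\mathrm{FOM} \;=\; \frac{\mathrm{SNR}_{\mathrm{DE}}}{\sqrt{\mathrm{MGD}}}
% SNR_DE: signal-to-noise ratio of the dual-energy (material-decomposed) image,
% MGD: mean glandular dose delivered by the acquisition.
```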

  15. Numerical Function Generators Using LUT Cascades

    DTIC Science & Technology

    2007-06-01

    either algebraically (for example, sin(x)) or as a table of input/output values. The user defines the numerical function by using the syntax of Scilab ...defined function in Scilab or specify it directly. Note that, by changing the parser of our system, any format can be used for the design entry. First... Methods for Multiple-Valued Input Address Generators," Proc. 36th IEEE Int'l Symp. Multiple-Valued Logic (ISMVL '06), May 2006. [29] Scilab 3.0, INRIA-ENPC

  16. Quantification of functional near infrared spectroscopy to assess cortical reorganization in children with cerebral palsy

    PubMed Central

    Tian, Fenghua; Delgado, Mauricio R.; Dhamne, Sameer C.; Khan, Bilal; Alexandrakis, George; Romero, Mario I.; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J.; Liu, Hanli

    2013-01-01

    Cerebral palsy (CP) is the most common motor disorder in children. Currently available neuroimaging techniques require complete body confinement and steadiness and thus are extremely difficult for pediatric patients. Here, we report the use and quantification of functional near infrared spectroscopy (fNIRS) to investigate the functional reorganization of the sensorimotor cortex in children with hemiparetic CP. Ten of sixteen children with congenital hemiparesis were measured during finger tapping tasks and compared with eight of sixteen age-matched healthy children, with an overall measurement success rate of 60%. Spatiotemporal analysis was introduced to quantify the motor activation and brain laterality. Such a quantitative approach reveals a consistent, contralateral motor activation in healthy children at 7 years of age or older. In sharp contrast, children with congenital hemiparesis exhibit all three of contralateral, bilateral and ipsilateral motor activations, depending on specific ages of the pediatric subjects. This study clearly demonstrates the feasibility of fNIRS to be utilized for investigating cortical reorganization in children with CP or other cortical disorders. PMID:21164944
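
    Brain laterality in such studies is often summarized with a normalized laterality index; the definition below is the commonly used convention and is given only as an illustration of the kind of quantification involved, not necessarily the exact metric of this paper:

```latex
% Conventional laterality index from contralateral (C) and ipsilateral (I) activation measures:
\mathrm{LI} \;=\; \frac{C - I}{C + I}, \qquad -1 \le \mathrm{LI} \le 1 ,
% LI close to +1: predominantly contralateral activation (the typical healthy pattern);
% LI close to -1: predominantly ipsilateral activation (a reorganized pattern).
```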

  17. The speciation of the proteome

    PubMed Central

    Jungblut, Peter R; Holzhütter, Hermann G; Apweiler, Rolf; Schlüter, Hartmut

    2008-01-01

    Introduction In proteomics, a paradoxical situation has developed in recent years. On one side, it is basic knowledge that proteins are post-translationally modified and occur in different isoforms. On the other side, the protein expression concept disregards post-translational modifications by connecting protein names directly with function. Discussion Optimal proteome coverage is today reached by bottom-up liquid chromatography/mass spectrometry. But quantification at the peptide level in shotgun or bottom-up approaches by liquid chromatography and mass spectrometry completely ignores the fact that a given peptide may exist in an unmodified form and in several differently modified forms. The acceptance of the protein species concept is a basic prerequisite for meaningful quantitative analyses in functional proteomics. In discovery approaches, only top-down analyses, which separate the protein species before digestion, identification and quantification by two-dimensional gel electrophoresis or protein liquid chromatography, allow the correlation between changes of a biological situation and function. Conclusion To obtain biologically relevant information, kinetics and systems biology have to be performed at the protein species level, which is the major challenge in proteomics today. PMID:18638390

  18. A quantitative PCR approach for quantification of functional genes involved in the degradation of polycyclic aromatic hydrocarbons in contaminated soils.

    PubMed

    Shahsavari, Esmaeil; Aburto-Medina, Arturo; Taha, Mohamed; Ball, Andrew S

    2016-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are major pollutants globally, and owing to their carcinogenic and mutagenic properties their clean-up is paramount. Bioremediation, i.e. using PAH-degrading microorganisms (mainly bacteria) to degrade the pollutants, represents a cheap and effective method. These PAH degraders harbor functional genes which help the microorganisms use PAHs as a source of food and energy. Most probable number (MPN) and plate counting methods are widely used for counting PAH degraders; however, as culture-based methods only count a small fraction (<1%) of the microorganisms capable of carrying out PAH degradation, the use of culture-independent methodologies is desirable.•This protocol presents a robust, rapid and sensitive qPCR method for the quantification of functional genes involved in the degradation of PAHs in soil samples.•This protocol enables the screening of a vast number of PAH-contaminated soil samples in a few hours.•This protocol provides valuable information about the natural attenuation potential of contaminated soil and can be used to monitor the bioremediation process.
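
    The arithmetic behind absolute quantification with a qPCR standard curve can be sketched as follows; the standard-curve values, assay and bookkeeping factors below are hypothetical and only illustrate the Ct-to-copies conversion, not this protocol's specific reagents or primers.

```python
import numpy as np

# Hypothetical standard curve for a PAH-degradation gene assay (e.g., a dioxygenase gene):
log10_copies_std = np.array([3, 4, 5, 6, 7], dtype=float)   # copies per reaction
ct_std = np.array([30.1, 26.8, 23.4, 20.0, 16.7])           # measured Ct values

slope, intercept = np.polyfit(log10_copies_std, ct_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                      # ~1.0 corresponds to 100% efficiency
print(f"slope = {slope:.2f}, PCR efficiency = {efficiency:.0%}")

def copies_per_gram(ct, dna_per_rxn_ng, extract_yield_ng, soil_per_extract_g):
    """Convert a sample Ct into gene copies per gram of soil (schematic bookkeeping)."""
    copies_rxn = 10 ** ((ct - intercept) / slope)            # invert the standard curve
    return copies_rxn * (extract_yield_ng / dna_per_rxn_ng) / soil_per_extract_g

print(f"{copies_per_gram(24.5, dna_per_rxn_ng=10, extract_yield_ng=800, soil_per_extract_g=0.5):.2e} copies/g soil")
```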

  19. The use of discontinuities and functional groups to assess relative resilience in complex systems

    USGS Publications Warehouse

    Allen, Craig R.; Gunderson, Lance; Johnson, A.R.

    2005-01-01

    It is evident when the resilience of a system has been exceeded and the system qualitatively changed. However, it is not clear how to measure resilience in a system prior to the demonstration that the capacity for resilient response has been exceeded. We argue that self-organizing human and natural systems are structured by a relatively small set of processes operating across scales in time and space. These structuring processes should generate a discontinuous distribution of structures and frequencies, where discontinuities mark the transition from one scale to another. Resilience is not driven by the identity of elements of a system, but rather by the functions those elements provide, and their distribution within and across scales. A self-organizing system that is resilient should maintain patterns of function within and across scales despite the turnover of specific elements (for example, species, cities). However, the loss of functions, or a decrease in functional representation at certain scales will decrease system resilience. It follows that some distributions of function should be more resilient than others. We propose that the determination of discontinuities, and the quantification of function both within and across scales, produce relative measures of resilience in ecological and other systems. We describe a set of methods to assess the relative resilience of a system based upon the determination of discontinuities and the quantification of the distribution of functions in relation to those discontinuities. © 2005 Springer Science+Business Media, Inc.

  20. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
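
    A two-state (alive/dead) cohort Markov model with annual cycles and a 5% discount rate can be sketched in a few lines; the transition probabilities, utilities and costs below are invented for illustration (the actual analysis found higher-intensity physiotherapy to be dominant, i.e. cheaper and more effective), so the computed cost per QALY is purely illustrative.

```python
def markov_2state(p_death, utility, annual_cost, upfront_cost, horizon=40, disc=0.05):
    """Two-state (alive/dead) cohort Markov model returning discounted costs and QALYs."""
    alive, costs, qalys = 1.0, float(upfront_cost), 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + disc) ** year          # 5% annual discounting
        qalys += alive * utility * df
        costs += alive * annual_cost * df
        alive *= (1.0 - p_death)                  # transition into the absorbing "dead" state
    return costs, qalys

# Hypothetical inputs, not the report's values: higher-intensity therapy costs more upfront
# but yields slightly better utility and survival.
c_std, q_std = markov_2state(p_death=0.060, utility=0.60, annual_cost=3000, upfront_cost=8000)
c_hi,  q_hi  = markov_2state(p_death=0.055, utility=0.65, annual_cost=3000, upfront_cost=9500)

icer = (c_hi - c_std) / (q_hi - q_std)
print(f"incremental cost = {c_hi - c_std:,.0f}, incremental QALYs = {q_hi - q_std:.2f}, "
      f"cost per QALY = {icer:,.0f}")
```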
