Sample records for information theoretical quantification

  1. Information theoretic quantification of diagnostic uncertainty.

    PubMed

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
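
    As a concrete companion to this abstract (not material from the paper itself), a minimal sketch of the Bayes' rule update and the binary-entropy measure of diagnostic uncertainty it builds on might look like the following; the sensitivity, specificity, and pre-test probability are hypothetical values.

    ```python
    import math

    def post_test_probability(pretest, sensitivity, specificity, positive=True):
        """Bayes' rule: update the disease probability after a test result."""
        if positive:
            num = sensitivity * pretest
            den = num + (1 - specificity) * (1 - pretest)
        else:
            num = (1 - sensitivity) * pretest
            den = num + specificity * (1 - pretest)
        return num / den

    def binary_entropy(p):
        """Diagnostic uncertainty in bits (Shannon entropy of a yes/no diagnosis)."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    pretest = 0.30                      # hypothetical pre-test probability
    post = post_test_probability(pretest, sensitivity=0.90, specificity=0.80)
    print(f"post-test probability: {post:.3f}")
    print(f"uncertainty before: {binary_entropy(pretest):.3f} bits, "
          f"after: {binary_entropy(post):.3f} bits")
    ```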

  2. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
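
    The abstract mentions closed-form distance expressions without reproducing them; for the Gaussian model the standard closed forms can be sketched as below. This is not the authors' code, and the intensity means and standard deviations are hypothetical.

    ```python
    import math

    def kl_gaussian(mu0, s0, mu1, s1):
        """Kullback-Leibler divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
        return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

    def hellinger_gaussian(mu0, s0, mu1, s1):
        """Hellinger distance between two univariate Gaussians."""
        h2 = 1 - math.sqrt(2 * s0 * s1 / (s0**2 + s1**2)) * \
             math.exp(-(mu0 - mu1)**2 / (4 * (s0**2 + s1**2)))
        return math.sqrt(h2)

    # hypothetical fluorescence-intensity fits for two cell populations
    print(kl_gaussian(120.0, 15.0, 150.0, 20.0))
    print(hellinger_gaussian(120.0, 15.0, 150.0, 20.0))
    ```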

  3. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  4. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  5. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  6. Quantification of crew workload imposed by communications-related tasks in commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Acton, W. H.; Crabtree, M. S.; Simons, J. C.; Gomer, F. E.; Eckel, J. S.

    1983-01-01

    Information theoretic analysis and subjective paired-comparison and task ranking techniques were employed in order to scale the workload of 20 communications-related tasks frequently performed by the captain and first officer of transport category aircraft. Tasks were drawn from taped conversations between aircraft and air traffic controllers (ATC). Twenty crewmembers performed subjective message comparisons and task rankings on the basis of workload. Information theoretic results indicated a broad range of task difficulty levels, and substantial differences between captain and first officer workload levels. Preliminary subjective data tended to corroborate these results. A hybrid scale reflecting the results of both the analytical and the subjective techniques is currently being developed. The findings will be used to select representative sets of communications for use in high fidelity simulation.

  7. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties; chief among them are the ability to deal with model error and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods, first, to the development of a cooperative autonomous observing system using sUAS for studying coherent structures and, second, to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  8. Enhancing Analytical Separations Using Super-Resolution Microscopy

    NASA Astrophysics Data System (ADS)

    Moringo, Nicholas A.; Shen, Hao; Bishop, Logan D. C.; Wang, Wenxiao; Landes, Christy F.

    2018-04-01

    Super-resolution microscopy is becoming an invaluable tool to investigate structure and dynamics driving protein interactions at interfaces. In this review, we highlight the applications of super-resolution microscopy for quantifying the physics and chemistry that occur between target proteins and stationary-phase supports during chromatographic separations. Our discussion concentrates on the newfound ability of super-resolved single-protein spectroscopy to inform theoretical parameters via quantification of adsorption-desorption dynamics, protein unfolding, and nanoconfined transport.

  9. Quantitative pathology in virtual microscopy: history, applications, perspectives.

    PubMed

    Kayser, Gian; Kayser, Klaus

    2013-07-01

    With the emerging success of commercially available personal computers and the rapid progress in the development of information technologies, morphometric analyses of static histological images have been introduced to improve our understanding of the biology of diseases such as cancer. The first applications were quantifications of immunohistochemical expression patterns. In addition to object counting and feature extraction, laws of thermodynamics have been applied in morphometric calculations termed syntactic structure analysis. Here, one has to consider that the information of an image can be calculated for separate hierarchical layers such as single pixels, clusters of pixels, segmented small objects, clusters of small objects, and objects of higher order composed of several small objects. Using syntactic structure analysis in histological images, functional states can be extracted and the efficiency of labor in tissues can be quantified. Image standardization procedures, such as shading correction and color normalization, can overcome artifacts blurring clear thresholds. Morphometric techniques are not only useful for learning more about the biological features of growth patterns; they can also be helpful in routine diagnostic pathology. In such cases, entropy calculations are applied in analogy to theoretical considerations concerning information content. Thus, regions with high information content can be highlighted automatically. Analysis of the "regions of high diagnostic value", in the context of clinical information, site of involvement and patient data (e.g. age, sex), can support histopathological differential diagnoses. It can be expected that quantitative virtual microscopy will open new possibilities for automated histological support. Automated integrated quantification of histological slides also serves quality assurance. The development and theoretical background of morphometric analyses in histopathology are reviewed, as well as their application and potential future implementation in virtual microscopy. Copyright © 2012 Elsevier GmbH. All rights reserved.

  10. Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio; Biamonte, Jacob

    2016-10-01

    Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
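
    A minimal sketch of the spectral construction this abstract describes, assuming the usual density-matrix definition rho = exp(-beta L)/Tr exp(-beta L) built from the graph Laplacian and the von Neumann (q -> 1) member of the entropy family; the two toy graphs are hypothetical and this is not the authors' implementation.

    ```python
    import numpy as np

    def density_matrix(adj, beta=1.0):
        """Spectral density matrix rho = exp(-beta * L) / Tr exp(-beta * L)."""
        L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
        w, V = np.linalg.eigh(L)
        expL = (V * np.exp(-beta * w)) @ V.T
        return expL / np.trace(expL)

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log(w)).sum())

    def js_divergence(rho1, rho2):
        """Jensen-Shannon divergence between two network density matrices."""
        mix = 0.5 * (rho1 + rho2)
        return von_neumann_entropy(mix) - 0.5 * (von_neumann_entropy(rho1) +
                                                 von_neumann_entropy(rho2))

    # two small hypothetical networks: a triangle and a path on three nodes
    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    print(js_divergence(density_matrix(triangle), density_matrix(path)))
    ```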

  11. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  12. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
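
    The abstract names the error drivers without reproducing the expression. A commonly cited leading-order estimate, which ignores the interpolation-dependent term and is not necessarily identical to the paper's generalized solution, can be sketched as follows; the speckle subset and noise level are hypothetical.

    ```python
    import numpy as np

    def displacement_noise_std(subset, noise_std, axis=1):
        """Leading-order random-error estimate for subset-based DIC:
        std(u) ~ sqrt(2) * sigma_noise / sqrt(sum of squared intensity gradients)."""
        grad = np.gradient(subset.astype(float), axis=axis)
        return np.sqrt(2.0) * noise_std / np.sqrt((grad ** 2).sum())

    # hypothetical 21x21-pixel speckle subset with 2 gray-level camera noise
    rng = np.random.default_rng(0)
    subset = rng.uniform(0, 255, size=(21, 21))
    print(displacement_noise_std(subset, noise_std=2.0))   # in pixels
    ```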

  13. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
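
    The claim that optimally weighted fusion achieves lower variance than naive averaging is the standard inverse-variance-weighting result; a minimal sketch with hypothetical lesion-size estimates and variances (not the paper's simulation) illustrates it.

    ```python
    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance weighted fusion of independent measurements."""
        v = np.asarray(variances, dtype=float)
        w = (1.0 / v) / (1.0 / v).sum()
        fused = float(np.dot(w, estimates))
        fused_var = 1.0 / (1.0 / v).sum()
        return fused, fused_var

    # hypothetical lesion-size estimates (mm) from two scans whose differing
    # voxel anisotropy gives them different measurement variances
    estimates, variances = [12.1, 11.4], [0.25, 1.00]
    fused, fused_var = fuse(estimates, variances)
    naive_var = np.mean(variances) / len(variances)   # variance of the plain average
    print(fused, fused_var, naive_var)                # fused_var < naive_var
    ```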

  14. Optimal Information Processing in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network; for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.

  15. Unified framework for information integration based on information geometry

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi

    2016-01-01

    Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289

  16. Matrix suppression as a guideline for reliable quantification of peptides by matrix-assisted laser desorption ionization.

    PubMed

    Ahn, Sung Hee; Bae, Yong Jin; Moon, Jeong Hee; Kim, Myung Soo

    2013-09-17

    We propose to divide matrix suppression in matrix-assisted laser desorption ionization into two parts, normal and anomalous. In quantification of peptides, the normal effect can be accounted for by constructing the calibration curve in the form of peptide-to-matrix ion abundance ratio versus concentration. The anomalous effect forbids reliable quantification and is noticeable when matrix suppression is larger than 70%. With this 70% rule, matrix suppression becomes a guideline for reliable quantification, rather than a nuisance. A peptide in a complex mixture can be quantified even in the presence of large amounts of contaminants, as long as matrix suppression is below 70%. The theoretical basis for the quantification method using a peptide as an internal standard is presented together with its weaknesses. A systematic method to improve quantification of high concentration analytes has also been developed.

  17. Quantification of Degeneracy in Biological Systems for Characterization of Functional Interactions Between Modules

    PubMed Central

    Li, Yao; Dwivedi, Gaurav; Huang, Wen; Yi, Yingfei

    2012-01-01

    There is an evolutionary advantage in having multiple components with overlapping functionality (i.e., degeneracy) in organisms. While theoretical considerations of degeneracy have been well established in neural networks using information theory, the same concepts have not been developed for differential systems, which form the basis of many biochemical reaction network descriptions in systems biology. Here we establish mathematical definitions of degeneracy, complexity and robustness that allow for the quantification of these properties in a system. By exciting a dynamical system with noise, the mutual information associated with a selected observable output and the interacting subspaces of input components can be used to define both complexity and degeneracy. The calculation of degeneracy in a biological network is a useful metric for evaluating features such as the sensitivity of a biological network to environmental evolutionary pressure. Using a two-receptor signal transduction network, we find that redundant components will not yield high degeneracy whereas compensatory mechanisms established by pathway crosstalk will. This form of analysis permits interrogation of large-scale differential systems for non-identical, functionally equivalent features that have evolved to maintain homeostasis during disruption of individual components. PMID:22619750

  18. Graph-theoretic analysis of discrete-phase-space states for condition change detection and quantification of information

    DOEpatents

    Hively, Lee M.

    2014-09-16

    Data collected from devices and from human physiological condition may be used to forewarn of critical events, such as machine/structural failure or medical events (e.g., stroke) indicated in brain/heart wave data. By monitoring the data and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data), for extraction of condition change.

  19. Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize

    PubMed Central

    Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto

    2014-01-01

    Variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and the sampling, preparation, and analysis variances were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxin quantification distributions in the eight maize lots. The acceptance and rejection probabilities for a lot at a given aflatoxin concentration were determined using the variance and the information on the selected distribution model to build the operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
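
    An operating characteristic curve of the kind described can be sketched under a simplifying normal approximation; the regulatory limit and plan standard deviations below are hypothetical, not the study's values.

    ```python
    import math

    def acceptance_probability(true_level, limit, total_sd):
        """P(accept lot) = P(measured aflatoxin <= limit) under a normal
        approximation with the plan's total (sampling+prep+analysis) SD."""
        z = (limit - true_level) / total_sd
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    limit = 20.0                                   # hypothetical limit, micrograms/kg
    levels = [0, 5, 10, 15, 20, 25, 30, 35, 40]    # true lot concentrations
    for total_sd, plan in [(6.0, "manual"), (3.5, "automatic")]:
        oc = [round(acceptance_probability(c, limit, total_sd), 2) for c in levels]
        print(plan, oc)
    ```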

  20. Single Cell Proteomics in Biomedicine: High-dimensional Data Acquisition, Visualization and Analysis

    PubMed Central

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-01-01

    New insights on cellular heterogeneity in the last decade provoke the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features and limitations of the analytical methods with the designated biological questions they seek to answer will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. PMID:28128880

  1. The Difference between Uncertainty and Information, and Why This Matters

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.

    2016-12-01

    Earth science investigation and arbitration (for decision making) is very often organized around a concept of uncertainty. It seems relatively straightforward that the purpose of our science is to reduce uncertainty about how environmental systems will react and evolve under different conditions. I propose here that approaching a science of complex systems as a process of quantifying and reducing uncertainty is a mistake, and specifically a mistake that is rooted in certain rather historic logical errors. Instead I propose that we should be asking questions about information. I argue here that an information-based perspective facilitates almost trivial answers to environmental science questions that are either difficult or theoretically impossible to answer when posed as questions about uncertainty. In particular, I propose that an information-centric perspective leads to: (1) coherent and non-subjective hypothesis tests for complex systems models; (2) process-level diagnostics for complex systems models; (3) methods for building complex systems models that allow for inductive inference without the need for a priori specification of likelihood functions or ad hoc error metrics; and (4) asymptotically correct quantification of epistemic uncertainty. To put this in slightly more basic terms, I propose that an information-theoretic philosophy of science has the potential to resolve certain important aspects of the Demarcation Problem and the Duhem-Quine Problem, and that Hydrology and other Earth Systems Sciences can immediately capitalize on this to address some of our most difficult and persistent problems.

  2. The Generalization of Mutual Information as the Information between a Set of Variables: The Information Correlation Function Hierarchy and the Information Structure of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Wolf, David R.

    2004-01-01

    The topic of this paper is a hierarchy of information-like functions, here named the information correlation functions, where each function of the hierarchy may be thought of as the information between the variables it depends upon. The information correlation functions are particularly suited to the description of the emergence of complex behaviors due to many-body or many-agent processes. They are particularly well suited to the quantification of the decomposition of the information carried among a set of variables or agents, and its subsets. In more graphical language, they provide the information theoretic basis for understanding the synergistic and non-synergistic components of a system, and as such should serve as a forceful toolkit for the analysis of the complexity structure of complex many agent systems. The information correlation functions are the natural generalization to an arbitrary number of sets of variables of the sequence starting with the entropy function (one set of variables) and the mutual information function (two sets). We start by describing the traditional measures of information (entropy) and mutual information.
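
    The hierarchy starts from entropy (one set of variables) and mutual information (two sets); the next member, under the usual inclusion-exclusion definition of three-variable co-information, can be sketched as follows. The XOR-style joint distribution is a hypothetical toy example, not taken from the paper.

    ```python
    import itertools
    import math

    def entropy(joint, axes):
        """Shannon entropy (bits) of the marginal over the given variable indices."""
        marg = {}
        for outcome, p in joint.items():
            key = tuple(outcome[i] for i in axes)
            marg[key] = marg.get(key, 0.0) + p
        return -sum(p * math.log2(p) for p in marg.values() if p > 0)

    def co_information(joint):
        """Inclusion-exclusion generalization of mutual information to 3 variables."""
        total = 0.0
        for r in (1, 2, 3):
            for axes in itertools.combinations(range(3), r):
                total += (-1) ** (r + 1) * entropy(joint, axes)
        return total

    # hypothetical joint distribution of three binary agents (XOR-like structure)
    joint = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
    print(co_information(joint))   # negative value signals synergy among the agents
    ```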

  3. Theoretical and experimental quantification of doubly and singly differential cross sections for electron-induced ionization of isolated tetrahydrofuran molecules

    DOE PAGES

    Champion, Christophe; Quinto, Michele A.; Bug, Marion U.; ...

    2014-07-29

    Electron-induced ionization of the commonly used surrogate of the DNA sugar-phosphate backbone, namely, the tetrahydrofuran molecule, is here theoretically described within the first Born approximation by means of a quantum-mechanical approach. Comparisons between theory and recent experiments are reported in terms of doubly and singly differential cross sections.

  4. Quantification of synthetic cannabinoids in herbal smoking blends using NMR.

    PubMed

    Dunne, Simon J; Rosengren-Holmberg, Jenny P

    2017-05-01

    Herbal smoking blends containing synthetic cannabinoids have become popular alternatives to marijuana. These products were previously sold in pre-packaged foil bags, but nowadays seizures usually contain synthetic cannabinoid powders together with unprepared plant materials. A question often raised by the Swedish police is how much smoking blend can be prepared from a certain amount of banned substance, in order to establish the severity of the crime. To address this question, information about the synthetic cannabinoid content in both the powder and the prepared herbal blends is necessary. In this work, an extraction procedure compatible with direct NMR quantification of synthetic cannabinoids in herbal smoking blends was developed. Extraction media, time and efficiency were tested for different carrier materials containing representative synthetic cannabinoids. The developed protocol utilizes a 30 min extraction step in d4-methanol in the presence of an internal standard, allowing direct quantitation of the extract using NMR. The accuracy of the developed method was tested using in-house prepared herbal smoking blends. The results showed deviations of less than 0.2% from the actual content, proving that the method is sufficiently accurate for these quantifications. Using this method, ten synthetic cannabinoids present in sixty-three different herbal blends seized by the Swedish police between October 2012 and April 2015 were quantified. The results showed a variation in cannabinoid content from 1.5% (w/w) for mixtures containing MDMB-CHMICA to over 5% (w/w) for mixtures containing 5F-AKB-48. This is important information for forensic experts when making theoretical calculations of production quantities in legal cases regarding "home-made" herbal smoking blends. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Functional analysis of ultra high information rates conveyed by rat vibrissal primary afferents

    PubMed Central

    Chagas, André M.; Theis, Lucas; Sengupta, Biswa; Stüttgen, Maik C.; Bethge, Matthias; Schwarz, Cornelius

    2013-01-01

    Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of “how much” information is conveyed by primary afferents, using the direct method (DM), a classical information theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight on “what” is coded by primary afferents. Amongst the kinematic variables tested—position, velocity, and acceleration—primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only if combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show preference for well separated multiple stimuli (i.e., well separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). In order to deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of stimulus trajectory achieved 80–90%. The final 10–20% were found to be due to non-linear coding by spike bursts. PMID:24367295

  6. Multisensory integration processing during olfactory-visual stimulation-An fMRI graph theoretical network analysis.

    PubMed

    Ripp, Isabelle; Zur Nieden, Anna-Nora; Blankenagel, Sonja; Franzmeier, Nicolai; Lundström, Johan N; Freiherr, Jessica

    2018-05-07

    In this study, we aimed to understand how whole-brain neural networks compute sensory information integration based on the olfactory and visual system. Task-related functional magnetic resonance imaging (fMRI) data was obtained during unimodal and bimodal sensory stimulation. Based on the identification of multisensory integration processing (MIP) specific hub-like network nodes analyzed with network-based statistics using region-of-interest based connectivity matrices, we conclude the following brain areas to be important for processing the presented bimodal sensory information: right precuneus connected contralaterally to the supramarginal gyrus for memory-related imagery and phonology retrieval, and the left middle occipital gyrus connected ipsilaterally to the inferior frontal gyrus via the inferior fronto-occipital fasciculus including functional aspects of working memory. Applying graph theory to quantify the resulting complex network topologies indicates significantly increased global efficiency and clustering coefficients in networks including aspects of MIP, reflecting simultaneously better integration and segregation. Graph theoretical analysis of positive and negative network correlations, allowing for inferences about excitatory and inhibitory network architectures, revealed (not significantly, but very consistently) that MIP-specific neural networks are dominated by inhibitory relationships between brain regions involved in stimulus processing. © 2018 Wiley Periodicals, Inc.

  7. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    PubMed

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights on cellular heterogeneity in the last decade provoke the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods with the designated biological questions they seek to answer will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Substitution effect on a hydroxylated chalcone: Conformational, topological and theoretical studies

    NASA Astrophysics Data System (ADS)

    Custodio, Jean M. F.; Vaz, Wesley F.; de Andrade, Fabiano M.; Camargo, Ademir J.; Oliveira, Guilherme R.; Napolitano, Hamilton B.

    2017-05-01

    The effect of substituents on two hydroxylated chalcones was studied in this work. The first chalcone, bearing a dimethylamine group (HY-DAC), and the second, bearing three methoxy groups (HY-TRI), were synthesized and crystallized from ethanol in the centrosymmetric space group P21/c. The geometric parameters and supramolecular arrangement of both structures, obtained from single-crystal X-ray diffraction data, were analyzed. The intermolecular interactions were investigated by Hirshfeld surfaces with their respective 2D plots for quantification of each type of contact. Additionally, the observed interactions were characterized by QTAIM analysis, and DFT calculations were applied to obtain theoretical vibrational spectra, the localization and quantification of frontier orbitals, and the molecular electrostatic potential map. The planarity of both structures was affected by the substituents, which led to different monoclinic crystal packing. The calculated harmonic vibrational frequencies and HOMO-LUMO gap confirmed the stability of the structures, while the intermolecular interactions were confirmed by the electrostatic potential map and QTAIM analysis.

  9. Uncertainty Quantification and Statistical Engineering for Hypersonic Entry Applications

    NASA Technical Reports Server (NTRS)

    Cozmuta, Ioana

    2011-01-01

    NASA has invested significant resources in developing and validating a mathematical construct for TPS margin management: a) Tailorable for low/high reliability missions; b) Tailorable for ablative/reusable TPS; c) Uncertainty Quantification and Statistical Engineering are valuable tools not exploited enough; and d) Need to define strategies combining both Theoretical Tools and Experimental Methods. The main reason for this lecture is to give a flavor of where UQ and SE could contribute and hope that the broader community will work with us to improve in these areas.

  10. Collaborative Study of Analysis of High Resolution Infrared Atmospheric Spectra Between NASA Langley Research Center and the University of Denver

    NASA Technical Reports Server (NTRS)

    Goldman, Aaron

    1999-01-01

    The Langley-D.U. collaboration on the analysis of high-resolution infrared atmospheric spectra covered a number of important studies of trace-gas identification and quantification from field spectra, and of spectral line parameter analysis. The collaborative work included: quantification and monitoring of trace gases from ground-based spectra available from various locations and seasons and from balloon flights; studies toward identification and quantification of isotopic species, mostly oxygen and sulfur isotopes; searches for new species in the available spectra; updates of spectroscopic line parameters by combining laboratory and atmospheric spectra with theoretical spectroscopy methods; study of trends of atmospheric trace constituents; and algorithm development, retrieval intercomparisons, and automation of the analysis of NDSC spectra, for both column amounts and vertical profiles.

  11. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation windows setups. For consistent evaluation we developed LFQbench, an R-package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  12. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.

  13. Monte Carlo Modeling-Based Digital Loop-Mediated Isothermal Amplification on a Spiral Chip for Absolute Quantification of Nucleic Acids.

    PubMed

    Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng

    2017-03-21

    Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10⁻² copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guideline for future development of dLAMP devices.
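
    The Poisson back-calculation underlying any digital amplification readout can be sketched as follows; this is an outline only, not the paper's full Monte Carlo model. The positive-chamber count is hypothetical, while the chamber number and volume follow the abstract.

    ```python
    import math

    def digital_concentration(positive, total, chamber_volume_nl):
        """Poisson estimate of target concentration (copies/uL) from a digital
        LAMP readout: lambda = -ln(1 - fraction positive) copies per chamber."""
        frac = positive / total
        if frac >= 1.0:
            raise ValueError("all chambers positive: above the dynamic range")
        lam = -math.log(1.0 - frac)                 # mean copies per chamber
        return lam / (chamber_volume_nl * 1e-3)     # nL -> uL

    # hypothetical readout on a 1200-chamber, 9.6 nL/chamber spiral chip
    print(digital_concentration(positive=300, total=1200, chamber_volume_nl=9.6))
    ```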

  14. Reconciling Experiment and Theory in the Use of Aryl-Extended Calix[4]pyrrole Receptors for the Experimental Quantification of Chloride–π Interactions in Solution

    PubMed Central

    Bauzá, Antonio; Quiñonero, David; Frontera, Antonio; Ballester, Pablo

    2015-01-01

    In this manuscript we consider from a theoretical point of view the recently reported experimental quantification of anion–π interactions (the attractive force between electron-deficient aromatic rings and anions) in solution using aryl-extended calix[4]pyrrole receptors as model systems. Experimentally, two series of calix[4]pyrrole receptors functionalized, respectively, with two and four aryl rings at the meso positions, were used to assess the strength of chloride–π interactions in acetonitrile solution. As a result of these studies, the contribution of each individual chloride–π interaction was quantified to be very small (<1 kcal/mol). This result is in contrast with the values derived from most theoretical calculations. Herein we report a theoretical study using high-level density functional theory (DFT) calculations that provides a plausible explanation for the observed disagreement between theory and experiment. The study reveals the existence of molecular interactions between solvent molecules and the aromatic walls of the receptors that strongly modulate the chloride–π interaction. In addition, the obtained theoretical results also suggest that the chloride-calix[4]pyrrole complex used as reference to dissect experimentally the contribution of the chloride–π interactions to the total binding energy for both the two- and four-wall aryl-extended calix[4]pyrrole model systems is probably not ideal. PMID:25913375

  15. Experimental and theoretical investigations on the antioxidant activity of isoorientin from Crotalaria globosa

    NASA Astrophysics Data System (ADS)

    Deepha, V.; Praveena, R.; Sivakumar, Raman; Sadasivam, K.

    2014-03-01

    Naturally occurring flavonoids attract increasing interest because of their bioactivity as antioxidants. The present investigation employs combined experimental and theoretical methods to determine the radical scavenging activity and the phytochemicals present in Crotalaria globosa, a novel plant source. Preliminary quantification of the ethanolic leaf extract shows higher phenolic and flavonoid content than the root extract, which is also validated by the DPPH radical assay. Further analysis is carried out with successive leaf extracts in solvents of varying polarity. In the DPPH radical and FRAP assays, the ethyl acetate fraction (EtOAc) exhibits the highest scavenging activity, followed by the ethanol fraction (EtOH), whereas in the NOS assay the ethanol fraction slightly outperforms the EtOAc fraction. LC-MS analysis provides tentative evidence for the presence of a flavonoid C-glycoside in the EtOAc fraction (yellow solid). The presence of the flavonoid isoorientin has been confirmed through isolation (PTLC) and identified by spectroscopic methods (UV-visible and 1H NMR). Utilizing the B3LYP/6-311G(d,p) level of theory, the structure and reactivity of isoorientin have been explored theoretically. Analysis of the theoretical bond dissociation energy values for all O-H sites of isoorientin reveals that the minimum energy is required to dissociate an H-atom from the B-ring rather than from the A- and C-rings. To validate the antioxidant characteristics of isoorientin, the relevant molecular descriptors (IP, HOMO-LUMO gap, Mulliken spin densities and molecular electrostatic potential surfaces) have been computed and interpreted. The experimental and theoretical results together indicate that isoorientin can act as a potent antiradical scavenger in oxidative systems.

  16. Photoelectron angular distribution from free SiO2 nanoparticles as a probe of elastic electron scattering.

    PubMed

    Antonsson, E; Langer, B; Halfpap, I; Gottwald, J; Rühl, E

    2017-06-28

    In order to gain quantitative information on the surface composition of nanoparticles from X-ray photoelectron spectroscopy, a detailed understanding of photoelectron transport phenomena in these samples is needed. Theoretical results on the elastic and inelastic scattering have been reported, but a rigorous experimental verification is lacking. We report in this work on the photoelectron angular distribution from free SiO2 nanoparticles (d = 122 ± 9 nm) after ionization by soft X-rays above the Si 2p and O 1s absorption edges, which gives insight into the relative importance of elastic and inelastic scattering channels in the sample particles. The photoelectron angular anisotropy is found to be lower for photoemission from SiO2 nanoparticles than that expected from the theoretical values for the isolated Si and O atoms in the photoelectron kinetic energy range 20-380 eV. The reduced angular anisotropy is explained by elastic scattering of the outgoing photoelectrons from neighboring atoms, smearing out the atomic distribution. Photoelectron angular distributions yield detailed information on photoelectron elastic scattering processes allowing for a quantification of the number of elastic scattering events the photoelectrons have undergone prior to leaving the sample. The interpretation of the experimental photoelectron angular distributions is complemented by Monte Carlo simulations, which take inelastic and elastic photoelectron scattering into account using theoretical values for the scattering cross sections. The results of the simulations reproduce the experimental photoelectron angular distributions and provide further support for the assignment that elastic and inelastic electron scattering processes need to be considered.

  17. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    NASA Astrophysics Data System (ADS)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-12-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed by adopting a non-parametric information theory technique, the transfer entropy and its normalized variant, which distinctly quantifies the actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4 and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)), the El Niño Southern Oscillation (ENSO) and the Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. UV (~9%) and ENSO (~12%) act as secondary drivers, while the remaining factors play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
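
    A minimal histogram-based transfer-entropy estimator of the kind this abstract relies on might be sketched as follows; the coupled toy series are hypothetical and this is not the authors' pipeline.

    ```python
    from collections import Counter
    import math
    import random

    def transfer_entropy(x, y, bins=4):
        """Histogram estimate of TE(X -> Y) in bits, with one-step history."""
        def disc(series):
            lo, hi = min(series), max(series)
            return [min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
                    for v in series]
        xd, yd = disc(x), disc(y)
        triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
        pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
        pairs_yy = Counter(zip(yd[1:], yd[:-1]))
        singles = Counter(yd[:-1])
        n = len(yd) - 1
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_joint = c / n
            p_y1_given_yx = c / pairs_yx[(y0, x0)]
            p_y1_given_y = pairs_yy[(y1, y0)] / singles[y0]
            te += p_joint * math.log2(p_y1_given_yx / p_y1_given_y)
        return te

    # toy example: y is a noisy lagged copy of x, so TE(x->y) > TE(y->x)
    random.seed(1)
    x = [random.gauss(0, 1) for _ in range(5000)]
    y = [0.0] + [0.8 * x[t - 1] + 0.2 * random.gauss(0, 1) for t in range(1, 5000)]
    print(transfer_entropy(x, y), transfer_entropy(y, x))
    ```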

  18. Metering error quantification under voltage and current waveform distortion

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion produces metering errors in smart meters. Because of its negative effects on metering accuracy and fairness, the combined energy-metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error together, a comprehensive error analysis method suitable for renewable energy and nonlinear loads is presented. The proposed method is verified by simulation.

  19. Creative Stories: A Storytelling Game Fostering Creativity

    ERIC Educational Resources Information Center

    Koukourikos, Antonis; Karampiperis, Pythagoras; Panagopoulos, George

    2014-01-01

    The process of identifying techniques for fostering creativity, and applying these theoretical constructs in real-world educational activities, is, by nature, multifaceted and not straightforward, pertaining to several fields such as cognitive theory and psychology. Furthermore, the quantification of the impact of different activities on…

  20. ELECTROCHEMICAL DECHLORINATION OF TRICHLOROETHYLENE USING GRANULAR-GRAPHITE ELECTRODES: IDENTIFICATION AND QUANTIFICATION OF DECHLORINATION PRODUCTS

    EPA Science Inventory

    Electrochemical degradation (ECD) utilizes high redox potential at the anode and low redox potential at the cathode to oxidize and/or reduce organic and inorganic contaminants. ECD of Trichloroethylene (TCE), although theoretically possible, has not been experimentally proven. Th...

  1. Empirical Evidence for Childhood Depression.

    ERIC Educational Resources Information Center

    Lachar, David

    Although several theoretical positions deal with the concept of childhood depression, accurate measurement of depression can only occur if valid and reliable measures are available. Current efforts emphasize direct questioning of the child and quantification of parents' observations. One scale used to study childhood depression, the Personality…

  2. Photon path distribution and optical responses of turbid media: theoretical analysis based on the microscopic Beer-Lambert law.

    PubMed

    Tsuchiya, Y

    2001-08-01

    A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses for time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength of l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses (the total attenuation, the mean pathlength, etc.) are expressed as functions of the PPD and the absorption distribution.
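
    To make the role of the PPD concrete, the short sketch below evaluates the microscopic Beer-Lambert relation numerically for a homogeneous absorber: the detected intensity is the PPD weighted by exp(-mu_a*l), and the mean pathlength follows as the derivative of the attenuation with respect to mu_a. The gamma-shaped PPD and all numerical values are invented stand-ins, not quantities from the paper.

```python
import numpy as np

# Illustrative photon path distribution (PPD) on a pathlength grid; the gamma
# shape below is an invented stand-in for the absorption-free distribution.
l = np.linspace(0.01, 300.0, 4000)              # pathlength grid [mm]
dl = l[1] - l[0]
ppd = l**2 * np.exp(-l / 20.0)
ppd /= ppd.sum() * dl                           # normalize to unit area

mu_a = 0.01                                     # absorption coefficient [1/mm]

# Microscopic Beer-Lambert law: a path of length l is attenuated by exp(-mu_a*l)
survival = np.exp(-mu_a * l)
transmittance = np.sum(ppd * survival) * dl     # detected fraction of photons
attenuation = -np.log(transmittance)            # total attenuation

# Mean pathlength of the detected photons = -d(ln I)/d(mu_a)
mean_path = np.sum(l * ppd * survival) * dl / transmittance

print(f"total attenuation: {attenuation:.3f}")
print(f"mean pathlength  : {mean_path:.1f} mm")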

  3. Endogenously determined cycles: empirical evidence from livestock industries.

    PubMed

    McCullough, Michael P; Huffaker, Ray; Marsh, Thomas L

    2012-04-01

    This paper applies the techniques of phase space reconstruction and recurrence quantification analysis to investigate U.S. livestock cycles in relation to recent literature on the business cycle. Results are presented for pork and cattle cycles, providing empirical evidence that the cycles themselves have slowly diminished. By comparing the evolution of production processes for the two livestock cycles we argue that the major cause for this moderation is largely endogenous. The analysis suggests that previous theoretical models relying solely on exogenous shocks to create cyclical patterns do not fully capture changes in system dynamics. Specifically, the biological constraint in livestock dynamics has become less significant while technology and information are relatively more significant. Concurrently, vertical integration of the supply chain may have improved inventory management, all resulting in a small, less deterministic, cyclical effect.
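
    For readers unfamiliar with the technique, the sketch below shows the core of a recurrence quantification analysis: delay embedding, a thresholded distance matrix, and two standard measures (recurrence rate and determinism). It is a bare-bones illustration on synthetic signals; the embedding parameters, threshold heuristic and test series are arbitrary and not those used in the paper.

```python
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def rqa(x, dim=3, tau=2, eps=None, lmin=2):
    """Recurrence rate and determinism of a scalar time series (basic RQA)."""
    v = embed(np.asarray(x, float), dim, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * d.std()                      # common heuristic threshold
    r = (d <= eps).astype(int)
    n = len(r)
    rec_rate = r[~np.eye(n, dtype=bool)].mean()  # exclude the trivial main diagonal

    # determinism: fraction of recurrence points on diagonal lines of length >= lmin
    lengths = []
    for k in range(1, n):                        # upper off-diagonals (matrix is symmetric)
        line = np.diag(r, k)
        runs = np.split(line, np.where(np.diff(line) != 0)[0] + 1)
        lengths += [len(run) for run in runs if run[0] == 1]
    lengths = np.array(lengths)
    det = lengths[lengths >= lmin].sum() / lengths.sum() if lengths.size else 0.0
    return rec_rate, det

if __name__ == "__main__":
    t = np.linspace(0, 20 * np.pi, 800)
    print("sine :", rqa(np.sin(t)))                                   # high determinism
    print("noise:", rqa(np.random.default_rng(1).normal(size=800)))   # low determinism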

  4. Localized 2D COSY sequences: Method and experimental evaluation for a whole metabolite quantification approach

    NASA Astrophysics Data System (ADS)

    Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène

    2015-11-01

    Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole metabolite quantification approach, the quantification performance of correlation spectroscopy is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and the quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable the computation of the lower bound for the error variance - generally known as the Cramér-Rao bounds (CRBs), a standard of precision - on the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of a factor of two per unit of acquisition time compared with JPRESS. A rapid analysis might suggest that the relative CRBs of LCOSY compared with JPRESS (expressed as a percentage of the concentration values) should be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.
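
    As a sketch of how CRBs follow from the Fisher matrix formalism mentioned above, the code below differentiates a toy parametric signal model numerically and inverts J^T J / sigma^2. The damped-sinusoid model, parameter values and noise level are invented for illustration; a real 2D MRS model would use the full (complex) signal and analytical derivatives.

```python
import numpy as np

def cramer_rao_bounds(model, theta, t, sigma, dtheta=1e-6):
    """Lower bounds (as standard deviations) on parameter estimates when fitting
    model(t, theta) to data corrupted by white Gaussian noise of std sigma.
    Fisher matrix: F = J^T J / sigma^2, with J the Jacobian w.r.t. theta."""
    theta = np.asarray(theta, float)
    f0 = model(t, theta)
    J = np.empty((len(f0), len(theta)))
    for k in range(len(theta)):                 # forward-difference Jacobian
        tp = theta.copy()
        tp[k] += dtheta
        J[:, k] = (model(t, tp) - f0) / dtheta
    F = J.T @ J / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(F)))

# toy 1D signal: exponentially damped sinusoid with parameters (amplitude, T2, frequency)
def fid(t, p):
    a, T2, f = p
    return a * np.exp(-t / T2) * np.cos(2 * np.pi * f * t)

t = np.arange(0.0, 0.5, 1e-3)                   # 0.5 s acquisition, 1 ms dwell time
crb = cramer_rao_bounds(fid, [1.0, 0.08, 60.0], t, sigma=0.05)
print("CRB std on (amplitude, T2 [s], frequency [Hz]):", crb)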

  5. New class of radioenzymatic assay for the quantification of p-tyramine and phenylethylamine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, D.P.; Van Huysse, J.W.; Bowsher, R.R.

    Radioenzymatic assays are widely used for the quantification of a number of biogenic amines. All previous procedures have utilized methyltransferases derived from mammalian tissues. In this assay for the quantification of the trace aralkylamines p-tyramine (p-tym) and phenylethylamine (PEA), an enzyme, tyramine N-methyltransferase, isolated from sprouted barley roots was used. The enzyme was specific for phenylethylamines. Of 26 structurally related compounds, only p-tym, PEA, m-tym and amphetamine were substrates in vitro. Theoretical maximal methylation of substrates occurred at 10-20 °C. When TLC was used to separate the radiolabeled reaction products, a specific method was developed for p-tym and PEA. The assay had a sensitivity of 0.8 and 2.8 pg/tube with a C.V. < 5% and was applicable to human plasma and urine. Assay throughput is similar to that of other TLC-based radioenzymatic assays.

  6. Collaborative Study for Analysis of High Resolution Infrared Atmospheric Spectra Between NASA Langley Research Center and the University of Denver

    NASA Technical Reports Server (NTRS)

    Goldman, A.

    2002-01-01

    The Langley-D.U. collaboration on the analysis of high resolution infrared atmospheric spectra covered a number of important studies of trace gas identification and quantification from field spectra, and of spectral line parameter analysis. The collaborative work included: 1) Quantification and monitoring of trace gases from ground-based spectra available from various locations and seasons and from balloon flights; 2) Identification and preliminary quantification of several isotopic species, including oxygen and sulfur isotopes; 3) Search for new species in the available spectra, including the use of selective coadding of ground-based spectra for high signal to noise; 4) Update of spectroscopic line parameters, by combining laboratory and atmospheric spectra with theoretical spectroscopy methods; 5) Study of trends and correlations of atmospheric trace constituents; and 6) Algorithm development, retrieval intercomparisons and automation of the analysis of NDSC spectra, for both column amounts and vertical profiles.

  7. Multi-tissue partial volume quantification in multi-contrast MRI using an optimised spectral unmixing approach.

    PubMed

    Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme

    2018-06-01

    Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It firstly proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which, in the context of multi-contrast MRI data acquisition, allows the imaging sequence parameters to be set appropriately. Secondly, an efficient proportion quantification algorithm based on the minimisation of a penalised least-square criterion incorporating a regularity constraint on the spatial distribution of the proportions is proposed. Furthermore, the resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
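
    The per-voxel part of such a proportion estimation can be pictured as a small constrained least-squares problem. The sketch below uses an invented signature matrix, treats each voxel independently, and omits the spatial regularity penalty that the paper adds on top; only the physical constraint set (non-negative proportions summing to one) is shown.

```python
import numpy as np
from scipy.optimize import minimize

# Signature matrix S: each column is the (invented) multi-contrast signal of one
# pure tissue across four acquired contrasts; three tissues are assumed here.
S = np.array([[1.00, 0.35, 0.10],
              [0.80, 0.55, 0.20],
              [0.55, 0.75, 0.40],
              [0.30, 0.90, 0.70]])

def unmix_voxel(y, S):
    """Least-squares estimate of the tissue proportions p for one voxel signal y,
    under the physical constraints p >= 0 and sum(p) = 1."""
    m = S.shape[1]
    res = minimize(lambda p: np.sum((S @ p - y) ** 2),
                   x0=np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
                   method="SLSQP")
    return res.x

rng = np.random.default_rng(3)
p_true = np.array([0.6, 0.3, 0.1])
y = S @ p_true + 0.01 * rng.normal(size=4)        # noisy mixed-voxel signal
print("true proportions     :", p_true)
print("estimated proportions:", np.round(unmix_voxel(y, S), 3))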

  8. A comprehensive study of the delay vector variance method for quantification of nonlinearity in dynamical systems

    PubMed Central

    Mandic, D. P.; Ryan, K.; Basu, B.; Pakrashi, V.

    2016-01-01

    Although vibration monitoring is a popular method to monitor and assess dynamic structures, quantification of the linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a Laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structurally dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by similar inputs. PMID:26909175

  9. Nuclemeter: A Reaction-Diffusion Column for Quantifying Nucleic Acids Undergoing Enzymatic Amplification

    NASA Astrophysics Data System (ADS)

    Bau, Haim; Liu, Changchun; Killawala, Chitvan; Sadik, Mohamed; Mauk, Michael

    2014-11-01

    Real-time amplification and quantification of specific nucleic acid sequences plays a major role in many medical and biotechnological applications. In the case of infectious diseases, quantification of the pathogen load in patient specimens is critical to assessing disease progression, effectiveness of drug therapy, and emergence of drug resistance. Typically, nucleic acid quantification requires sophisticated and expensive instruments, such as real-time PCR machines, which are not appropriate for on-site use and for low-resource settings. We describe a simple, low-cost, reaction-diffusion based method for end-point quantification of target nucleic acids undergoing enzymatic amplification. The number of target molecules is inferred from the position of the reaction-diffusion front, analogous to reading temperature in a mercury thermometer. We model the process with the Fisher-Kolmogoroff-Petrovskii-Piscounoff (FKPP) equation and compare theoretical predictions with experimental observations. The proposed method is suitable for nucleic acid quantification at the point of care, is compatible with multiplexing and high-throughput processing, and can function instrument-free. C.L. was supported by NIH/NIAID K25AI099160; M.S. was supported by the Pennsylvania Ben Franklin Technology Development Authority; C.K. and H.B. were funded, in part, by NIH/NIAID 1R41AI104418-01A1.
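
    The read-out principle can be mimicked with a few lines of explicit finite differences for the FKPP equation: the reaction-diffusion front advances at a nearly constant speed (asymptotically 2*sqrt(D*r)), which is what makes the front position usable like a thermometer column. The parameters and grid below are arbitrary dimensionless choices, not the device's.

```python
import numpy as np

# Explicit finite-difference solver for the FKPP equation
#   du/dt = D * d2u/dx2 + r * u * (1 - u)
# The front (where u crosses 0.5) settles to a speed of ~2*sqrt(D*r).
D, r = 1.0, 1.0
dx, dt = 0.5, 0.05                       # D*dt/dx^2 = 0.2 < 0.5 -> stable
x = np.arange(0.0, 200.0, dx)
u = np.where(x < 5.0, 1.0, 0.0)          # amplified product initially near the inlet

def front_position(u, x):
    return x[np.argmax(u < 0.5)]         # first grid point below half-maximum

prev = front_position(u, x)
for step in range(1, 1601):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0               # crude no-flux boundaries
    u = u + dt * (D * lap + r * u * (1 - u))
    if step % 400 == 0:
        pos = front_position(u, x)
        speed = (pos - prev) / (400 * dt)
        prev = pos
        print(f"t = {step * dt:5.1f}  front at x = {pos:6.1f}  "
              f"speed = {speed:.2f}  (theory {2 * np.sqrt(D * r):.2f})")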

  10. Phase 1 of the near term hybrid passenger vehicle development program, appendix A. Mission analysis and performance specification studies. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Traversi, M.; Barbarek, L. A. C.

    1979-01-01

    A handy reference for JPL minimum requirements and guidelines is presented, as well as information on the use of the fundamental information source represented by the Nationwide Personal Transportation Survey. Data on U.S. demographic statistics and highway speeds are included, along with methodology for normal parameter evaluation, synthesis of daily distance distributions, and projection of car ownership distributions. The synthesis of tentative mission quantification results, of intermediate mission quantification results, and of mission quantification parameters is considered, and 1985 in-place fleet fuel economy data are included.

  11. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    PubMed

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
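
    The two information-theoretic ingredients named above, Shannon entropy and the Jensen-Shannon distance, are easy to state in code. The sketch below applies them to invented within-region methylation-level distributions, not to the Ising-model outputs or the WGBS data of the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def js_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, in bits)
    between two discrete distributions on the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return np.sqrt(jsd)

# Illustrative methylation-level distributions within one genomic region
# (fraction of reads showing 0, 1, ..., 4 methylated CpGs out of 4):
normal = np.array([0.05, 0.05, 0.10, 0.20, 0.60])   # mostly methylated, low entropy
tumour = np.array([0.25, 0.20, 0.15, 0.20, 0.20])   # disordered, high entropy

print("entropy normal :", round(shannon_entropy(normal), 3), "bits")
print("entropy tumour :", round(shannon_entropy(tumour), 3), "bits")
print("JS distance    :", round(js_distance(normal, tumour), 3))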

  12. Self-dissimilarity as a High Dimensional Complexity Measure

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Macready, William

    2005-01-01

    For many systems characterized as "complex" the patterns exhibited on different scales differ markedly from one another. For example the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain-forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.

  13. The use of self-quantification systems for personal health information: big data management activities and prospects.

    PubMed

    Almalki, Manal; Gray, Kathleen; Sanchez, Fernando Martin

    2015-01-01

    Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). PHI-SQS model describes two types of activities that individuals go through during their journey of health self-managed practice, which are 'self-quantification' and 'self-activation'. In this paper, we aimed to examine thoroughly the first type of activity in PHI-SQS which is 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife), that collect three key health data types (Environmental exposure, Physiological patterns, Genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Self-quantification in personal health maintenance appears promising and exciting. However, more studies are needed to support its use in this field. The proposed model will in the future lead to developing a measure for assessing the effectiveness of interventions to support using SQS for health self-management (e.g., assessing the complexity of self-quantification activities, and activation of the individuals).

  14. The use of self-quantification systems for personal health information: big data management activities and prospects

    PubMed Central

    2015-01-01

    Background Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). PHI-SQS model describes two types of activities that individuals go through during their journey of health self-managed practice, which are 'self-quantification' and 'self-activation'. Objectives In this paper, we aimed to examine thoroughly the first type of activity in PHI-SQS which is 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. Method We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife), that collect three key health data types (Environmental exposure, Physiological patterns, Genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. Findings We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Conclusions Self-quantification in personal health maintenance appears promising and exciting. However, more studies are needed to support its use in this field. The proposed model will in the future lead to developing a measure for assessing the effectiveness of interventions to support using SQS for health self-management (e.g., assessing the complexity of self-quantification activities, and activation of the individuals). PMID:26019809

  15. Quality Concerns in Technical Education in India: A Quantifiable Quality Enabled Model

    ERIC Educational Resources Information Center

    Gambhir, Victor; Wadhwa, N. C.; Grover, Sandeep

    2016-01-01

    Purpose: The paper aims to discuss current Technical Education scenarios in India. It proposes modelling the factors affecting quality in a technical institute and then applying a suitable technique for assessment, comparison and ranking. Design/methodology/approach: The paper chose graph theoretic approach for quantification of quality-enabled…

  16. The Politics of Language. Lektos: Interdisciplinary Working Papers in Language Sciences, Vol. 3, No. 2.

    ERIC Educational Resources Information Center

    St. Clair, Robert N.

    The areas of language planning and the language of oppression are discussed within the theoretical framework of existential sociolinguistics. This tradition is contrasted with the contemporary models of positivism with its assumptions about constancy and quantification. The proposed model brings in social history, intent, consciousness, and other…

  17. PREFACE: Quantum Information, Communication, Computation and Cryptography

    NASA Astrophysics Data System (ADS)

    Benatti, F.; Fannes, M.; Floreanini, R.; Petritis, D.

    2007-07-01

    The application of quantum mechanics to information related fields such as communication, computation and cryptography is a fast growing line of research that has been witnessing an outburst of theoretical and experimental results, with possible practical applications. On the one hand, quantum cryptography with its impact on secrecy of transmission is having its first important actual implementations; on the other hand, the recent advances in quantum optics, ion trapping, BEC manipulation, spin and quantum dot technologies allow us to put to direct test a great deal of theoretical ideas and results. These achievements have stimulated a reborn interest in various aspects of quantum mechanics, creating a unique interplay between physics, both theoretical and experimental, mathematics, information theory and computer science. In view of all these developments, it appeared timely to organize a meeting where graduate students and young researchers could be exposed to the fundamentals of the theory, while senior experts could exchange their latest results. The activity was structured as a school followed by a workshop, and took place at The Abdus Salam International Center for Theoretical Physics (ICTP) and The International School for Advanced Studies (SISSA) in Trieste, Italy, from 12-23 June 2006. The meeting was part of the activity of the Joint European Master Curriculum Development Programme in Quantum Information, Communication, Cryptography and Computation, involving the Universities of Cergy-Pontoise (France), Chania (Greece), Leuven (Belgium), Rennes1 (France) and Trieste (Italy). This special issue of Journal of Physics A: Mathematical and Theoretical collects 22 contributions from well known experts who took part in the workshop. They summarize the present day status of the research in the manifold aspects of quantum information. The issue is opened by two review articles, the first by G Adesso and F Illuminati discussing entanglement in continuous variable systems, the second by T Prosen, discussing chaos and complexity in quantum systems. Both topics have theoretical as well as experimental relevance and are likely to witness a fast growing development in the near future. The remaining contributions present more specific and very recent results. They involve the study of the structure of quantum states and their estimation (B Baumgartner et al, C King et al, S Olivares et al, D Petz et al and W van Dam et al), of entanglement generation and its quantification (G Brida et al, F Ciccarello et al, G Costantini et al, O Romero-Isart et al, D Rossini et al, A Serafini et al and D Vitali et al), of randomness related effects on entanglement behaviour (I Akhalwaya et al, O Dahlsten et al and L Viola et al), and of abstract and applied aspects of quantum computation and communication (K Audenart, G M D'Ariano et al, N Datta et al, L C Kwek et al and M Nathanson et al). We would like to express our gratitude to the European Commission, the Abdus Salam ICTP, SISSA and Eurotech SpA (Amaro, Udine, Italy) for financial and/or logistic support. Special thanks also go to the workshop secretary Marina De Comelli, and the secretaries of the Department of Theoretical Physics, University of Trieste, Sabrina Gaspardis and Rosita Glavina for their precious help and assistance.

  18. dPCR: A Technology Review

    PubMed Central

    Quan, Phenix-Lan; Sauzade, Martin

    2018-01-01

    Digital Polymerase Chain Reaction (dPCR) is a novel method for the absolute quantification of target nucleic acids. Quantification by dPCR hinges on the fact that the random distribution of molecules in many partitions follows a Poisson distribution. Each partition acts as an individual PCR microreactor and partitions containing amplified target sequences are detected by fluorescence. The proportion of PCR-positive partitions suffices to determine the concentration of the target sequence without a need for calibration. Advances in microfluidics enabled the current revolution of digital quantification by providing efficient partitioning methods. In this review, we compare the fundamental concepts behind the quantification of nucleic acids by dPCR and quantitative real-time PCR (qPCR). We detail the underlying statistics of dPCR and explain how it defines its precision and performance metrics. We review the different microfluidic digital PCR formats, present their underlying physical principles, and analyze the technological evolution of dPCR platforms. We present the novel multiplexing strategies enabled by dPCR and examine how isothermal amplification could be an alternative to PCR in digital assays. Finally, we determine whether the theoretical advantages of dPCR over qPCR hold true by perusing studies that directly compare assays implemented with both methods. PMID:29677144
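
    The Poisson reasoning behind digital quantification fits in a few lines. The sketch below converts a count of positive partitions into copies per microliter, with a delta-method confidence interval as one common approximation; the partition count and volume are illustrative values, not tied to any particular platform.

```python
import numpy as np

def dpcr_concentration(k_positive, n_partitions, partition_volume_nl):
    """Estimate target concentration (copies/uL) from a digital PCR readout.
    Poisson model: the fraction of positive partitions p relates to the mean
    number of copies per partition as p = 1 - exp(-lam), i.e. lam = -ln(1 - p)."""
    p = k_positive / n_partitions
    lam = -np.log(1.0 - p)                       # mean copies per partition
    # delta-method 95% confidence interval on lam (a common approximation)
    se = np.sqrt(p / ((1.0 - p) * n_partitions))
    lam_lo, lam_hi = lam - 1.96 * se, lam + 1.96 * se
    to_conc = 1.0e3 / partition_volume_nl        # copies/partition -> copies/uL
    return lam * to_conc, lam_lo * to_conc, lam_hi * to_conc

# e.g. 20,000 partitions of 0.85 nL each, 6,300 of them positive
c, lo, hi = dpcr_concentration(6300, 20000, 0.85)
print(f"concentration = {c:.1f} copies/uL  (95% CI {lo:.1f} - {hi:.1f})")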

  19. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. 1. G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models. Journal of Climate 21, (2008). 2. W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach. Water Resources Research 49, 2253-2273 (2013). 3. G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models. Water Resources Research 51, 524-538 (2015). 4. H. V. Gupta, G. S. Nearing, Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resources Research 50(6), 5351-5359 (2014). 5. H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth. Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).

  20. Fisher information and steric effect: study of the internal rotation barrier of ethane.

    PubMed

    Esquivel, Rodolfo O; Liu, Shubin; Angulo, Juan Carlos; Dehesa, Jesús S; Antolín, Juan; Molina-Espíritu, Moyocoyani

    2011-05-05

    On the basis of a density-based quantification of the steric effect [Liu, S. B. J. Chem. Phys. 2007, 126, 244103], the origin of the internal rotation barrier between the eclipsed and staggered conformers of ethane is systematically investigated in this work from an information-theoretical point of view by using the Fisher information measure in conjugated spaces. Two kinds of computational approaches are considered in this work: adiabatic (with optimal structure) and vertical (with fixed geometry). The analyses are performed systematically by following, in each case, the conformeric path by changing the dihedral angle from 0 to 180°. This is calculated at the HF, MP2, B3LYP, and CCSD(T) levels of theory and with several basis sets. Selected descriptors of the densities are utilized to support the observations. Our results show that in the adiabatic case the eclipsed conformer possesses a larger steric repulsion than the staggered conformer, but in the vertical cases the staggered conformer retains a larger steric repulsion. Our results verify the plausibility of defining and computing the steric effect at the post-Hartree-Fock level of theory according to the scheme proposed by Liu.

  1. New LightCycler PCR for Rapid and Sensitive Quantification of Parvovirus B19 DNA Guides Therapeutic Decision-Making in Relapsing Infections

    PubMed Central

    Harder, Timm C.; Hufnagel, Markus; Zahn, Katrin; Beutel, Karin; Schmitt, Heinz-Josef; Ullmann, Uwe; Rautenberg, Peter

    2001-01-01

    Detection of parvovirus B19 DNA offers diagnostic advantages over serology, particularly in persistent infections of immunocompromised patients. A rapid, novel method of B19 DNA detection and quantification is introduced. This method, a quantitative PCR assay, is based on real-time glass capillary thermocycling (LightCycler [LC]) and fluorescence resonance energy transfer (FRET). The PCR assay allowed quantification over a dynamic range of over 7 logs and could quantify as little as 250 B19 genome equivalents (geq) per ml as calculated for plasmid DNA (i.e., theoretically ≥5 geq per assay). Interrater agreement analysis demonstrated equivalence of LC-FRET PCR and conventional nested PCR in the diagnosis of an active B19 infection (kappa coefficient = 0.83). The benefit of the new method was demonstrated in an immunocompromised child with a relapsing infection, who required an attenuation of the immunosuppressive therapy in addition to repeated doses of immunoglobulin to eliminate the virus. PMID:11724854

  2. Viral video: Live imaging of virus-host encounters

    NASA Astrophysics Data System (ADS)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800 nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus that has an 80 nm icosahedral capsid and a 200 nm long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescent microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision in infection models for marine and likely other ecosystems.
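
    The "theoretical limit imposed by the virus-host encounter rate" is usually taken to be the Smoluchowski diffusion-limited rate constant for two Brownian spheres; the sketch below evaluates it with Stokes-Einstein diffusivities. The radii are rough stand-ins for the cell and the phage (the rigid tail is ignored), so the output is an order-of-magnitude figure, not the study's value.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]
T = 293.0                  # temperature [K]
eta = 1.0e-3               # viscosity of water [Pa s]

def stokes_einstein(radius_m):
    """Translational diffusivity of a sphere of given radius in water."""
    return kB * T / (6 * np.pi * eta * radius_m)

# Illustrative sizes: host radius ~400 nm (a ~800 nm cell) and an effective
# viral radius of ~60 nm standing in for the 80 nm capsid plus tail.
r_host, r_virus = 400e-9, 60e-9
D_host, D_virus = stokes_einstein(r_host), stokes_einstein(r_virus)

# Smoluchowski diffusion-limited encounter-rate constant [m^3/s]
k_enc = 4 * np.pi * (D_host + D_virus) * (r_host + r_virus)

# expressed in units often used in the phage literature: mL per day
k_mL_per_day = k_enc * 1e6 * 86400
print(f"D_virus = {D_virus:.2e} m^2/s, D_host = {D_host:.2e} m^2/s")
print(f"encounter-rate constant ~ {k_mL_per_day:.2e} mL/day")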

  3. Advantages of a dual-tracer model over reference tissue models for binding potential measurement in tumors

    PubMed Central

    Tichauer, K M; Samkoe, K S; Klubben, W S; Hasan, T; Pogue, B W

    2012-01-01

    The quantification of tumor molecular expression in vivo could have a significant impact on informing and monitoring emerging targeted therapies in oncology. Molecular imaging of targeted tracers can be used to quantify receptor expression in the form of a binding potential (BP) if the arterial input curve or a surrogate of it is also measured. However, the assumptions of the most common approaches (reference tissue models) may not be valid for use in tumors. In this study, the validity of reference tissue models is investigated for use in tumors experimentally and in simulations. Three different tumor lines were grown subcutaneously in athymic mice and the mice were injected with a mixture of an epidermal growth factor receptor- (EGFR-) targeted fluorescent tracer and an untargeted fluorescent tracer. A one-compartment plasma input model demonstrated that the transport kinetics of both tracers were significantly different between tumors and all potential reference tissues, and using the reference tissue model resulted in a theoretical underestimation in BP of 50 ± 37%. On the other hand, the targeted and untargeted tracers demonstrated similar transport kinetics, allowing a dual-tracer approach to be employed to accurately estimate binding potential (with a theoretical error of 0.23 ± 9.07%). These findings highlight the potential for using a dual-tracer approach to quantify receptor expression in tumors with abnormal hemodynamics, possibly to inform the choice or progress of molecular cancer therapies. PMID:23022732

  4. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    PubMed

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to the uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
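
    One way to picture the multimodel step is the sketch below: several candidate distributions are fitted to a scarce sample by maximum likelihood and weighted by AICc-based Akaike weights, which can be read as the probability that each candidate is the Kullback-Leibler-best model. The candidate set, the fixed-location choice and the toy sample are assumptions made for illustration; the paper goes further by propagating all plausible models through an optimal importance-sampling density and reweighting.

```python
import numpy as np
from scipy import stats

def akaike_weights(data):
    """Fit candidate distributions by maximum likelihood and return AICc-based
    Akaike weights: the probability that each candidate is the best
    (Kullback-Leibler closest) model for the scarce dataset at hand."""
    data = np.asarray(data, float)
    n = len(data)
    # candidate families; positive-support families get a fixed location of 0
    candidates = {"norm": {}, "lognorm": {"floc": 0},
                  "gamma": {"floc": 0}, "weibull_min": {"floc": 0}}
    aicc = {}
    for name, fixed in candidates.items():
        dist = getattr(stats, name)
        params = dist.fit(data, **fixed)            # MLE fit
        k = len(params) - len(fixed)                # freely estimated parameters
        loglik = np.sum(dist.logpdf(data, *params))
        aicc[name] = -2 * loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
    best = min(aicc.values())
    w = {name: np.exp(-(a - best) / 2) for name, a in aicc.items()}
    z = sum(w.values())
    return {name: float(v / z) for name, v in w.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    small_sample = rng.lognormal(mean=1.0, sigma=0.4, size=25)   # scarce data
    for model, weight in akaike_weights(small_sample).items():
        print(f"{model:12s} weight = {weight:.3f}")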

  6. PyXRD v0.6.7: a free and open-source program to quantify disordered phyllosilicates using multi-specimen X-ray diffraction profile fitting

    NASA Astrophysics Data System (ADS)

    Dumon, M.; Van Ranst, E.

    2016-01-01

    This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.

  7. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 till now. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and on the improvement of predictive performance assessment methods.

  8. An information-theoretic approach to surrogate-marker evaluation with failure time endpoints.

    PubMed

    Pryseley, Assam; Tilahun, Abel; Alonso, Ariel; Molenberghs, Geert

    2011-04-01

    Over the last decades, the evaluation of potential surrogate endpoints in clinical trials has steadily been growing in importance, not only thanks to the availability of ever more potential markers and surrogate endpoints, but also because more methodological development has become available. While early work has been devoted, to a large extent, to Gaussian, binary, and longitudinal endpoints, the case of time-to-event endpoints is in need of careful scrutiny as well, owing to the strong presence of such endpoints in oncology and beyond. While work had been done in the past, it was often cumbersome to use such tools in practice, because of the need for fitting copula or frailty models that were further embedded in a hierarchical or two-stage modeling approach. In this paper, we present a methodologically elegant and easy-to-use approach based on information theory. We resolve essential issues, including the quantification of "surrogacy" based on such an approach. Our results are put to the test in a simulation study and are applied to data from clinical trials in oncology. The methodology has been implemented in R.

  9. Risk assessment for ecotoxicity of pharmaceuticals--an emerging issue.

    PubMed

    Kar, Supratik; Roy, Kunal

    2012-03-01

    The existence of large amounts of pharmaceuticals and their active metabolites in the environment has recently been recognized as one of the most serious concerns in the environmental sciences. A large diversity of pharmaceuticals has been found in the environment in considerable amounts that are not only damaging to the environment but also harmful to humans and animals. There is a considerable lack of knowledge about the environmental fate and quantification of a large number of pharmaceuticals. This communication aims to review the literature regarding the occurrence of pharmaceuticals and their metabolites in the environment, their persistence, environmental fate and toxicity, as well as the application of theoretical, non-experimental, non-animal, alternative and, in particular, in silico methods to provide information about the basic physicochemical and fate properties of pharmaceuticals in the environment. The reader will gain an overview of risk assessment strategies for the ecotoxicity of pharmaceuticals and of advances in the application of quantitative structure-toxicity relationships (QSTR) in this field. This review justifies the need to develop more QSTR models for the prediction of the ecotoxicity of pharmaceuticals in order to reduce the time and cost involved in such exercises.

  10. Quantification of biofilm in microtiter plates: overview of testing conditions and practical recommendations for assessment of biofilm production by staphylococci.

    PubMed

    Stepanović, Srdjan; Vuković, Dragana; Hola, Veronika; Di Bonaventura, Giovanni; Djukić, Slobodanka; Cirković, Ivana; Ruzicka, Filip

    2007-08-01

    The details of all steps involved in the quantification of biofilm formation in microtiter plates are described. The presented protocol incorporates information on the assessment of biofilm production by staphylococci, gained both by direct experience and by analysis of methods for assaying biofilm production. The results obtained should simplify the quantification of biofilm formation in microtiter plates, and make it more reliable and comparable among different laboratories.

  11. Direct Visual and Radar Methods for the Detection, Quantification, and Prediction of Bird Migration.

    DTIC Science & Technology

    1980-04-30

    domesticus), 12 cm2; and Rock Dove (Columba livia), 80 cm2. Eastwood (1967) has given the radar cross sections of a number of European birds as mea...in Table 4 and Table 5 it is apparent that a single pigeon (Columba livia) flying toward the radar should theoretically produce an echo on the ASR-4

  12. An international collaboration to standardize HIV-2 viral load assays: results from the 2009 ACHI(E)V(2E) quality control study.

    PubMed

    Damond, F; Benard, A; Balotta, Claudia; Böni, Jürg; Cotten, Matthew; Duque, Vitor; Ferns, Bridget; Garson, Jeremy; Gomes, Perpetua; Gonçalves, Fátima; Gottlieb, Geoffrey; Kupfer, Bernd; Ruelle, Jean; Rodes, Berta; Soriano, Vicente; Wainberg, Mark; Taieb, Audrey; Matheron, Sophie; Chene, Genevieve; Brun-Vezinet, Francoise

    2011-10-01

    Accurate HIV-2 plasma viral load quantification is crucial for adequate HIV-2 patient management and for the proper conduct of clinical trials and international cohort collaborations. This study compared the homogeneity of HIV-2 RNA quantification when using HIV-2 assays from ACHI(E)V(2E) study sites and either in-house PCR calibration standards or common viral load standards supplied to all collaborators. Each of the 12 participating laboratories quantified blinded HIV-2 samples, using its own HIV-2 viral load assay and standard as well as centrally validated and distributed common HIV-2 group A and B standards (http://www.hiv.lanl.gov/content/sequence/HelpDocs/subtypes-more.html). Aliquots of HIV-2 group A and B strains, each at 2 theoretical concentrations (2.7 and 3.7 log10 copies/ml), were tested. Intralaboratory, interlaboratory, and overall variances of quantification results obtained with both standards were compared using F tests. For HIV-2 group A quantifications, overall and interlaboratory and/or intralaboratory variances were significantly lower when using the common standard than when using in-house standards at the concentration levels of 2.7 log10 copies/ml and 3.7 log10 copies/ml, respectively. For HIV-2 group B, a high heterogeneity was observed and the variances did not differ according to the type of standard used. In this international collaboration, the use of a common standard improved the homogeneity of HIV-2 group A RNA quantification only. The diversity of HIV-2 group B, particularly in PCR primer-binding regions, may explain the heterogeneity in quantification of this strain. Development of a validated HIV-2 viral load assay that accurately quantifies distinct circulating strains is needed.
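
    The variance comparisons reported here rest on F tests; a minimal two-sided version is sketched below on invented log10 viral-load values. The study's actual data and exact testing scheme (intra- versus inter-laboratory components) are not reproduced.

```python
import numpy as np
from scipy import stats

def f_test_variances(a, b):
    """Two-sided F test for equality of variances of two independent samples
    (the kind of comparison used for intra-/inter-laboratory variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    p_one_sided = stats.f.sf(f, dfa, dfb) if f > 1 else stats.f.cdf(f, dfa, dfb)
    return f, min(1.0, 2 * p_one_sided)

# illustrative log10 viral-load results for one blinded sample (values invented)
in_house = [2.4, 2.9, 2.6, 3.1, 2.3, 2.8, 3.3, 2.5, 2.2, 3.0, 2.7, 3.2]
common   = [2.7, 2.8, 2.6, 2.9, 2.7, 2.8, 2.6, 2.9, 2.8, 2.7, 2.6, 2.8]
f, p = f_test_variances(in_house, common)
print(f"F = {f:.2f}, two-sided p = {p:.4f}")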

  13. An International Collaboration To Standardize HIV-2 Viral Load Assays: Results from the 2009 ACHIEV2E Quality Control Study▿

    PubMed Central

    Damond, F.; Benard, A.; Balotta, Claudia; Böni, Jürg; Cotten, Matthew; Duque, Vitor; Ferns, Bridget; Garson, Jeremy; Gomes, Perpetua; Gonçalves, Fátima; Gottlieb, Geoffrey; Kupfer, Bernd; Ruelle, Jean; Rodes, Berta; Soriano, Vicente; Wainberg, Mark; Taieb, Audrey; Matheron, Sophie; Chene, Genevieve; Brun-Vezinet, Francoise

    2011-01-01

    Accurate HIV-2 plasma viral load quantification is crucial for adequate HIV-2 patient management and for the proper conduct of clinical trials and international cohort collaborations. This study compared the homogeneity of HIV-2 RNA quantification when using HIV-2 assays from ACHIEV2E study sites and either in-house PCR calibration standards or common viral load standards supplied to all collaborators. Each of the 12 participating laboratories quantified blinded HIV-2 samples, using its own HIV-2 viral load assay and standard as well as centrally validated and distributed common HIV-2 group A and B standards (http://www.hiv.lanl.gov/content/sequence/HelpDocs/subtypes-more.html). Aliquots of HIV-2 group A and B strains, each at 2 theoretical concentrations (2.7 and 3.7 log10 copies/ml), were tested. Intralaboratory, interlaboratory, and overall variances of quantification results obtained with both standards were compared using F tests. For HIV-2 group A quantifications, overall and interlaboratory and/or intralaboratory variances were significantly lower when using the common standard than when using in-house standards at the concentration levels of 2.7 log10 copies/ml and 3.7 log10 copies/ml, respectively. For HIV-2 group B, a high heterogeneity was observed and the variances did not differ according to the type of standard used. In this international collaboration, the use of a common standard improved the homogeneity of HIV-2 group A RNA quantification only. The diversity of HIV-2 group B, particularly in PCR primer-binding regions, may explain the heterogeneity in quantification of this strain. Development of a validated HIV-2 viral load assay that accurately quantifies distinct circulating strains is needed. PMID:21813718

  14. QUANTIFICATION OF IN-SITU GAS HYDRATES WITH WELL LOGS.

    USGS Publications Warehouse

    Collett, Timothy S.; Godbole, Sanjay P.; Economides, Christine

    1984-01-01

    This study evaluates in detail the expected theoretical log responses and the actual log responses within one stratigraphically controlled hydrate horizon in six wells spaced throughout the Kuparuk Oil Field. Detailed examination of the neutron porosity and sonic velocity responses within the horizon is included. In addition, the theoretical effect of the presence of hydrates on the neutron porosity and sonic velocity devices has been examined in order to correct for such an effect on the calculation of formation properties such as porosity and hydrate saturation. Also presented in the paper is a technique which allows the conclusive identification of a potential hydrate occurrence.

  15. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    PubMed

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. The objective was to develop and test a method to evaluate the relevancy of DB contents, by assessing the inclusion of information items deemed relevant for effective and safe drug use. Method: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity and mortality and also widely consumed in Portugal. Main outcome measure: calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. B--Pilot test: calculated scores for the 9 databases; globally, all databases evaluated differed significantly from the "ideal" database; some DB performed better but performance was inconsistent at the subsection level within the same DB. The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.

  16. Quantifying introgression risk with realistic population genetics.

    PubMed

    Ghosh, Atiyo; Meirmans, Patrick G; Haccou, Patsy

    2012-12-07

    Introgression is the permanent incorporation of genes from the genome of one population into another. This can have severe consequences, such as extinction of endemic species, or the spread of transgenes. Quantification of the risk of introgression is an important component of genetically modified crop regulation. Most theoretical introgression studies aimed at such quantification disregard one or more of the most important factors concerning introgression: realistic genetical mechanisms, repeated invasions and stochasticity. In addition, the use of linkage as a risk mitigation strategy has not been studied properly yet with genetic introgression models. Current genetic introgression studies fail to take repeated invasions and demographic stochasticity into account properly, and use incorrect measures of introgression risk that can be manipulated by arbitrary choices. In this study, we present proper methods for risk quantification that overcome these difficulties. We generalize a probabilistic risk measure, the so-called hazard rate of introgression, for application to introgression models with complex genetics and small natural population sizes. We illustrate the method by studying the effects of linkage and recombination on transgene introgression risk at different population sizes.
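
    The hazard rate referred to above can be estimated from repeated stochastic simulations with standard discrete-time survival statistics. The sketch below does this for hypothetical simulated first-introgression times; it is not the authors' generalized risk measure, only the basic hazard-rate bookkeeping.

```python
# Sketch: discrete-time hazard-rate estimate from simulated first-introgression
# times (hypothetical simulator output; not the paper's generalized measure).
import numpy as np

rng = np.random.default_rng(0)
# hypothetical: generation at which introgression first became permanent,
# np.inf if it never happened within the simulated horizon
first_event = rng.geometric(p=0.03, size=500).astype(float)
first_event[rng.random(500) < 0.2] = np.inf    # some runs never introgress

horizon = 100
hazard = []
for t in range(1, horizon + 1):
    at_risk = np.sum(first_event >= t)          # runs with no event before t
    events = np.sum(first_event == t)           # runs with event exactly at t
    hazard.append(events / at_risk if at_risk else np.nan)

print("estimated hazard rate over the first 10 generations:",
      np.round(hazard[:10], 3))
```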

  17. Quantifying introgression risk with realistic population genetics

    PubMed Central

    Ghosh, Atiyo; Meirmans, Patrick G.; Haccou, Patsy

    2012-01-01

    Introgression is the permanent incorporation of genes from the genome of one population into another. This can have severe consequences, such as extinction of endemic species, or the spread of transgenes. Quantification of the risk of introgression is an important component of genetically modified crop regulation. Most theoretical introgression studies aimed at such quantification disregard one or more of the most important factors concerning introgression: realistic genetical mechanisms, repeated invasions and stochasticity. In addition, the use of linkage as a risk mitigation strategy has not been studied properly yet with genetic introgression models. Current genetic introgression studies fail to take repeated invasions and demographic stochasticity into account properly, and use incorrect measures of introgression risk that can be manipulated by arbitrary choices. In this study, we present proper methods for risk quantification that overcome these difficulties. We generalize a probabilistic risk measure, the so-called hazard rate of introgression, for application to introgression models with complex genetics and small natural population sizes. We illustrate the method by studying the effects of linkage and recombination on transgene introgression risk at different population sizes. PMID:23055068

  18. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  19. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches have focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.
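
    For readers unfamiliar with MCMC-based parameter estimation, the sketch below shows a generic random-walk Metropolis sampler for a one-parameter model with hypothetical calibration data. It is deliberately not the DREAM algorithm, which evolves multiple chains with differential-evolution proposals; it only illustrates the Bayesian estimation step the abstract refers to.

```python
# Generic random-walk Metropolis sampler for a single model parameter.
# NOT the DREAM algorithm; only a minimal illustration of Bayesian
# parameter estimation by MCMC with hypothetical calibration data.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(2.0, 0.5, size=50)             # hypothetical observations

def log_posterior(theta):
    # flat prior on theta, Gaussian likelihood with known sigma = 0.5
    return -0.5 * np.sum((obs - theta) ** 2) / 0.5 ** 2

samples, theta, logp = [], 0.0, log_posterior(0.0)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.2)           # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:  # Metropolis acceptance rule
        theta, logp = prop, logp_prop
    samples.append(theta)

burned = np.array(samples[1000:])
print(f"posterior mean = {burned.mean():.3f}, sd = {burned.std():.3f}")
```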

  20. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  1. Cross-recurrence quantification analysis of categorical and continuous time series: an R package

    PubMed Central

    Coco, Moreno I.; Dale, Rick

    2014-01-01

    This paper describes the R package crqa to perform cross-recurrence quantification analysis of two time series of either a categorical or continuous nature. Streams of behavioral information, from eye movements to linguistic elements, unfold over time. When two people interact, such as in conversation, they often adapt to each other, leading these behavioral levels to exhibit recurrent states. In dialog, for example, interlocutors adapt to each other by exchanging interactive cues: smiles, nods, gestures, choice of words, and so on. In order for us to capture closely the goings-on of dynamic interaction, and uncover the extent of coupling between two individuals, we need to quantify how much recurrence is taking place at these levels. Methods available in crqa would allow researchers in cognitive science to pose such questions as how recurrent two people are at some level of analysis, what the characteristic lag time is for one person to maximally match another, or whether one person is leading another. First, we set the theoretical ground to understand the difference between “correlation” and “co-visitation” when comparing two time series, using an aggregative or cross-recurrence approach. Then, we describe more formally the principles of cross-recurrence, and show with the current package how to carry out analyses applying them. We end the paper by comparing the computational efficiency and the consistency of results of the crqa R package with the benchmark MATLAB toolbox crptoolbox (Marwan, 2013). We show perfect comparability between the two libraries on both levels. PMID:25018736
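
    The core cross-recurrence computation for categorical series is easy to sketch: mark every pair of time points at which the two streams visit the same state and summarize the density of such matches. The minimal example below uses hypothetical letter streams and shows only this basic step; the crqa package adds embedding, delays, windowing, and line-based measures.

```python
# Minimal cross-recurrence sketch for two categorical time series:
# build the cross-recurrence matrix and compute the recurrence rate.
# (Illustration only; the crqa R package offers far more functionality.)
import numpy as np

a = np.array(list("ABBACABCA"))   # hypothetical categorical streams
b = np.array(list("BBAACABBA"))

# CR[i, j] = 1 when series a at time i matches series b at time j
CR = (a[:, None] == b[None, :]).astype(int)
recurrence_rate = CR.mean()

print(CR)
print(f"recurrence rate = {recurrence_rate:.3f}")
```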

  2. Integrated quantification and identification of aldehydes and ketones in biological samples.

    PubMed

    Siegel, David; Meinema, Anne C; Permentier, Hjalmar; Hopfgartner, Gérard; Bischoff, Rainer

    2014-05-20

    The identification of unknown compounds remains a bottleneck of mass spectrometry (MS)-based metabolomics screening experiments. Here, we present a novel approach which facilitates the identification and quantification of analytes containing aldehyde and ketone groups in biological samples by adding chemical information to MS data. Our strategy is based on rapid autosampler-in-needle-derivatization with p-toluenesulfonylhydrazine (TSH). The resulting TSH-hydrazones are separated by ultrahigh-performance liquid chromatography (UHPLC) and detected by electrospray ionization-quadrupole-time-of-flight (ESI-QqTOF) mass spectrometry using a SWATH (Sequential Window Acquisition of all Theoretical Fragment-Ion Spectra) data-independent high-resolution mass spectrometry (HR-MS) approach. Derivatization makes small, poorly ionizable or retained analytes amenable to reversed phase chromatography and electrospray ionization in both polarities. Negatively charged TSH-hydrazone ions furthermore show a simple and predictable fragmentation pattern upon collision induced dissociation, which enables the chemo-selective screening for unknown aldehydes and ketones via a signature fragment ion (m/z 155.0172). By means of SWATH, targeted and nontargeted application scenarios of the suggested derivatization route are enabled in the frame of a single UHPLC-ESI-QqTOF-HR-MS workflow. The method's ability to simultaneously quantify and identify molecules containing aldehyde and ketone groups is demonstrated using 61 target analytes from various compound classes and a (13)C labeled yeast matrix. The identification of unknowns in biological samples is detailed using the example of indole-3-acetaldehyde.
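
    The chemo-selective screening step can be pictured as a simple filter over MS/MS spectra: keep those containing a peak near the signature fragment at m/z 155.0172 within a mass tolerance. The sketch below uses hypothetical spectra and a hypothetical 10 ppm tolerance; it is not the authors' SWATH processing pipeline.

```python
# Sketch of chemo-selective screening: flag MS/MS spectra containing the
# signature fragment ion (m/z 155.0172) within a ppm tolerance.
# Hypothetical spectra and tolerance; not the authors' SWATH pipeline.
SIGNATURE_MZ = 155.0172
TOL_PPM = 10.0

def has_signature(peaks, mz0=SIGNATURE_MZ, tol_ppm=TOL_PPM):
    """peaks: list of (m/z, intensity) tuples for one MS/MS spectrum."""
    tol = mz0 * tol_ppm * 1e-6
    return any(abs(mz - mz0) <= tol for mz, _ in peaks)

spectra = {                                      # hypothetical data
    "scan_0001": [(155.0170, 4.2e4), (212.9, 1.1e3)],
    "scan_0002": [(154.9000, 2.0e3), (301.1, 7.5e3)],
}
hits = [scan for scan, peaks in spectra.items() if has_signature(peaks)]
print("candidate TSH-hydrazone spectra:", hits)
```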

  3. Quantification and Formalization of Security

    DTIC Science & Technology

    2010-02-01

    This DTIC record is fragmentary, mixing table-of-contents entries with text excerpts. Recoverable content: Quantification of Information Flow (p. 30); 2.4 Language Semantics (p. 46); a confidentiality policy restricting the system behavior observed by users holding low clearances, which, or a variant of which, is enforced by many programming language-based mechanisms; the report illustrates its model with a particular programming language (while-programs plus probabilistic choice), and the model is extended in §2.5 to programs in which ...

  4. Quantification of terrestrial ecosystem carbon dynamics in the conterminous United States combining a process-based biogeochemical model and MODIS and AmeriFlux data

    USDA-ARS?s Scientific Manuscript database

    Satellite remote sensing provides continuous temporal and spatial information of terrestrial ecosystems. Using these remote sensing data and eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynami...

  5. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge amount of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models have therefore to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, the Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
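
    To make the role of a parametrized mean function concrete, the sketch below implements Gaussian process regression with an ordinary polynomial trend estimated by generalized least squares (universal-kriging style). The paper's innovation, a mean built from the composition of two polynomials, is not reproduced; the training points, kernel, and settings are hypothetical.

```python
# Gaussian process regression with an explicit polynomial trend (GLS trend
# plus GP on the residuals). The paper's nested composition of two polynomials
# is NOT reproduced here; this only shows where a parametrized mean enters.
import numpy as np

def rbf(x1, x2, ls=0.3, var=1.0):
    d2 = np.subtract.outer(x1, x2) ** 2
    return var * np.exp(-0.5 * d2 / ls ** 2)

def poly_basis(x, degree):
    return np.vander(x, degree + 1, increasing=True)

def fit_predict(x_tr, y_tr, x_te, degree=2, noise=1e-4):
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    H = poly_basis(x_tr, degree)
    K_inv = np.linalg.inv(K)
    beta = np.linalg.solve(H.T @ K_inv @ H, H.T @ K_inv @ y_tr)  # GLS trend
    resid = y_tr - H @ beta
    k_star = rbf(x_te, x_tr)
    return poly_basis(x_te, degree) @ beta + k_star @ K_inv @ resid

x_tr = np.linspace(0, 1, 8)                 # few "code evaluations"
y_tr = np.sin(6 * x_tr) + 0.5 * x_tr ** 2   # hypothetical code output
x_te = np.linspace(0, 1, 5)
print(np.round(fit_predict(x_tr, y_tr, x_te), 3))
```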

  6. Quantification Bias Caused by Plasmid DNA Conformation in Quantitative Real-Time PCR Assay

    PubMed Central

    Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming

    2011-01-01

    Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification. PMID:22194997

  7. Entropy factor for randomness quantification in neuronal data.

    PubMed

    Rajdl, K; Lansky, P; Kostal, L

    2017-11-01

    A novel measure of neural spike train randomness, an entropy factor, is proposed. It is based on the Shannon entropy of the number of spikes in a time window and can be seen as an analogy to the Fano factor. Theoretical properties of the new measure are studied for equilibrium renewal processes and further illustrated on gamma and inverse Gaussian probability distributions of interspike intervals. Finally, the entropy factor is evaluated from the experimental records of spontaneous activity in macaque primary visual cortex and compared to its theoretical behavior deduced for the renewal process models. Both theoretical and experimental results show substantial differences between the Fano and entropy factors. Rather paradoxically, an increase in the variability of spike count is often accompanied by an increase of its predictability, as evidenced by the entropy factor. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
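
    The two quantities being contrasted are easy to compute from spike counts in a fixed window. The sketch below evaluates the Fano factor and the Shannon entropy of the empirical spike-count distribution for hypothetical Poisson counts; the paper's entropy factor applies a specific normalization to the entropy that is not reproduced here.

```python
# Contrast the Fano factor with the Shannon entropy of spike counts in a
# fixed time window (hypothetical Poisson counts). The paper's "entropy
# factor" uses a specific normalization that is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(lam=4.0, size=2000)        # spike counts per window

fano = counts.var(ddof=1) / counts.mean()

values, freq = np.unique(counts, return_counts=True)
p = freq / freq.sum()
entropy_bits = -np.sum(p * np.log2(p))          # Shannon entropy of counts

print(f"Fano factor = {fano:.3f}")
print(f"spike-count entropy = {entropy_bits:.3f} bits")
```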

  8. Information content of thermal infrared and microwave bands for simultaneous retrieval of cirrus ice water path and particle effective diameter

    NASA Astrophysics Data System (ADS)

    Bell, A.; Tang, G.; Yang, P.; Wu, D.

    2017-12-01

    Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability of their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties and present associated uncertainties is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Current studies have implemented information content theory analyses prior to retrievals to quantify the amount of information that should be expected on parameters to be retrieved, as well as the relative contribution of information provided by certain measurement channels. Through this analysis, retrieval algorithms can be designed in a way to maximize the information in measurements, and therefore ensure enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval combining simulated radiance measurements, the methodology could in general be applicable to any instrument or retrieval approach.
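
    In the optimal-estimation formalism commonly used for such analyses, the information measures are computed from the forward-model Jacobian and the prior and measurement-error covariances. The abstract does not give its exact formulation, so the expressions below are only the standard ones: the averaging kernel, the degrees of freedom for signal, and the Shannon information content.

```latex
% Standard optimal-estimation quantities often used in information content
% analyses (the abstract does not specify its exact formulation).
\begin{align}
  \mathbf{A} &= \left(\mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}\mathbf{K}
               + \mathbf{S}_a^{-1}\right)^{-1}
               \mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}\mathbf{K},
  \qquad d_s = \operatorname{tr}(\mathbf{A}), \\
  H &= \tfrac{1}{2}\ln\frac{\lvert\mathbf{S}_a\rvert}{\lvert\hat{\mathbf{S}}\rvert}
     = -\tfrac{1}{2}\ln\lvert\mathbf{I}-\mathbf{A}\rvert,
  \qquad \hat{\mathbf{S}} = (\mathbf{I}-\mathbf{A})\,\mathbf{S}_a,
\end{align}
```

    where K is the Jacobian of the simulated radiances with respect to ice water path and effective diameter, S_a the a priori covariance, S_epsilon the measurement-error covariance, d_s the degrees of freedom for signal, and H the Shannon information content.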

  9. Correlation analysis of targeted proteins and metabolites to assess and engineer microbial isopentenol production.

    PubMed

    George, Kevin W; Chen, Amy; Jain, Aakriti; Batth, Tanveer S; Baidoo, Edward E K; Wang, George; Adams, Paul D; Petzold, Christopher J; Keasling, Jay D; Lee, Taek Soon

    2014-08-01

    The ability to rapidly assess and optimize heterologous pathway function is critical for effective metabolic engineering. Here, we develop a systematic approach to pathway analysis based on correlations between targeted proteins and metabolites and apply it to the microbial production of isopentenol, a promising biofuel. Starting with a seven-gene pathway, we performed a correlation analysis to reduce pathway complexity and identified two pathway proteins as the primary determinants of efficient isopentenol production. Aided by the targeted quantification of relevant pathway intermediates, we constructed and subsequently validated a conceptual model of isopentenol pathway function. Informed by our analysis, we assembled a strain which produced isopentenol at a titer of 1.5 g/L, or 46% of theoretical yield. Our engineering approach allowed us to accurately identify bottlenecks and determine appropriate pathway balance. Paired with high-throughput cloning techniques and analytics, this strategy should prove useful for the analysis and optimization of increasingly complex heterologous pathways. © 2014 Wiley Periodicals, Inc.

  10. Theoretical Study of tip apex electronic structure in Scanning Tunneling Microscope

    NASA Astrophysics Data System (ADS)

    Choi, Heesung; Huang, Min; Randall, John; Cho, Kyeongjae

    2011-03-01

    The Scanning Tunneling Microscope (STM) has been widely used to explore diverse surface properties with atomic resolution, and the STM tip has played a critical role in controlling surface structures. However, detailed information on the atomic and electronic structure of the STM tip and the fundamental understanding of STM images are still incomplete. Therefore, it is important to develop a comprehensive understanding of the electronic structure of the STM tip. We have studied the atomic and electronic structures of STM tips with various transition metals (TMs) by the DFT method. The d-electrons of TM tip apex atoms show different orbital states near the Fermi level. We will present comprehensive data on STM tips from our DFT calculations. Verified quantification of the tip electronic structures will lead to a fundamental understanding of the STM tip structure-property relationship. This work is supported by the DARPA Tip Based Nanofabrication (TBN) Program and the Emerging Technology Fund (ETF) of the State of Texas.

  11. Frobenius-norm-based measures of quantum coherence and asymmetry

    PubMed Central

    Yao, Yao; Dong, G. H.; Xiao, Xing; Sun, C. P.

    2016-01-01

    We formulate the Frobenius-norm-based measures for quantum coherence and asymmetry respectively. In contrast to the resource theory of coherence and asymmetry, we construct a natural measure of quantum coherence inspired by optical coherence theory, while the group theoretical approach is employed to quantify the asymmetry of quantum states. Besides their simple structures and explicit physical meanings, we observe that these quantities are intimately related to the purity (or linear entropy) of the corresponding quantum states. Remarkably, we demonstrate that the proposed coherence quantifier is not only a measure of mixedness, but also an intrinsic (basis-independent) quantification of quantum coherence contained in quantum states, which can also be viewed as a normalized version of Brukner-Zeilinger invariant information. In our context, the asymmetry of N-qubit quantum systems is considered under local independent and collective transformations. Intriguingly, it is illustrated that the collective effect has a significant impact on the asymmetry measure, and quantum correlation between subsystems plays a non-negligible role in this circumstance. PMID:27558009
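
    A frequently used Frobenius-norm-based coherence quantifier is the Frobenius (Hilbert-Schmidt) distance between a state and its dephased, diagonal part in a fixed basis; the sketch below computes this for two qubit states alongside the purity. The paper's exact normalization, and its basis-independent variant, may differ from this simple form.

```python
# Sketch of a Frobenius-norm-based coherence quantifier: the Frobenius
# distance between a density matrix and its dephased (diagonal) part in a
# fixed basis, shown next to the purity. The paper's exact normalization
# and its basis-independent version may differ from this simple form.
import numpy as np

def frobenius_coherence(rho):
    off_diag = rho - np.diag(np.diag(rho))
    return np.linalg.norm(off_diag, "fro")

def purity(rho):
    return np.real(np.trace(rho @ rho))

plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # maximally coherent qubit
mixed = np.eye(2, dtype=complex) / 2                       # maximally mixed, zero coherence

for name, rho in [("|+><+|", plus), ("I/2", mixed)]:
    print(f"{name}: coherence = {frobenius_coherence(rho):.3f}, "
          f"purity = {purity(rho):.3f}")
```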

  12. Interval stability for complex systems

    NASA Astrophysics Data System (ADS)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
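
    The IBS definition above translates directly into a Monte Carlo recipe: perturb the stable state at random, integrate for the chosen interval, and count the fraction of runs that are back on the attractor. The sketch below does this for a single damped-pendulum node; the perturbation ranges, return tolerance, and interval length are hypothetical choices, not those of the paper.

```python
# Monte Carlo sketch of interval basin stability (IBS): draw random
# perturbations of the stable state, integrate for a finite time T, and count
# the fraction of runs that have returned to the attractor by time T.
# Perturbation ranges and the "returned" threshold are hypothetical choices.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, K = 0.1, 1.0          # damping and coupling of a damped pendulum node

def rhs(t, y):
    phi, omega = y
    return [omega, -ALPHA * omega - K * np.sin(phi)]

def returned_by(T, y0, tol=1e-2):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-8, atol=1e-8)
    phi, omega = sol.y[:, -1]
    return np.hypot(np.sin(phi), omega) < tol   # close to the fixed point

rng = np.random.default_rng(3)
T, n = 50.0, 300
hits = sum(
    returned_by(T, [rng.uniform(-np.pi, np.pi), rng.uniform(-5, 5)])
    for _ in range(n)
)
print(f"interval basin stability estimate (T = {T}): {hits / n:.2f}")
```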

  13. Rapid quantification and sex determination of forensic evidence materials.

    PubMed

    Andréasson, Hanna; Allen, Marie

    2003-11-01

    DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.
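
    The abstract does not detail the externally standardized kinetic analysis, so the sketch below shows only the conventional standard-curve route from Ct values to absolute copy numbers, with invented dilution-series numbers; the amplification efficiency follows from the slope of the fitted curve.

```python
# Conventional qPCR standard-curve quantification (hypothetical Ct values).
# The paper's externally standardized kinetic analysis may differ; this only
# shows the usual route from Ct values to absolute copy numbers.
import numpy as np

log10_copies = np.array([1, 2, 3, 4, 5], dtype=float)       # standards
ct_standards = np.array([36.1, 32.7, 29.3, 25.9, 22.6])     # hypothetical Ct

slope, intercept = np.polyfit(log10_copies, ct_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0    # ideal PCR: slope ~ -3.32, E ~ 1.0

ct_unknown = 30.4                          # hypothetical casework sample
copies = 10 ** ((ct_unknown - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.2%}")
print(f"estimated copy number = {copies:.1f}")
```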

  14. Quantitative Proteomics via High Resolution MS Quantification: Capabilities and Limitations

    PubMed Central

    Higgs, Richard E.; Butler, Jon P.; Han, Bomie; Knierman, Michael D.

    2013-01-01

    Recent improvements in the mass accuracy and resolution of mass spectrometers have led to renewed interest in label-free quantification using data from the primary mass spectrum (MS1) acquired from data-dependent proteomics experiments. The capacity for higher specificity quantification of peptides from samples enriched for proteins of biological interest offers distinct advantages for hypothesis generating experiments relative to immunoassay detection methods or prespecified peptide ions measured by multiple reaction monitoring (MRM) approaches. Here we describe an evaluation of different methods to post-process peptide level quantification information to support protein level inference. We characterize the methods by examining their ability to recover a known dilution of a standard protein in background matrices of varying complexity. Additionally, the MS1 quantification results are compared to a standard, targeted, MRM approach on the same samples under equivalent instrument conditions. We show the existence of multiple peptides with MS1 quantification sensitivity similar to the best MRM peptides for each of the background matrices studied. Based on these results we provide recommendations on preferred approaches to leveraging quantitative measurements of multiple peptides to improve protein level inference. PMID:23710359

  15. DICOM image quantification secondary capture (DICOM IQSC) integrated with numeric results, regions, and curves: implementation and applications in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan

    2017-03-01

    In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit of DICOM IQSC has been developed to implement the SC-centered information integration of quantitative analysis for routine practice of nuclear medicine. Primary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.

  16. Robust high-resolution quantification of time signals encoded by in vivo magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Belkić, Dževad; Belkić, Karen

    2018-01-01

    This paper on molecular imaging emphasizes improving specificity of magnetic resonance spectroscopy (MRS) for early cancer diagnostics by high-resolution data analysis. Sensitivity of magnetic resonance imaging (MRI) is excellent, but specificity is insufficient. Specificity is improved with MRS by going beyond morphology to assess the biochemical content of tissue. This is contingent upon accurate data quantification of diagnostically relevant biomolecules. Quantification is spectral analysis which reconstructs chemical shifts, amplitudes and relaxation times of metabolites. Chemical shifts inform on electronic shielding of resonating nuclei bound to different molecular compounds. Oscillation amplitudes in time signals retrieve the abundance of MR sensitive nuclei whose number is proportional to metabolite concentrations. Transverse relaxation times, the reciprocal of decay probabilities of resonances, arise from spin-spin coupling and reflect local field inhomogeneities. In MRS single voxels are used. For volumetric coverage, multi-voxels are employed within a hybrid of MRS and MRI called magnetic resonance spectroscopic imaging (MRSI). Common to MRS and MRSI is encoding of time signals and subsequent spectral analysis. Encoded data do not provide direct clinical information. Spectral analysis of time signals can yield the quantitative information, of which metabolite concentrations are the most clinically important. This information is equivocal with standard data analysis through the non-parametric, low-resolution fast Fourier transform and post-processing via fitting. By applying the fast Padé transform (FPT) with high-resolution, noise suppression and exact quantification via quantum mechanical signal processing, advances are made, presented herein, focusing on four areas of critical public health importance: brain, prostate, breast and ovarian cancers.

  17. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    PubMed

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the value of the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed by using unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare different methods. The in vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.
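
    The detectability limit referred to above can be seen in one dimension: a gradient formed from wrapped phase differences (differential-chain style) is confined to (-π, π] per sample and aliases for steeper ramps. The sketch below demonstrates only this limit on a hypothetical phase ramp; the proposed unwrapped-forward-differentiation method for 3-D data is not reproduced here.

```python
# Illustration of the detectability limit mentioned in the abstract: a phase
# gradient computed from wrapped phase differences (differential-chain style)
# aliases once the true gradient exceeds pi radians per sample. The paper's
# modified method based on unwrapped forward differentiation is not shown.
import numpy as np

x = np.arange(20)
for true_gradient in (2.0, 4.5):                   # rad/sample; 4.5 > pi
    z = np.exp(1j * true_gradient * x)             # complex signal, wrapped phase
    grad_est = np.angle(z[1:] * np.conj(z[:-1]))   # wrapped phase difference
    print(f"true gradient {true_gradient:.1f} rad/sample "
          f"-> estimated {grad_est.mean():+.3f} rad/sample")
```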

  18. Quantification of Fluorine Content in AFFF Concentrates

    DTIC Science & Technology

    2017-09-29

    This DTIC record is fragmentary, mixing text excerpts with report cover and form boilerplate. Recoverable content: for quantitative integrations, a 100 ppm spectral window (FIDRes 0.215 Hz) was scanned using the stated acquisition parameters, including acquisition time ... (Naval Research Laboratory, Washington, DC; report NRL/MR/6120--17-9752, September 29, 2017).

  19. Hash Functions and Information Theoretic Security

    NASA Astrophysics Data System (ADS)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  20. New insight into the IR-spectra/structure relationship in amyloid fibrils: a theoretical study on a prion peptide.

    PubMed

    Zanetti Polzi, Laura; Amadei, Andrea; Aschi, Massimiliano; Daidone, Isabella

    2011-08-03

    Molecular-level structural information on amyloid aggregates is of great importance for the understanding of protein-misfolding-related diseases. Nevertheless, this kind of information is experimentally difficult to obtain. In this work, we used molecular dynamics (MD) simulations combined with a mixed quantum mechanics/molecular mechanics theoretical methodology, the perturbed matrix method (PMM), in order to study the amide I' IR spectrum of fibrils formed by a short peptide, the H1 peptide, derived from residues 109 through 122 of the Syrian hamster prion protein. The PMM/MD approach allows isolation of the amide I' signal arising from any desired peptide group of the polypeptide chain and quantification of the effect of the excitonic coupling on the frequency position. The calculated single-residue signals were found to be in good agreement with the experimental site-specific spectra obtained by means of isotope-labeled IR spectroscopy, providing a means for their interpretation at the molecular level. In particular, our results confirm the experimental hypothesis that the Ala117 residues are aligned in all strands and that the alignment gives rise to a red shift of the corresponding site-specific amide I' mode due to strong excitonic coupling among the Ala117 peptide groups. In addition, our data show that a red shift of the amide I' band due to strong excitonic coupling can also occur for amino acids adjacent in sequence to the aligned ones. Thus, a red shift of the signal of a given isotope-labeled amino acid does not necessarily imply that the peptide groups under consideration are aligned in the β-sheet.

  1. The intrinsic combinatorial organization and information theoretic content of a sequence are correlated to the DNA encoded nucleosome organization of eukaryotic genomes.

    PubMed

    Utro, Filippo; Di Benedetto, Valeria; Corona, Davide F V; Giancarlo, Raffaele

    2016-03-15

    Thanks to research spanning nearly 30 years, two major models have emerged that account for nucleosome organization in chromatin: statistical and sequence specific. The first is based on elegant, easy to compute, closed-form mathematical formulas that make no assumptions of the physical and chemical properties of the underlying DNA sequence. Moreover, they need no training on the data for their computation. The latter is based on some sequence regularities but, as opposed to the statistical model, it lacks the same type of closed-form formulas that, in this case, should be based on the DNA sequence only. We contribute to close this important methodological gap between the two models by providing three very simple formulas for the sequence specific one. They are all based on well-known formulas in Computer Science and Bioinformatics, and they give different quantifications of how complex a sequence is. In view of how remarkably well they perform, it is very surprising that measures of sequence complexity have not even been considered as candidates to close the mentioned gap. We provide experimental evidence that the intrinsic level of combinatorial organization and information-theoretic content of subsequences within a genome are strongly correlated to the level of DNA encoded nucleosome organization discovered by Kaplan et al. Our results establish an important connection between the intrinsic complexity of subsequences in a genome and the intrinsic, i.e. DNA encoded, nucleosome organization of eukaryotic genomes. It is a first step towards a mathematical characterization of this latter 'encoding'. Supplementary data are available at Bioinformatics online. futro@us.ibm.com. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
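
    The abstract does not state the three formulas, so the sketch below shows just one standard, easy-to-compute complexity quantification of the same general kind: the phrase count of a Lempel-Ziv (LZ78-style) incremental parsing, applied to two hypothetical DNA strings of equal length.

```python
# One standard, easy-to-compute quantification of sequence complexity:
# the number of phrases in a Lempel-Ziv (LZ78-style) incremental parsing.
# The paper's three specific formulas are not given in the abstract, so this
# is only an example of the general kind of measure discussed.
def lz_complexity(seq):
    """Count phrases in an LZ78-style incremental parsing of seq."""
    phrases, current = set(), ""
    for ch in seq:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

low = "ATATATATATATATATATAT"      # highly repetitive, low complexity
high = "ATGCCGTAAGCTTGACGATC"     # more heterogeneous, higher complexity
print(lz_complexity(low), lz_complexity(high))
```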

  2. PSEA-Quant: a protein set enrichment analysis on label-free and label-based protein quantification data.

    PubMed

    Lavallée-Adam, Mathieu; Rauniyar, Navin; McClatchy, Daniel B; Yates, John R

    2014-12-05

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights.

  3. PSEA-Quant: A Protein Set Enrichment Analysis on Label-Free and Label-Based Protein Quantification Data

    PubMed Central

    2015-01-01

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights. PMID:25177766

  4. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  5. Information transduction capacity reduces the uncertainties in annotation-free isoform discovery and quantification

    PubMed Central

    Deng, Yue; Bao, Feng; Yang, Yang; Ji, Xiangyang; Du, Mulong; Zhang, Zhengdong

    2017-01-01

    The automated transcript discovery and quantification of high-throughput RNA sequencing (RNA-seq) data are important tasks of next-generation sequencing (NGS) research. However, these tasks are challenging due to the uncertainties that arise in the inference of complete splicing isoform variants from partially observed short reads. Here, we address this problem by explicitly reducing the inherent uncertainties in a biological system caused by missing information. In our approach, the RNA-seq procedure for transforming transcripts into short reads is considered an information transmission process. Consequently, the data uncertainties are substantially reduced by exploiting the information transduction capacity of information theory. The experimental results obtained from the analyses of simulated datasets and RNA-seq datasets from cell lines and tissues demonstrate the advantages of our method over state-of-the-art competitors. Our algorithm is an open-source implementation of MaxInfo. PMID:28911101

  6. Theoretical limitations of quantification for noncompetitive sandwich immunoassays.

    PubMed

    Woolley, Christine F; Hayes, Mark A; Mahanti, Prasun; Douglass Gilman, S; Taylor, Tom

    2015-11-01

    Immunoassays exploit the highly selective interaction between antibodies and antigens to provide a vital method for biomolecule detection at low concentrations. Developers and practitioners of immunoassays have long known that non-specific binding often restricts immunoassay limits of quantification (LOQs). Aside from non-specific binding, most efforts by analytical chemists to reduce the LOQ for these techniques have focused on improving the signal amplification methods and minimizing the limitations of the detection system. However, with detection technology now capable of sensing single-fluorescence molecules, this approach is unlikely to lead to dramatic improvements in the future. Here, fundamental interactions based on the law of mass action are analytically connected to signal generation, replacing the four- and five-parameter fittings commercially used to approximate sigmoidal immunoassay curves and allowing quantitative consideration of non-specific binding and statistical limitations in order to understand the ultimate detection capabilities of immunoassays. The restrictions imposed on limits of quantification by instrumental noise, non-specific binding, and counting statistics are discussed based on equilibrium relations for a sandwich immunoassay. Understanding the maximal capabilities of immunoassays for each of these regimes can greatly assist in the development and evaluation of immunoassay platforms. While many studies suggest that single molecule detection is possible through immunoassay techniques, here, it is demonstrated that the fundamental limit of quantification (precision of 10 % or better) for an immunoassay is approximately 131 molecules and this limit is based on fundamental and unavoidable statistical limitations.
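
    A back-of-the-envelope counting-statistics bound makes the order of magnitude plausible: for a Poisson-distributed number of detected molecules, 10 % precision already requires on the order of one hundred molecules. The paper's 131-molecule figure comes from its fuller equilibrium and statistical treatment, not from this simple bound.

```latex
% Simple Poisson-counting bound; the 131-molecule figure quoted above follows
% from the paper's fuller equilibrium and statistical treatment.
\begin{align}
  \mathrm{CV} &= \frac{\sigma_N}{\langle N \rangle}
              = \frac{\sqrt{\langle N \rangle}}{\langle N \rangle}
              = \frac{1}{\sqrt{\langle N \rangle}}, \\
  \mathrm{CV} \le 0.10 \;&\Longrightarrow\; \langle N \rangle \ge 100,
\end{align}
```

    i.e. of the same order as, though not identical to, the 131-molecule limit stated in the abstract.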

  7. Overview of Brain Microdialysis

    PubMed Central

    Chefer, Vladimir I.; Thompson, Alexis C.; Zapata, Agustin; Shippenberg, Toni S.

    2010-01-01

    The technique of microdialysis enables sampling and collecting of small-molecular-weight substances from the interstitial space. It is a widely used method in neuroscience and is one of the few techniques available that permits quantification of neurotransmitters, peptides, and hormones in the behaving animal. More recently, it has been used in tissue preparations for quantification of neurotransmitter release. This unit provides a brief review of the history of microdialysis and its general application in the neurosciences. The authors review the theoretical principles underlying the microdialysis process, methods available for estimating extracellular concentration from dialysis samples (i.e., relative recovery), the various factors that affect the estimate of in vivo relative recovery, and the importance of determining in vivo relative recovery to data interpretation. Several areas of special note, including impact of tissue trauma on the interpretation of microdialysis results, are discussed. Step-by-step instructions for the planning and execution of conventional and quantitative microdialysis experiments are provided. PMID:19340812

  8. PET/MRI for neurologic applications.

    PubMed

    Catana, Ciprian; Drzezga, Alexander; Heiss, Wolf-Dieter; Rosen, Bruce R

    2012-12-01

    PET and MRI provide complementary information in the study of the human brain. Simultaneous PET/MRI data acquisition allows the spatial and temporal correlation of the measured signals, creating opportunities impossible to realize using stand-alone instruments. This paper reviews the methodologic improvements and potential neurologic and psychiatric applications of this novel technology. We first present methods for improving the performance and information content of each modality by using the information provided by the other technique. On the PET side, we discuss methods that use the simultaneously acquired MRI data to improve the PET data quantification. On the MRI side, we present how improved PET quantification can be used to validate several MRI techniques. Finally, we describe promising research, translational, and clinical applications that can benefit from these advanced tools.

  9. Assessment of SRM, MRM(3) , and DIA for the targeted analysis of phosphorylation dynamics in non-small cell lung cancer.

    PubMed

    Schmidlin, Thierry; Garrigues, Luc; Lane, Catherine S; Mulder, T Celine; van Doorn, Sander; Post, Harm; de Graaf, Erik L; Lemeer, Simone; Heck, Albert J R; Altelaar, A F Maarten

    2016-08-01

    Hypothesis-driven MS-based targeted proteomics has gained great popularity in a relatively short timespan. Next to the widely established selected reaction monitoring (SRM) workflow, data-independent acquisition (DIA), also referred to as sequential window acquisition of all theoretical spectra (SWATH), was introduced as a high-throughput targeted proteomics method. DIA facilitates increased proteome coverage but does not yet reach the sensitivity obtained with SRM. Therefore, a well-informed method selection is crucial for designing a successful targeted proteomics experiment. This is especially the case when targeting less conventional peptides such as those that contain PTMs, as these peptides do not always adhere to the optimal fragmentation considerations for targeted assays. Here, we provide insight into the performance of DIA, SRM, and MRM cubed (MRM(3)) in the analysis of phosphorylation dynamics throughout the phosphoinositide 3-kinase mechanistic target of rapamycin (PI3K-mTOR) and mitogen-activated protein kinase (MAPK) signaling network. We indeed observe that DIA is less sensitive than SRM but demonstrates increased flexibility through post-analysis selection of alternative phosphopeptide precursors. Additionally, we demonstrate the added benefit of MRM(3), allowing the quantification of two poorly accessible phosphosites. In total, targeted proteomics enabled the quantification of 42 PI3K-mTOR and MAPK phosphosites, gaining a so far unachieved in-depth view of mTOR signaling events linked to tyrosine kinase inhibitor resistance in non-small cell lung cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. On the origins of logarithmic number-to-position mapping.

    PubMed

    Dotan, Dror; Dehaene, Stanislas

    2016-11-01

    The number-to-position task, in which children and adults are asked to place numbers on a spatial number line, has become a classic measure of number comprehension. We present a detailed experimental and theoretical dissection of the processing stages that underlie this task. We used a continuous finger-tracking technique, which provides detailed information about the time course of processing stages. When adults map the position of 2-digit numbers onto a line, their final mapping is essentially linear, but intermediate finger locations show a transient logarithmic mapping. We identify the origins of this log effect: Small numbers are processed faster than large numbers, so the finger deviates toward the target position earlier for small numbers than for large numbers. When the trajectories are aligned on the finger deviation onset, the log effect disappears. The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making. We also report cases of logarithmic mapping, by children and by a brain-injured individual, which cannot be explained by faster responding to small numbers. We show that these findings are captured by an ideal-observer model of the number-to-position mapping task, comprising 3 distinct stages: a quantification stage, whose duration is influenced by both exact and approximate representations of numerical quantity; a Bayesian accumulation-of-evidence stage, leading to a decision about the target location; and a pointing stage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. A survey of the year 2007 literature on applications of isothermal titration calorimetry.

    PubMed

    Bjelić, Sasa; Jelesarov, Ilian

    2008-01-01

    Elucidation of the energetic principles of binding affinity and specificity is a central task in many branches of current sciences: biology, medicine, pharmacology, chemistry, material sciences, etc. In biomedical research, integral approaches combining structural information with in-solution biophysical data have proved to be a powerful way toward understanding the physical basis of vital cellular phenomena. Isothermal titration calorimetry (ITC) is a valuable experimental tool facilitating quantification of the thermodynamic parameters that characterize recognition processes involving biomacromolecules. The method provides access to all relevant thermodynamic information by performing a few experiments. In particular, ITC experiments allow to by-pass tedious and (rarely precise) procedures aimed at determining the changes in enthalpy and entropy upon binding by van't Hoff analysis. Notwithstanding limitations, ITC has now the reputation of being the "gold standard" and ITC data are widely used to validate theoretical predictions of thermodynamic parameters, as well as to benchmark the results of novel binding assays. In this paper, we discuss several publications from 2007 reporting ITC results. The focus is on applications in biologically oriented fields. We do not intend a comprehensive coverage of all newly accumulated information. Rather, we emphasize work which has captured our attention with originality and far-reaching analysis, or else has provided ideas for expanding the potential of the method. Copyright (c) 2008 John Wiley & Sons, Ltd.

  12. Stabilization of X–Au–X complexes on the Au(111) surface: A theoretical investigation and comparison of X = S, Cl, CH 3S, and SiH 3S

    DOE PAGES

    Lee, Jiyoung; Boschen, Jeffery S.; Windus, Theresa L.; ...

    2017-01-27

    Alnico alloys have long been used as strong permanent magnets because of their ferromagnetism and high coercivity. Understanding their structural details allows for better prediction of the resulting magnetic properties. However, quantitative three-dimensional characterization of the phase separation in these alloys is still challenged by the spatial quantification of nanoscale phases. Herein, we apply a dual tomography approach, where correlative scanning transmission electron microscopy (STEM) energy-dispersive X-ray spectroscopic (EDS) tomography and atom probe tomography (APT) are used to investigate the initial phase separation process of an alnico 8 alloy upon non-magnetic annealing. STEM-EDS tomography provides information on the morphology and volume fractions of Fe–Co-rich and Ni–Al-rich phases after spinodal decomposition in addition to quantitative information of the composition of a nanoscale volume. Subsequent analysis of a portion of the same specimen by APT offers quantitative chemical information of each phase at the sub-nanometer scale. Furthermore, APT reveals small, 2–4 nm Fe-rich α1 phases that are nucleated in the Ni-rich α2 matrix. From this information, we show that phase separation of the alnico 8 alloy consists of both spinodal decomposition and nucleation and growth processes. The complementary benefits and challenges associated with correlative STEM-EDS and APT are discussed.

  13. Quantification of HCV RNA in Clinical Specimens by Branched DNA (bDNA) Technology.

    PubMed

    Wilber, J C; Urdea, M S

    1999-01-01

    The diagnosis and monitoring of hepatitis C virus (HCV) infection have been aided by the development of HCV RNA quantification assays. A direct measure of viral load, HCV RNA quantification has the advantage of providing information on viral kinetics and provides unique insight into the disease process. Branched DNA (bDNA) signal amplification technology provides a novel approach for the direct quantification of HCV RNA in patient specimens. The bDNA assay measures HCV RNA at physiological levels by boosting the reporter signal, rather than by replicating target sequences as the means of detection, and thus avoids the errors inherent in the extraction and amplification of target sequences. Inherently quantitative and nonradioactive, the bDNA assay is amenable to routine use in a clinical research setting, and has been used by several groups to explore the natural history, pathogenesis, and treatment of HCV infection.

  14. PET/MRI for Neurological Applications

    PubMed Central

    Catana, Ciprian; Drzezga, Alexander; Heiss, Wolf-Dieter; Rosen, Bruce R.

    2013-01-01

    PET and MRI provide complementary information in the study of the human brain. Simultaneous PET/MR data acquisition allows the spatial and temporal correlation of the measured signals, opening up opportunities impossible to realize using stand-alone instruments. This paper reviews the methodological improvements and potential neurological and psychiatric applications of this novel technology. We first present methods for improving the performance and information content of each modality by using the information provided by the other technique. On the PET side, we discuss methods that use the simultaneously acquired MR data to improve the PET data quantification. On the MR side, we present how improved PET quantification could be used to validate a number of MR techniques. Finally, we describe promising research, translational and clinical applications that could benefit from these advanced tools. PMID:23143086

  15. Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue

    NASA Astrophysics Data System (ADS)

    Kree, P.; Soize, C.

    The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, is analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.

  16. Speech recognition: Acoustic-phonetic knowledge acquisition and representation

    NASA Astrophysics Data System (ADS)

    Zue, Victor W.

    1988-09-01

    The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segmented duration for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved system performance comparable to that of the readers.

  17. Determining the optimal forensic DNA analysis procedure following investigation of sample quality.

    PubMed

    Hedell, Ronny; Hedman, Johannes; Mostad, Petter

    2018-07-01

    Crime scene traces of various types are routinely sent to forensic laboratories for analysis, generally with the aim of addressing questions about the source of the trace. The laboratory may choose to analyse the samples in different ways depending on the type and quality of the sample, the importance of the case and the cost and performance of the available analysis methods. Theoretically well-founded guidelines for the choice of analysis method are, however, lacking in most situations. In this paper, it is shown how such guidelines can be created using Bayesian decision theory. The theory is applied to forensic DNA analysis, showing how the information from the initial qPCR analysis can be utilized. The alternatives for analysis are assumed to be using a standard short tandem repeat (STR) DNA analysis assay, using the standard assay together with a complementary assay, or cancelling the analysis following quantification. The decision is based on information about the DNA amount and the level of DNA degradation of the forensic sample, as well as case circumstances and the cost of analysis. Semi-continuous electropherogram models are used for simulation of DNA profiles and for computation of likelihood ratios. It is shown how tables and graphs, prepared beforehand, can be used to quickly find the optimal decision in forensic casework.
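
    The underlying decision rule can be phrased as choosing the analysis route with the highest expected utility under the qPCR-informed assessment of sample quality. The sketch below is a minimal, generic illustration of that rule, not the paper's model: the quality states, profile-success probabilities, costs and utility value are all hypothetical placeholders.

```python
# Minimal Bayesian decision sketch: pick the analysis route with the highest
# expected utility, given a posterior over sample quality estimated from the
# initial qPCR result (DNA amount, degradation). All numbers are hypothetical.

posterior_quality = {"good": 0.6, "degraded": 0.3, "poor": 0.1}

# P(usable DNA profile | sample quality, analysis option)
p_profile = {
    "standard_STR":        {"good": 0.95, "degraded": 0.60, "poor": 0.10},
    "standard_plus_compl": {"good": 0.97, "degraded": 0.85, "poor": 0.30},
    "cancel":              {"good": 0.00, "degraded": 0.00, "poor": 0.00},
}

value_of_profile = 100.0                      # utility of an informative result
cost = {"standard_STR": 10.0, "standard_plus_compl": 25.0, "cancel": 0.0}

def expected_utility(option):
    p = sum(posterior_quality[q] * p_profile[option][q] for q in posterior_quality)
    return value_of_profile * p - cost[option]

best = max(p_profile, key=expected_utility)
for opt in p_profile:
    print(f"{opt:22s} EU = {expected_utility(opt):6.1f}")
print("optimal decision:", best)
```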

  18. Colour thresholding and objective quantification in bioimaging

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (256 grey levels, 0-255) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry for the objective quantification of subtle colour differences between experimental and control samples.

  19. Emphysema quantification from CT scans using novel application of diaphragm curvature estimation: comparison with standard quantification methods and pulmonary function data

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.; Barr, R. Graham

    2009-02-01

    Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for the imaging of the anatomical basis of emphysema and quantification of the underlying disease state. Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are based on the analysis of density information provided by the CT scans, which varies by scanner and can be hard to standardize across sites and time. Given that one of the anatomical variations associated with the progression of emphysema is the flattening of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the diaphragm would provide information about emphysema from CT. Therefore, we propose a new, non-density-based measure of the curvature of the diaphragm that allows for robust quantification of the disease. To evaluate the new method, 24 whole-lung scans were analyzed using the ratios of the lung height and diaphragm width to diaphragm height as curvature estimates, with the emphysema index used for comparison. Pearson correlation coefficients showed a strong trend for several of the proposed diaphragm curvature measures to have higher correlations, of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found the emphysema index to have only a 0.27 correlation with the proposed measures, indicating that the proposed measures evaluate different aspects of the disease.

  20. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed-order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
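
    To make the solution-verification step concrete, the sketch below estimates the observed order of accuracy, a Richardson-extrapolated value, and a Grid Convergence Index (GCI) error band from a scalar quantity computed on three systematically refined grids. It is a generic illustration of RE/GCI, not code from the VAVUQ package; the sample values, refinement ratio, and safety factor of 1.25 are illustrative.

```python
import math

def richardson_gci(f_fine, f_med, f_coarse, r, safety=1.25):
    """Richardson Extrapolation and Grid Convergence Index (GCI).

    f_fine, f_med, f_coarse -- a scalar quantity of interest computed on
                               fine, medium and coarse grids
    r                       -- constant grid refinement ratio (> 1)
    """
    # observed order of accuracy from the three solutions
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # Richardson-extrapolated estimate of the exact solution
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    # relative error on the fine grid and its GCI error band
    e_fine = abs((f_med - f_fine) / f_fine)
    gci_fine = safety * e_fine / (r**p - 1.0)
    return p, f_exact, gci_fine

# hypothetical grid study: refinement ratio 2, monotone convergence
p, f_exact, gci = richardson_gci(f_fine=0.971, f_med=0.984, f_coarse=1.010, r=2.0)
print(f"observed order p   = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}")
print(f"GCI (fine grid)    = {100 * gci:.2f} %")
```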

  1. MS/MS library facilitated MRM quantification of native peptides prepared by denaturing ultrafiltration

    PubMed Central

    2012-01-01

    Naturally occurring native peptides provide important information about physiological states of an organism and its changes in disease conditions but protocols and methods for assessing their abundance are not well-developed. In this paper, we describe a simple procedure for the quantification of non-tryptic peptides in body fluids. The workflow includes an enrichment step followed by two-dimensional fractionation of native peptides and MS/MS data management facilitating the design and validation of LC-MRM MS assays. The added value of the workflow is demonstrated in the development of a triplex LC-MRM MS assay used for quantification of peptides potentially associated with the progression of liver disease to hepatocellular carcinoma. PMID:22304756

  2. Lesion Quantification in Dual-Modality Mammotomography

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zheng, Yibin; More, Mitali J.; Goodale, Patricia J.; Williams, Mark B.

    2007-02-01

    This paper describes a novel x-ray/SPECT dual-modality breast imaging system that provides 3D structural and functional information. While only a limited number of views on one side of the breast can be acquired due to mechanical and time constraints, we developed a technique to compensate for limited-angle artifacts in reconstructed images and to accurately estimate both the lesion size and the radioactivity concentration. Various angular sampling strategies were evaluated using both simulated and experimental data. It was demonstrated that quantification of lesion size to an accuracy of 10% and quantification of radioactivity to an accuracy of 20% are feasible from limited-angle data acquired with clinically practical dosage and acquisition time.

  3. Considerations on the quantitative analysis of apparent amorphicity of milled lactose by Raman spectroscopy.

    PubMed

    Pazesh, Samaneh; Lazorova, Lucia; Berggren, Jonas; Alderborn, Göran; Gråsjö, Johan

    2016-09-10

    The main purpose of the study was to evaluate various pre-processing and quantification approaches for Raman spectra to quantify low levels of amorphous content in milled lactose powder. To improve the quantification analysis, several spectral pre-processing methods were used to adjust for background effects. The effects of spectral noise on the variation of the determined amorphous content were also investigated theoretically by propagation-of-error analysis and were compared to the experimentally obtained values. Additionally, the applicability of a calibration method with crystalline or amorphous domains in the estimation of amorphous content in milled lactose powder was discussed. Two straight-baseline pre-processing methods gave the best and almost equal performance. Among the subsequent quantification methods, PCA performed best, although classical least squares analysis (CLS) gave comparable results, while peak parameter analysis proved inferior. The standard deviations of the experimentally determined percentage amorphous content were 0.94% and 0.25% for pure crystalline and pure amorphous samples respectively, which was very close to the standard deviation values from propagated spectral noise. The reasonable conformity between the milled-sample spectra and synthesized spectra indicated the representativeness of physical mixtures with crystalline or amorphous domains in the estimation of apparent amorphous content in milled lactose. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
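
    As an illustration of the classical least squares (CLS) idea evaluated in the study, the sketch below fits a measured spectrum as a linear combination of pure crystalline and pure amorphous reference spectra and reads the apparent amorphous fraction off the fitted coefficients. The spectra are synthetic placeholders, not lactose Raman data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 500)                      # arbitrary Raman-shift axis

def band(center, width, height):
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

# synthetic pure-component reference spectra (placeholders for real references)
crystalline = band(30, 2, 1.0) + band(60, 3, 0.6)
amorphous   = band(32, 8, 0.7) + band(58, 10, 0.5)

# simulated measurement: 5 % amorphous, 95 % crystalline, plus spectral noise
true_amorphous = 0.05
measured = (1 - true_amorphous) * crystalline + true_amorphous * amorphous
measured += rng.normal(0, 0.005, x.size)

# classical least squares: solve measured = c1*crystalline + c2*amorphous
A = np.column_stack([crystalline, amorphous])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
amorphous_fraction = coef[1] / coef.sum()

print(f"estimated amorphous content: {100 * amorphous_fraction:.1f} %")
```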

  4. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is reduced to a three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
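
    For readers unfamiliar with deconvolution-based CTP, the sketch below shows the core operation on synthetic curves: the tissue time-attenuation curve is modeled as the arterial input function (AIF) convolved with a flow-scaled residue function, and a truncated-SVD pseudo-inverse of the AIF convolution matrix recovers that residue function, whose peak gives the flow estimate. The gamma-variate AIF, exponential residue function, truncation threshold, and noise level are illustrative choices, not the parameters studied in the paper; the regularization bias visible in the output is exactly the kind of effect the paper's framework analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                                   # s, frame interval
t = np.arange(0, 60, dt)

# synthetic arterial input function (gamma-variate) and tissue curve
aif = 8.0 * (t / 6.0) ** 3 * np.exp(-t / 2.0)
flow = 0.012                               # "CBF"-like scale factor (1/s)
residue = np.exp(-t / 4.0)                 # exponential residue function
tissue = flow * dt * np.convolve(aif, residue)[: t.size]
tissue += rng.normal(0, 0.002, t.size)     # source-image noise

# lower-triangular Toeplitz convolution matrix built from the AIF
A = dt * np.array([[aif[i - j] if i >= j else 0.0
                    for j in range(t.size)] for i in range(t.size)])

# truncated-SVD deconvolution (regularization strength = truncation threshold)
U, s, Vt = np.linalg.svd(A)
threshold = 0.15 * s.max()
s_inv = np.where(s > threshold, 1.0 / s, 0.0)
flow_residue = Vt.T @ (s_inv * (U.T @ tissue))

print(f"true flow : {flow:.4f}")
print(f"estimated : {flow_residue.max():.4f}")   # peak of flow-scaled residue
```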

  5. Quantification of the Keto-Hydroperoxide (HOOCH2OCHO) and Other Elusive Intermediates during Low-Temperature Oxidation of Dimethyl Ether.

    PubMed

    Moshammer, Kai; Jasper, Ahren W; Popolan-Vaida, Denisia M; Wang, Zhandong; Bhavani Shankar, Vijai Shankar; Ruwe, Lena; Taatjes, Craig A; Dagaut, Philippe; Hansen, Nils

    2016-10-04

    This work provides new temperature-dependent mole fractions of elusive intermediates relevant to the low-temperature oxidation of dimethyl ether (DME). It extends the previous study of Moshammer et al. [J. Phys. Chem. A 2015, 119, 7361-7374] in which a combination of a jet-stirred reactor and molecular beam mass spectrometry with single-photon ionization via tunable synchrotron-generated vacuum-ultraviolet radiation was used to identify (but not quantify) several highly oxygenated species. Here, temperature-dependent concentration profiles of 17 components were determined in the range of 450-1000 K and compared to up-to-date kinetic modeling results. Special emphasis is placed on the validation and application of a theoretical method for predicting photoionization cross sections that are hard to obtain experimentally but essential to turn mass spectral data into mole fraction profiles. The presented approach enabled the quantification of the hydroperoxymethyl formate (HOOCH2OCHO), which is a key intermediate in the low-temperature oxidation of DME. The quantification of this keto-hydroperoxide together with the temperature-dependent concentration profiles of other intermediates including H2O2, HCOOH, CH3OCHO, and CH3OOH reveals new opportunities for the development of a next-generation DME combustion chemistry mechanism.

  6. Quantification and scaling of multipartite entanglement in continuous variable systems.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-11-26

    We present a theoretical method to determine the multipartite entanglement between different partitions of multimode, fully or partially symmetric Gaussian states of continuous variable systems. For such states, we determine the exact expression of the logarithmic negativity and show that it coincides with that of equivalent two-mode Gaussian states. Exploiting this reduction, we demonstrate the scaling of the multipartite entanglement with the number of modes and its reliable experimental estimate by direct measurements of the global and local purities.

  7. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
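
    A minimal illustration of the binary projection discussed here: the sketch below (synthetic random-walk "returns" with a weak common factor, not market data) extracts the sign of the increments of several time series and relates a simple binary property, the degree to which the series move in the same direction, to the magnitude of the aggregate increment.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_steps = 50, 1000

# synthetic increments with a weak common factor (placeholder for stock returns)
common = rng.normal(0, 1, n_steps)
increments = 0.3 * common + rng.normal(0, 1, (n_series, n_steps))

# binary projection: keep only the sign of each increment
signs = np.sign(increments)

# simple binary property: how strongly the series agree in direction at each step
alignment = np.abs(signs.mean(axis=0))

# non-binary property: magnitude of the aggregate increment at each time step
aggregate_magnitude = np.abs(increments.mean(axis=0))

corr = np.corrcoef(alignment, aggregate_magnitude)[0, 1]
print(f"correlation(alignment, |aggregate increment|) = {corr:.2f}")
```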

  8. GMO quantification: valuable experience and insights for the future.

    PubMed

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use next-generation sequencing for quantitative purposes as well, although accurate quantification of GMO content using this technology is still a challenge for the future, especially for mixed samples. New approaches are also needed for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.
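
    One of the newer approaches mentioned here, digital PCR, quantifies targets by partitioning the reaction and counting positive partitions. The sketch below applies the standard Poisson correction to entirely hypothetical partition counts to estimate copies per microlitre and a GM copy ratio; the counts and partition volume are made up for illustration.

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Estimate target concentration (copies/uL) from digital PCR counts,
    using the Poisson correction for partitions holding more than one copy."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)              # mean copies per partition
    return lam / partition_volume_ul

# hypothetical duplex dPCR run: transgene and taxon-specific reference gene
vol = 0.00085                              # uL per partition (droplet-scale)
transgene = dpcr_concentration(n_positive=1800, n_total=20000, partition_volume_ul=vol)
reference = dpcr_concentration(n_positive=17500, n_total=20000, partition_volume_ul=vol)

print(f"transgene : {transgene:8.0f} copies/uL")
print(f"reference : {reference:8.0f} copies/uL")
print(f"GM content (copy ratio): {100 * transgene / reference:.1f} %")
```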

  9. Plan of study to quantify the hydrologic relations between the Rio Grande and the Santa Fe Group aquifer system near Albuquerque, central New Mexico

    USGS Publications Warehouse

    McAda, D.P.

    1996-01-01

    The Albuquerque Basin in central New Mexico covers an area of about 3,060 square miles. Ground water from the Santa Fe Group aquifer system of the Albuquerque Basin is the principal source of water for municipal, domestic, commercial, and industrial uses in the Albuquerque area, an area of about 410 square miles. Ground-water withdrawal in the basin has increased from about 97,000 acre-feet in 1970 to about 171,000 acre-feet in 1994. About 92 percent of the 1994 total was withdrawn in the Albuquerque area. Management of ground water in the Albuquerque Basin is related to the surface water in the Rio Grande. Because the aquifer system is hydraulically connected to the Rio Grande and water in the river is fully appropriated, the ability to reliably estimate the effects of ground-water withdrawals on flow in the river is important. This report describes the components of the Rio Grande/Santa Fe Group aquifer system in the Albuquerque area and the data availability and data and interpretation needs relating to those components, and presents a plan of study to quantify the hydrologic relations between the Rio Grande and the Santa Fe Group aquifer system. The information needs related to the components of the river/aquifer system are prioritized. Information that is necessary to improve the understanding or quantification of a component in the river/aquifer system is prioritized as essential. Information that could add additional understanding of the system, but would not be necessary to improve the quantification of the system, is prioritized as useful. The study elements are prioritized in the same manner as the information needs; study elements designed to provide information considered necessary to improve the quantification of the system are prioritized as essential, and those designed to provide information that would add additional understanding of the system, but would not be necessary to improve the quantification of the system, are prioritized as useful.

  10. Motion-aware stroke volume quantification in 4D PC-MRI data of the human aorta.

    PubMed

    Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard

    2016-02-01

    4D PC-MRI enables the noninvasive measurement of time-resolved, three-dimensional blood flow data that allow quantification of the hemodynamics. Stroke volumes are essential to assess the cardiac function and evolution of different cardiovascular diseases. The calculation depends on the wall position and vessel orientation, which both change during the cardiac cycle due to the heart muscle contraction and the pumped blood. However, current systems for the quantitative 4D PC-MRI data analysis neglect the dynamic character and instead employ a static 3D vessel approximation. We quantify differences between stroke volumes in the aorta obtained with and without consideration of its dynamics. We describe a method that uses the approximating 3D segmentation to automatically initialize segmentation algorithms that require regions inside and outside the vessel for each temporal position. This enables the use of graph cuts to obtain 4D segmentations, extract vessel surfaces including centerlines for each temporal position and derive motion information. The stroke volume quantification is compared using measuring planes in static (3D) vessels, planes with fixed angulation inside dynamic vessels (this corresponds to the common 2D PC-MRI) and moving planes inside dynamic vessels. Seven datasets with different pathologies such as aneurysms and coarctations were evaluated in close collaboration with radiologists. Compared to the experts' manual stroke volume estimations, motion-aware quantification performs, on average, 1.57% better than calculations without motion consideration. The mean difference between stroke volumes obtained with the different methods is 7.82%. Automatically obtained 4D segmentations overlap by 85.75% with manually generated ones. Incorporating motion information in the stroke volume quantification yields slight but not statistically significant improvements. The presented method is feasible for the clinical routine, since computation times are low and essential parts run fully automatically. The 4D segmentations can be used for other algorithms as well. The simultaneous visualization and quantification may support the understanding and interpretation of cardiac blood flow.

  11. On the capability of IASI measurements to inform about CO surface emissions

    NASA Astrophysics Data System (ADS)

    Fortems-Cheiney, A.; Chevallier, F.; Pison, I.; Bousquet, P.; Carouge, C.; Clerbaux, C.; Coheur, P.-F.; George, M.; Hurtmans, D.; Szopa, S.

    2009-03-01

    Between July and November 2008, simultaneous observations were conducted by several orbiting instruments that monitor carbon monoxide in the atmosphere, among them the Infrared Atmospheric Sounding Instrument (IASI) and Measurements Of Pollution In The Troposphere (MOPITT). In this paper, the concentration retrievals at about 700 hPa from these two instruments are successively used in a variational Bayesian system to infer the global distribution of CO emissions. Our posterior estimate of CO emissions using IASI retrievals gives a total of 793 Tg for the considered period, which is 40% higher than the global budget calculated with the MOPITT data (566 Tg). Over six continental regions (Eurasian Boreal, South Asia, South East Asia, North American Boreal, Northern Africa and South American Temperate), and thanks to a better observation density, the theoretical uncertainty reduction obtained with the IASI retrievals is better than or similar to that obtained with MOPITT. For the other continental regions, IASI constrains the emissions less than MOPITT because of its lower sensitivity in the lower troposphere. These first results indicate that IASI may play a major role in the quantification of the emissions of CO.

  12. Gas Hydrate Estimation Using Rock Physics Modeling and Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Dai, J.; Dutta, N.; Xu, H.

    2006-05-01

    We conducted a theoretical study of the effects of gas hydrate saturation on the acoustic properties (P- and S-wave velocities, and bulk density) of host rocks, using wireline log data from the Mallik wells in the Mackenzie Delta in Northern Canada. We evaluated a number of gas hydrate rock physics models that correspond to different rock textures. Our study shows that, among the existing rock physics models, the one that treats gas hydrate as part of the solid matrix best fits the measured data. This model was also tested on gas hydrate hole 995B of ODP leg 164 drilling at Blake Ridge, which shows an adequate match. Based on the understanding of rock models of gas hydrates and the properties of shallow sediments, we define a procedure that quantifies gas hydrate using rock physics modeling and seismic inversion. The method allows us to estimate gas hydrate directly from seismic information only. This paper shows examples of gas hydrate quantification from both a 1D profile and a 3D volume in the deepwater Gulf of Mexico.

  13. Colloquium: Non-Markovian dynamics in open quantum systems

    NASA Astrophysics Data System (ADS)

    Breuer, Heinz-Peter; Laine, Elsi-Mari; Piilo, Jyrki; Vacchini, Bassano

    2016-04-01

    The dynamical behavior of open quantum systems plays a key role in many applications of quantum mechanics, examples ranging from fundamental problems, such as the environment-induced decay of quantum coherence and relaxation in many-body systems, to applications in condensed matter theory, quantum transport, quantum chemistry, and quantum information. In close analogy to a classical Markovian stochastic process, the interaction of an open quantum system with a noisy environment is often modeled phenomenologically by means of a dynamical semigroup with a corresponding time-independent generator in Lindblad form, which describes a memoryless dynamics of the open system typically leading to an irreversible loss of characteristic quantum features. However, in many applications open systems exhibit pronounced memory effects and a revival of genuine quantum properties such as quantum coherence, correlations, and entanglement. Here recent theoretical results on the rich non-Markovian quantum dynamics of open systems are discussed, paying particular attention to the rigorous mathematical definition, to the physical interpretation and classification, as well as to the quantification of quantum memory effects. The general theory is illustrated by a series of physical examples. The analysis reveals that memory effects of the open system dynamics reflect characteristic features of the environment which opens a new perspective for applications, namely, to exploit a small open system as a quantum probe signifying nontrivial features of the environment it is interacting with. This Colloquium further explores the various physical sources of non-Markovian quantum dynamics, such as structured environmental spectral densities, nonlocal correlations between environmental degrees of freedom, and correlations in the initial system-environment state, in addition to developing schemes for their local detection. Recent experiments addressing the detection, quantification, and control of non-Markovian quantum dynamics are also briefly discussed.

  14. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for the insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
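
    The general idea, an SDE whose stationary distribution is the Bayesian posterior, can be illustrated with the simplest member of this family: the sketch below discretizes an overdamped Langevin SDE with an explicit Euler-Maruyama step to sample a toy one-dimensional posterior. The paper itself uses an implicit Euler discretization of a more general Itô SDE; the target density, step size, and chain length here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy posterior: unnormalized log-density of a Gaussian with mean 2, std 0.5
def log_post(theta):
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def grad_log_post(theta):
    return -(theta - 2.0) / 0.25

# overdamped Langevin SDE: d(theta) = 0.5 * grad(log pi) dt + dW,
# discretized with an explicit Euler-Maruyama step of size h
h, n_steps, burn_in = 0.01, 50_000, 5_000
theta = 0.0
samples = np.empty(n_steps)
for k in range(n_steps):
    theta = theta + 0.5 * h * grad_log_post(theta) + np.sqrt(h) * rng.normal()
    samples[k] = theta

post = samples[burn_in:]
print(f"posterior mean ~ {post.mean():.3f} (target 2.0)")
print(f"posterior std  ~ {post.std():.3f} (target 0.5)")
```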

  15. Uncertainty Quantification using Epi-Splines and Soft Information

    DTIC Science & Technology

    2012-06-01

    The Kullback-Leibler (KL) divergence measure, discussed in Chapter 2, is used as a form of soft information in the estimation of system performance density functions in order to quantify uncertainty; empirical testing of the approach is conducted.

  16. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography.

    PubMed

    Venhuizen, Freerk G; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I

    2018-04-01

    We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance, reaching an overall Dice coefficient of 0.754 ± 0.136 for IRC segmentation and an intraclass correlation coefficient of 0.936 for IRC quantification. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies.
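
    For readers unfamiliar with the reported overlap metric, the sketch below computes the Dice coefficient between a predicted and a reference segmentation mask; the two toy masks are placeholders, not OCT data.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# toy 2D masks standing in for IRC segmentations on a single B-scan
ref = np.zeros((64, 64), dtype=bool)
ref[20:40, 20:40] = True                 # "manual" annotation
pred = np.zeros((64, 64), dtype=bool)
pred[22:42, 18:38] = True                # "automatic" segmentation

print(f"Dice coefficient: {dice_coefficient(pred, ref):.3f}")
```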

  17. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography

    PubMed Central

    Venhuizen, Freerk G.; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2018-01-01

    We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance, reaching an overall Dice coefficient of 0.754 ± 0.136 for IRC segmentation and an intraclass correlation coefficient of 0.936 for IRC quantification. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies. PMID:29675301

  18. Multiscale recurrence quantification analysis of order recurrence plots

    NASA Astrophysics Data System (ADS)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA can reveal richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and showing some inherent differences.
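
    A minimal sketch of the ingredients named here, coarse-graining to a time scale, ordinal (order) patterns, and a recurrence rate computed from an order recurrence plot, is given below. The signal, embedding dimension, and scales are illustrative, and the implementation is a simplified reading of the MSRQA idea, not the authors' code.

```python
import numpy as np
from itertools import permutations

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the multiscale step)."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def ordinal_patterns(x, m=3):
    """Map each length-m window to the index of its order pattern."""
    lookup = {p: i for i, p in enumerate(permutations(range(m)))}
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    return np.array([lookup[tuple(np.argsort(w))] for w in windows])

def order_recurrence_rate(x, m=3):
    """Fraction of point pairs sharing the same order pattern."""
    pats = ordinal_patterns(x, m)
    R = pats[:, None] == pats[None, :]          # order recurrence plot
    return R.mean()

rng = np.random.default_rng(3)
t = np.arange(3000)
signal = np.sin(2 * np.pi * t / 50) + 0.5 * rng.normal(size=t.size)

for scale in (1, 5, 10, 25):
    rr = order_recurrence_rate(coarse_grain(signal, scale))
    print(f"scale {scale:3d}: order recurrence rate = {rr:.3f}")
```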

  19. Quantification of Soil Redoximorphic Features by Standardized Color Identification

    USDA-ARS?s Scientific Manuscript database

    Photography has been a welcome tool in assisting to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...

  20. Uncertainty Quantification using Exponential Epi-Splines

    DTIC Science & Technology

    2013-06-01

    The choice of κ in applications can be informed by the fact that the Kullback-Leibler divergence between two normal densities ϕ1 ... of random output quantities of interest. The framework systematically incorporates hard information derived from physics-based sensors, field test ... information, and determines the 'best' estimate within that family. Bayesian estimation makes use of prior soft information.
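
    Since the record refers to the Kullback-Leibler divergence between two normal densities, the closed-form expression is worth spelling out: KL(N(μ1, σ1²) || N(μ2, σ2²)) = ln(σ2/σ1) + (σ1² + (μ1 − μ2)²)/(2σ2²) − 1/2. The sketch below evaluates it for illustrative parameter values.

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """Kullback-Leibler divergence KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)

# illustrative values: a slightly shifted, wider density versus a standard normal
print(f"KL = {kl_normal(mu1=0.0, sigma1=1.0, mu2=0.5, sigma2=1.2):.4f}")
print(f"KL = {kl_normal(mu1=0.0, sigma1=1.0, mu2=0.0, sigma2=1.0):.4f}  (identical densities)")
```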

  1. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  2. qFlow Cytometry-Based Receptoromic Screening: A High-Throughput Quantification Approach Informing Biomarker Selection and Nanosensor Development.

    PubMed

    Chen, Si; Weddell, Jared; Gupta, Pavan; Conard, Grace; Parkin, James; Imoukhuede, Princess I

    2017-01-01

    Nanosensor-based detection of biomarkers can improve medical diagnosis; however, a critical factor in nanosensor development is deciding which biomarker to target, as most diseases present several biomarkers. Biomarker-targeting decisions can be informed via an understanding of biomarker expression. Currently, immunohistochemistry (IHC) is the accepted standard for profiling biomarker expression. While IHC provides a relative mapping of biomarker expression, it does not provide cell-by-cell readouts of biomarker expression or absolute biomarker quantification. Flow cytometry overcomes both these IHC challenges by offering biomarker expression on a cell-by-cell basis, and when combined with calibration standards, providing quantitation of biomarker concentrations: this is known as qFlow cytometry. Here, we outline the key components for applying qFlow cytometry to detect biomarkers within the angiogenic vascular endothelial growth factor receptor family. The key aspects of the qFlow cytometry methodology include: antibody specificity testing, immunofluorescent cell labeling, saturation analysis, fluorescent microsphere calibration, and quantitative analysis of both ensemble and cell-by-cell data. Together, these methods enable high-throughput quantification of biomarker expression.
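
    The calibration step that turns fluorescence into absolute receptor numbers can be sketched as a simple fit of measured bead intensities against their known equivalent-fluorophore values, followed by interpolation of the cell signal. The bead values and intensities below are hypothetical placeholders, not data from the protocol.

```python
import numpy as np

# hypothetical calibration microspheres: known antibody-binding capacity
# (molecules per bead) and their measured median fluorescence intensities
bead_molecules = np.array([4_500, 25_000, 65_000, 250_000], dtype=float)
bead_mfi       = np.array([310, 1_650, 4_400, 16_800], dtype=float)

# linear calibration in log-log space (a common choice for flow calibration curves)
slope, intercept = np.polyfit(np.log10(bead_mfi), np.log10(bead_molecules), 1)

def molecules_per_cell(mfi):
    """Convert a cell population's median fluorescence to receptors per cell."""
    return 10 ** (slope * np.log10(mfi) + intercept)

# hypothetical endothelial-cell sample stained for a VEGF-receptor marker
sample_mfi = 2_300.0
background_mfi = 120.0
specific = molecules_per_cell(sample_mfi) - molecules_per_cell(background_mfi)
print(f"estimated surface receptors per cell: {specific:,.0f}")
```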

  3. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).

  4. Quantification of Cannabinoid Content in Cannabis

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.

  5. Arkas: Rapid reproducible RNAseq analysis

    PubMed Central

    Colombo, Anthony R.; Triche, Timothy J., Jr.; Ramsingh, Giridharan

    2017-01-01

    The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer the cloud-scale RNAseq pipelines Arkas-Quantification and Arkas-Analysis, available within Illumina's BaseSpace cloud application platform, which expedite Kallisto preparatory routines, reliably calculate differential expression, and perform gene-set enrichment of REACTOME pathways. Due to inherent inefficiencies of scale, Illumina's BaseSpace computing platform offers a massively parallel distributive environment improving data management services and data importing. Arkas-Quantification deploys Kallisto for parallel cloud computations and is conveniently integrated downstream from the BaseSpace Sequence Read Archive (SRA) import/conversion application titled SRA Import. Arkas-Analysis annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata, and calculates differential expression and gene-set enrichment analysis on both coding genes and transcripts. The Arkas cloud pipeline supports ENSEMBL transcriptomes and can be used downstream from SRA Import, facilitating raw sequence importing, SRA FASTQ conversion, RNA quantification and analysis steps. PMID:28868134

  6. Probabilistic Weather Information Tailored to the Needs of Transmission System Operators

    NASA Astrophysics Data System (ADS)

    Alberts, I.; Stauch, V.; Lee, D.; Hagedorn, R.

    2014-12-01

    Reliable and accurate forecasts for wind and photovoltaic (PV) power production are essential for stable transmission systems. A high potential for improving the wind and PV power forecasts lies in optimizing the weather forecasts, since these energy sources are highly weather dependent. For this reason, the main objective of the German research project EWeLiNE is to improve the quality of the underlying numerical weather predictions for energy operations. In this project, the German Meteorological Service (DWD), the Fraunhofer Institute for Wind Energy and Energy System Technology, and three of the German transmission system operators (TSOs) are working together to improve the weather and power forecasts. Probabilistic predictions are of particular interest, as the quantification of uncertainties provides an important tool for risk management. Theoretical considerations suggest that it can be advantageous to use probabilistic information to represent and respond to the remaining uncertainties in the forecasts. However, it remains a challenge to integrate this information into the decision making processes related to market participation and power systems operations. The project is planned and carried out in close cooperation with the involved TSOs in order to ensure the usability of the products developed. It will conclude with a demonstration phase, in which the improved models and newly developed products are combined into a process chain and used to provide information to TSOs in a real-time decision support tool. The use of a web-based development platform enables short development cycles and agile adaptation to evolving user needs. This contribution will present the EWeLiNE project and discuss ideas on how to incorporate probabilistic information into the users' current decision making processes.

  7. Recurrence plots and recurrence quantification analysis of human motion data

    NASA Astrophysics Data System (ADS)

    Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad

    2016-06-01

    The authors present exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the purpose of exploration of experimental time series describing selected aspects of human motion. Time series were extracted from treadmill gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland by means of the Vicon system. Analysis was focused on the time series representing movements of hip, knee, ankle and wrist joints in the sagittal plane.

  8. [Building Mass Spectrometry Spectral Libraries of Human Cancer Cell Lines].

    PubMed

    Faktor, J; Bouchal, P

    Cancer research often focuses on protein quantification in model cancer cell lines and cancer tissues. SWATH (sequential windowed acquisition of all theoretical fragment ion spectra), the state-of-the-art method, enables the quantification of all proteins included in a spectral library. A spectral library contains fragmentation patterns of each detectable protein in a sample. Thorough spectral library preparation will improve the quantitation of low-abundance proteins, which usually play an important role in cancer. Our research is focused on the optimization of spectral library preparation aimed at maximizing the number of identified proteins in the MCF-7 breast cancer cell line. First, we optimized the sample preparation prior to entering the mass spectrometer. We examined the effects of lysis buffer composition, peptide dissolution protocol and the material of the sample vial on the number of proteins identified in the spectral library. Next, we optimized the mass spectrometry (MS) method for spectral library data acquisition. Our thoroughly optimized protocol for spectral library building enabled the identification of 1,653 proteins (FDR < 1%) in 1 µg of MCF-7 lysate. This work contributed to the enhancement of protein coverage in SWATH digital biobanks, which enable quantification of arbitrary proteins from physically unavailable samples. In the future, high-quality spectral libraries could play a key role in preparing digital fingerprints of patient proteomes. Key words: biomarker - mass spectrometry - proteomics - digital biobanking - SWATH - protein quantification. This work was supported by the project MEYS - NPS I - LO1413. The authors declare they have no potential conflicts of interest concerning drugs, products, or services used in the study. The Editorial Board declares that the manuscript met the ICMJE recommendation for biomedical papers. Submitted: 7. 5. 2016. Accepted: 9. 6. 2016.

  9. A tool for selective inline quantification of co-eluting proteins in chromatography using spectral analysis and partial least squares regression.

    PubMed

    Brestrich, Nina; Briskot, Till; Osberghaus, Anna; Hubbuch, Jürgen

    2014-07-01

    Selective quantification of co-eluting proteins in chromatography is usually performed by offline analytics. This is time-consuming and can lead to late detection of irregularities in chromatography processes. To overcome this analytical bottleneck, a methodology for selective protein quantification in multicomponent mixtures by means of spectral data and partial least squares regression was presented in two previous studies. In this paper, a powerful integration of software and chromatography hardware will be introduced that enables the applicability of this methodology for a selective inline quantification of co-eluting proteins in chromatography. A specific setup consisting of a conventional liquid chromatography system, a diode array detector, and a software interface to Matlab® was developed. The established tool for selective inline quantification was successfully applied for a peak deconvolution of a co-eluting ternary protein mixture consisting of lysozyme, ribonuclease A, and cytochrome c on SP Sepharose FF. Compared to common offline analytics based on collected fractions, no loss of information regarding the retention volumes and peak flanks was observed. A comparison between the mass balances of both analytical methods showed that the inline quantification tool can be applied for a rapid determination of pool yields. Finally, the achieved inline peak deconvolution was successfully applied to make product purity-based real-time pooling decisions. This makes the established tool for selective inline quantification a valuable approach for inline monitoring and control of chromatographic purification steps and just-in-time reaction to process irregularities. © 2014 Wiley Periodicals, Inc.
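
    To make the spectral-data-plus-PLS idea tangible, the sketch below trains a partial least squares regression model on synthetic absorption spectra of known ternary mixtures and then predicts the concentrations of three co-eluting components in a new spectrum. The spectra, component shapes, and concentrations are simulated stand-ins, not the lysozyme/ribonuclease A/cytochrome c data from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
wavelengths = np.linspace(240, 320, 120)

def peak(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# synthetic pure-component spectra for three co-eluting proteins (placeholders)
pure = np.vstack([peak(255, 8), peak(275, 10), peak(290, 12)])

# calibration set: random known concentrations and their mixture spectra + noise
C_cal = rng.uniform(0.0, 2.0, size=(60, 3))                  # g/L
X_cal = C_cal @ pure + rng.normal(0, 0.01, (60, wavelengths.size))

pls = PLSRegression(n_components=3)
pls.fit(X_cal, C_cal)

# "inline" measurement during co-elution: a spectrum of unknown composition
c_true = np.array([0.8, 0.3, 1.1])
x_new = c_true @ pure + rng.normal(0, 0.01, wavelengths.size)
c_pred = pls.predict(x_new.reshape(1, -1))[0]

for name, t, p in zip(("protein A", "protein B", "protein C"), c_true, c_pred):
    print(f"{name}: true {t:.2f} g/L, predicted {p:.2f} g/L")
```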

  10. Remote sensing-aided systems for snow quantification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process are defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  11. Hydrodynamic interactions in active colloidal crystal microrheology.

    PubMed

    Weeber, R; Harting, J

    2012-11-01

    In dense colloids it is commonly assumed that hydrodynamic interactions do not play a role. However, a well-founded theoretical quantification is often missing. We present computer simulations that are motivated by experiments where a large colloidal particle is dragged through a colloidal crystal. To quantify the influence of long-ranged hydrodynamics, we model the setup by conventional Langevin dynamics simulations and by an improved scheme with limited hydrodynamic interactions. This scheme significantly improves our results and allows us to show that hydrodynamics strongly impacts the development of defects, the crystal regeneration, as well as the jamming behavior.

  12. Coastal watershed management across an international border in the Tijuana River watershed

    NASA Astrophysics Data System (ADS)

    Fernandez, Linda

    2005-05-01

    The paper develops and applies a game-theoretic model of upstream and downstream countries to examine cooperative and noncooperative strategies for managing a common watershed. The application to the Tijuana River watershed, shared by the United States and Mexico, provides quantification of the strategies for internalizing the upstream and downstream water quality externalities originating from sedimentation. Results show that different transfer payments, such as the Chander/Tulkens cost-sharing rule and the Shapley value, imply that the transfer from downstream to upstream could increase beyond the amount currently allocated.
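
    For readers unfamiliar with the transfer rules named here, the sketch below computes the Shapley value for a small, entirely hypothetical two-player cost-sharing game between an upstream and a downstream country; the coalition cost figures are invented, not estimates from the paper.

```python
from itertools import permutations

# Hypothetical joint sediment-control costs (arbitrary units) for each coalition:
# acting alone versus cooperating on watershed management.
cost = {
    frozenset(): 0.0,
    frozenset({"upstream"}): 60.0,
    frozenset({"downstream"}): 90.0,
    frozenset({"upstream", "downstream"}): 110.0,   # cheaper than 60 + 90
}
players = ["upstream", "downstream"]

def shapley(cost, players):
    """Average marginal cost contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

for player, share in shapley(cost, players).items():
    print(f"{player:10s} Shapley cost share: {share:.1f}")
```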

  13. Informing Physics: Jacob Bekenstein and the Informational Turn in Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Belfer, Israel

    2014-03-01

    In his PhD dissertation in the early 1970s, the Mexican-Israeli theoretical physicist Jacob Bekenstein developed the thermodynamics of black holes using a generalized version of the second law of thermodynamics. This work made it possible for physicists to describe and analyze black holes using information-theoretical concepts. It also helped to transform information theory into a fundamental and foundational concept in theoretical physics. The story of Bekenstein's work—which was initially opposed by many scientists, including Stephen Hawking—highlights the transformation within physics towards an information-oriented scientific mode of theorizing. This "informational turn" amounted to a mild-mannered revolution within physics, revolutionary without being rebellious.

  14. Application of ecological site information to transformative changes on Great Basin sagebrush rangelands

    USDA-ARS?s Scientific Manuscript database

    Ecological Site Description (ESD) concepts are broadly applicable and provide a necessary framework to inform and guide rangeland management decisions. In this paper, we demonstrate how understanding and quantification of key vegetation, hydrology, and soil relationships in the ESD context can info...

  15. Benefits from bremsstrahlung distribution evaluation to get unknown information from specimen in SEM and TEM

    NASA Astrophysics Data System (ADS)

    Eggert, F.; Camus, P. P.; Schleifer, M.; Reinauer, F.

    2018-01-01

    The energy-dispersive X-ray spectrometer (EDS or EDX) is a commonly used device to characterise the composition of investigated material in scanning and transmission electron microscopes (SEM and TEM). One major benefit compared to wavelength-dispersive X-ray spectrometers (WDS) is that EDS systems collect the entire spectrum simultaneously. Therefore, not only are all emitted characteristic X-ray lines present in the spectrum, but the complete bremsstrahlung distribution is also included. It is possible to get information about the specimen even from this radiation, which is usually perceived more as a disturbing background. This is possible by using theoretical model knowledge about bremsstrahlung excitation and absorption in the specimen in comparison to the actual measured spectrum. The core aim of this investigation is to present a method for better bremsstrahlung fitting in unknown geometry cases by variation of the geometry parameters, and to utilise this knowledge also for characteristic radiation evaluation. A method is described that allows the parameterisation of the true X-ray absorption conditions during spectrum acquisition. An ‘effective tilt’ angle parameter is determined by evaluation of the bremsstrahlung shape of the measured SEM spectra. It is useful for bremsstrahlung background approximation, with exact calculations of the absorption edges below the characteristic peaks, required for P/B-ZAF model based quantification methods. It can even be used for ZAF based quantification models as a variable input parameter. The analytical results are then much more reliable for the different absorption effects from irregular specimen surfaces because the unknown absorption dependency is considered. Finally, the method is also applied for the evaluation of TEM spectra. In this case, the physical parameter being optimised is the sample thickness (mass thickness), which influences the emitted and measured spectrum through differing absorption in TEM measurements. The effects are in the very low energy part of the spectrum, and are much more visible with the most recent windowless TEM detectors. The thickness of the sample can be determined in this way from the measured bremsstrahlung spectrum shape.
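
    The following toy sketch conveys the idea of fitting a single geometry parameter to the background shape: a Kramers-type continuum attenuated along an absorption path that depends on an 'effective tilt' angle is fitted by least squares to a simulated background. The continuum model, absorption coefficient, and take-off geometry are deliberately crude placeholders and not the P/B-ZAF implementation described in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy continuum model: Kramers-type bremsstrahlung shape multiplied by a
# simple absorption factor whose path length depends on an "effective tilt".
# mu(E) is a crude power-law mass absorption coefficient -- a placeholder,
# not a real material database.
E0 = 20.0                        # beam energy, keV (illustrative)
E = np.linspace(1.0, 18.0, 200)  # background channels used for the fit, keV

def mu(E_keV):
    return 500.0 * E_keV ** -2.7    # hypothetical mass absorption coefficient

def background(E, scale, eff_tilt_deg, rho_t=2.0e-4):
    """Kramers continuum * absorption along a tilt-dependent escape path."""
    kramers = scale * (E0 - E) / E
    take_off = np.radians(35.0)                 # nominal detector take-off angle
    path = rho_t / np.sin(np.maximum(take_off - np.radians(eff_tilt_deg), 1e-3))
    return kramers * np.exp(-mu(E) * path)

# Synthetic "measured" background generated with a 10 degree effective tilt.
rng = np.random.default_rng(1)
measured = rng.poisson(background(E, scale=1000.0, eff_tilt_deg=10.0)).astype(float)

popt, _ = curve_fit(background, E, measured, p0=[800.0, 0.0])
print(f"fitted scale = {popt[0]:.1f}, fitted effective tilt = {popt[1]:.1f} deg")
```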

  16. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
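
    A toy first-order propagation of sensor sub-system errors into vertical uncertainty, as a function of scan angle and flying height over flat terrain, is sketched below; the individual error budget values are hypothetical and are not the ALTM 3100 figures reported in the study.

```python
import numpy as np

# Toy first-order error propagation for an airborne laser scanner over flat
# terrain. Vertical coordinate of a return: z = H - R*cos(theta), with slant
# range R, scan angle theta and flying height H. 1-sigma error sources
# (hypothetical values):
sigma_range = 0.02                # laser ranger noise [m]
sigma_angle = np.radians(0.005)   # combined IMU attitude + scanner angle [rad]
sigma_gps_z = 0.05                # GPS vertical positioning [m]

def vertical_sigma(altitude_m, scan_angle_deg):
    theta = np.radians(scan_angle_deg)
    R = altitude_m / np.cos(theta)          # slant range to flat ground
    # partial derivatives of z = H - R*cos(theta)
    dz_dR = -np.cos(theta)
    dz_dtheta = R * np.sin(theta)
    return np.sqrt((dz_dR * sigma_range) ** 2 +
                   (dz_dtheta * sigma_angle) ** 2 +
                   sigma_gps_z ** 2)

for alt in (1200.0, 2000.0):
    for ang in (0.0, 7.5, 15.0):
        print(f"H = {alt:6.0f} m, scan angle = {ang:4.1f} deg -> "
              f"sigma_z = {vertical_sigma(alt, ang) * 100:.1f} cm")
```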

  17. Physically-based modelling of high magnitude torrent events with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth

    2017-04-01

    High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work describes a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015), as sketched after the references below. The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values are then defined by expert elicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W. T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261.
 Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks, Journal of Hydrology 523: 739-757.
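
    As a generic illustration of the variance-based global sensitivity analysis step referred to above (using an invented three-parameter toy model, not the FLO-2D parameterisation), first-order Sobol indices can be estimated with a pick-freeze Monte Carlo scheme:

```python
import numpy as np

# Pick-freeze Monte Carlo estimate of first-order Sobol indices for a toy
# "hazard intensity" model y = f(roughness, inflow, sediment_conc).
# The functional form and parameter ranges are invented for illustration only.
rng = np.random.default_rng(42)
N = 50_000
names = ["roughness", "inflow", "sediment_conc"]
lo = np.array([0.02, 10.0, 0.05])
hi = np.array([0.15, 200.0, 0.45])

def model(x):
    rough, q, cv = x[:, 0], x[:, 1], x[:, 2]
    # toy intensity proxy: increases with inflow and sediment concentration,
    # decreases with channel roughness
    return (q ** 0.6) * (1.0 + 2.0 * cv) / (1.0 + 10.0 * rough)

A = lo + (hi - lo) * rng.random((N, 3))
B = lo + (hi - lo) * rng.random((N, 3))
yA = model(A)
var_y = yA.var()

for i, name in enumerate(names):
    ABi = B.copy()
    ABi[:, i] = A[:, i]          # "freeze" parameter i at the A values
    yABi = model(ABi)
    # first-order index: Cov(yA, yABi) / Var(y)
    S1 = np.cov(yA, yABi)[0, 1] / var_y
    print(f"S1[{name}] = {S1:.2f}")
```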

  18. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method; instead, the quality of the result is often assessed through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to extract uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with the ability to sample the local solution space, thanks to its embarrassingly parallel nature. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012
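
    For readers unfamiliar with the ETKF machinery referenced above, the sketch below applies a minimal ensemble transform Kalman filter analysis step (written in the ensemble-space form found in the cited literature) to a toy linear problem; the state size, observation operator, and noise levels are placeholders, not an FWI configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: state of size n observed linearly with Gaussian noise.
n, m, N = 50, 20, 30            # state dim, obs dim, ensemble size
H = rng.standard_normal((m, n)) / np.sqrt(n)   # linear observation operator
R = 0.1 * np.eye(m)                            # observation error covariance

x_true = rng.standard_normal(n)
y_obs = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Prior ensemble around a biased first guess.
X = x_true[:, None] + 1.0 + rng.standard_normal((n, N))

def etkf_update(X, y, H, R):
    """One ETKF analysis step; returns the analysis ensemble."""
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                        # state anomalies (n x N)
    Y = H @ X
    y_mean = Y.mean(axis=1, keepdims=True)
    Yp = Y - y_mean                        # predicted-observation anomalies (m x N)
    Rinv = np.linalg.inv(R)
    # ensemble-space analysis covariance: [(N-1) I + Yp^T R^-1 Yp]^(-1)
    Pa_tilde = np.linalg.inv((N - 1) * np.eye(N) + Yp.T @ Rinv @ Yp)
    w_mean = Pa_tilde @ Yp.T @ Rinv @ (y - y_mean.ravel())
    # symmetric square root for the transform of the anomalies
    evals, evecs = np.linalg.eigh((N - 1) * Pa_tilde)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean + Xp @ (w_mean[:, None] + W)

X_a = etkf_update(X, y_obs, H, R)
print("prior mean error:   ", np.linalg.norm(X.mean(axis=1) - x_true))
print("analysis mean error:", np.linalg.norm(X_a.mean(axis=1) - x_true))
```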

  19. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.

    2015-07-16

    We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity – involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both “parametric” and “structural” uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K. In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought – in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann

  20. An Information Theoretic Investigation Of Complex Adaptive Supply Networks With Organizational Topologies

    DTIC Science & Technology

    2016-12-22

    …assumptions of behavior. This research proposes an information theoretic methodology to discover such complex network structures and dynamics while overcoming the difficulties historically associated with their study. Indeed, this was the first application of an information theoretic methodology as a tool…

  1. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    ERIC Educational Resources Information Center

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…

  2. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because it gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract the information on each skin chromophore independently, as density information. The isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003. However, a quantification method for melanin pigmentation had not been developed from this technique. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LOG) filter was applied to extract the pigmented area. As a result, the method worked well for skin images including those showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to the subjective rating values for the appearance of pigmentation. Further analysis is needed to characterize the appearance of pigmentation with respect to the size of the pigmented area and its spatial gradation.
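
    A minimal sketch of this kind of pipeline is given below: FastICA applied to log-transformed RGB pixels, followed by a Laplacian of Gaussian filter and a threshold to report an extracted area fraction. The synthetic image, component count, and threshold are illustrative assumptions rather than the authors' calibrated procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.ndimage import gaussian_laplace

# Synthetic stand-in for a polarized skin photograph (H x W x RGB in [0, 1]).
rng = np.random.default_rng(0)
h, w = 128, 128
img = np.clip(0.7 + 0.05 * rng.standard_normal((h, w, 3)), 0.01, 1.0)
img[40:60, 40:60, :] *= [0.75, 0.85, 0.9]   # fake pigmented patch (darker, brownish)

# Chromophore separation is commonly done on optical density (-log RGB).
density = -np.log(img.reshape(-1, 3))

# Extract two independent components (nominally melanin- and hemoglobin-like).
ica = FastICA(n_components=2, random_state=0, whiten="unit-variance")
components = ica.fit_transform(density)        # (pixels, 2)
comp_imgs = components.reshape(h, w, 2)

# Pick the component that responds most inside the known patch (illustrative).
patch_response = [abs(comp_imgs[40:60, 40:60, k].mean() - comp_imgs[..., k].mean())
                  for k in range(2)]
melanin_like = comp_imgs[..., int(np.argmax(patch_response))]

# Enhance blob-like pigmented regions with a Laplacian of Gaussian filter,
# then threshold and report the extracted area fraction.
log_resp = gaussian_laplace(melanin_like, sigma=3)
mask = np.abs(log_resp) > 2.0 * np.abs(log_resp).std()
print(f"extracted pigmented area: {mask.mean() * 100:.1f}% of the image")
```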

  3. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    PubMed

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnoses. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated an average error of 9 percentage points in the quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimations of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Arterial Spin Labeling (ASL) fMRI: advantages, theoretical constrains, and experimental challenges in neurosciences.

    PubMed

    Borogovac, Ajna; Asllani, Iris

    2012-01-01

    Cerebral blood flow (CBF) is a well-established correlate of brain function and therefore an essential parameter for studying the brain in both normal and diseased states. Arterial spin labeling (ASL) is a noninvasive fMRI technique that uses arterial water as an endogenous tracer to measure CBF. ASL provides reliable absolute quantification of CBF with higher spatial and temporal resolution than other techniques. And yet, the routine application of ASL has been somewhat limited. In this review, we start by highlighting theoretical complexities and technical challenges of ASL fMRI for basic and clinical research. While underscoring the main advantages of ASL versus other techniques such as BOLD, we also expound on inherent challenges and confounds in ASL perfusion imaging. In closing, we discuss several exciting developments in the field that we believe will help ASL reach its full potential in neuroscience research.

  5. The Impact of Solid Surface Features on Fluid-Fluid Interface Configuration

    NASA Astrophysics Data System (ADS)

    Araujo, J. B.; Brusseau, M. L. L.

    2017-12-01

    Pore-scale fluid processes in geological media are critical for a broad range of applications such as radioactive waste disposal, carbon sequestration, soil moisture distribution, subsurface pollution, land stability, and oil and gas recovery. The continued improvement of high-resolution image acquisition and processing has provided a means to test the usefulness of theoretical models developed to simulate pore-scale fluid processes, through the direct quantification of interfaces. High-resolution synchrotron X-ray microtomography is used in combination with advanced visualization tools to characterize fluid distributions in natural geologic media. The studies revealed the presence of fluid-fluid interfaces associated with macroscopic features on the surfaces of the solids, such as pits and crevices. These features and the associated fluid interfaces, which are not included in current theoretical or computational models, may have a significant impact on the accurate simulation and understanding of multi-phase flow, energy, heat and mass transfer processes.

  6. An Analysis of Information Asset Valuation (IAV) Quantification Methodology for Application with Cyber Information Mission Impact Assessment (CIMIA)

    DTIC Science & Technology

    2008-03-01

  7. Toward a Quantification of the Information/Communication Industries. Publication No. 74-2.

    ERIC Educational Resources Information Center

    Lavey, Warren G.

    A national survey was made to collect data about the information/communication industries in the United States today. Eleven industries were studied: television, radio, telephone, telegraph, postal service, newspaper, periodical, book publishing and printing, motion pictures, computer software, and cable television. The data collection scheme used…

  8. One-dimensional barcode reading: an information theoretic approach

    NASA Astrophysics Data System (ADS)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-01

    In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
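
    To give a flavour of the average mutual information criterion mentioned above (not the authors' full optical model), the sketch below numerically computes the mutual information between an equiprobable binary bar/space symbol and its observation after additive Gaussian noise, a crude stand-in for blur and acquisition; the noise levels are arbitrary assumptions.

```python
import numpy as np

# Mutual information I(X;Y) for an equiprobable binary input X in {0, 1}
# observed through an additive white Gaussian noise channel Y = X + N(0, s^2).
def gaussian_pdf(y, mean, sigma):
    return np.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mutual_information(sigma, grid=np.linspace(-6.0, 7.0, 20001)):
    p0 = gaussian_pdf(grid, 0.0, sigma)      # p(y | x=0)
    p1 = gaussian_pdf(grid, 1.0, sigma)      # p(y | x=1)
    py = 0.5 * (p0 + p1)                     # marginal p(y)
    dy = grid[1] - grid[0]
    # I(X;Y) = h(Y) - h(Y|X), differential entropies in bits
    h_y = -np.sum(py * np.log2(py + 1e-300)) * dy
    h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)  # Gaussian entropy
    return h_y - h_y_given_x

for sigma in (0.1, 0.3, 0.5, 1.0):
    print(f"sigma = {sigma:.1f} -> I(X;Y) = {mutual_information(sigma):.3f} bits")
```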

  9. One-dimensional barcode reading: an information theoretic approach.

    PubMed

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-10

    In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.

  10. Quantifying construction and demolition waste: An analytical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin

    2014-09-15

    Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify the C and D waste generation at both regional and project levels. However, an integrated review that systematically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.

  11. Mass Spectrometric Quantification of N-Linked Glycans by Reference to Exogenous Standards.

    PubMed

    Mehta, Nickita; Porterfield, Mindy; Struwe, Weston B; Heiss, Christian; Azadi, Parastoo; Rudd, Pauline M; Tiemeyer, Michael; Aoki, Kazuhiro

    2016-09-02

    Environmental and metabolic processes shape the profile of glycoprotein glycans expressed by cells, whether in culture, developing tissues, or mature organisms. Quantitative characterization of glycomic changes associated with these conditions has been achieved historically by reductive coupling of oligosaccharides to various fluorophores following release from glycoprotein and subsequent HPLC or capillary electrophoretic separation. Such labeling-based approaches provide a robust means of quantifying glycan amount based on fluorescence yield. Mass spectrometry, on the other hand, has generally been limited to relative quantification in which the contribution of the signal intensity for an individual glycan is expressed as a percent of the signal intensity summed over the total profile. Relative quantification has been valuable for highlighting changes in glycan expression between samples; sensitivity is high, and structural information can be derived by fragmentation. We have investigated whether MS-based glycomics is amenable to absolute quantification by referencing signal intensities to well-characterized oligosaccharide standards. We report the qualification of a set of N-linked oligosaccharide standards by NMR, HPLC, and MS. We also demonstrate the dynamic range, sensitivity, and recovery from complex biological matrices for these standards in their permethylated form. Our results indicate that absolute quantification for MS-based glycomic analysis is reproducible and robust utilizing currently available glycan standards.

  12. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    PubMed Central

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs), quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve; therefore, similarity of PCR efficiency for the sample and the standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition, 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results; therefore, it was chosen as the primary criterion by which to evaluate the quality and performance of the different matrixes and extraction techniques. The effect of PCR efficiency on the resulting GMO content is demonstrated. Conclusion The crucial influence of extraction technique and sample matrix properties on the results of GMO quantification is demonstrated. Appropriate extraction techniques for each matrix need to be determined to achieve accurate DNA quantification. Nevertheless, as it is shown that in the area of food and feed testing it is impossible to define a matrix with certain specificities, strict quality controls need to be introduced to monitor PCR. The results of our study are also applicable to other fields of quantitative testing by real-time PCR. PMID:16907967
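
    As a small illustration of the efficiency criterion discussed above (with invented numbers, not data from the study), PCR amplification efficiency can be estimated from the slope of a standard curve of quantification cycle (Cq) against log10 template copy number, and unknowns can then be read off the same curve:

```python
import numpy as np

# Invented dilution series: known copy numbers and measured Cq values.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

# Standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Amplification efficiency from the slope (slope = -3.32 corresponds to 100%).
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, efficiency = {efficiency * 100:.0f}%")

# Quantify an unknown sample from its measured Cq using the standard curve.
cq_unknown = 27.5
log_copies = (cq_unknown - intercept) / slope
print(f"estimated copies in unknown: {10 ** log_copies:.2e}")
```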

  13. A phase quantification method based on EBSD data for a continuously cooled microalloyed steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, H.; Wynne, B.P.; Palmiere, E.J., E-mail: e.j

    2017-01-15

    Mechanical properties of steels depend on the phase constitutions of the final microstructures, which can be related to the processing parameters. Therefore, accurate quantification of different phases is necessary to investigate the relationships between processing parameters, final microstructures and mechanical properties. Point counting on micrographs observed by optical or scanning electron microscopy is widely used as a phase quantification method, and different phases are discriminated according to their morphological characteristics. However, it is difficult to differentiate some of the phase constituents with similar morphology. In contrast, for EBSD based phase quantification methods, besides morphological characteristics, other parameters derived from the orientation information can also be used for discrimination. In this research, a phase quantification method based on EBSD data in the unit of grains was proposed to identify and quantify the complex phase constitutions of a microalloyed steel subjected to accelerated cooling. Characteristics of polygonal ferrite/quasi-polygonal ferrite, acicular ferrite and bainitic ferrite in terms of grain averaged misorientation angles, aspect ratios, high angle grain boundary fractions and grain sizes were analysed and used to develop the identification criteria for each phase. Comparing the results obtained by this EBSD based method and point counting, it was found that this EBSD based method can provide accurate and reliable phase quantification results for microstructures with relatively slow cooling rates. - Highlights: •A phase quantification method based on EBSD data in the unit of grains was proposed. •The critical grain area above which GAM angles are valid parameters was obtained. •Grain size and grain boundary misorientation were used to identify acicular ferrite. •High cooling rates deteriorate the accuracy of this EBSD based method.
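
    The sketch below illustrates the general shape of such a grain-by-grain, rule-based phase labelling followed by area-weighted quantification; the threshold values are invented placeholders, not the calibrated criteria developed in the paper.

```python
import numpy as np

# Toy rule-based phase labelling of grains from EBSD-derived grain metrics.
# Threshold values below are invented placeholders, not the calibrated
# criteria developed in the paper.
def classify_grain(gam_deg, aspect_ratio, hagb_fraction, size_um):
    """Assign a ferrite type from grain-averaged misorientation (GAM),
    aspect ratio, high-angle grain boundary fraction and grain size."""
    if gam_deg < 0.6 and hagb_fraction > 0.8:
        return "polygonal/quasi-polygonal ferrite"   # low internal misorientation
    if gam_deg >= 0.6 and aspect_ratio > 2.5 and size_um < 10.0:
        return "acicular ferrite"                    # fine, elongated, substructured
    if gam_deg >= 0.6 and hagb_fraction < 0.5:
        return "bainitic ferrite"                    # packets bounded by low-angle boundaries
    return "unclassified"

grains = [
    # (GAM [deg], aspect ratio, HAGB fraction, equivalent diameter [um])
    (0.3, 1.4, 0.9, 18.0),
    (1.1, 3.2, 0.7,  6.0),
    (1.4, 1.8, 0.3, 25.0),
]
labels = [classify_grain(*g) for g in grains]
areas = np.array([np.pi * (d / 2) ** 2 for *_, d in grains])

# Area-weighted phase fractions (the quantification step).
for phase in set(labels):
    frac = areas[[l == phase for l in labels]].sum() / areas.sum()
    print(f"{phase}: {frac * 100:.1f}% by area")
```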

  14. Quantification of biogenic volatile organic compounds with a flame ionization detector using the effective carbon number concept

    DOE PAGES

    Faiola, C. L.; Erickson, M. H.; Fricaud, V. L.; ...

    2012-08-10

    Biogenic volatile organic compounds (BVOCs) are emitted into the atmosphere by plants and include isoprene, monoterpenes, sesquiterpenes, and their oxygenated derivatives. These BVOCs are among the principal factors influencing the oxidative capacity of the atmosphere in forested regions. BVOC emission rates are often measured by collecting samples onto adsorptive cartridges in the field and then transporting these samples to the laboratory for chromatographic analysis. One of the most commonly used detectors in chromatographic analysis is the flame ionization detector (FID). For quantitative analysis with an FID, relative response factors may be estimated using the effective carbon number (ECN) concept. The purpose of this study was to determine the ECN for a variety of terpenoid compounds to enable improved quantification of BVOC measurements. A dynamic dilution system was developed to make quantitative gas standards of VOCs with mixing ratios from 20–55 ppb. For each experiment using this system, one terpene standard was co-injected with an internal reference, n-octane, and analyzed via an automated cryofocusing system interfaced to a gas chromatograph flame ionization detector and mass spectrometer (GC/MS/FID). The ECNs of 16 compounds (14 BVOCs) were evaluated with this approach, with each test compound analyzed at least three times. The difference between the actual carbon number and measured ECN ranged from -24% to -2%. Furthermore, the difference between theoretical ECN and measured ECN ranged from -22% to 9%. Measured ECN values were within 10% of theoretical ECN values for most terpenoid compounds.
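
    A minimal sketch of ECN-based quantification against a co-injected internal standard is given below; the peak areas and ECN values are illustrative, not measured data from the study.

```python
# Quantification with an FID using the effective carbon number (ECN) concept:
# the response of a compound is assumed proportional to its ECN, so an unknown
# mixing ratio can be referenced to a co-injected internal standard.
# The peak areas and ECN values below are illustrative, not measured data.

def mixing_ratio(area_analyte, ecn_analyte, area_std, ecn_std, ppb_std):
    """Mixing ratio of the analyte referenced to an internal standard."""
    response_per_carbon_std = area_std / (ecn_std * ppb_std)
    return area_analyte / (ecn_analyte * response_per_carbon_std)

# Example: n-octane internal standard (ECN = 8) at 40 ppb, and a terpene with
# a theoretical ECN close to 10, here taken as 9.5 purely for illustration.
area_octane, area_terpene = 1.00e6, 1.85e6
print(f"terpene ~= {mixing_ratio(area_terpene, 9.5, area_octane, 8.0, 40.0):.1f} ppb")
```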

  15. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    ERIC Educational Resources Information Center

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  16. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This paper presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during cyclic fatigue loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
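
    A minimal sketch of the three signal features named above, computed between a baseline and a current record, is given below; the synthetic toneburst signals and the linear fusion weights are invented for illustration and are not the calibration developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic baseline and "damaged" Lamb-wave records: a Gaussian-windowed
# toneburst echo whose amplitude and arrival shift slightly as the crack grows.
t = np.linspace(0, 100e-6, 2000)
f0 = 200e3
def toneburst(amp, delay):
    window = np.exp(-((t - delay) / 8e-6) ** 2)
    return amp * window * np.sin(2 * np.pi * f0 * (t - delay))

baseline = toneburst(1.00, 40e-6) + 0.01 * rng.standard_normal(t.size)
current  = toneburst(0.85, 41e-6) + 0.01 * rng.standard_normal(t.size)

# Feature 1: correlation coefficient between baseline and current signals.
rho = np.corrcoef(baseline, current)[0, 1]

# Feature 2: relative amplitude change of the wave packet.
amp_change = (np.abs(current).max() - np.abs(baseline).max()) / np.abs(baseline).max()

# Feature 3: phase change at the excitation frequency (from the spectrum).
def phase_at(sig, freq):
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(t.size, t[1] - t[0])
    return np.angle(spec[np.argmin(np.abs(f - freq))])

phase_change = phase_at(current, f0) - phase_at(baseline, f0)

# Hypothetical linear fusion into a crack-size indicator (weights invented).
crack_mm = 5.0 * (1 - rho) + 8.0 * abs(amp_change) + 0.5 * abs(phase_change)
print(f"rho = {rho:.3f}, dA/A = {amp_change:+.2f}, dphi = {phase_change:+.2f} rad")
print(f"fused crack-size indicator ~ {crack_mm:.2f} mm (illustrative)")
```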

  17. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
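
    A bare-bones random-walk Metropolis-Hastings sampler for a toy two-parameter flow model (log-normal daily flows with unknown log-mean and log-scale) is sketched below; the synthetic data, priors, and proposal width are assumptions for illustration, not the Zhujiachuan setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "daily flow" data: log-normal with true log-mean 2.0, log-sd 0.5.
data = rng.lognormal(mean=2.0, sigma=0.5, size=365)

def log_posterior(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # log-normal likelihood + weak normal priors on mu and log_sigma
    loglik = np.sum(-np.log(data * sigma * np.sqrt(2 * np.pi))
                    - (np.log(data) - mu) ** 2 / (2 * sigma ** 2))
    logprior = -mu ** 2 / (2 * 10 ** 2) - log_sigma ** 2 / (2 * 10 ** 2)
    return loglik + logprior

# Random-walk Metropolis-Hastings.
n_iter, step = 20_000, 0.05
theta = np.array([0.0, 0.0])
logp = log_posterior(theta)
samples = np.empty((n_iter, 2))
accepted = 0
for i in range(n_iter):
    proposal = theta + step * rng.standard_normal(2)
    logp_prop = log_posterior(proposal)
    if np.log(rng.random()) < logp_prop - logp:   # symmetric proposal
        theta, logp = proposal, logp_prop
        accepted += 1
    samples[i] = theta

burn = samples[5000:]
mu_s, sigma_s = burn[:, 0], np.exp(burn[:, 1])
print(f"acceptance rate: {accepted / n_iter:.2f}")
print(f"posterior mean mu = {mu_s.mean():.2f}, sigma = {sigma_s.mean():.2f}")
print("95% credible interval for mu:", np.percentile(mu_s, [2.5, 97.5]))
```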

  18. The Prediction-Focused Approach: An opportunity for hydrogeophysical data integration and interpretation

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2017-04-01

    Hydrogeophysics is an interdisciplinary field of sciences aiming at a better understanding of subsurface hydrological processes. While geophysical surveys have been successfully used to qualitatively characterize the subsurface, two important challenges remain for a better quantification of hydrological processes: (1) the inversion of geophysical data and (2) their integration in hydrological subsurface models. The classical inversion approach using regularization suffers from spatially and temporally varying resolution and yields geologically unrealistic solutions without uncertainty quantification, making their utilization for hydrogeological calibration less consistent. More advanced techniques such as coupled inversion allow for a direct use of geophysical data for conditioning groundwater and solute transport model calibration. However, the technique is difficult to apply in complex cases, and estimating uncertainty with it remains computationally demanding. In a recent study, we investigate a prediction-focused approach (PFA) to directly estimate subsurface physical properties from geophysical data, circumventing the need for classic inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques. PFA offers a framework for both hydrogeophysical "inversion" and hydrogeophysical data integration. For hydrogeophysical "inversion", the considered forecast variable is the subsurface variable, such as the salinity. An ensemble of possible solutions is generated, allowing uncertainty quantification. For hydrogeophysical data integration, the forecast variable becomes the prediction we want to make with our subsurface models, such as the concentration of a contaminant in a drinking water production well. Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the process for the design of an aquifer thermal energy storage (ATES) system. An ATES system can theoretically recover in winter the heat stored in the aquifer during summer. In practice, the energy efficiency is often lower than expected due to spatial heterogeneity of hydraulic properties combined with an unfavorable hydrogeological gradient. A proper design of ATES systems should consider the uncertainty of the prediction related to those parameters. With a global sensitivity analysis, we identify sensitive parameters for heat storage prediction and validate the use of a short-term heat tracing experiment monitored with geophysics to generate informative data. First, we illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict medium-term heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned on the observed data, within a relatively limited time budget.
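
    To make the idea of a direct data-to-forecast relationship concrete, the sketch below simulates a prior ensemble from an invented toy subsurface model, reduces the synthetic "geophysical data" with PCA, regresses the forecast on the reduced data, and propagates an observed datum to a forecast distribution; the toy model, noise levels, and residual-based sampling are simplifying assumptions, not the PFA implementation of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Prior ensemble of subsurface scenarios, each parameterised by two hidden
# properties (e.g. mean hydraulic conductivity and gradient) -- invented toy.
n_prior = 500
props = rng.uniform([1e-4, 0.001], [1e-2, 0.01], size=(n_prior, 2))

def simulate_data(p):       # stand-in "geophysical" time series (50 samples)
    k, grad = p
    t = np.linspace(0, 1, 50)
    return np.exp(-t * 50 * k / 1e-3) + 5 * grad * t + 0.02 * rng.standard_normal(50)

def simulate_forecast(p):   # stand-in prediction, e.g. recovered heat fraction
    k, grad = p
    return np.array([0.8 * np.exp(-200 * grad) * (1 - np.exp(-k / 5e-3))])

D = np.array([simulate_data(p) for p in props])       # (n_prior, 50)
F = np.array([simulate_forecast(p) for p in props])   # (n_prior, 1)

# Dimension reduction of the data, then a direct data -> forecast regression.
pca = PCA(n_components=5).fit(D)
reg = LinearRegression().fit(pca.transform(D), F)

# "Observed" data from a reference scenario; sample the prediction by adding
# the regression residual spread (a crude stand-in for posterior sampling).
d_obs = simulate_data(np.array([3e-3, 0.004]))
resid = F - reg.predict(pca.transform(D))
f_samples = reg.predict(pca.transform(d_obs[None, :])) + \
            resid[rng.integers(0, n_prior, 200)]
print(f"forecast: mean = {f_samples.mean():.3f}, "
      f"90% interval = {np.percentile(f_samples, [5, 95]).round(3)}")
```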

  19. A new statistical framework to assess structural alignment quality using information compression

    PubMed Central

    Collier, James H.; Allison, Lloyd; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.

    2014-01-01

    Motivation: Progress in protein biology depends on the reliability of results from a handful of computational techniques, structural alignments being one. Recent reviews have highlighted substantial inconsistencies and differences between alignment results generated by the ever-growing stock of structural alignment programs. The lack of consensus on how the quality of structural alignments must be assessed has been identified as the main cause for the observed differences. Current methods assess structural alignment quality by constructing a scoring function that attempts to balance conflicting criteria, mainly alignment coverage and fidelity of structures under superposition. This traditional approach to measuring alignment quality, the subject of considerable literature, has failed to solve the problem. Further development along the same lines is unlikely to rectify the current deficiencies in the field. Results: This paper proposes a new statistical framework to assess structural alignment quality and significance based on lossless information compression. This is a radical departure from the traditional approach of formulating scoring functions. It links the structural alignment problem to the general class of statistical inductive inference problems, solved using the information-theoretic criterion of minimum message length. Based on this, we developed an efficient and reliable measure of structural alignment quality, I-value. The performance of I-value is demonstrated in comparison with a number of popular scoring functions, on a large collection of competing alignments. Our analysis shows that I-value provides a rigorous and reliable quantification of structural alignment quality, addressing a major gap in the field. Availability: http://lcb.infotech.monash.edu.au/I-value Contact: arun.konagurthu@monash.edu Supplementary information: Online supplementary data are available at http://lcb.infotech.monash.edu.au/I-value/suppl.html PMID:25161241

  20. RNA-Skim: a rapid method for RNA-Seq quantification at transcript level

    PubMed Central

    Zhang, Zhaojun; Wang, Wei

    2014-01-01

    Motivation: RNA-Seq technique has been demonstrated as a revolutionary means for exploring transcriptome because it provides deep coverage and base pair-level resolution. RNA-Seq quantification is proven to be an efficient alternative to Microarray technique in gene expression study, and it is a critical component in RNA-Seq differential expression analysis. Most existing RNA-Seq quantification tools require the alignments of fragments to either a genome or a transcriptome, entailing a time-consuming and intricate alignment step. To improve the performance of RNA-Seq quantification, an alignment-free method, Sailfish, has been recently proposed to quantify transcript abundances using all k-mers in the transcriptome, demonstrating the feasibility of designing an efficient alignment-free method for transcriptome quantification. Even though Sailfish is substantially faster than alternative alignment-dependent methods such as Cufflinks, using all k-mers in the transcriptome quantification impedes the scalability of the method. Results: We propose a novel RNA-Seq quantification method, RNA-Skim, which partitions the transcriptome into disjoint transcript clusters based on sequence similarity, and introduces the notion of sig-mers, which are a special type of k-mers uniquely associated with each cluster. We demonstrate that the sig-mer counts within a cluster are sufficient for estimating transcript abundances with accuracy comparable with any state-of-the-art method. This enables RNA-Skim to perform transcript quantification on each cluster independently, reducing a complex optimization problem into smaller optimization tasks that can be run in parallel. As a result, RNA-Skim uses <4% of the k-mers and <10% of the CPU time required by Sailfish. It is able to finish transcriptome quantification in <10 min per sample by using just a single thread on a commodity computer, which represents >100 speedup over the state-of-the-art alignment-based methods, while delivering comparable or higher accuracy. Availability and implementation: The software is available at http://www.csbio.unc.edu/rs. Contact: weiwang@cs.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931995
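
    To illustrate the sig-mer idea described above (a toy reimplementation, not the RNA-Skim code), the sketch below partitions a few invented transcripts into clusters and keeps only the k-mers that occur in exactly one cluster:

```python
from collections import defaultdict

# Toy transcriptome: clusters of sequence-similar transcripts (invented).
clusters = {
    "cluster1": ["ACGTACGTGGA", "ACGTACGTGGC"],
    "cluster2": ["TTGCAATCGGA", "TTGCAATCGGT"],
}
k = 5

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Map every k-mer to the set of clusters it occurs in.
kmer_clusters = defaultdict(set)
for cname, transcripts in clusters.items():
    for t in transcripts:
        for km in kmers(t, k):
            kmer_clusters[km].add(cname)

# Sig-mers: k-mers unique to a single cluster.
sig_mers = {cname: set() for cname in clusters}
for km, owners in kmer_clusters.items():
    if len(owners) == 1:
        sig_mers[next(iter(owners))].add(km)

for cname, sms in sig_mers.items():
    print(f"{cname}: {len(sms)} sig-mers, e.g. {sorted(sms)[:3]}")

# Counting occurrences of these sig-mers in sequencing reads (per cluster)
# is then enough to drive an abundance-estimation step within each cluster.
```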

  1. Teaching and Learning Information Technology Process: From a 25 Year Perspective--Math Regents

    ERIC Educational Resources Information Center

    Lewis Sanchez, Louise

    2007-01-01

    This paper will describe the Teaching and Learning Informational Technology Process (TLITP). Before present day strategies, teaching and learning relied on transformations based on quantification to measure performance. The process will be a non-linear three construct of teacher, student and community. Emphasizing old practices now is the…

  2. Surveillance in the Information Age: Text Quantification, Anomaly Detection, and Empirical Evaluation

    ERIC Educational Resources Information Center

    Lu, Hsin-Min

    2010-01-01

    Deep penetration of personal computers, data communication networks, and the Internet has created a massive platform for data collection, dissemination, storage, and retrieval. Large amounts of textual data are now available at a very low cost. Valuable information, such as consumer preferences, new product developments, trends, and opportunities,…

  3. Rapid quantification of live/dead lactic acid bacteria in probiotic products using high-sensitivity flow cytometry

    NASA Astrophysics Data System (ADS)

    He, Shengbin; Hong, Xinyi; Huang, Tianxun; Zhang, Wenqiang; Zhou, Yingxing; Wu, Lina; Yan, Xiaomei

    2017-06-01

    A laboratory-built high-sensitivity flow cytometer (HSFCM) was employed for the rapid and accurate detection of lactic acid bacteria (LAB) and their viability in probiotic products. LAB were stained with both the cell membrane-permeable SYTO 9 green-fluorescent nucleic acid stain and the red-fluorescent nucleic acid stain, propidium iodide, which penetrates only bacteria with compromised membranes. The side scatter and dual-color fluorescence signals of single bacteria were detected simultaneously by the HSFCM. Ultra-high-temperature (UHT) processed milk and skim milk spiked with Lactobacillus casei were used as the model systems for the optimization of sample pretreatment and staining. The viable LAB counts measured by the HSFCM were in good agreement with those of the plate count method, and the measured ratios between the live and dead LAB matched well with the theoretical ratios. The established method was successfully applied to the rapid quantification of live/dead LAB in yogurts and fermented milk beverages of different brands. Moreover, the concentration and viability status of LAB in ambient yogurt, a relatively new yet popular milk product in China, are also reported.

  4. FIB preparation of a NiO Wedge-Lamella and STEM X-ray microanalysis for the determination of the experimental k(O-Ni) Cliff-Lorimer coefficient.

    PubMed

    Armigliato, Aldo; Frabboni, Stefano; Gazzadi, Gian Carlo; Rosa, Rodolfo

    2013-02-01

    A method for the fabrication of a wedge-shaped thin NiO lamella by focused ion beam is reported. The starting sample is an oxidized bulk single-crystalline, <100>-oriented commercial Ni standard. The lamella is employed for the determination, by analytical electron microscopy at 200 kV, of the experimental k(O-Ni) Cliff-Lorimer (G. Cliff & G.W. Lorimer, J Microsc 103, 203-207, 1975) coefficient, according to the extrapolation method by Van Cappellen (E. Van Cappellen, Microsc Microstruct Microanal 1, 1-22, 1990). The result thus obtained is compared to the theoretical k(O-Ni) values either implemented into the commercial software for X-ray microanalysis quantification of the scanning transmission electron microscopy/energy dispersive spectrometry equipment or calculated by the Monte Carlo method. Significant differences among the three values are found. This confirms that for a reliable quantification of binary alloys containing light elements, the choice of the Cliff-Lorimer coefficients is crucial and experimental values are recommended.
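
    For context, the Cliff-Lorimer relation links measured intensities to composition in the thin-film approximation; the tiny sketch below applies it to an invented intensity ratio and k-factor (not values from the paper):

```python
# Cliff-Lorimer thin-film quantification for a binary system A-B:
#   C_A / C_B = k_AB * (I_A / I_B),  with C_A + C_B = 1.
# The intensity ratio and k-factor below are invented placeholders.
def cliff_lorimer_binary(i_a, i_b, k_ab):
    ratio = k_ab * i_a / i_b          # C_A / C_B
    c_a = ratio / (1.0 + ratio)
    return c_a, 1.0 - c_a

# e.g. O and Ni peak intensities from a spectrum, with a hypothetical k(O-Ni):
c_o, c_ni = cliff_lorimer_binary(i_a=3.2e4, i_b=1.1e5, k_ab=1.8)
print(f"C_O = {c_o:.3f}, C_Ni = {c_ni:.3f} (mass fractions)")
```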

  5. Microstructural Effects on Initiation Behavior in HMX

    NASA Astrophysics Data System (ADS)

    Molek, Christopher; Welle, Eric; Hardin, Barrett; Vitarelli, Jim; Wixom, Ryan; Samuels, Philip

    Understanding the role microstructure plays in ignition and growth behavior has been the subject of a significant body of research within the detonation physics community. The pursuit of this understanding is important because safety and performance characteristics have been shown to correlate strongly with particle morphology. Historical studies have often correlated bulk powder characteristics to the performance or safety characteristics of pressed materials. We believe that a clearer and more relevant correlation is made between the pressed microstructure and the observed detonation behavior. This type of assessment is possible, as techniques now exist for the quantification of the pressed microstructures. Our talk will report on experimental efforts that correlate directly measured microstructural characteristics with the initiation threshold behavior of HMX based materials. The internal microstructures were revealed using an argon ion cross-sectioning technique. This technique enabled the quantification of density and interface area of the pores within the pressed bed using methods of stereology. These bed characteristics are compared to the initiation threshold behavior of three HMX based materials using an electric gun based test method. Finally, a comparison of experimental threshold data to supporting theoretical efforts will be made.

  6. What heat is telling us about microbial conversions in nature and technology: from chip‐ to megacalorimetry

    PubMed Central

    Maskow, Thomas; Kemp, Richard; Buchholz, Friederike; Schubert, Torsten; Kiesel, Baerbel; Harms, Hauke

    2010-01-01

    Summary The exploitation of microorganisms in natural or technological systems calls for monitoring tools that reflect their metabolic activity in real time and, if necessary, are flexible enough for field application. The dissipation of the Gibbs energy of assimilated substrates or photons, often in the form of heat, is a general feature of life processes and is thus, in principle, available for monitoring and controlling microbial dynamics. Furthermore, the combination of measured heat fluxes with material fluxes allows the application of Hess' law either to verify expected growth stoichiometries and kinetics or to identify and estimate unexpected side reactions. The combination of calorimetry with respirometry is theoretically suited for the quantification of the degree of coupling between catabolic and anabolic reactions. New calorimeter developments overcome the weaknesses of conventional devices, which hitherto limited the full exploitation of this powerful analytical tool. Calorimetric systems can be integrated easily into natural and technological systems of interest. They are potentially suited for high-throughput measurements and are robust enough for field deployment. This review explains what information calorimetric analyses provide; it introduces newly emerging calorimetric techniques and it exemplifies the application of calorimetry in different fields of microbial research. PMID:21255327

  7. Quantum steering: a review with focus on semidefinite programming.

    PubMed

    Cavalcanti, D; Skrzypczyk, P

    2017-02-01

    Quantum steering refers to the non-classical correlations that can be observed between the outcomes of measurements applied on half of an entangled state and the resulting post-measured states that are left with the other party. From an operational point of view, a steering test can be seen as an entanglement test where one of the parties performs uncharacterised measurements. Thus, quantum steering is a form of quantum inseparability that lies in between the well-known notions of Bell nonlocality and entanglement. Moreover, quantum steering is also related to several asymmetric quantum information protocols where some of the parties are considered untrusted. Because of these facts, quantum steering has received a lot of attention both theoretically and experimentally. The main goal of this review is to give an overview of how to characterise quantum steering through semidefinite programming. This characterisation provides efficient numerical methods to address a number of problems, including steering detection, quantification, and applications. We also give a brief overview of some important results that are not directly related to semidefinite programming. Finally, we make available a collection of semidefinite programming codes that can be used to study the topics discussed in this article.

  8. Biomechanical approaches to identify and quantify injury mechanisms and risk factors in women's artistic gymnastics.

    PubMed

    Bradshaw, Elizabeth J; Hume, Patria A

    2012-09-01

    Targeted injury prevention strategies, based on biomechanical analyses, have the potential to help reduce the incidence and severity of gymnastics injuries. This review outlines the potential benefits of biomechanics research to contribute to injury prevention strategies for women's artistic gymnastics by identification of mechanisms of injury and quantification of the effects of injury risk factors. One hundred and twenty-three articles were retained for review after searching electronic databases using key words, including 'gymnastic', 'biomech*', and 'inj*', and delimiting by language and relevance to the paper aim. Impact load can be measured biomechanically by the use of instrumented equipment (e.g. beatboard), instrumentation on the gymnast (accelerometers), or by landings on force plates. We need further information on injury mechanisms and risk factors in gymnastics and practical methods of monitoring training loads. We have not yet shown, beyond a theoretical approach, how biomechanical analysis of gymnastics can help reduce injury risk through injury prevention interventions. Given the high magnitude of impact load, both acute and accumulative, coaches should monitor impact loads per training session, taking into consideration training quality and quantity such as the control of rotation and the height from which the landings are executed.

  9. Quantification of pathogen inactivation efficacy by free chlorine disinfection of drinking water for QMRA.

    PubMed

    Petterson, S R; Stenström, T A

    2015-09-01

    To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
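
    A minimal numerical sketch of the central point, that variability in residence time can dominate the effective Log10 reduction, is given below. It assumes Chick-Watson (Ct) inactivation kinetics and an illustrative gamma-distributed residence time; the rate constant, chlorine residual and distribution parameters are assumptions for demonstration, not values from the paper.

      # Hedged sketch: short-circuiting parcels of water limit the effective
      # Log10 reduction (LR) of a chlorine contactor. All parameters are assumed.
      import numpy as np

      rng = np.random.default_rng(1)
      C = 0.5              # free chlorine residual, mg/L (assumed)
      k = 0.1              # pathogen-specific rate, log10 per (mg*min/L) (assumed)
      mean_t, shape = 30.0, 4.0                            # assumed gamma residence time
      t = rng.gamma(shape, mean_t / shape, size=100_000)   # minutes

      log10_reduction_per_parcel = k * C * t               # Chick-Watson, Ct concept
      survival_per_parcel = 10.0 ** (-log10_reduction_per_parcel)

      lr_at_mean_time = k * C * mean_t                          # naive point estimate
      lr_flow_weighted = -np.log10(survival_per_parcel.mean())  # accounts for variability
      print(f"LR assuming a single mean residence time: {lr_at_mean_time:.2f}")
      print(f"LR accounting for residence-time variability: {lr_flow_weighted:.2f}")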

  10. Automated inference procedure for the determination of cell growth parameters

    NASA Astrophysics Data System (ADS)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
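
    As a toy illustration of the general idea, rather than the authors' full inference procedure, the sketch below evaluates a brute-force Bayesian posterior over the growth rate and carrying capacity of a logistic model, assuming Gaussian counting noise with a known scale. The data, grid priors and noise model are synthetic assumptions.

      # Minimal sketch: posterior over (r, K) of a logistic growth model on a grid.
      import numpy as np

      def logistic(t, r, K, n0):
          return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

      rng = np.random.default_rng(0)
      t = np.linspace(0, 48, 13)                       # hours
      true_r, true_K, n0, sigma = 0.25, 1.0e6, 1.0e4, 4.0e4
      counts = logistic(t, true_r, true_K, n0) + rng.normal(0, sigma, t.size)

      r_grid = np.linspace(0.1, 0.5, 200)
      K_grid = np.linspace(5e5, 2e6, 200)
      R, K = np.meshgrid(r_grid, K_grid, indexing="ij")

      # Log-likelihood on the grid (flat priors over the grid ranges).
      pred = logistic(t[None, None, :], R[..., None], K[..., None], n0)
      loglik = -0.5 * np.sum(((counts - pred) / sigma) ** 2, axis=-1)
      post = np.exp(loglik - loglik.max())
      post /= post.sum()

      r_mean = np.sum(post * R)
      K_mean = np.sum(post * K)
      print(f"posterior mean r = {r_mean:.3f} /h, K = {K_mean:.3e} cells")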

  11. Quantification of Soil Pore Structure Based on Minkowski-Functions

    NASA Astrophysics Data System (ADS)

    Vogel, H.; Weller, U.; Schlüter, S.

    2009-05-01

    The porous structure in soils and other geologic media is typically a complex 3-dimensional object. Most of the physical material properties, including mechanical and hydraulic characteristics, are intimately linked to this structure, which can be directly observed using non-invasive techniques such as X-ray tomography. It is an old dream and still a formidable challenge to relate structural features of porous media to their physical properties. In this contribution we present a scale-invariant concept to quantify pore structure based on a limited set of meaningful morphological functions. These are derived from the d+1 Minkowski functionals defined for d-dimensional bodies. The basic quantities are determined as a function of pore size obtained by filter procedures using mathematical morphology. The resulting Minkowski functions provide valuable information on pore size, pore surface area and pore topology, having the potential to be linked to physical properties. The theoretical background and the related algorithms are presented, and the approach is demonstrated for the structure of an arable topsoil obtained by X-ray micro-tomography. We also discuss the fundamental problem of limited resolution, which is critical for any attempt to quantify structural features at any scale.
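
    A minimal sketch of how such Minkowski functions can be evaluated on a segmented 3D pore image is given below. It computes three of the four 3D Minkowski functionals (volume fraction, surface area and Euler characteristic; the integral of mean curvature is omitted for brevity) as a function of pore size via morphological opening. The use of scikit-image/SciPy, the connectivity convention and the toy pore space are illustrative assumptions; conventions differ between implementations.

      # Hedged sketch: Minkowski-type measures of a 3D pore space vs. opening radius.
      import numpy as np
      from scipy import ndimage
      from skimage import measure, morphology

      rng = np.random.default_rng(2)
      pore = ndimage.gaussian_filter(rng.random((64, 64, 64)), 2) > 0.5  # toy pore space

      for radius in (0, 1, 2, 3):          # "pore size" via opening radius (voxels)
          opened = pore if radius == 0 else morphology.binary_opening(pore, morphology.ball(radius))
          porosity = opened.mean()
          euler = measure.euler_number(opened, connectivity=3)
          if opened.any():
              verts, faces, _, _ = measure.marching_cubes(opened.astype(float), level=0.5)
              area = measure.mesh_surface_area(verts, faces)   # surface area in voxel units
          else:
              area = 0.0
          print(f"r={radius}: porosity={porosity:.3f}, surface={area:.0f}, Euler={euler}")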

  12. Air Emissions Factors and Quantification

    EPA Pesticide Factsheets

    Emissions factors are used in developing air emissions inventories for air quality management decisions and in developing emissions control strategies. This area provides technical information on and support for the use of emissions factors.

  13. Monitoring and evaluating the quality consistency of Compound Bismuth Aluminate tablets by a simple quantified ratio fingerprint method combined with simultaneous determination of five compounds and correlated with antioxidant activities.

    PubMed

    Liu, Yingchun; Liu, Zhongbo; Sun, Guoxiang; Wang, Yan; Ling, Junhong; Gao, Jiayue; Huang, Jiahao

    2015-01-01

    A combination method of multi-wavelength fingerprinting and multi-component quantification by high performance liquid chromatography (HPLC) coupled with diode array detector (DAD) was developed and validated to monitor and evaluate the quality consistency of herbal medicines (HM) in the classical preparation Compound Bismuth Aluminate tablets (CBAT). The validation results demonstrated that our method met the requirements of fingerprint analysis and quantification analysis with suitable linearity, precision, accuracy, limits of detection (LOD) and limits of quantification (LOQ). In the fingerprint assessments, rather than using conventional qualitative "Similarity" as a criterion, the simple quantified ratio fingerprint method (SQRFM) was recommended, which has an important quantified fingerprint advantage over the "Similarity" approach. SQRFM qualitatively and quantitatively offers the scientific criteria for traditional Chinese medicines (TCM)/HM quality pyramid and warning gate in terms of three parameters. In order to combine the comprehensive characterization of multi-wavelength fingerprints, an integrated fingerprint assessment strategy based on information entropy was set up involving a super-information characteristic digitized parameter of fingerprints, which reveals the total entropy value and absolute information amount about the fingerprints and, thus, offers an excellent method for fingerprint integration. The correlation results between quantified fingerprints and quantitative determination of 5 marker compounds, including glycyrrhizic acid (GLY), liquiritin (LQ), isoliquiritigenin (ILG), isoliquiritin (ILQ) and isoliquiritin apioside (ILA), indicated that multi-component quantification could be replaced by quantified fingerprints. The Fenton reaction was employed to determine the antioxidant activities of CBAT samples in vitro, and they were correlated with HPLC fingerprint components using the partial least squares regression (PLSR) method. In summary, the method of multi-wavelength fingerprints combined with antioxidant activities has been proved to be a feasible and scientific procedure for monitoring and evaluating the quality consistency of CBAT.
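
    The entropy-based integration of fingerprints mentioned above can be illustrated, in a generic way that does not reproduce the paper's exact formula, by treating normalised peak areas as a probability distribution and computing their Shannon information entropy; the peak areas below are assumed values.

      # Illustrative sketch only: Shannon entropy of a chromatographic fingerprint.
      import numpy as np

      peak_areas = np.array([12.1, 8.4, 35.0, 3.2, 20.7, 5.5, 15.1])  # assumed areas
      p = peak_areas / peak_areas.sum()
      entropy = -np.sum(p * np.log(p))      # nats; use np.log2 for bits
      print(f"fingerprint entropy: {entropy:.3f} nats")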

  14. Information Needs and Information Competencies: A Case Study of the Off-Site Supervision of Financial Institutions in Brazil

    ERIC Educational Resources Information Center

    Miranda, Silvania V.; Tarapanoff, Kira M. A.

    2008-01-01

    Introduction: The paper deals with the identification of the information needs and information competencies of a professional group. Theoretical basis: A theoretical relationship between information needs and information competencies as subjects is proposed. Three dimensions are examined: cognitive, affective and situational. The recognition of an…

  15. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
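
    One ingredient of such cross-run alignment, non-linear retention-time correction between runs, can be sketched in a few lines. The snippet below is a conceptual illustration only and is not TRIC's implementation (TRIC is distributed as part of the OpenSWATH tool chain); it fits a LOWESS curve through shared anchor peptides and uses it to map retention times from a reference run to another run. The synthetic retention times are assumptions.

      # Conceptual sketch: non-linear retention-time (RT) correction via LOWESS.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(3)
      rt_reference = np.sort(rng.uniform(5, 115, 300))                  # anchor RTs, run A (min)
      rt_run = rt_reference + 2.0 + 0.004 * (rt_reference - 60) ** 2 \
               + rng.normal(0, 0.3, rt_reference.size)                  # non-linear shift, run B

      # Fit run-B RT as a smooth function of run-A RT.
      fit = lowess(rt_run, rt_reference, frac=0.3, return_sorted=True)

      def map_to_run_b(rt_a):
          """Predict where a reference-run feature should elute in run B."""
          return np.interp(rt_a, fit[:, 0], fit[:, 1])

      print(map_to_run_b(np.array([20.0, 60.0, 100.0])))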

  16. Person-generated Data in Self-quantification. A Health Informatics Research Program.

    PubMed

    Gray, Kathleen; Martin-Sanchez, Fernando J; Lopez-Campos, Guillermo H; Almalki, Manal; Merolli, Mark

    2017-01-09

    The availability of internet-connected mobile, wearable and ambient consumer technologies, direct-to-consumer e-services and peer-to-peer social media sites far outstrips evidence about the efficiency, effectiveness and efficacy of using them in healthcare applications. The aim of this paper is to describe one approach to building a program of health informatics research, so as to generate rich and robust evidence about health data and information processing in self-quantification and associated healthcare and health outcomes. The paper summarises relevant health informatics research approaches in the literature and presents an example of developing a program of research in the Health and Biomedical Informatics Centre (HaBIC) at the University of Melbourne. The paper describes this program in terms of research infrastructure, conceptual models, research design, research reporting and knowledge sharing. The paper identifies key outcomes from integrative and multiple-angle approaches to investigating the management of information and data generated by use of this Centre's collection of wearable, mobile and other devices in health self-monitoring experiments. These research results offer lessons for consumers, developers, clinical practitioners and biomedical and health informatics researchers. Health informatics is increasingly called upon to make sense of emerging self-quantification and other digital health phenomena that lie well beyond the conventions of healthcare in which the field of informatics originated and consolidated. Making a substantial contribution to optimising the aims, processes and outcomes of health self-quantification will require further work at scale, in multi-centre collaborations, both for this Centre and for health informatics researchers generally.

  17. Magnetic fingerprints of rolling cells for quantitative flow cytometry in whole blood

    NASA Astrophysics Data System (ADS)

    Reisbeck, Mathias; Helou, Michael Johannes; Richter, Lukas; Kappes, Barbara; Friedrich, Oliver; Hayden, Oliver

    2016-09-01

    Over the past 50 years, flow cytometry has had a profound impact on preclinical and clinical applications requiring single cell function information for counting, sub-typing and quantification of epitope expression. At the same time, the workflow complexity and high costs of such optical systems still limit flow cytometry applications to specialized laboratories. Here, we present a quantitative magnetic flow cytometer that incorporates in situ magnetophoretic cell focusing for highly accurate and reproducible rolling of the cellular targets over giant magnetoresistance sensing elements. Time-of-flight analysis is used to unveil quantitative single cell information contained in its magnetic fingerprint. Furthermore, we used erythrocytes as a biological model to validate our methodology with respect to precise analysis of the hydrodynamic cell diameter, quantification of binding capacity of immunomagnetic labels, and discrimination of cell morphology. The extracted time-of-flight information should enable point-of-care quantitative flow cytometry in whole blood for clinical applications, such as immunology and primary hemostasis.

  18. Time-resolved imaging of contrast kinetics does not improve performance of follow-up MRA of embolized intracranial aneurysms.

    PubMed

    Serafin, Zbigniew; Strześniewski, Piotr; Lasek, Władysław; Beuth, Wojciech

    2012-07-01

    The use of contrast media and the time-resolved imaging of contrast kinetics (TRICKS) technique have some theoretical advantages over time-of-flight magnetic resonance angiography (TOF-MRA) in the follow-up of intracranial aneurysms after endovascular treatment. We prospectively compared the diagnostic performance of TRICKS and TOF-MRA with digital subtraction angiography (DSA) in the assessment of occlusion of embolized aneurysms. Seventy-two consecutive patients with 72 aneurysms were examined 3 months after embolization. Test characteristics of TOF-MRA and TRICKS were calculated for the detection of residual flow. The results of flow quantification were compared using weighted kappa. Intraobserver and interobserver reproducibility was determined. The sensitivity of TOF-MRA was 85% (95% CI, 65-96%) and of TRICKS, 89% (95% CI, 70-97%). The specificity of both methods was 91% (95% CI, 79-98%). The accuracy of the flow quantification ranged from 0.76 (TOF-MRA) to 0.83 (TRICKS). There was no significant difference between the methods in the area under the ROC curve regarding either the detection or the quantification of flow. Intraobserver reproducibility was very good with both techniques (kappa, 0.86-0.89). The interobserver reproducibility was moderate for TOF-MRA and very good for TRICKS (kappa, 0.74-0.80). In this study, TOF-MRA and TRICKS presented similar diagnostic performance; therefore, the use of time-resolved contrast-enhanced MRA is not justified in the follow-up of embolized aneurysms.

  19. Information Design Theories

    ERIC Educational Resources Information Center

    Pettersson, Rune

    2014-01-01

    Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…

  20. Ultrasound guided fluorescence molecular tomography with improved quantification by an attenuation compensated born-normalization and in vivo preclinical study of cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Baoqiang; Berti, Romain; Abran, Maxime

    2014-05-15

    Ultrasound imaging, having the advantages of low cost and non-invasiveness over MRI and X-ray CT, has been reported in several studies as an adequate complement to fluorescence molecular tomography, with the perspective of improving localization and quantification of fluorescent molecular targets in vivo. Building on previous work, an improved dual-modality Fluorescence-Ultrasound imaging system was developed and then validated in an imaging study with a preclinical tumor model. Ultrasound imaging and a profilometer were used to obtain the anatomical prior information and the 3D surface, respectively, to precisely extract the tissue boundary on both sides of the sample in order to achieve improved fluorescence reconstruction. Furthermore, a pattern-based fluorescence reconstruction on the detection side was incorporated to enable dimensional reduction of the dataset while keeping the useful information for reconstruction. Because of the putative role of attenuation in the current imaging geometry and the chosen reconstruction technique, we developed an attenuation-compensated Born-normalization method to reduce attenuation effects and cancel out experimental factors when collecting quantitative fluorescence datasets over a large area. Results of both the simulation and phantom studies demonstrated that fluorescent targets could be recovered accurately and quantitatively using this reconstruction mechanism. Finally, an in vivo experiment confirmed that the imaging system, associated with the proposed image reconstruction approach, was able to extract both functional and anatomical information, thereby improving quantification and localization of molecular targets.

  1. Thermo-magneto-elastoplastic coupling model of metal magnetic memory testing method for ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Shi, Pengpeng; Zhang, Pengcheng; Jin, Ke; Chen, Zhenmao; Zheng, Xiaojing

    2018-04-01

    Metal magnetic memory (MMM) testing (also known as micro-magnetic testing) is a new non-destructive electromagnetic testing method that can diagnose ferromagnetic materials at an early stage by measuring the MMM signal directly on the material surface. Previous experiments have shown that many factors affect MMM signals, in particular the temperature, the elastoplastic state, and the complex environmental magnetic field. However, only a few studies have addressed how these factors affect the signals or the physical coupling mechanisms among them, which seriously limits the industrial application of MMM testing. In this paper, a nonlinear constitutive relation for a ferromagnetic material considering the influences of temperature and elastoplastic state is established under a weak magnetic field and is used to build a nonlinear thermo-magneto-elastoplastic coupling model of MMM testing. Comparison with experimental data verifies that the proposed theoretical model can accurately describe the thermo-magneto-elastoplastic coupling influence on MMM signals. The proposed theoretical model can predict MMM signals in a complex environment and so is expected to provide a theoretical basis for improving the degree of quantification in MMM testing.

  2. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable in uncertainty analysis and risk management.
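
    The core sampler behind such an approach can be written in a few lines. The sketch below is a minimal random-walk Metropolis-Hastings example for a one-parameter model; the lognormal likelihood, flat prior, synthetic data and tuning constants are assumptions for demonstration and are far simpler than the daily flow-rate model used in the paper.

      # Minimal random-walk Metropolis-Hastings sketch (assumed toy model).
      import numpy as np

      rng = np.random.default_rng(4)
      data = rng.lognormal(mean=3.0, sigma=0.5, size=200)     # synthetic "flow rates"

      def log_posterior(mu):
          # Lognormal likelihood with known sigma = 0.5 and a flat prior on mu.
          return -0.5 * np.sum(((np.log(data) - mu) / 0.5) ** 2)

      samples, mu = [], 0.0
      for _ in range(20_000):
          proposal = mu + rng.normal(0, 0.1)                  # random-walk proposal
          if np.log(rng.random()) < log_posterior(proposal) - log_posterior(mu):
              mu = proposal                                    # accept
          samples.append(mu)

      samples = np.array(samples[5_000:])                      # discard burn-in
      lo, hi = np.percentile(samples, [2.5, 97.5])
      print(f"posterior mean {samples.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")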

  3. Review of the quantification techniques for polycyclic aromatic hydrocarbons (PAHs) in food products.

    PubMed

    Bansal, Vasudha; Kumar, Pawan; Kwon, Eilhann E; Kim, Ki-Hyun

    2017-10-13

    There is a growing need for accurate detection of trace-level PAHs in food products due to the numerous detrimental effects caused by their contamination (e.g., toxicity, carcinogenicity, and teratogenicity). This review discusses the up-to-date knowledge on measurement techniques available for PAHs contained in food and related products, with the aim of supporting accurate quantification and thereby helping to reduce their deleterious impacts on human health. The main part of the review is dedicated to the opportunities and practical options for the treatment of various food samples and for accurate quantification of the PAHs contained in those samples. Basic information regarding all available analytical measurement techniques for PAHs in food samples is also evaluated with respect to their performance in terms of quality assurance.

  4. High throughput, parallel imaging and biomarker quantification of human spermatozoa by ImageStream flow cytometry.

    PubMed

    Buckman, Clayton; George, Thaddeus C; Friend, Sherree; Sutovsky, Miriam; Miranda-Vizuete, Antonio; Ozanon, Christophe; Morrissey, Phil; Sutovsky, Peter

    2009-12-01

    Spermatid-specific thioredoxin-3 protein (SPTRX-3) accumulates in the superfluous cytoplasm of defective human spermatozoa. Novel ImageStream technology combining flow cytometry with cell imaging was used for parallel quantification and visualization of SPTRX-3 protein in defective spermatozoa of five men from infertile couples. The SPTRX-3 containing cells were overwhelmingly spermatozoa with a variety of morphological defects, detectable in the ImageStream-recorded images. Quantitative parameters of relative SPTRX-3 induced fluorescence measured by ImageStream correlated closely with conventional flow cytometric measurements of the same sample set and reflected the results of clinical semen evaluation. ImageStream quantification of SPTRX-3 combines and surpasses the informative value of both conventional flow cytometry and light microscopic semen evaluation. The observed patterns of the retention of SPTRX-3 in the sperm samples from infertility patients support the view that SPTRX-3 is a biomarker of male infertility.

  5. Proposing a Theoretical Framework for Digital Age Youth Information Behavior Building upon Radical Change Theory

    ERIC Educational Resources Information Center

    Koh, Kyungwon

    2011-01-01

    Contemporary young people are engaged in a variety of information behaviors, such as information seeking, using, sharing, and creating. The ways youth interact with information have transformed in the shifting digital information environment; however, relatively little empirical research exists and no theoretical framework adequately explains…

  6. New techniques for the quantification and modeling of remotely sensed alteration and linear features in mineral resource assessment studies

    USGS Publications Warehouse

    Trautwein, C.M.; Rowan, L.C.

    1987-01-01

    Linear structural features and hydrothermally altered rocks that were interpreted from Landsat data have been used by the U.S. Geological Survey (USGS) in regional mineral resource appraisals for more than a decade. In the past, linear features and alterations have been incorporated into models for assessing mineral resources potential by manually overlaying these and other data sets. Recently, USGS research into computer-based geographic information systems (GIS) for mineral resources assessment programs has produced several new techniques for data analysis, quantification, and integration to meet assessment objectives.

  7. Instantaneous Wavenumber Estimation for Damage Quantification in Layered Plate Structures

    NASA Technical Reports Server (NTRS)

    Mesnil, Olivier; Leckey, Cara A. C.; Ruzzene, Massimo

    2014-01-01

    This paper illustrates the application of instantaneous and local wavenumber damage quantification techniques for high frequency guided wave interrogation. The proposed methodologies can be considered as first steps towards a hybrid structural health monitoring/nondestructive evaluation (SHM/NDE) approach for damage assessment in composites. The challenges and opportunities related to the considered type of interrogation and signal processing are explored through the analysis of numerical data obtained via EFIT simulations of damage in CFRP plates. Realistic damage configurations are modeled from x-ray CT scan data of plates subjected to actual impacts, in order to accurately predict wave-damage interactions in terms of scattering and mode conversions. Simulation data are utilized to enhance the information provided by instantaneous and local wavenumbers and to mitigate the complexity related to the multi-modal content of the plate response. Signal processing strategies considered for this purpose include modal decoupling through filtering in the frequency/wavenumber domain, the combination of displacement components, and the exploitation of polarization information for the various modes as evaluated through the dispersion analysis of the considered laminate lay-up sequence. The results presented assess the effectiveness of the proposed wavefield processing techniques as a hybrid SHM/NDE approach for damage detection and quantification in composite, plate-like structures.
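
    The basic instantaneous-wavenumber estimate can be sketched compactly: take the spatial analytic signal of a single-frequency wavefield snapshot, unwrap its phase, and differentiate along the propagation direction. The 1D simulated wavefield and the damage-induced wavenumber change below are assumptions for illustration only and do not reproduce the multi-modal processing described in the paper.

      # Conceptual sketch: instantaneous wavenumber from a 1D wavefield snapshot.
      import numpy as np
      from scipy.signal import hilbert

      dx = 0.5e-3                                   # spatial sampling (m)
      x = np.arange(0, 0.3, dx)
      k_nominal, k_damaged = 400.0, 650.0           # rad/m (assumed pristine vs. damaged)
      k_true = np.where((x > 0.12) & (x < 0.18), k_damaged, k_nominal)
      phase = np.cumsum(k_true) * dx                # integrate wavenumber to build the field
      field = np.cos(phase)

      analytic = hilbert(field)                     # spatial analytic signal
      k_inst = np.gradient(np.unwrap(np.angle(analytic)), dx)

      # A local rise of k_inst above the nominal value flags (and sizes) the damage.
      print(f"median k in pristine region: {np.median(k_inst[x < 0.10]):.0f} rad/m")
      print(f"median k in damaged region:  {np.median(k_inst[(x > 0.13) & (x < 0.17)]):.0f} rad/m")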

  8. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects.

    PubMed

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2016-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.

  9. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated, resorting to the Chebyshev inequality, in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. The use of the term "pillar" in the title was inspired by the News Release (2013) on the award of the Honda Prize to J. Tinsley Oden, stating, among other things, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
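
    A schematic sketch of the enclosure-and-inflation idea is given below, under stated assumptions rather than as the authors' exact procedure: 2D data are enclosed in a minimum-area axis-aligned rectangle, and the rectangle is inflated using the Chebyshev inequality so that future data fall inside (per coordinate) with probability at least p regardless of the underlying distribution. The data, the choice of rectangle and the coverage level are illustrative.

      # Schematic sketch: minimal bounding rectangle inflated via Chebyshev.
      import numpy as np

      rng = np.random.default_rng(5)
      data = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.2], size=(50, 2))  # assumed samples

      p = 0.95
      k = 1.0 / np.sqrt(1.0 - p)         # Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2

      mu, sigma = data.mean(axis=0), data.std(axis=0, ddof=1)
      lower_data, upper_data = data.min(axis=0), data.max(axis=0)        # minimal rectangle
      lower_infl = np.minimum(lower_data, mu - k * sigma)                # inflated rectangle
      upper_infl = np.maximum(upper_data, mu + k * sigma)

      print("minimal rectangle:", lower_data, upper_data)
      print("inflated rectangle:", lower_infl, upper_infl)
      # The worst-case ("maximum") response would then be evaluated over the
      # inflated region, e.g. at its vertices for a monotone response function.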

  10. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects

    PubMed Central

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2017-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896

  11. Analyzing the management and disturbance in European forest based on self-thinning theory

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Gielen, B.; Schelhaas, M.; Mohren, F.; Luyssaert, S.; Janssens, I. A.

    2012-04-01

    There is increasing awareness that natural and anthropogenic disturbance in forests affects the exchange of CO2, H2O and energy between the ecosystem and the atmosphere. Consequently, quantification of land use and disturbance intensity is one of the next steps needed to improve our understanding of the carbon cycle, its interactions with the atmosphere and its main drivers at the local as well as the global level. The conventional NPP-based approaches to quantify the intensity of land management are limited because they lack a sound ecological basis. Here we apply a new way of characterising the degree of management and disturbance in forests using the self-thinning theory and observations of diameter at breast height and stand density. We used plot-level information on dominant tree species, diameter at breast height, stand density and soil type from the French national forest inventory from 2005 to 2010. Stand density and diameter at breast height were used to parameterize the intercept of the self-thinning relationship and combined with the theoretical slope to obtain an upper boundary for stand productivity given its density. Subsequently, we tested the sensitivity of the self-thinning relationship to tree species, soil type, climate and other environmental characteristics. We could find statistical differences in the self-thinning relationship between species and soil types, mainly due to the large uncertainty of the parameter estimates. Deviation from the theoretical self-thinning line, defined as DBH = αN^(-3/4), was used as a proxy for disturbance, allowing us to make spatially explicit maps of forest disturbance over France. The same framework was used to quantify the density-DBH trajectory of even-aged stand management of beech and oak over France. These trajectories will be used as a driver of forest management in the land surface model ORCHIDEE.
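
    The disturbance proxy can be illustrated with a short sketch, given here under assumed parameter values rather than those fitted to the French inventory: for each plot, the observed stand density is compared with the maximum density allowed by the self-thinning boundary DBH = αN^(-3/4), and the relative deviation is used as an index of management or disturbance intensity.

      # Hedged sketch: deviation from the self-thinning boundary as a disturbance proxy.
      import numpy as np

      alpha = 5000.0                            # assumed intercept of the boundary line
      dbh_cm = np.array([12.0, 25.0, 40.0])     # mean diameter at breast height (cm)
      n_obs = np.array([2200.0, 600.0, 150.0])  # observed stems per hectare

      n_max = (dbh_cm / alpha) ** (-4.0 / 3.0)  # density on the self-thinning boundary
      deviation = 1.0 - n_obs / n_max           # ~0 = fully stocked; larger = more thinned/disturbed
      for d, n, nm, dev in zip(dbh_cm, n_obs, n_max, deviation):
          print(f"DBH {d:4.0f} cm: N_obs = {n:6.0f}, N_max = {nm:8.0f}, deviation = {dev:+.2f}")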

  12. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information in an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
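
    The MRE-update itself has a compact closed form, exponential tilting of the prior weights, with one multiplier per imposed constraint. The sketch below illustrates this for the simplest case of a single mean constraint on equally weighted historical members; the ensemble values and the forecast mean are assumptions, and the real method generalises to richer forecast information.

      # Illustrative sketch: minimum relative entropy reweighting of an ensemble
      # so that its weighted mean matches a forecast value (exponential tilting).
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(6)
      ensemble = rng.gamma(shape=2.0, scale=50.0, size=40)   # historical members (assumed)
      forecast_mean = 130.0                                  # new information to impose (assumed)

      def dual(lam):
          # Convex dual: log of the normalising constant minus the constraint term.
          return np.log(np.mean(np.exp(lam * ensemble))) - lam * forecast_mean

      lam_opt = minimize_scalar(dual, bounds=(-0.05, 0.05), method="bounded").x
      w = np.exp(lam_opt * ensemble)
      w /= w.sum()

      print(f"weighted ensemble mean: {np.sum(w * ensemble):.1f} (target {forecast_mean})")
      print(f"relative entropy added: {np.sum(w * np.log(w * ensemble.size)):.4f} nats")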

  13. Principal Components of Recurrence Quantification Analysis of EMG

    DTIC Science & Technology

    2001-10-25

    For the recurrence quantification analysis of EMG recordings from a given subject, the reconstruction delay T was taken as the lag corresponding to the first local minimum of the auto mutual information function of the signal s(n) (following Fraser and Swinney, Phys. Rev. A), which has been argued to be more appropriate than a delay based on the autocorrelation function of s(n).

  14. Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT

    PubMed Central

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2014-01-01

    Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354
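
    The two class-imbalance tactics named above, cost weighting and bagging, can be sketched generically as follows. This is not the authors' implementation: the scikit-learn classes, the synthetic intensity/texture/spatial features and the class proportions are all assumptions chosen only to illustrate the combination.

      # Hedged sketch: cost weighting plus bagging for an imbalanced two-class problem.
      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(7)
      n_fat, n_residue = 5000, 250                  # heavily imbalanced classes (assumed)
      X = np.vstack([
          rng.normal([-90.0, 0.3, 0.0], [15.0, 0.10, 1.0], size=(n_fat, 3)),      # intensity, texture, spatial
          rng.normal([-70.0, 0.6, 0.5], [20.0, 0.15, 1.0], size=(n_residue, 3)),
      ])
      y = np.array([0] * n_fat + [1] * n_residue)

      clf = BaggingClassifier(
          DecisionTreeClassifier(class_weight="balanced", max_depth=6),  # cost weighting
          n_estimators=50,                                               # bagging
          random_state=0,
      )
      scores = cross_val_score(clf, X, y, scoring="balanced_accuracy", cv=5)
      print("balanced accuracy:", scores.mean())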

  15. Computer-aided assessment of regional abdominal fat with food residue removal in CT.

    PubMed

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2013-11-01

    Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.

  16. High throughput DNA damage quantification of human tissue with home-based collection device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costes, Sylvain V.; Tang, Jonathan; Yannone, Steven M.

    Kits, methods and systems for providing a service to provide a subject with information regarding the state of a subject's DNA damage. Collection, processing and analysis of samples are also described.

  17. EXFILTRATION IN SEWER SYSTEMS

    EPA Science Inventory

    This study focused on the quantification of leakage of sanitary and industrial sewage from sanitary sewer pipes on a national basis. The method for estimating exfiltration amounts utilized groundwater table information to identify areas of the country where the hydraulic gradient...

  18. The role of PET quantification in cardiovascular imaging.

    PubMed

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

    Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz, may allow further improvements in disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography. In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries has been demonstrated.

  19. Quantification of Magnetic Surface and Edge States in an FeGe Nanostripe by Off-Axis Electron Holography

    NASA Astrophysics Data System (ADS)

    Song, Dongsheng; Li, Zi-An; Caron, Jan; Kovács, András; Tian, Huanfang; Jin, Chiming; Du, Haifeng; Tian, Mingliang; Li, Jianqi; Zhu, Jing; Dunin-Borkowski, Rafal E.

    2018-04-01

    Whereas theoretical investigations have revealed the significant influence of magnetic surface and edge states on Skyrmionic spin texture in chiral magnets, experimental studies of such chiral states remain elusive. Here, we study chiral edge states in an FeGe nanostripe experimentally using off-axis electron holography. Our results reveal the magnetic-field-driven formation of chiral edge states and their penetration lengths at 95 and 240 K. We determine values of the saturation magnetization MS by analyzing the projected in-plane magnetization distributions of helices and Skyrmions. Values of MS inferred for Skyrmions are lower by a few percent than those for helices. We attribute this difference to the presence of chiral surface states, which are predicted theoretically in a three-dimensional Skyrmion model. Our experiments provide direct quantitative measurements of magnetic chiral boundary states and highlight the applicability of state-of-the-art electron holography for the study of complex spin textures in nanostructures.

  20. MR-Consistent Simultaneous Reconstruction of Attenuation and Activity for Non-TOF PET/MR

    NASA Astrophysics Data System (ADS)

    Heußer, Thorsten; Rank, Christopher M.; Freitag, Martin T.; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Beyer, Thomas; Kachelrieß, Marc

    2016-10-01

    Attenuation correction (AC) is required for accurate quantification of the reconstructed activity distribution in positron emission tomography (PET). For simultaneous PET/magnetic resonance (MR), however, AC is challenging, since the MR images do not provide direct information on the attenuating properties of the underlying tissue. Standard MR-based AC does not account for the presence of bone and thus leads to an underestimation of the activity distribution. To improve quantification for non-time-of-flight PET/MR, we propose an algorithm which simultaneously reconstructs activity and attenuation distribution from the PET emission data using available MR images as anatomical prior information. The MR information is used to derive voxel-dependent expectations on the attenuation coefficients. The expectations are modeled using Gaussian-like probability functions. An iterative reconstruction scheme incorporating the prior information on the attenuation coefficients is used to update attenuation and activity distribution in an alternating manner. We tested and evaluated the proposed algorithm for simulated 3D PET data of the head and the pelvis region. Activity deviations were below 5% in soft tissue and lesions compared to the ground truth whereas standard MR-based AC resulted in activity underestimation values of up to 12%.

  1. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies.

    PubMed

    Almalki, Manal; Gray, Kathleen; Martin-Sanchez, Fernando

    2016-05-27

    Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers' activities and their contextual components or constructs to pursue these health related goals. Establishing such framework is important because it is the first step to operationalize health SQ fully. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers' health systematically. The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and constructs of health SQ activity. A Health Self-Quantification Activity Framework is presented, which shows SQ tool use in context, in relation to the goals, plans, and competence of the user. This makes it easier to analyze issues affecting SQ activity, and thereby makes it more feasible to address them. This review makes two significant contributions to research in this field: it explores health SQ work and its constructs thoroughly and it adapts Activity Theory to describe health SQ activity systematically.

  2. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe

    2015-12-26

    Comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and their utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we set off by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from the MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and fewer missing values for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage than AMT. Taken together, we propose an optimal informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT) for identification and label-free quantification for high-throughput, comprehensive and quantitative peptidomics analysis.

  3. Quantification of carbon accumulation in eleven New England eelgrass meadows

    EPA Science Inventory

    As atmospheric and oceanic concentrations of carbon dioxide continue to increase, quantifying the carbon storage potential of seagrass meadows and improving the understanding of the factors controlling carbon sequestration in seagrass meadows is essential information for decision...

  4. Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.

    PubMed

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.
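
    The boundary-detection idea, checking the consistency of intensity patterns within the elution range, can be sketched conceptually as follows. This is not the published algorithm: the consensus-pattern correlation test, the threshold and the toy isotope traces are assumptions used only to show how scans corrupted by a co-eluting species can be excluded before quantification.

      # Conceptual sketch: trim LC peak boundaries by isotope-pattern consistency.
      import numpy as np

      def trim_peak(isotope_traces, apex_index, min_corr=0.95):
          """isotope_traces: array (n_scans, n_isotopes) of extracted-ion intensities."""
          consensus = isotope_traces[apex_index] / isotope_traces[apex_index].sum()
          keep = np.ones(len(isotope_traces), dtype=bool)
          for i, scan in enumerate(isotope_traces):
              if scan.sum() == 0:
                  keep[i] = False
                  continue
              pattern = scan / scan.sum()
              keep[i] = np.corrcoef(pattern, consensus)[0, 1] >= min_corr
          # Keep only the contiguous block of consistent scans around the apex.
          left = apex_index
          while left > 0 and keep[left - 1]:
              left -= 1
          right = apex_index
          while right < len(keep) - 1 and keep[right + 1]:
              right += 1
          return left, right  # inclusive boundary indices

      # Toy example: a co-eluting interference distorts the last three scans.
      clean = np.outer(np.array([1, 4, 9, 10, 8, 5, 2, 1]), np.array([0.5, 0.3, 0.15, 0.05]))
      clean[-3:, 0] += np.array([4.0, 6.0, 9.0])   # interference on the monoisotopic trace
      print(trim_peak(clean, apex_index=3))        # boundaries exclude the distorted tail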

  5. Accurate LC Peak Boundary Detection for 16O/18O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998

  6. On the predictions of the 11B solid state NMR parameters

    NASA Astrophysics Data System (ADS)

    Czernek, Jiří; Brus, Jiří

    2016-07-01

    A set of boron-containing compounds has been subjected to prediction of the 11B solid-state NMR spectral parameters using DFT-GIPAW methods that properly treat the solid-phase effects. A quantification of the differences between measured and theoretical values has been presented, which is directly applicable in structural studies involving 11B nuclei. In particular, a simple scheme has been proposed that is expected to provide an estimate of the 11B chemical shift within ±2.0 ppm of the experimental value. The computer program INFOR, which enables the visualization of the concomitant Euler rotations related to the tensorial transformations, has also been presented.

  7. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
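
    NIBEM itself operates on interval-valued projections; as background only, the sketch below shows the standard point-valued ML-EM update that it extends, applied to a toy two-voxel system. The system matrix and counts are illustrative assumptions, not the NIBEM algorithm.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Standard ML-EM reconstruction for a linear emission model y ~ Poisson(A x).

          A : (n_bins, n_voxels) system matrix (forward projector).
          y : (n_bins,) measured counts.
          """
          x = np.ones(A.shape[1])                # flat initial activity estimate
          sens = A.sum(axis=0)                   # sensitivity image A^T 1
          for _ in range(n_iter):
              proj = A @ x                       # forward projection
              ratio = y / np.maximum(proj, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # Toy example: 3 detector bins, 2 voxels.
      A = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])
      x_true = np.array([4.0, 2.0])
      y = A @ x_true                             # noiseless "measurement"
      print(mlem(A, y).round(3))                 # should approach [4.0, 2.0]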

  8. Nuclear Data Uncertainty Quantification: Past, Present and Future

    NASA Astrophysics Data System (ADS)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  9. Quantum non-Gaussianity and quantification of nonclassicality

    NASA Astrophysics Data System (ADS)

    Kühn, B.; Vogel, W.

    2018-05-01

    The algebraic quantification of nonclassicality, which naturally arises from the quantum superposition principle, is related to properties of regular nonclassicality quasiprobabilities. The latter are obtained by non-Gaussian filtering of the Glauber-Sudarshan P function. They yield lower bounds for the degree of nonclassicality. We also derive bounds for convex combinations of Gaussian states for certifying quantum non-Gaussianity directly from the experimentally accessible nonclassicality quasiprobabilities. Other quantum-state representations, such as s-parametrized quasiprobabilities, insufficiently indicate or even fail to directly uncover detailed information on the properties of quantum states. As an example, our approach is applied to multi-photon-added squeezed vacuum states.

  10. Automating pattern quantification: new tools for analysing anisotropy and inhomogeneity of 2d patterns

    NASA Astrophysics Data System (ADS)

    Gerik, A.; Kruhl, J. H.

    2006-12-01

    The quantitative analysis of patterns as a geometric arrangement of material domains with specific geometric or crystallographic properties such as shape, size or crystallographic orientation has been shown to be a valuable tool with a wide field of applications in geo- and material sciences. Pattern quantification allows an unbiased comparison of experimentally generated or theoretical patterns with patterns of natural origin. In addition, the application of different methods can also provide information about different pattern-forming processes. This information includes the distribution of crystals in a matrix - to analyze, e.g., the nature and orientation of flow within a melt - or the governing shear-strain regime at the time the pattern was formed, as well as the nature of fracture patterns at different scales, all of which are of great interest not only in structural and engineering geology but also in material sciences. Different approaches to this problem have been discussed over the past fifteen years, yet only few of the methods have been applied successfully, at least to single examples (e.g., Velde et al., 1990; Harris et al., 1991; Peternell et al., 2003; Volland & Kruhl, 2004). One of the reasons for this has been the high expenditure of time needed to prepare and analyse the samples. To overcome this problem, a first selection of promising methods has been implemented into a growing collection of software tools: (1) the modifications that Harris et al. (1991) suggested for the Cantor's dust method (Velde et al., 1990), which Volland & Kruhl (2004) applied to show the anisotropy in a breccia sample; (2) a map-counting method that uses local box-counting dimensions to map the inhomogeneity of a crystal distribution pattern, which Peternell et al. (2003) used to analyze the distribution of phenocrysts in a porphyric granite; (3) a modified perimeter method that relates the directional dependence of the perimeter of grain boundaries to the anisotropy of the pattern (Peternell et al., 2003). We have used the resulting new possibilities to analyze numerous patterns of natural, experimental and mathematical origin in order to determine the scope of applicability of the different methods, and we present these results along with an evaluation of their individual sensitivities and limitations. References: Harris, C., Franssen, R. & Loosveld, R. (1991): Fractal analysis of fractures in rocks: the Cantor's Dust method - comment. Tectonophysics 198: 107-111. Peternell, M., Andries, F. & Kruhl, J.H. (2003): Magmatic flow-pattern anisotropies - analyzed on the basis of a new 'map-mounting' fractal geometry method. DRT Tectonics conference, St. Malo, Book of Abstracts. Velde, B., Dubois, J., Touchard, G. & Badri, A. (1990): Fractal analysis of fractures in rocks: the Cantor's Dust method. Tectonophysics 179: 345-352. Volland, S. & Kruhl, J.H. (2004): Anisotropy quantification: the application of fractal geometry methods on tectonic fracture patterns of a Hercynian fault zone in NW-Sardinia. Journal of Structural Geology 26: 1499-1510.
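
    As an illustration of the kind of measure these tools automate, the following is a generic box-counting dimension estimate for a 2D binary pattern; it is not the authors' software, and the box sizes and test pattern are arbitrary assumptions.

      import numpy as np

      def box_counting_dimension(pattern, box_sizes=(1, 2, 4, 8, 16, 32)):
          """Estimate the box-counting (fractal) dimension of a 2D binary pattern.

          pattern : square boolean array marking occupied pixels (e.g. grain
                    boundaries or crystal positions).
          Returns the slope of log N(s) versus log(1/s).
          """
          counts = []
          n = pattern.shape[0]
          for s in box_sizes:
              m = n // s
              # Count boxes of side s that contain at least one occupied pixel.
              blocks = pattern[:m * s, :m * s].reshape(m, s, m, s)
              occupied = blocks.any(axis=(1, 3)).sum()
              counts.append(occupied)
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      # Example: a completely filled square should give a dimension close to 2.
      img = np.ones((64, 64), dtype=bool)
      print(round(box_counting_dimension(img), 2))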

  11. Sequence optimization to reduce velocity offsets in cardiovascular magnetic resonance volume flow quantification - A multi-vendor study

    PubMed Central

    2011-01-01

    Purpose Eddy current induced velocity offsets are of concern for accuracy in cardiovascular magnetic resonance (CMR) volume flow quantification. However, currently known theoretical aspects of eddy current behavior have not led to effective guidelines for the optimization of flow quantification sequences. This study is aimed at identifying correlations between protocol parameters and the resulting velocity error in clinical CMR flow measurements in a multi-vendor study. Methods Nine 1.5T scanners of three different types/vendors were studied. Measurements were performed on a large stationary phantom. Starting from a clinical breath-hold flow protocol, several protocol parameters were varied. Acquisitions were made in three clinically relevant orientations. Additionally, a time delay between the bipolar gradient and read-out, asymmetric versus symmetric velocity encoding, and gradient amplitude and slew rate were studied in adapted sequences as exploratory measurements beyond the protocol. Image analysis determined the worst-case offset for a typical great-vessel flow measurement. Results The results showed a great variation in offset behavior among scanners (standard deviation among samples of 0.3, 0.4, and 0.9 cm/s for the three different scanner types), even for small changes in the protocol. Considering the absolute values, none of the tested protocol settings consistently reduced the velocity offsets below the critical level of 0.6 cm/s, either for all three orientations or for all three scanner types. Using multilevel linear model analysis, oblique aortic and pulmonary slices showed systematically higher offsets than the transverse aortic slices (oblique aortic 0.6 cm/s, and pulmonary 1.8 cm/s higher than transverse aortic). The exploratory measurements beyond the protocol yielded some new leads for further sequence development towards reduction of velocity offsets; however, those protocols were not always compatible with the time constraints of breath-hold imaging and flow-related artefacts. Conclusions This study showed that with current systems there was no generic protocol that resulted in acceptable flow offset values. Protocol optimization would have to be performed on a per-scanner and per-protocol basis. Proper optimization might make accurate (transverse) aortic flow quantification possible for most scanners. Pulmonary flow quantification would still need further (offline) correction. PMID:21388521

  12. Scoring the correlation of genes by their shared properties using OScal, an improved overlap quantification model.

    PubMed

    Liu, Hui; Liu, Wei; Lin, Ying; Liu, Teng; Ma, Zhaowu; Li, Mo; Zhang, Hong-Mei; Kenneth Wang, Qing; Guo, An-Yuan

    2015-05-27

    Scoring the correlation between two genes by their shared properties is a common and basic task in biological studies. A prospective way to score this correlation is to quantify the overlap between the two sets of homogeneous properties of the two genes. However, the proper model has not been settled; here we focused on the quantification of overlap and proposed a more effective model after theoretically comparing 7 existing models. We defined three characteristic parameters (d, R, r) of an overlap, which highlight essential differences among the 7 models and group them into two classes. Then the pros and cons of the two groups of models were fully examined through their solution space in the (d, R, r) coordinate system. Finally, we proposed a new model called OScal (Overlap Score calculator), which modifies the Poisson-distribution model (one of the 7) to avoid its disadvantages. Tested in assessing gene relations using different data, OScal performs better than existing models. In addition, OScal is a basic mathematical model, with very low computation cost and few restrictive conditions, so it can be used in a wide range of research areas to measure the overlap or similarity of two entities.
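
    The abstract does not give the OScal formula or the exact definitions of (d, R, r); for illustration only, the sketch below assumes d is the overlap size and R, r are the sizes of the larger and smaller property sets, and scores the overlap with a common hypergeometric-tail baseline rather than the modified Poisson model.

      from math import comb, log10

      def overlap_score(d, R, r, N):
          """Score the overlap of two property sets drawn from a universe of size N.

          Assumed parameter meanings (illustrative, not the OScal definition):
            d : number of shared properties,
            R : size of the larger set, r : size of the smaller set.
          Returns -log10 of the hypergeometric tail P(overlap >= d), so larger
          scores indicate a less likely (more significant) overlap.
          """
          p_tail = sum(comb(R, k) * comb(N - R, r - k) for k in range(d, r + 1)) / comb(N, r)
          return -log10(max(p_tail, 1e-300))

      # Two genes sharing 8 of their 30 and 20 annotated properties out of 5000.
      print(round(overlap_score(d=8, R=30, r=20, N=5000), 2))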

  13. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools to handle high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high- and low-resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
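
    A minimal sketch of the least-squares-resolution idea: candidate theoretical isotope distributions are fitted to an observed spectrum by non-negative least squares to recover per-species abundances. The template values and mixture below are synthetic assumptions, not the tool's library.

      import numpy as np
      from scipy.optimize import nnls

      # Columns are theoretical isotope distributions (normalized) of two candidate
      # lipid species sampled on the same m/z grid; values are illustrative only.
      templates = np.array([
          [0.55, 0.00],
          [0.30, 0.50],
          [0.11, 0.32],
          [0.03, 0.14],
          [0.01, 0.04],
      ])

      # Observed spectrum: a 3:1 mixture of the two species plus a little noise.
      observed = templates @ np.array([3.0, 1.0]) + np.array([0.01, -0.01, 0.02, 0.0, 0.01])

      # Non-negative least squares resolves the overlapping isotope envelopes into
      # per-species abundances used for quantification.
      abundances, residual_norm = nnls(templates, observed)
      print(abundances.round(2))   # approximately [3.0, 1.0]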

  14. Electrochemistry of moexipril: experimental and computational approach and voltammetric determination.

    PubMed

    Taşdemir, Hüdai I; Kiliç, E

    2014-09-01

    The electrochemistry of moexipril (MOE) was studied by electrochemical methods, with theoretical calculations performed at B3LYP/6-31+G(d)//AM1. Cyclic voltammetric studies were carried out based on a reversible, adsorption-controlled reduction peak at -1.35 V on a hanging mercury drop electrode (HMDE). Concurrently, an irreversible, diffusion-controlled oxidation peak at 1.15 V on a glassy carbon electrode (GCE) was also employed. Potential values are given versus Ag/AgCl (3.0 M KCl), and measurements were performed in Britton-Robinson buffer of pH 5.5. Tentative electrode mechanisms were proposed according to the experimental results and ab initio calculations. Square-wave adsorptive stripping voltammetric methods have been developed and validated for quantification of MOE in pharmaceutical preparations. The linear working range was established as 0.03-1.35 microM for the HMDE and 0.2-20.0 microM for the GCE. The limit of quantification (LOQ) was calculated to be 0.032 and 0.47 microM for the HMDE and GCE, respectively. The methods were successfully applied to assay the drug in tablets by calibration and standard addition methods, with good recoveries between 97.1% and 106.2% and relative standard deviations less than 10%.
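
    For context, detection and quantification limits are commonly derived from the calibration line as 3.3 and 10 times the residual standard deviation divided by the slope; the sketch below uses hypothetical calibration points, not the paper's data.

      import numpy as np

      # Hypothetical calibration data: concentration (microM) vs. peak current (microA).
      conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00])
      current = np.array([0.21, 0.40, 0.98, 1.95, 3.92])

      slope, intercept = np.polyfit(conc, current, 1)
      residual_sd = np.std(current - (slope * conc + intercept), ddof=2)

      lod = 3.3 * residual_sd / slope   # limit of detection
      loq = 10.0 * residual_sd / slope  # limit of quantification
      print(f"LOD = {lod:.3f} microM, LOQ = {loq:.3f} microM")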

  15. A novel approach to quantify cybersecurity for electric power systems

    NASA Astrophysics Data System (ADS)

    Kaster, Paul R., Jr.

    Electric Power grid cybersecurity is a topic gaining increased attention in academia, industry, and government circles, yet a method of quantifying and evaluating a system's security is not yet commonly accepted. In order to be useful, a quantification scheme must be able to accurately reflect the degree to which a system is secure, simply determine the level of security in a system using real-world values, model a wide variety of attacker capabilities, be useful for planning and evaluation, allow a system owner to publish information without compromising the security of the system, and compare relative levels of security between systems. Published attempts at quantifying cybersecurity fail at one or more of these criteria. This document proposes a new method of quantifying cybersecurity that meets those objectives. This dissertation evaluates the current state of cybersecurity research, discusses the criteria mentioned previously, proposes a new quantification scheme, presents an innovative method of modeling cyber attacks, demonstrates that the proposed quantification methodology meets the evaluation criteria, and proposes a line of research for future efforts.

  16. Quantification of the physical properties required of raised pavement markers and accelerated laboratory testing.

    DOT National Transportation Integrated Search

    2014-04-01

    Retroreflective raised pavement markers (RRPMs) can provide lane and directional information at : night, particularly during wet weather conditions. In recent years, the RRPM service life in Florida has : been generally shorter than expected. Moreove...

  17. Quantification of the physical properties required of raised pavement markers and accelerated laboratory testing : [summary].

    DOT National Transportation Integrated Search

    2014-04-01

    Retroreflective raised pavement markers (RRPMs) : can provide lane and directional information at : night, particularly during wet weather conditions. : However, in recent years, the service life of : RRPMs in Florida has been generally shorter than ...

  18. MAMA User Guide v2.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaschen, Brian Keith; Bloch, Jeffrey Joseph; Porter, Reid

    Morphological signatures of bulk SNM materials have significant promise, but these potential signatures are not fully utilized. This document describes software tools, collectively called the MAMA (Morphological Analysis for Material Attribution) software, that can help provide robust and accurate quantification of morphological features in bulk material microscopy images (optical, SEM). Although many of the specific tools are not unique to MAMA, the software package has been designed specifically for nuclear material morphological analysis, and is at a point where it can be easily adapted (by Los Alamos or by collaborators) in response to new, different, or changing forensics needs. The current release of the MAMA software only includes the image quantification, descriptions, and annotation functionality. Only limited information on a sample, its pedigree, and its chemistry is recorded inside this part of the software. This was a decision based on initial feedback and the fact that there are several analytical chemistry databases being developed within the community. Currently, MAMA is a standalone program that can export quantification results in a basic text format that can be imported into other programs such as Excel and Access. There is also a basic report generating feature that produces HTML-formatted pages of the same information. We will be working with collaborators to provide better integration of MAMA into their particular systems, databases and workflows.

  19. Inference and quantification of peptidoforms in large sample cohorts by SWATH-MS

    PubMed Central

    Röst, Hannes L; Ludwig, Christina; Buil, Alfonso; Bensimon, Ariel; Soste, Martin; Spector, Tim D; Dermitzakis, Emmanouil T; Collins, Ben C; Malmström, Lars; Aebersold, Ruedi

    2017-01-01

    The consistent detection and quantification of protein post-translational modifications (PTMs) across sample cohorts is an essential prerequisite for the functional analysis of biological processes. Data-independent acquisition (DIA), a bottom-up mass spectrometry based proteomic strategy, exemplified by SWATH-MS, provides complete precursor and fragment ion information of a sample and thus, in principle, the information to identify peptidoforms, the modified variants of a peptide. However, due to the convoluted structure of DIA data sets, the confident and systematic identification and quantification of peptidoforms has remained challenging. Here we present IPF (Inference of PeptidoForms), a fully automated algorithm that uses spectral libraries to query, validate and quantify peptidoforms in DIA data sets. The method was developed on data acquired by SWATH-MS and benchmarked using a synthetic phosphopeptide reference data set and phosphopeptide-enriched samples. The data indicate that IPF reduced false site-localization by more than 7-fold in comparison to previous approaches, while recovering 85.4% of the true signals. IPF was applied to detect and quantify peptidoforms carrying ten different types of PTMs in DIA data acquired from more than 200 samples of undepleted blood plasma of a human twin cohort. The data apportioned, for the first time, the contribution of heritable, environmental and longitudinal effects on the observed quantitative variability of specific modifications in blood plasma of a human population. PMID:28604659

  20. Orbital Evasive Target Tracking and Sensor Management

    DTIC Science & Technology

    2012-03-30

    ... maximize the total information gain in the observer-to-target assignment. We compare the information-based approach to the game-theoretic criterion ... tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach ...

  1. The development of information processing biases in childhood anxiety: a review and exploration of its origins in parenting.

    PubMed

    Hadwin, Julie A; Garner, Matthew; Perez-Olivas, Gisela

    2006-11-01

    The aim of this paper is to explore parenting as one potential route through which information processing biases for threat develop in children. It reviews information processing biases in childhood anxiety in the context of theoretical models and empirical research in the adult anxiety literature. Specifically, it considers how adult models have been used and adapted to develop a theoretical framework with which to investigate information processing biases in children. The paper then considers research which specifically aims to understand the relationship between parenting and the development of information processing biases in children. It concludes that a clearer theoretical framework is required to understand the significance of information biases in childhood anxiety, as well as their origins in parenting.

  2. Intercomparison of aerosol optical parameters from WALI and R-MAN510 aerosol Raman lidars in the framework of HyMeX campaign

    NASA Astrophysics Data System (ADS)

    Boytard, Mai-Lan; Royer, Philippe; Chazette, Patrick; Shang, Xiaoxia; Marnas, Fabien; Totems, Julien; Bizard, Anthony; Bennai, Baya; Sauvage, Laurent

    2013-04-01

    The HyMeX program (Hydrological cycle in the Mediterranean eXperiment) aims at improving our understanding of the hydrological cycle in the Mediterranean and at a better quantification and forecast of high-impact weather events in numerical weather prediction models. The first Special Observation Period (SOP1) took place in September/October 2012. During this period, two aerosol Raman lidars were deployed on Menorca Island (Spain): one Water-vapor and Aerosol Raman LIdar (WALI), operated by LSCE/CEA (Laboratoire des Sciences du Climat et de l'Environnement/Commissariat à l'Energie Atomique), and one aerosol Raman and dual-polarization lidar (R-Man510), developed and commercialized by the LEOSPHERE company. Both lidars ran continuously during the campaign and provided information on aerosol and cloud optical properties under various atmospheric conditions (maritime background aerosols, dust events, cirrus clouds...). We will present here the results of intercomparisons between the R-Man510 and WALI aerosol lidar systems and collocated sunphotometer measurements. Limitations and uncertainties on the retrieval of extinction coefficients, depolarization ratios, aerosol optical depths and the detection of atmospheric structures (planetary boundary layer height, aerosol/cloud layers) will be discussed according to atmospheric conditions. The results will also be compared with theoretical uncertainties assessed with a direct/inverse model of lidar profiles.

  3. Animal reintroductions: an innovative assessment of survival

    USGS Publications Warehouse

    Muths, Erin L.; Bailey, Larissa L.; Watry, Mary Kay

    2014-01-01

    Quantitative evaluations of reintroductions are infrequent and assessments of milestones reached before a project is completed, or abandoned due to lack of funding, are rare. However, such assessments, which are promoted in adaptive management frameworks, are critical. Quantification can provide defensible estimates of biological success, such as the number of survivors from a released cohort, with associated cost per animal. It is unlikely that the global issues of endangered wildlife and population declines will abate; therefore, assurance colonies and reintroductions are likely to become more common. If such endeavors are to be successful biologically or achieve adequate funding, implementation must be more rigorous and accountable. We use a novel application of a multistate, robust design capture-recapture model to estimate survival of reintroduced tadpoles through metamorphosis (i.e., the number of individuals emerging from the pond) and thereby provide a quantitative measure of effort and success for an "in progress" reintroduction of toads. Our data also suggest that tadpoles released at later developmental stages have an increased probability of survival and that eggs laid in the wild hatched at higher rates than eggs laid by captive toads. We illustrate how an interim assessment can identify problems, highlight successes, and provide information for use in adjusting the effort or implementing a Decision-Theoretic adaptive management strategy.

  4. Real-time polymerase chain reaction-based approach for quantification of the pat gene in the T25 Zea mays event.

    PubMed

    Weighardt, Florian; Barbati, Cristina; Paoletti, Claudia; Querci, Maddalena; Kay, Simon; De Beuckeleer, Marc; Van den Eede, Guy

    2004-01-01

    In Europe, a growing interest in reliable techniques for the quantification of genetically modified component(s) of food matrices is arising from the need to comply with the European legislative framework on novel food products. Real-time polymerase chain reaction (PCR) is currently the most powerful technique for the quantification of specific nucleic acid sequences. Several real-time PCR methodologies based on different molecular principles have been developed for this purpose. The most frequently used approach in the field of genetically modified organism (GMO) quantification in food or feed samples is based on the 5'-3'-exonuclease activity of Taq DNA polymerase on specific degradation probes (TaqMan principle). A novel approach was developed for the establishment of a TaqMan quantification system assessing GMO contents around the 1% threshold stipulated under European Union (EU) legislation for the labeling of food products. The Zea mays T25 elite event was chosen as a model for the development of the novel GMO quantification approach. The most innovative aspect of the system is the use of sequences cloned in plasmids as reference standards. In the field of GMO quantification, plasmids are an easy-to-use, cheap, and reliable alternative to Certified Reference Materials (CRMs), which are only available for a few of the GMOs authorized in Europe, have a relatively high production cost, and require further processing to be suitable for analysis. Strengths and weaknesses of the use of novel plasmid-based standards are addressed in detail. In addition, the quantification system was designed to avoid the use of a reference gene (e.g., a single-copy, species-specific gene) as normalizer, i.e., to perform a GMO quantification based on an absolute instead of a relative measurement. In fact, experimental evidence shows that the use of reference genes adds variability to the measurement system because a second independent real-time PCR-based measurement must be performed. Moreover, for some reference genes, insufficient information on copy number within and among genomes of different lines is available, making adequate quantification difficult. Once developed, the method was subsequently validated according to IUPAC and ISO 5725 guidelines. Thirteen laboratories from 8 EU countries participated in the trial. Eleven laboratories provided results complying with the predefined study requirements. Repeatability (RSDr) values ranged from 8.7 to 15.9%, with a mean value of 12%. Reproducibility (RSDR) values ranged from 16.3 to 25.5%, with a mean value of 21%. Following Codex Alimentarius Committee guidelines, both the limits of detection and quantitation were determined to be <0.1%.
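
    A typical absolute-quantification calculation with plasmid standards builds a Ct-versus-log(copy number) standard curve and interpolates unknowns from it; the validated method's exact calculations are not given in the abstract, and the dilution-series values below are hypothetical.

      import numpy as np

      # Hypothetical plasmid dilution series: copy numbers and measured Ct values.
      copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
      ct     = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

      # Standard curve: Ct = slope * log10(copies) + intercept.
      slope, intercept = np.polyfit(np.log10(copies), ct, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0      # amplification efficiency

      def copies_from_ct(sample_ct):
          """Interpolate the absolute copy number of the target sequence from its Ct."""
          return 10 ** ((sample_ct - intercept) / slope)

      print(f"efficiency = {efficiency:.2%}")
      print(f"sample at Ct 27.5 -> {copies_from_ct(27.5):.0f} copies")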

  5. A Holistic Theoretical Approach to Intellectual Disability: Going Beyond the Four Current Perspectives.

    PubMed

    Schalock, Robert L; Luckasson, Ruth; Tassé, Marc J; Verdugo, Miguel Angel

    2018-04-01

    This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic theoretical framework. Practices consistent with the framework are described, and examples are provided of how multiple stakeholders can apply the framework. The article concludes with a discussion of the advantages and implications of a holistic theoretical approach to ID.

  6. CONDUCTING-POLYMER NANOWIRE IMMUNOSENSOR ARRAYS FOR MICROBIAL PATHOGENS

    EPA Science Inventory

    The lack of methods for routine rapid and sensitive detection and quantification of specific pathogens has limited the amount of information available on their occurrence in drinking water and other environmental samples. The nanowire biosensor arrays developed in this study w...

  7. Swapping Settings: Researching Information Literacy in Workplace and in Educational Contexts

    ERIC Educational Resources Information Center

    Lundh, Anna Hampson; Limberg, Louise; Lloyd, Annemaree

    2013-01-01

    Introduction: Information literacy research is characterised by a multitude of interests, research approaches and theoretical starting-points. Challenges lie in the relevance of research to professional fields where information literacy is a concern, and the need to build a strong theoretical base for the research area. We aim to lay a foundation…

  8. The Theoretical Principles of the Organization of Information Systems.

    ERIC Educational Resources Information Center

    Kulikowski, Juliusz Lech

    A survey of the theoretical problems connected with the organization and design of systems for processing and transmitting information is presented in this article. It gives a definition of Information Systems (IS) and classifies them from various points of view. It discusses briefly the most important aspects of the organization of IS, such as…

  9. Perfusion quantification in contrast-enhanced ultrasound (CEUS)--ready for research projects and routine clinical use.

    PubMed

    Tranquart, F; Mercier, L; Frinking, P; Gaud, E; Arditi, M

    2012-07-01

    With contrast-enhanced ultrasound (CEUS) now established as a valuable imaging modality for many applications, a more specific demand has recently emerged for quantifying perfusion and using measured parameters as objective indicators for various disease states. However, CEUS perfusion quantification remains challenging and is not well integrated in daily clinical practice. The development of VueBox™ alleviates existing limitations and enables quantification in a standardized way. VueBox™ operates as an off-line software application, after dynamic contrast-enhanced ultrasound (DCE-US) is performed. It enables linearization of DICOM clips, assessment of perfusion using patented curve-fitting models, and generation of parametric images by synthesizing perfusion information at the pixel level using color coding. VueBox™ is compatible with most of the available ultrasound platforms (nonlinear contrast-enabled), has the ability to process both bolus and disruption-replenishment kinetics loops, allows analysis results and their context to be saved, and generates analysis reports automatically. Specific features have been added to VueBox™, such as fully automatic in-plane motion compensation and an easy-to-use clip editor. Processing time has been reduced as a result of parallel programming optimized for multi-core processors. A long list of perfusion parameters is available for each of the two administration modes to address all possible demands currently reported in the literature for diagnosis or treatment monitoring. In conclusion, VueBox™ is a valid and robust quantification tool to be used for standardizing perfusion quantification and to improve the reproducibility of results across centers. © Georg Thieme Verlag KG Stuttgart · New York.
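
    VueBox™'s curve-fitting models are patented and not detailed in the abstract; a widely used generic choice for bolus kinetics is a log-normal time-intensity model, sketched below on synthetic linearized data to extract the area under the curve, time to peak and baseline. Function names and parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def lognormal_bolus(t, auc, mu, sigma, baseline):
          """Log-normal bolus time-intensity model (linearized echo-power units)."""
          out = np.full_like(t, baseline, dtype=float)
          pos = t > 0
          out[pos] += (auc / (t[pos] * sigma * np.sqrt(2 * np.pi))
                       * np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma ** 2)))
          return out

      # Synthetic linearized time-intensity curve with noise.
      t = np.linspace(0.1, 60, 300)
      true = lognormal_bolus(t, auc=400.0, mu=2.6, sigma=0.5, baseline=2.0)
      rng = np.random.default_rng(1)
      signal = true + rng.normal(0, 0.5, t.size)

      params, _ = curve_fit(lognormal_bolus, t, signal, p0=[300.0, 2.0, 0.6, 1.0])
      auc, mu, sigma, baseline = params
      peak_time = np.exp(mu - sigma ** 2)          # mode of the log-normal
      print(f"AUC={auc:.0f}, time-to-peak={peak_time:.1f} s, baseline={baseline:.2f}")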

  10. Study on the influence of X-ray tube spectral distribution on the analysis of bulk samples and thin films: Fundamental parameters method and theoretical coefficient algorithms

    NASA Astrophysics Data System (ADS)

    Sitko, Rafał

    2008-11-01

    Knowledge of the X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of the X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, the Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e., the Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, paper presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981), and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).

  11. A High Order Element Based Method for the Simulation of Velocity Damping in the Hyporheic Zone of a High Mountain River

    NASA Astrophysics Data System (ADS)

    Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo

    2016-04-01

    Groundwater-surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions remain unsolved within this topic, and almost all the research done in the past concerns transport phenomena and has little to do with understanding the dynamics of the flow patterns of the above-mentioned interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous medium under the bed of a high mountain river. Understanding this process is a key feature in order to characterize and quantify the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and goes into a porous medium, using the Burgers equation and the 1-D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem, and the results obtained when varying the different parameters of the equations, show that the understanding of the process is not yet complete. To begin with, a Spectral Multi-Domain Penalty Method (SMPM) was proposed for quantifying the velocity damping by solving the Navier-Stokes equations in 1D. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distribution. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia) with an Acoustic Doppler Velocimeter (ADV). These profiles were filtered, treated and set up to feed the SMPM that solves the Navier-Stokes equations for the theoretical case. In addition, the velocity fluctuations along the river bed were calculated according to the mesh that was proposed to solve the numerical problem. This mesh required more refinement near the boundaries in order to resolve all the turbulent flow scales near the boundary. As a result, the velocity damping inside the porous medium with real velocity pulses behaves similarly to the damping of the theoretical signals. However, there is still doubt about the use of the Navier-Stokes equations with the assumptions of incompressibility and a hydrostatic approximation for the pressure distribution. Furthermore, the nature of the model's boundary conditions remains a subject of discussion. To sum up, the quantification of the interactions of groundwater and surface water has to be studied using numerical models in order to observe the behavior of the flow. Our research suggests that the velocity damping of water entering the porous medium goes beyond the approximations used for the Navier-Stokes equations, and that this is a pressure-driven flow for which the hydrostatic simplification does not hold.

  12. Implications of Measurement Assay Type in Design of HIV Experiments.

    PubMed

    Cannon, LaMont; Jagarapu, Aditya; Vargas-Garcia, Cesar A; Piovoso, Michael J; Zurakowski, Ryan

    2017-12-01

    Time series measurements of circular viral episome (2-LTR) concentrations enable indirect quantification of persistent low-level Human Immunodeficiency Virus (HIV) replication in patients on Integrase-Inhibitor intensified Combined Antiretroviral Therapy (cART). In order to determine the magnitude of these low-level infection events, blood has to be drawn from patients at a frequency and volume that are strictly regulated by the Institutional Review Board (IRB). Once the blood is drawn, the 2-LTR concentration is determined by quantifying the amount of HIV DNA present in the sample via a PCR (Polymerase Chain Reaction) assay. Real-time quantitative Polymerase Chain Reaction (qPCR) is a widely used method of performing PCR; however, a newer droplet digital Polymerase Chain Reaction (ddPCR) method has been shown to provide more accurate quantification of DNA. Using a validated model of HIV viral replication, this paper demonstrates the importance of considering DNA quantification assay type when optimizing experiment design conditions. Experiments are optimized using a Genetic Algorithm (GA) to locate a family of suboptimal sample schedules which yield the highest fitness. Fitness is defined as the expected information gained in the experiment, measured by the Kullback-Leibler Divergence (KLD) between the prior and posterior distributions of the model parameters. We compare the information content of the optimized schedules to uniform schedules as well as two clinical schedules implemented by researchers at UCSF and the University of Melbourne. This work shows that there is a significantly greater information gain in experiments using a ddPCR assay vs. a qPCR assay, and that certain experiment design considerations should be taken into account when using either assay.
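
    The information-gain criterion named above is the KLD between prior and posterior parameter distributions; as a stand-in for that measure, the sketch below evaluates the closed-form KLD between two multivariate Gaussians for a toy prior/posterior pair. The distributions and numbers are illustrative assumptions.

      import numpy as np

      def kld_gaussian(mu0, cov0, mu1, cov1):
          """KL divergence D( N(mu1, cov1) || N(mu0, cov0) ), e.g. posterior vs. prior."""
          k = len(mu0)
          cov0_inv = np.linalg.inv(cov0)
          diff = np.asarray(mu0) - np.asarray(mu1)
          return 0.5 * (np.trace(cov0_inv @ cov1)
                        + diff @ cov0_inv @ diff
                        - k
                        + np.log(np.linalg.det(cov0) / np.linalg.det(cov1)))

      # Toy example: a tighter posterior than prior yields a positive information gain.
      prior_mu, prior_cov = np.zeros(2), np.eye(2)
      post_mu, post_cov = np.array([0.4, -0.2]), 0.25 * np.eye(2)
      print(f"expected information gain ~ {kld_gaussian(prior_mu, prior_cov, post_mu, post_cov):.3f} nats")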

  13. cFinder: definition and quantification of multiple haplotypes in a mixed sample.

    PubMed

    Niklas, Norbert; Hafenscher, Julia; Barna, Agnes; Wiesinger, Karin; Pröll, Johannes; Dreiseitl, Stephan; Preuner-Stix, Sandra; Valent, Peter; Lion, Thomas; Gabriel, Christian

    2015-09-07

    Next-generation sequencing allows for determining the genetic composition of a mixed sample. For instance, when performing resistance testing for BCR-ABL1, it is necessary to identify clones and define compound mutations; together with an exact quantification, this may complement diagnosis and therapy decisions with additional information. Moreover, that applies not only to oncological issues but also to the determination of viral, bacterial or fungal infections. Retrieving multiple haplotypes (more than two) and proportion information from such data with conventional software is difficult, cumbersome and demands multiple manual steps. Therefore, we developed a tool called cFinder that is capable of automatic detection of haplotypes and their accurate quantification within one sample. BCR-ABL1 samples containing multiple clones were used for testing, and our cFinder could identify all previously found clones together with their abundance and even refine some results. Additionally, reads were simulated using GemSIM with multiple haplotypes; the detection was very close to linear (R² = 0.96). Our aim is not to deduce haploblocks via statistics, but to characterize one sample's composition precisely. As a result, cFinder reports the connections of variants (haplotypes) with their read count and relative occurrence (percentage). Download is available at http://sourceforge.net/projects/cfinder/. Our cFinder is implemented in an efficient algorithm that can be run on a low-performance desktop computer. Furthermore, it considers paired-end information (if available) and is generally open to any current next-generation sequencing technology and alignment strategy. To our knowledge, this is the first software that enables researchers without extensive bioinformatic support to determine multiple haplotypes and how they constitute a sample.

  14. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  15. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.L., E-mail: Donald.L.Smith@anl.gov

    2015-01-15

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  16. Recommendations for adaptation and validation of commercial kits for biomarker quantification in drug development.

    PubMed

    Khan, Masood U; Bowsher, Ronald R; Cameron, Mark; Devanarayan, Viswanath; Keller, Steve; King, Lindsay; Lee, Jean; Morimoto, Alyssa; Rhyne, Paul; Stephen, Laurie; Wu, Yuling; Wyant, Timothy; Lachno, D Richard

    2015-01-01

    Increasingly, commercial immunoassay kits are used to support drug discovery and development. Longitudinally consistent kit performance is crucial, but the degree to which kits and reagents are characterized by manufacturers is not standardized, nor are the approaches by users to adapt them and evaluate their performance through validation prior to use. These factors can negatively impact data quality. This paper offers a systematic approach to assessment, method adaptation and validation of commercial immunoassay kits for quantification of biomarkers in drug development, expanding upon previous publications and guidance. These recommendations aim to standardize and harmonize user practices, contributing to reliable biomarker data from commercial immunoassays, thus, enabling properly informed decisions during drug development.

  17. Measuring the Risk of Shortfalls in Air Force Capabilities

    DTIC Science & Technology

    2004-03-01

    ... quantifying risk and simplifying that quantification in a risk measure is to order different risks and, ultimately, to choose between them. ... the analytic goal of understanding and quantifying risk. The growth in information technology, and the amount of data collected on, for example ...

  18. From ecological records to big data: the invention of global biodiversity.

    PubMed

    Devictor, Vincent; Bensaude-Vincent, Bernadette

    2016-12-01

    This paper is a critical assessment of the epistemological impact of the systematic quantification of nature, with the accumulation of big datasets, on the practice and orientation of ecological science. We examine the contents of big databases and argue that they are not just accumulated information; records are translated into digital data in a process that changes their meanings. In order to better understand what is at stake in the 'datafication' process, we explore the context for the emergence and quantification of biodiversity in the 1980s, along with the concept of the global environment. In tracing the origin and development of the global biodiversity information facility (GBIF), we describe big data biodiversity projects as a techno-political construction dedicated to monitoring a new object: global diversity. We argue that biodiversity big data became a powerful driver behind the invention of the concept of the global environment, and a way to embed ecological science in the political agenda.

  19. Quantitative real-time single particle analysis of virions.

    PubMed

    Heider, Susanne; Metzner, Christoph

    2014-08-01

    Providing information about single virus particles has for a long time been mainly the domain of electron microscopy. More recently, technologies have been developed, or adapted from other fields such as nanotechnology, to allow for the real-time quantification of physical virion particles, while supplying additional information such as particle diameter concomitantly. These technologies have progressed to the stage of commercialization, increasing the speed of viral titer measurements from hours to minutes and thus providing a significant advantage for many aspects of virology research and biotechnology applications. Additional advantages lie in the broad spectrum of virus species that may be measured and the possibility to determine the ratio of infectious to total particles. A series of disadvantages remains associated with these technologies, such as a low specificity for viral particles. In this review we will discuss these technologies by comparing four systems for real-time single virus particle analysis and quantification. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Modeling of information diffusion in Twitter-like social networks under information overload.

    PubMed

    Li, Pei; Li, Wei; Wang, Hui; Zhang, Xin

    2014-01-01

    Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. View scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type user is adopted to characterize the information diffusion efficiency, which is calculated theoretically. To verify the accuracy of theoretical analysis results, we conduct simulations and provide the simulation results, which are consistent with the theoretical analysis results perfectly. These results are of importance to understand the diffusion dynamics in social networks, and this analysis framework can be extended to consider more realistic situations.

  1. Modeling of Information Diffusion in Twitter-Like Social Networks under Information Overload

    PubMed Central

    Li, Wei

    2014-01-01

    Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. View scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type user is adopted to characterize the information diffusion efficiency, which is calculated theoretically. To verify the accuracy of theoretical analysis results, we conduct simulations and provide the simulation results, which are consistent with the theoretical analysis results perfectly. These results are of importance to understand the diffusion dynamics in social networks, and this analysis framework can be extended to consider more realistic situations. PMID:24795541

  2. The unique contributions of perceiver and target characteristics in person perception.

    PubMed

    Hehman, Eric; Sutherland, Clare A M; Flake, Jessica K; Slepian, Michael L

    2017-10-01

    Models of person perception have long asserted that our impressions of others are guided by characteristics of both the target and perceiver. However, research has not yet quantified to what extent perceivers and targets contribute to different impressions. This quantification is theoretically critical, as it addresses how much an impression arises from "our minds" versus "others' faces." Here, we apply cross-classified random effects models to address this fundamental question in social cognition, using approximately 700,000 ratings of faces. With this approach, we demonstrate that (a) different trait impressions have unique causal processes, meaning that some impressions are largely informed by perceiver-level characteristics whereas others are driven more by physical target-level characteristics; (b) modeling of perceiver- and target-variance in impressions informs fundamental models of social perception; (c) Perceiver × Target interactions explain a substantial portion of variance in impressions; (d) greater emotional intensity in stimuli decreases the influence of the perceiver; and (e) more variable, naturalistic stimuli increases variation across perceivers. Important overarching patterns emerged. Broadly, traits and dimensions representing inferences of character (e.g., dominance) are driven more by perceiver characteristics than those representing appearance-based appraisals (e.g., youthful-attractiveness). Moreover, inferences made of more ambiguous traits (e.g., creative) or displays (e.g., faces with less extreme emotions, less-controlled stimuli) are similarly driven more by perceiver than target characteristics. Together, results highlight the large role that perceiver and target variability play in trait impressions, and develop a new topography of trait impressions that considers the source of the impression. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Simultaneous quantification of endogenous and exogenous plasma glucose by isotope dilution LC-MS/MS with indirect MRM of the derivative tag.

    PubMed

    Yu, Lingling; Wen, Chao; Li, Xing; Fang, Shiqi; Yang, Lichuan; Wang, Tony; Hu, Kaifeng

    2018-03-01

    Quantification of endogenous and exogenous plasma glucose can help more comprehensively evaluate glucose metabolic status. A ratio-based approach using isotope dilution liquid chromatography tandem mass spectrometry (ID LC-MS/MS) with indirect multiple reaction monitoring (MRM) of the derivative tag was developed to simultaneously quantify endo-/exogenous plasma glucose. Using diluted D-[¹³C₆]glucose as the tracer of exogenous glucose, the ¹²C₆/¹³C₆ glucoses were first derivatized and then data were acquired in MRM mode. The metabolism of exogenous glucose can be tracked and the concentration ratio of endo-/exogenous glucose can be measured by calculating the endo-/exogenous glucose concentrations from the peak area ratio of specific daughter ions. The joint application of selective derivatization and MRM analysis not only improves the sensitivity but also minimizes interference from the plasma background, which ensures accuracy and reproducibility. Good agreement between the theoretical and calculated concentration ratios was obtained, with a linear correlation coefficient (R) of 0.9969 in the range of D-glucose from 0.5 to 20.0 mM, which covers the healthy and diabetic physiological scenarios. Satisfactory reproducibility was obtained by evaluation of the intra- and inter-day precisions, with relative standard deviations (RSDs) less than 5.16%, and relative recoveries of 85.96 to 95.92% were obtained at low, medium, and high concentrations, respectively. The method was successfully applied to the simultaneous determination of the endo-/exogenous glucose concentrations in plasma of non-diabetic and type II diabetic cynomolgus monkeys. Graphical Abstract: The scheme of the proposed ratio-based approach using isotope dilution LC-MS/MS with indirect MRM of the derivative tag for simultaneous quantification of endogenous and exogenous plasma glucose.
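
    To illustrate the ratio-based idea, the endogenous ¹²C₆ glucose concentration can be obtained from the ¹²C₆/¹³C₆ MRM peak-area ratio and the known spike concentration of the ¹³C₆ tracer; the function, the response-factor handling and the numbers below are hypothetical, not the paper's calibration.

      def endo_exo_glucose(area_12c, area_13c, tracer_conc_mM, response_factor=1.0):
          """Estimate endogenous (12C6) glucose from the spiked 13C6 tracer.

          area_12c, area_13c : MRM peak areas of the derivatized 12C6/13C6 daughter ions.
          tracer_conc_mM     : known concentration of the D-[13C6]glucose spike.
          response_factor    : correction for unequal detector response (assumed ~1).
          Returns (endogenous_mM, endo_to_exo_ratio).
          """
          ratio = (area_12c / area_13c) * response_factor
          endogenous = ratio * tracer_conc_mM
          return endogenous, ratio

      endo, ratio = endo_exo_glucose(area_12c=8.4e5, area_13c=1.6e5, tracer_conc_mM=1.0)
      print(f"endogenous glucose ~ {endo:.2f} mM (endo/exo ratio {ratio:.2f})")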

  4. From information theory to quantitative description of steric effects.

    PubMed

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply information theory descriptors for investigating the electronic structure theory of various systems. In the present study, information theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking the experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and the theoretical values of steric energies calculated from information theory functionals. Comparing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for this purpose. The usefulness of considering the contributions of functional groups' steric energies and geometries, on the one hand, and of dissecting the effects of both global and local information measures simultaneously, on the other, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory toward helping theoreticians and experimentalists interpret different problems in real systems.
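
    A minimal numerical version of the Shannon information entropy S = -integral(rho * ln rho) dx, evaluated here for a normalized 1D Gaussian model density on a grid rather than a real molecular density, and checked against the analytic Gaussian value. The grid and width are arbitrary assumptions.

      import numpy as np

      def shannon_entropy(density, dx):
          """Shannon information entropy S = -integral(rho * ln rho) dx for a normalized density."""
          rho = np.clip(density, 1e-300, None)
          return -np.sum(rho * np.log(rho)) * dx

      # Model 1D "density": a normalized Gaussian of width sigma.
      x = np.linspace(-10, 10, 2001)
      dx = x[1] - x[0]
      sigma = 1.5
      rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

      # Analytic value for a Gaussian: 0.5 * ln(2*pi*e*sigma^2).
      print(shannon_entropy(rho, dx), 0.5 * np.log(2 * np.pi * np.e * sigma**2))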

  5. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  6. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether the use of mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between both measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared reciprocally, as well as with regard to a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future measurement performed in routine use will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three out of four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to the formulation of guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
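
    As a hedged illustration of the simplest listed transfer method, slope/bias correction, the sketch below applies a master model to slave-instrument spectra of a few standardization samples and regresses its predictions against reference values; master_model is a placeholder with a scikit-learn-style predict method, not the authors' PLS model.

    ```python
    import numpy as np

    def fit_sbc(master_model, slave_std_spectra, reference_values):
        """Fit slope/bias correction from a handful of standardization samples."""
        y_hat = master_model.predict(slave_std_spectra).ravel()
        slope, bias = np.polyfit(y_hat, reference_values, 1)
        return slope, bias

    def apply_sbc(master_model, slave_spectra, slope, bias):
        """Correct master-model predictions made on slave-instrument spectra."""
        return slope * master_model.predict(slave_spectra).ravel() + bias
    ```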

  7. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    NASA Astrophysics Data System (ADS)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a global warming potential 25 times higher than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and the one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We build large leak datasets for training and evaluation purposes by collecting about 20 videos (397,400 frames in total) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, reaches an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrices and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair programs.
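
    A minimal PyTorch sketch of the late-fusion two-stream idea described above (one stream on a single plume frame, one on stacked optical-flow fields, class scores averaged) is given below; layer sizes, channel counts, and the eight-class output are assumptions for illustration, not the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    class Stream(nn.Module):
        """A small ConvNet producing class scores from one input modality."""
        def __init__(self, in_channels, n_classes=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    class TwoStream(nn.Module):
        """Late fusion: average the class scores of the spatial and temporal streams."""
        def __init__(self, flow_channels=20, n_classes=8):
            super().__init__()
            self.spatial = Stream(1, n_classes)               # single plume frame
            self.temporal = Stream(flow_channels, n_classes)  # stacked optical-flow fields

        def forward(self, frame, flow):
            return (self.spatial(frame) + self.temporal(flow)) / 2.0

    model = TwoStream()
    scores = model(torch.randn(4, 1, 128, 128), torch.randn(4, 20, 128, 128))
    print(scores.shape)   # (4, 8) leak-size class scores
    ```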

  8. Vitamin D in foods: an evolution of knowledge (chapter 60)

    USDA-ARS?s Scientific Manuscript database

    Accurate data for vitamin D in foods are essential to support epidemiological and clinical studies seeking to identify associations between total vitamin D “exposure” and health outcomes that require quantification of dietary intake, and also to inform health professionals about wise food choices fo...

  9. A STATISTICAL MODELING METHODOLOGY FOR THE DETECTION, QUANTIFICATION, AND PREDICTION OF ECOLOGICAL THRESHOLDS

    EPA Science Inventory

    This study will provide a general methodology for integrating threshold information from multiple species ecological metrics, allow for prediction of changes of alternative stable states, and provide a risk assessment tool that can be applied to adaptive management. The integr...

  10. Coastal wetland support of Great Lakes fisheries: progress from concept to quantification.

    EPA Science Inventory

    Fishery support is recognized as a valuable ecosystem service provided by aquatic systems but is harder to quantify than to describe conceptually. In this paper, we intersect data on fish inhabiting Great Lakes coastal wetlands with information on commercial and recreational har...

  11. miR-MaGiC improves quantification accuracy for small RNA-seq.

    PubMed

    Russell, Pamela H; Vestal, Brian; Shi, Wen; Rudra, Pratyaydipta D; Dowell, Robin; Radcliffe, Richard; Saba, Laura; Kechris, Katerina

    2018-05-15

    Many tools have been developed to profile microRNA (miRNA) expression from small RNA-seq data. These tools must contend with several issues: the small size of miRNAs, the small number of unique miRNAs, the fact that similar miRNAs can be transcribed from multiple loci, and the presence of miRNA isoforms known as isomiRs. Methods failing to address these issues can return misleading information. We propose a novel quantification method designed to address these concerns. We present miR-MaGiC, a novel miRNA quantification method, implemented as a cross-platform tool in Java. miR-MaGiC performs stringent mapping to a core region of each miRNA and defines a meaningful set of target miRNA sequences by collapsing the miRNA space to "functional groups". We hypothesize that these two features, mapping stringency and collapsing, provide more accurate quantification at a more meaningful unit (i.e., the miRNA family). We test miR-MaGiC and several published methods on 210 small RNA-seq libraries, evaluating each method's ability to accurately reflect global miRNA expression profiles. We define accuracy as total counts close to the total number of input reads originating from miRNAs. We find that miR-MaGiC, which incorporates both stringency and collapsing, provides the most accurate counts.
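
    miR-MaGiC itself is implemented in Java; the short Python sketch below only illustrates the "collapsing" idea described above, summing per-miRNA counts into functional groups. The group map and counts are invented placeholders.

    ```python
    from collections import defaultdict

    def collapse_counts(mirna_counts, group_of):
        """Sum per-miRNA counts into their functional group (e.g. miRNA family)."""
        group_counts = defaultdict(int)
        for mirna, count in mirna_counts.items():
            group_counts[group_of.get(mirna, mirna)] += count
        return dict(group_counts)

    counts = {"mir-26a-1": 120, "mir-26a-2": 35, "mir-26b": 80}
    groups = {"mir-26a-1": "mir-26", "mir-26a-2": "mir-26", "mir-26b": "mir-26"}
    print(collapse_counts(counts, groups))   # {'mir-26': 235}
    ```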

  12. Qualitative and Quantitative Detection of Botulinum Neurotoxins from Complex Matrices: Results of the First International Proficiency Test

    PubMed Central

    Worbs, Sylvia; Fiebig, Uwe; Zeleny, Reinhard; Schimmel, Heinz; Rummel, Andreas; Luginbühl, Werner; Dorner, Brigitte G.

    2015-01-01

    In the framework of the EU project EQuATox, a first international proficiency test (PT) on the detection and quantification of botulinum neurotoxins (BoNT) was conducted. Sample materials included BoNT serotypes A, B and E spiked into buffer, milk, meat extract and serum. Different methods were applied by the participants combining different principles of detection, identification and quantification. Based on qualitative assays, 95% of all results reported were correct. Successful strategies for BoNT detection were based on a combination of complementary immunological, MS-based and functional methods or on suitable functional in vivo/in vitro approaches (mouse bioassay, hemidiaphragm assay and Endopep-MS assay). Quantification of BoNT/A, BoNT/B and BoNT/E was performed by 48% of participating laboratories. It turned out that precise quantification of BoNT was difficult, resulting in a substantial scatter of quantitative data. This was especially true for results obtained by the mouse bioassay which is currently considered as “gold standard” for BoNT detection. The results clearly demonstrate the urgent need for certified BoNT reference materials and the development of methods replacing animal testing. In this context, the BoNT PT provided the valuable information that both the Endopep-MS assay and the hemidiaphragm assay delivered quantitative results superior to the mouse bioassay. PMID:26703724

  13. Methods to Detect Nitric Oxide and its Metabolites in Biological Samples

    PubMed Central

    Bryan, Nathan S.; Grisham, Matthew B.

    2007-01-01

    Nitric oxide (NO) methodology is a complex and often confusing science and the focus of many debates and discussions concerning NO biochemistry. NO is involved in many physiological processes including regulation of blood pressure, immune response and neural communication. Therefore, its accurate detection and quantification are critical to understanding health and disease. Due to the extremely short physiological half-life of this gaseous free radical, alternative strategies for the detection of reaction products of NO biochemistry have been developed. The quantification of NO metabolites in biological samples provides valuable information with regard to in vivo NO production, bioavailability and metabolism. Simply sampling a single compartment such as blood or plasma may not always provide an accurate assessment of whole-body NO status, particularly in tissues. Therefore, extrapolation of plasma or blood NO status to specific tissues of interest is no longer a valid approach. As a result, methods continue to be developed and validated which allow the detection and quantification of NO and NO-related products/metabolites in multiple compartments of experimental animals in vivo. This review is not an exhaustive or comprehensive discussion of all methods available for the detection of NO but rather a description of the most commonly used and practical methods that allow accurate and sensitive quantification of NO products/metabolites in multiple biological matrices under normal physiological conditions. PMID:17664129

  14. How can research on anthropogenic greenhouse gas flux quantification be better aligned with US climate change policy needs?

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.

    2014-12-01

    Scientific research on quantification of anthropogenic greenhouse gas emissions at national and sub-national scales within the US has advanced considerably in the last decade. Large investment has been made in building systems capable of observing greenhouse gases in the atmosphere at multiple scales, measuring direct anthropogenic fluxes near sources and modeling the linkages between fluxes and observed concentrations. Much of this research has been focused on improving the "verification" component of "monitoring, reporting, and verification" and indeed has achieved successes in recent years. However, there are opportunities for ongoing scientific research to contribute critical new information to policymakers. In order to realize this contribution, additional but complementary research foci must be emphasized. Examples include more focus on anthropogenic emission drivers, quantification at scales relevant to human decision-making, and exploration of cost versus uncertainty in observing/modeling systems. I will review what I think are the opportunities to better align scientific research with current and emerging US climate change policymaking. I will then explore a few examples of where expansion or alteration of greenhouse gas flux quantification research could better align with current and emerging US climate change policymaking, such as that embodied in the proposed EPA rule aimed at reducing emissions from US power plants, California's ongoing emissions reduction policymaking, and aspirational emission reduction efforts in multiple US cities.

  15. Verification of Small Hole Theory for Application to Wire Chafing Resulting in Shield Faults

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2011-01-01

    Our work is focused upon developing methods for wire chafe fault detection through the use of reflectometry to assess shield integrity. When shielded electrical aircraft wiring first begins to chafe, the resulting evidence is typically small hole(s) in the shielding. We focus on developing the algorithms and signal processing necessary to detect these small holes before damage to the inner conductors occurs. Our approach has been to develop a first-principles physics model combined with probabilistic inference, and to verify this model with laboratory experiments as well as through simulation. Previously we have presented the electromagnetic small-hole theory and how it might be applied to coaxial cable. In this presentation, we present our efforts to verify this theoretical approach with high-fidelity electromagnetic simulations (COMSOL). Laboratory observations are used to parameterize the computationally efficient theoretical model with probabilistic inference, resulting in quantification of hole size and location. Our efforts in characterizing faults in coaxial cable are subsequently leading to fault detection in shielded twisted pair as well as analysis of intermittently faulty connectors using similar techniques.

  16. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods.

    PubMed

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C W; Lipiński, Wojciech; Bischof, John C

    2016-07-21

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.
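
    A back-of-envelope sketch of the quantity being compared: the heat generated per particle is the absorption cross-section times the incident irradiance, Q = C_abs * I, scaled by the particle number density for a suspension. All numbers below are order-of-magnitude placeholders, not values from the study.

    ```python
    # Assumed, illustrative values only
    C_abs = 7.0e-16          # m^2, absorption cross-section of one nanoparticle
    irradiance = 1.0e4       # W/m^2 (1 W/cm^2), incident laser irradiance
    n_per_mL = 1.0e11        # particles per mL of suspension

    q_particle = C_abs * irradiance          # W generated per particle
    q_volumetric = q_particle * n_per_mL     # W per mL of suspension
    print(q_particle, "W per particle;", q_volumetric, "W per mL")
    ```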

  17. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods

    NASA Astrophysics Data System (ADS)

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C. W.; Lipiński, Wojciech; Bischof, John C.

    2016-07-01

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.

  18. Support Net for Frontline Providers

    DTIC Science & Technology

    2016-03-01

    influencing members’ continuance intentions in professional virtual communities - a longitudinal study. Journal of Information Science, 33(4), 451-467... from a scientific and theoretically based manner. Results from this project provide critical prevalence information, theoretical development, and

  19. Computed tomographic-based quantification of emphysema and correlation to pulmonary function and mechanics.

    PubMed

    Washko, George R; Criner, Gerald J; Mohsenifar, Zab; Sciurba, Frank C; Sharafkhaneh, Amir; Make, Barry J; Hoffman, Eric A; Reilly, John J

    2008-06-01

    Computed tomography-based indices of emphysematous lung destruction may highlight differences in disease pathogenesis and further enable the classification of subjects with chronic obstructive pulmonary disease (COPD). While there are multiple techniques that can be utilized for such radiographic analysis, there is very little published information comparing the performance of these methods in a clinical case series. Our objective was to examine several quantitative and semi-quantitative methods for the assessment of the burden of emphysema apparent on computed tomographic scans and compare their ability to predict lung mechanics and function. Automated densitometric analysis was performed on 1094 computed tomographic scans collected upon enrollment into the National Emphysema Treatment Trial. Trained radiologists performed an additional visual grading of emphysema on high-resolution CT scans. Full pulmonary function test results were available for correlation, with a subset of subjects having additional measurements of lung static recoil. There was a wide range of emphysematous lung destruction apparent on the CT scans, and univariate correlations to measures of lung function were of modest strength. No single method of CT scan analysis clearly outperformed the rest of the group. Quantification of the burden of emphysematous lung destruction apparent on CT scan is a weak predictor of lung function and mechanics in severe COPD, with no uniformly superior method found to perform this analysis. The CT-based quantification of emphysema may augment pulmonary function testing in the characterization of COPD by providing complementary phenotypic information.
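
    As a concrete example of a densitometric index of the kind compared here, the sketch below computes the percentage of lung voxels below -950 HU (often called %LAA-950) on a synthetic volume; the threshold and toy data are illustrative, not the settings used in the trial analysis.

    ```python
    import numpy as np

    def percent_low_attenuation(hu_values, lung_mask, threshold=-950):
        """Percentage of lung voxels with attenuation below the threshold (in HU)."""
        lung = hu_values[lung_mask]
        return 100.0 * np.mean(lung < threshold)

    hu = np.random.default_rng(0).normal(-870, 60, size=(64, 64, 64))  # toy "CT volume"
    mask = np.ones_like(hu, dtype=bool)                                # toy lung mask
    print(f"%LAA-950 = {percent_low_attenuation(hu, mask):.1f}%")
    ```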

  20. Brain activity and cognition: a connection from thermodynamics and information theory.

    PubMed

    Collell, Guillem; Fauquet, Jordi

    2015-01-01

    The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point to our work is noticing that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight on the formal relationship between cognition and neural activity.

  1. Information-theoretical noninvasive damage detection in bridge structures

    NASA Astrophysics Data System (ADS)

    Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik

    2016-11-01

    Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings emerged. First, the time series data, which represent accelerations measured at the sensors, more closely follow a Laplace distribution than a normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts from the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after the bridge is damaged.
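
    A small Python sketch of the ingredients mentioned above: the maximum-likelihood Laplace scale and its differential entropy h = 1 + ln(2b), plus a generic histogram mutual-information estimate between two synthetic "sensor" series as a stand-in for the paper's parametric estimator.

    ```python
    import numpy as np

    def laplace_entropy(x):
        """Differential entropy of a Laplace fit: h = 1 + ln(2b), b = MLE scale."""
        b = np.mean(np.abs(x - np.median(x)))
        return 1.0 + np.log(2.0 * b)

    def histogram_mi(x, y, bins=32):
        """Plug-in mutual information from a 2D histogram (nats)."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    x = rng.laplace(0.0, 1.0, 10_000)              # "sensor 1" accelerations
    y = x + rng.laplace(0.0, 1.0, 10_000)          # "sensor 2" sharing part of the signal
    print(laplace_entropy(x), histogram_mi(x, y))
    ```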

  2. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005), which they analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we analyze two important kinds of quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and we consider the security of the scheme through simple examples.

  3. Targeted Feature Detection for Data-Dependent Shotgun Proteomics

    PubMed Central

    2017-01-01

    Label-free quantification of shotgun LC–MS/MS data is the prevailing approach in quantitative proteomics but remains computationally nontrivial. The central data analysis step is the detection of peptide-specific signal patterns, called features. Peptide quantification is facilitated by associating signal intensities in features with peptide sequences derived from MS2 spectra; however, missing values due to imperfect feature detection are a common problem. A feature detection approach that directly targets identified peptides (minimizing missing values) but also offers robustness against false-positive features (by assigning meaningful confidence scores) would thus be highly desirable. We developed a new feature detection algorithm within the OpenMS software framework, leveraging ideas and algorithms from the OpenSWATH toolset for DIA/SRM data analysis. Our software, FeatureFinderIdentification (“FFId”), implements a targeted approach to feature detection based on information from identified peptides. This information is encoded in an MS1 assay library, based on which ion chromatogram extraction and detection of feature candidates are carried out. Significantly, when analyzing data from experiments comprising multiple samples, our approach distinguishes between “internal” and “external” (inferred) peptide identifications (IDs) for each sample. On the basis of internal IDs, two sets of positive (true) and negative (decoy) feature candidates are defined. A support vector machine (SVM) classifier is then trained to discriminate between the sets and is subsequently applied to the “uncertain” feature candidates from external IDs, facilitating selection and confidence scoring of the best feature candidate for each peptide. This approach also enables our algorithm to estimate the false discovery rate (FDR) of the feature selection step. We validated FFId based on a public benchmark data set, comprising a yeast cell lysate spiked with protein standards that provide a known ground-truth. The algorithm reached almost complete (>99%) quantification coverage for the full set of peptides identified at 1% FDR (PSM level). Compared with other software solutions for label-free quantification, this is an outstanding result, which was achieved at competitive quantification accuracy and reproducibility across replicates. The FDR for the feature selection was estimated at a low 1.5% on average per sample (3% for features inferred from external peptide IDs). The FFId software is open-source and freely available as part of OpenMS (www.openms.org). PMID:28673088
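
    A hedged scikit-learn sketch of the candidate-scoring step described above: an SVM is trained on positive and decoy candidates derived from internal IDs and then scores the uncertain candidates from external IDs. The five-dimensional random feature vectors stand in for FFId's actual descriptors (e.g. peak shape, retention-time deviation).

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X_pos = rng.normal(+1.0, 1.0, size=(200, 5))   # candidates matching internal IDs
    X_neg = rng.normal(-1.0, 1.0, size=(200, 5))   # decoy candidates
    X_train = np.vstack([X_pos, X_neg])
    y_train = np.r_[np.ones(200), np.zeros(200)]

    clf = SVC(probability=True).fit(X_train, y_train)

    X_uncertain = rng.normal(0.0, 1.0, size=(10, 5))  # candidates from external IDs
    scores = clf.predict_proba(X_uncertain)[:, 1]     # best-scoring candidate kept per peptide
    print(scores.round(2))
    ```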

  4. Targeted Feature Detection for Data-Dependent Shotgun Proteomics.

    PubMed

    Weisser, Hendrik; Choudhary, Jyoti S

    2017-08-04

    Label-free quantification of shotgun LC-MS/MS data is the prevailing approach in quantitative proteomics but remains computationally nontrivial. The central data analysis step is the detection of peptide-specific signal patterns, called features. Peptide quantification is facilitated by associating signal intensities in features with peptide sequences derived from MS2 spectra; however, missing values due to imperfect feature detection are a common problem. A feature detection approach that directly targets identified peptides (minimizing missing values) but also offers robustness against false-positive features (by assigning meaningful confidence scores) would thus be highly desirable. We developed a new feature detection algorithm within the OpenMS software framework, leveraging ideas and algorithms from the OpenSWATH toolset for DIA/SRM data analysis. Our software, FeatureFinderIdentification ("FFId"), implements a targeted approach to feature detection based on information from identified peptides. This information is encoded in an MS1 assay library, based on which ion chromatogram extraction and detection of feature candidates are carried out. Significantly, when analyzing data from experiments comprising multiple samples, our approach distinguishes between "internal" and "external" (inferred) peptide identifications (IDs) for each sample. On the basis of internal IDs, two sets of positive (true) and negative (decoy) feature candidates are defined. A support vector machine (SVM) classifier is then trained to discriminate between the sets and is subsequently applied to the "uncertain" feature candidates from external IDs, facilitating selection and confidence scoring of the best feature candidate for each peptide. This approach also enables our algorithm to estimate the false discovery rate (FDR) of the feature selection step. We validated FFId based on a public benchmark data set, comprising a yeast cell lysate spiked with protein standards that provide a known ground-truth. The algorithm reached almost complete (>99%) quantification coverage for the full set of peptides identified at 1% FDR (PSM level). Compared with other software solutions for label-free quantification, this is an outstanding result, which was achieved at competitive quantification accuracy and reproducibility across replicates. The FDR for the feature selection was estimated at a low 1.5% on average per sample (3% for features inferred from external peptide IDs). The FFId software is open-source and freely available as part of OpenMS ( www.openms.org ).

  5. A rapid and accurate quantification method for real-time dynamic analysis of cellular lipids during microalgal fermentation processes in Chlorella protothecoides with low field nuclear magnetic resonance.

    PubMed

    Wang, Tao; Liu, Tingting; Wang, Zejian; Tian, Xiwei; Yang, Yi; Guo, Meijin; Chu, Ju; Zhuang, Yingping

    2016-05-01

    Rapid, real-time lipid determination can provide valuable information for process regulation and optimization in algal lipid mass production. In this study, a rapid, accurate and precise quantification method for in vivo cellular lipids of Chlorella protothecoides using low-field nuclear magnetic resonance (LF-NMR) was developed. LF-NMR was extremely sensitive to the algal lipids, with limits of detection (LOD) of 0.0026 g and 0.32 g/L in dry lipid samples and algal broth, respectively, and limits of quantification (LOQ) of 0.0093 g and 1.18 g/L. Moreover, the LF-NMR signal was specifically proportional to the cellular lipids of C. protothecoides; regression curves covering a wide detection range, from 0.02 to 0.42 g for dry lipids and from 1.12 to 8.97 g/L of lipid concentration for in vivo quantification, were obtained with all R2 higher than 0.99, irrespective of variations in lipid content and fatty acid profile. The accuracy of this method was further verified by comparing lipid quantification results with those obtained by GC-MS. The relative standard deviations (RSDs) of the LF-NMR results were smaller than 2%, demonstrating the precision of the method. Finally, the method was successfully used for on-line lipid monitoring during algal lipid fermentation processes, enabling better understanding of the lipid accumulation mechanism and dynamic bioprocess control. Copyright © 2016 Elsevier B.V. All rights reserved.
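
    A hedged sketch of a univariate calibration of signal against lipid concentration, with the common 3.3*sigma/slope and 10*sigma/slope estimates of LOD and LOQ; the data points are invented, and the paper's actual calibration and LOD/LOQ procedure may differ.

    ```python
    import numpy as np

    conc = np.array([1.12, 2.0, 3.5, 5.0, 7.0, 8.97])        # g/L, known lipid standards
    signal = 1000.0 * conc + np.random.default_rng(2).normal(0, 50, conc.size)

    slope, intercept = np.polyfit(conc, signal, 1)
    residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

    lod = 3.3 * residual_sd / slope                          # limit of detection
    loq = 10.0 * residual_sd / slope                         # limit of quantification
    predict = lambda s: (s - intercept) / slope              # signal -> concentration
    print(f"LOD ≈ {lod:.2f} g/L, LOQ ≈ {loq:.2f} g/L, 3500 a.u. -> {predict(3500.0):.2f} g/L")
    ```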

  6. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.

  7. Rapid Quantification of Low-Viscosity Acetyl-Triacylglycerols Using Electrospray Ionization Mass Spectrometry.

    PubMed

    Bansal, Sunil; Durrett, Timothy P

    2016-09-01

    Acetyl-triacylglycerols (acetyl-TAG) possess an sn-3 acetate group, which confers useful chemical and physical properties to these unusual triacylglycerols (TAG). Current methods for quantification of acetyl-TAG are time-consuming and do not provide any information on the molecular species profile. Electrospray ionization mass spectrometry (ESI-MS)-based methods can overcome these drawbacks. However, the ESI-MS signal intensity for TAG depends on the aliphatic chain length and unsaturation index of the molecule. Therefore, response factors for different molecular species need to be determined before any quantification. The effects of the chain length and the number of double bonds of the sn-1/2 acyl groups on the signal intensity for the neutral loss of short-chain sn-3 groups were quantified using a series of synthesized sn-3-specific structured TAG. The signal intensity for the neutral loss of the sn-3 acyl group was found to be negatively correlated with the aliphatic chain length and unsaturation index of the sn-1/2 acyl groups. The signal intensity of the neutral loss of the sn-3 acyl group was also negatively correlated with the size of that chain. Further, the position of the group undergoing neutral loss was also important, with the signal from an sn-2 acyl group much lower than that from one located at sn-3. Response factors obtained from these analyses were used to develop a method for the absolute quantification of acetyl-TAG. The increased sensitivity of this ESI-MS-based approach allowed successful quantification of acetyl-TAG in various biological settings, including the products of in vitro enzyme activity assays.
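
    A hedged sketch of how species-specific response factors might be applied: each neutral-loss signal is divided by its response factor and referenced to an internal standard to obtain an absolute amount. Species names, factors, and amounts are placeholders, not values from the study.

    ```python
    signals = {"16:0/18:1/2:0": 4.2e5, "18:1/18:1/2:0": 3.1e5}   # neutral-loss peak areas
    response = {"16:0/18:1/2:0": 0.85, "18:1/18:1/2:0": 0.70}    # response factors vs. internal standard
    istd_signal, istd_nmol = 5.0e5, 10.0                         # internal standard signal and amount

    nmol = {sp: signals[sp] / response[sp] / istd_signal * istd_nmol for sp in signals}
    print(nmol)   # absolute amounts (nmol) per acetyl-TAG molecular species
    ```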

  8. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found; and finally a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and an average specificity of 91.2%.

  9. Framework for hyperspectral image processing and quantification for cancer detection during animal tumor surgery.

    PubMed

    Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W; Chen, Zhuo Georgia; Fei, Baowei

    2015-01-01

    Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. However, HSI data often need to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We proposed a framework for hyperspectral image processing and quantification, which comprises image preprocessing, glare removal, feature extraction, and ultimately image classification. The framework has been tested on images from mice with head and neck cancer, using spectra in the 450- to 900-nm wavelength range. The image analysis computed Fourier coefficients, normalized reflectance, mean, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the hyperspectral image processing and quantification framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to a modest number of features present, but potential for fast image classification can be high. This HSI approach may have potential application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.

  10. Framework for hyperspectral image processing and quantification for cancer detection during animal tumor surgery

    NASA Astrophysics Data System (ADS)

    Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W.; Chen, Zhuo Georgia; Fei, Baowei

    2015-12-01

    Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. However, HSI data often need to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We proposed a framework for hyperspectral image processing and quantification, which comprises image preprocessing, glare removal, feature extraction, and ultimately image classification. The framework has been tested on images from mice with head and neck cancer, using spectra in the 450- to 900-nm wavelength range. The image analysis computed Fourier coefficients, normalized reflectance, mean, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the hyperspectral image processing and quantification framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to a modest number of features present, but potential for fast image classification can be high. This HSI approach may have potential application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.

  11. Iron deposition quantification: Applications in the brain and liver.

    PubMed

    Yan, Fuhua; He, Naying; Lin, Huimin; Li, Ruokun

    2018-06-13

    Iron has long been implicated in many neurological and other organ diseases. It is known that, over and above the normal increase in iron with age, in certain diseases there is an excessive iron accumulation in the brain and liver. MRI is a noninvasive means by which to image the various structures in the brain in three dimensions and quantify iron over the volume of the object of interest. The quantification of iron can provide information about the severity of iron-related diseases as well as quantify changes in iron for patient follow-up and treatment monitoring. This article provides an overview of current MRI-based methods for iron quantification, specifically for the brain and liver, including: signal intensity ratio, R2, R2*, R2', phase, susceptibility weighted imaging and quantitative susceptibility mapping (QSM). Although there are numerous approaches to measuring iron, R2 and R2* are currently the preferred methods for imaging the liver, and QSM has become the preferred approach for imaging iron in the brain. Level of Evidence: 5. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
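
    To make the relaxometry idea concrete, the sketch below fits R2* from synthetic multi-echo gradient-echo magnitudes via a log-linear fit of S(TE) = S0*exp(-R2**TE); the echo times and signal values are illustrative, and clinical pipelines add noise handling, masking, and fat correction.

    ```python
    import numpy as np

    TE = np.array([2.0, 5.0, 10.0, 15.0, 20.0]) * 1e-3     # echo times, s
    true_r2star = 80.0                                     # 1/s, synthetic ground truth
    signal = 1000.0 * np.exp(-true_r2star * TE)            # noiseless magnitudes

    slope, log_s0 = np.polyfit(TE, np.log(signal), 1)      # log-linear fit per voxel
    r2star = -slope
    print(f"R2* ≈ {r2star:.1f} 1/s, T2* ≈ {1000.0 / r2star:.1f} ms")
    ```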

  12. Integrative analysis with ChIP-seq advances the limits of transcript quantification from RNA-seq

    PubMed Central

    Liu, Peng; Sanalkumar, Rajendran; Bresnick, Emery H.; Keleş, Sündüz; Dewey, Colin N.

    2016-01-01

    RNA-seq is currently the technology of choice for global measurement of transcript abundances in cells. Despite its successes, isoform-level quantification remains difficult because short RNA-seq reads are often compatible with multiple alternatively spliced isoforms. Existing methods rely heavily on uniquely mapping reads, which are not available for numerous isoforms that lack regions of unique sequence. To improve quantification accuracy in such difficult cases, we developed a novel computational method, prior-enhanced RSEM (pRSEM), which uses a complementary data type in addition to RNA-seq data. We found that ChIP-seq data of RNA polymerase II and histone modifications were particularly informative in this approach. In qRT-PCR validations, pRSEM was shown to be superior to competing methods in estimating relative isoform abundances within or across conditions. Data-driven simulations suggested that pRSEM has a greatly decreased false-positive rate at the expense of a small increase in false-negative rate. In aggregate, our study demonstrates that pRSEM transforms existing capacity to precisely estimate transcript abundances, especially at the isoform level. PMID:27405803

  13. Segmentation and quantification of subcellular structures in fluorescence microscopy images using Squassh.

    PubMed

    Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F

    2014-03-01

    Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters, and it usually requires <1 min and <5 min for 2D and 3D images, respectively. Basic computer-user skills and some experience with fluorescence microscopy are recommended to successfully use the protocol.

  14. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies

    PubMed Central

    2016-01-01

    Background Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers’ activities and their contextual components or constructs to pursue these health-related goals. Establishing such a framework is important because it is the first step toward fully operationalizing health SQ. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers’ health systematically. Objective The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? Methods A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. Results The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and constructs of health SQ activity. Conclusions A Health Self-Quantification Activity Framework is presented, which shows SQ tool use in context, in relation to the goals, plans, and competence of the user. This makes it easier to analyze issues affecting SQ activity, and thereby makes it more feasible to address them. This review makes two significant contributions to research in this field: it explores health SQ work and its constructs thoroughly, and it adapts Activity Theory to describe health SQ activity systematically. PMID:27234343

  15. Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.

    We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), I-D, D-L and I-J planes, and Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes, not only the ones which are commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those that are not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction shares the same information-theoretical features as the LMC and FS measures; the identity SN2 exchange reaction, however, does not show such a simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
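
    A minimal numerical sketch of the composite measures named above, evaluated on a discretized toy density (a normalized Gaussian standing in for the molecular one-particle density): disequilibrium D, exponential entropy L, the LMC complexity D*L, and the power entropy J. The Fisher information and the momentum-space analysis are omitted for brevity.

    ```python
    import numpy as np

    x = np.linspace(-6.0, 6.0, 81)
    dx = x[1] - x[0]
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    rho = np.exp(-(X**2 + Y**2 + Z**2) / 2.0)
    rho /= rho.sum() * dx**3                            # normalize: ∫ rho dV = 1

    S = -np.sum(rho * np.log(rho)) * dx**3              # Shannon entropy
    D = np.sum(rho**2) * dx**3                          # disequilibrium
    L = np.exp(S)                                       # exponential entropy
    J = np.exp(2.0 * S / 3.0) / (2.0 * np.pi * np.e)    # power entropy (3D)
    print(f"LMC = D*L = {D * L:.3f}, J = {J:.3f}")
    ```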

  16. Detection and quantification of fugitive emissions from Colorado oil and gas production operations using remote monitoring

    EPA Science Inventory

    Western states contain vast amounts of oil and gas production. For example, Weld County Colorado contains approximately 25,000 active oil and gas well sites with associated production operations. There is little information on the air pollutant emission potential from this source...

  17. 77 FR 47852 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-10

    ... Beneficiaries Receiving NaF-18 Positron Emission Tomography (PET) to Identify Bone Metastasis in Cancer; Use: In... NaF-18 PET scan to identify bone metastasis in cancer is reasonable and necessary only when the... strategy by the identification, location and quantification of bone...

  18. 75 FR 63484 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-15

    ... Bone Metastasis in Cancer; Use: In Decision Memorandum CAG-00065R, issued on February 26, 2010, the... that for Medicare beneficiaries receiving NaF-18 PET scan to identify bone metastasis in cancer is... or to guide subsequent treatment strategy by the identification, location and quantification of bone...

  19. 78 FR 16270 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-14

    ... Emission Tomography (PET) to Identify Bone Metastasis in Cancer; Use: In Decision Memorandum CAG-00065R... bone metastasis in cancer is reasonable and necessary only when the provider is participating in and..., location and quantification of bone metastases in beneficiaries in whom bone metastases are strongly...

  20. Winter habitat selection patterns of Merriam's turkeys in the southern Black Hills, South Dakota

    Treesearch

    Chad P. Lehman; Mark A. Rumble; Lester D. Flake

    2007-01-01

    In northern areas of their expanded range, information on Merriam's turkeys (Meleagris gallopavo merriami) is lacking, specifically pertaining to wintering behavior and factors associated with winter habitat selection. Forest managers need detailed quantification of the effects of logging and other management practices on wintering habitats...

  1. SW-846 Test Method 3200: Mercury Species Fractionation and Quantification by Microwave Assisted Extraction, Selective Solvent Extraction and/or Solid Phase Extraction

    EPA Pesticide Factsheets

    A sequential extraction and separation procedure that may be used in conjunction with a determinative method to differentiate mercury species that are present in soils and sediments. Provides information on both total mercury and various mercury species.

  2. Quantification of genomic relationship from DNA pooled samples

    USDA-ARS?s Scientific Manuscript database

    Use of DNA pooling for GWAS has been demonstrated to reduce genotypic costs up to 90% while achieving similar power to individual genotyping. Recent work has focused on use of DNA pooling to inform problems in genomic prediction. This study is designed to demonstrate the efficacy of estimating genom...

  3. Quantification of visual clutter using a computation model of human perception : an application for head-up displays

    DOT National Transportation Integrated Search

    2004-03-20

    A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...

  4. Cylindrocarpon species associated with apple tree roots in South Africa and their quantification using real-time PCR

    USDA-ARS?s Scientific Manuscript database

    Cylindrocarpon species are known to be a component of the pathogen/pest complex that incites apple replant disease. In South Africa, no information is available on apple associated Cylindrocarpon species and their pathogenicity. Therefore, these aspects were investigated. Additionally, a genus speci...

  5. Use of geographic information systems (GIS) to assess environmental risk factors threatening rare redeye bass (Micropterus coosae) in the southeastern United States

    EPA Science Inventory

    Habitat destruction, pollution, species introductions, and drainage alterations are frequently cited as the principal anthropogenic stressors responsible for wide-scale imperilment of the freshwater ichthyofauna of the southeastern United States. Quantification and assessment of ...

  6. A Generic Privacy Quantification Framework for Privacy-Preserving Data Publishing

    ERIC Educational Resources Information Center

    Zhu, Zutao

    2010-01-01

    In recent years, the concerns about the privacy for the electronic data collected by government agencies, organizations, and industries are increasing. They include individual privacy and knowledge privacy. Privacy-preserving data publishing is a research branch that preserves the privacy while, at the same time, withholding useful information in…

  7. Detection, identification, and quantification techniques for spills of hazardous chemicals

    NASA Technical Reports Server (NTRS)

    Washburn, J. F.; Sandness, G. A.

    1977-01-01

    The first 400 chemicals listed in the Coast Guard's Chemical Hazards Response Information System were evaluated with respect to their detectability, identifiability, and quantifiability by 12 generalized remote and in situ sensing techniques. Identification was also attempted for some key areas in water pollution sensing technology.

  8. Rapid detection of trace amounts of surfactants using nanoparticles in fluorometric assays

    NASA Astrophysics Data System (ADS)

    Härmä, Harri; Laakso, Susana; Pihlasalo, Sari; Hänninen, Pekka; Faure, Bertrand; Rana, Subhasis; Bergström, Lennart

    2010-01-01

    Rapid microtiter assays that utilize the time-resolved fluorescence resonance energy transfer or quenching of dye-labeled proteins adsorbed onto the surfaces of polystyrene or maghemite nanoparticles have been developed for the detection and quantification of trace amounts of surfactants at concentrations down to 10 nM.

  9. Recognition and Quantification of Area Damaged by Oligonychus Perseae in Avocado Leaves

    NASA Astrophysics Data System (ADS)

    Díaz, Gloria; Romero, Eduardo; Boyero, Juan R.; Malpica, Norberto

    The measure of leaf damage is a basic tool in plant epidemiology research. Measuring the area of a great number of leaves is subjective and time consuming. We investigate the use of machine learning approaches for the objective segmentation and quantification of leaf area damaged by mites in avocado leaves. After extraction of the leaf veins, pixels are labeled with a look-up table generated using a Support Vector Machine with a polynomial kernel of degree 3, on the chrominance components of YCrCb color space. Spatial information is included in the segmentation process by rating the degree of membership to a certain class and the homogeneity of the classified region. Results are presented on real images with different degrees of damage.

  10. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for derivation of the threshold and on a real gas turbine engine system for model identification. Finally, graphical validation of the threshold on a two-dimensional plot is discussed.

  11. Affine Isoperimetry and Information Theoretic Inequalities

    ERIC Educational Resources Information Center

    Lv, Songjun

    2012-01-01

    There are essential connections between the isoperimetric theory and information theoretic inequalities. In general, the Brunn-Minkowski inequality and the entropy power inequality, as well as the classical isoperimetric inequality and the classical entropy-moment inequality, turn out to be equivalent in some certain sense, respectively. Based on…

  12. Quantitative analysis of H-species in anisotropic minerals by polarized infrared spectroscopy along three orthogonal directions

    NASA Astrophysics Data System (ADS)

    Shuai, Kang; Yang, Xiaozhi

    2017-03-01

    Infrared spectroscopy is a powerful technique for probing H-species in nominally anhydrous minerals, and a particular goal of considerable effort has been to provide a simple yet accurate method for their quantification. The available methods, with either polarized or unpolarized analyses, are usually time-consuming or, in some cases, subject to larger uncertainty than theoretically expected. It is shown here that an empirical approach for measuring the concentration, by determining three polarized infrared spectra along any three mutually perpendicular directions, is theoretically and, in particular, experimentally correct. The theoretical background is established by considering the integrated absorbance, and the experimental measurements are based on a careful evaluation of the species and content of H in a series of gem-quality orthorhombic, monoclinic and triclinic crystals, including olivine, orthopyroxene, clinopyroxene, orthoclase and albite (natural and H-annealed). The results demonstrate that the sum of the integrated absorbance from two polarized spectra along two perpendicular directions in any given plane is a constant, and that the sum of the integrated absorbance from three polarized spectra along any three orthogonal directions is of essentially the same accuracy as that along the principal axes. It is also shown that this method works well, with a relative accuracy within 10%, even in some extreme cases where the sample absorption bands are both intense and strongly anisotropic.
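    The invariance that underpins the three-direction method can be written compactly as below (notation ours; this is a sketch of the relation stated in the abstract, not the paper's own derivation): the total integrated absorbance recovered from three mutually perpendicular polarized measurements does not depend on how the triplet is oriented.

    ```latex
    % Sketch of the orientation invariance. A_i is the integrated absorbance with
    % the polarizer along direction i; a, b, c are the principal axes and x, y, z
    % any three mutually perpendicular directions.
    \begin{align}
      A_{\mathrm{tot}} &= A_a + A_b + A_c = A_x + A_y + A_z ,\\
      A_x + A_y &= \text{const.} \quad \text{for any two perpendicular directions within a given plane.}
    \end{align}
    ```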

  13. Detection Identification and Quantification of Keto-Hydroperoxides in Low-Temperature Oxidation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Nils; Moshammer, Kai; Jasper, Ahren W.

    2017-07-01

    Keto-hydroperoxides are reactive, partially oxidized intermediates that play a central role in chain-branching reactions during the low-temperature oxidation of hydrocarbons. In this Perspective, we outline how these short-lived species can be detected, identified, and quantified using integrated experimental and theoretical approaches. The procedures are based on direct molecular-beam sampling from reactive environments, followed by mass spectrometry with single-photon ionization, identification of fragmentation patterns, and theoretical calculations of ionization thresholds, fragment appearance energies, and photoionization cross sections. Using the oxidation of neo-pentane and tetrahydrofuran as examples, the individual steps of the experimental approaches are described in depth together with a detailed description of the theoretical efforts. For neo-pentane, the experimental data are consistent with the calculated ionization and fragment appearance energies of the keto-hydroperoxide, thus adding confidence to the analysis routines and the employed levels of theory. For tetrahydrofuran, multiple keto-hydroperoxide isomers are possible due to the presence of nonequivalent O2 addition sites. Despite this additional complexity, the experimental data allow for the identification of two to four keto-hydroperoxides. Mole fraction profiles of the keto-hydroperoxides, which are quantified using calculated photoionization cross sections, are provided together with estimated uncertainties as a function of the temperature of the reactive mixture and can serve as validation targets for chemically detailed mechanisms.

  14. Analytical Implications of Using Practice Theory in Workplace Information Literacy Research

    ERIC Educational Resources Information Center

    Moring, Camilla; Lloyd, Annemaree

    2013-01-01

    Introduction: This paper considers practice theory and the analytical implications of using this theoretical approach in information literacy research. More precisely the aim of the paper is to discuss the translation of practice theoretical assumptions into strategies that frame the analytical focus and interest when researching workplace…

  15. Investigation of Means of Mitigating Congestion in Complex, Distributed Network Systems by Optimization Means and Information Theoretic Procedures

    DTIC Science & Technology

    2008-02-01

    Mufalli, Frank; Nagi, Rakesh; Llinas, Jim; Mishra, Sumita. SUNY at Buffalo—CUBRC, 4455 Genessee Street, Buffalo, NY; Paine College. (Only report-documentation-page fragments are available for this record; no abstract text is recoverable.)

  16. NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.

    ERIC Educational Resources Information Center

    Zhou, Lina; Zhang, Dongsong

    2003-01-01

    Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

  17. Theoretical foundations for information representation and constraint specification

    NASA Technical Reports Server (NTRS)

    Menzel, Christopher P.; Mayer, Richard J.

    1991-01-01

    Research accomplished at the Knowledge Based Systems Laboratory of the Department of Industrial Engineering at Texas A&M University is described. Outlined here are the theoretical foundations necessary to construct a Neutral Information Representation Scheme (NIRS), which will allow for automated data transfer and translation between model languages, procedural programming languages, database languages, transaction and process languages, and knowledge representation and reasoning control languages for information system specification.

  18. Comparison of information theoretic divergences for sensor management

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael

    2011-06-01

    In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they afford comparisons between distributions resulting from different types of sensors under different actions. The difference in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e., Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and design parameter.
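    For reference, the two divergences being compared take their standard forms (continuous densities p and q, with α the Rényi design parameter discussed above):

    ```latex
    % Standard definitions; D_alpha reduces to D_KL in the limit alpha -> 1.
    \begin{align}
      D_{\mathrm{KL}}(p\,\|\,q) &= \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx ,\\
      D_{\alpha}(p\,\|\,q) &= \frac{1}{\alpha-1}\,\ln\!\int p(x)^{\alpha}\,q(x)^{1-\alpha}\,dx ,
      \qquad \alpha > 0,\ \alpha \neq 1 .
    \end{align}
    ```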

  19. Mass-transfer and supersaturation in crystal growth in gels. Application to CaSO 4·2H 2O

    NASA Astrophysics Data System (ADS)

    Prieto, M.; Viedma, C.; López-Acevedo, V.; Martín-Vivaldi, J. L.; López-Andrés, S.

    1988-10-01

    Supersaturation evaluation is an essential requirement to describe, compare and explain crystal growth experiments. However, in the particular case of crystal growth in gels, experiments are often described only in terms of the initial reagent concentrations. This is connected with deficiencies in the theoretical quantification of mass-transfer, and therefore in the prediction of both the time and the location of the first precipitate. In this paper, laboratory experiments have been specifically designed to track the evolution of supersaturation through an actual (finite) diffusion system. The problem is treated by taking into account several complicating factors: free ions as well as complexes, and the "unloading" of Na+ and Cl- from the silica gel, are considered in evaluating the supersaturation.

  20. Cliophysics: Socio-Political Reliability Theory, Polity Duration and African Political (In)stabilities

    PubMed Central

    Cherif, Alhaji; Barley, Kamal

    2010-01-01

    Quantification of historical sociological processes has recently gained attention among theoreticians in the effort to provide a solid theoretical understanding of the behaviors and regularities present in socio-political dynamics. Here we present a reliability theory of polity processes, with emphasis on the individual political dynamics of African countries. We find that the structural properties of polity failure rates successfully capture the risk of political vulnerability and instability, with the share of countries showing high state fragility indices varying according to whether the polity failure rate is monotonically increasing, unimodal, U-shaped or monotonically decreasing. The quasi-U-shaped relationship between average polity duration and regime type corroborates historical precedents and explains the stability of autocracies and democracies. PMID:21206911

  1. Comparison of ISS Power System Telemetry with Analytically Derived Data for Shadowed Cases

    NASA Technical Reports Server (NTRS)

    Fincannon, H. James

    2002-01-01

    Accurate International Space Station (ISS) power prediction requires the quantification of solar array shadowing. Prior papers have discussed the NASA Glenn Research Center (GRC) ISS power system tool SPACE (System Power Analysis for Capability Evaluation) and its integrated shadowing algorithms. On-orbit telemetry has become available that permits the correlation of theoretical shadowing predictions with actual data. This paper documents the comparison of a shadowing metric (total solar array current) as derived from SPACE predictions and on-orbit flight telemetry data for representative significant shadowing cases. Images from flight video recordings and the SPACE computer program graphical output are used to illustrate the comparison. The accuracy of the SPACE shadowing capability is demonstrated for the cases examined.

  2. Using the load-velocity relationship for 1RM prediction.

    PubMed

    Jidovtseff, Boris; Harris, Nigel K; Crielaard, Jean-Michel; Cronin, John B

    2011-01-01

    The purpose of this study was to investigate the ability of the load-velocity relationship to accurately predict a bench press 1 repetition maximum (1RM). Data from 3 different bench press studies (n = 112) that incorporated both 1RM assessment and submaximal load-velocity profiling were analyzed. Individual regression analysis was performed to determine the theoretical load at zero velocity (LD0). Data from each of the 3 studies were analyzed separately and also presented as overall group mean. Thereafter, correlation analysis provided quantification of the relationships between 1RM and LD0. Practically perfect correlations (r = ∼0.95) were observed in our samples, confirming the ability of the load-velocity profile to accurately predict bench press 1RM.
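    The LD0 extrapolation described above amounts to a straight-line fit of load against mean velocity, evaluated at zero velocity. A minimal sketch, with hypothetical load-velocity pairs rather than data from the cited studies:

    ```python
    # Minimal sketch of the load-velocity extrapolation: fit a straight line to
    # submaximal (load, mean velocity) pairs and extrapolate to zero velocity to
    # obtain LD0, the theoretical load at v = 0. Numbers are hypothetical.
    import numpy as np

    loads = np.array([30, 40, 50, 60, 70], dtype=float)      # kg
    velocities = np.array([1.30, 1.10, 0.90, 0.70, 0.50])    # m/s

    slope, intercept = np.polyfit(velocities, loads, deg=1)   # load = slope * v + intercept
    ld0 = intercept                                            # predicted load at zero velocity
    print(f"LD0 (predicted 1RM surrogate): {ld0:.1f} kg")
    ```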

  3. Quantification of the multi-streaming effect in redshift space distortion

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Oh, Minji

    2017-05-01

    Both multi-streaming (random motion) and bulk motion cause the Finger-of-God (FoG) effect in redshift space distortion (RSD). We perform a direct measurement of the multi-streaming effect in RSD using simulations, showing that it induces an additional, non-negligible FoG damping of the redshift space density power spectrum. We show that including the multi-streaming effect significantly improves the RSD modelling. We also provide a theoretical explanation, based on the halo model, for the measured effect, including a fitting formula with one to two free parameters. The improved understanding of FoG helps break the fσ8-σv degeneracy in RSD cosmology and has the potential to significantly improve cosmological constraints.

  4. Uncertainty quantification and propagation in nuclear density functional theory

    DOE PAGES

    Schunck, N.; McDonnell, J. D.; Higdon, D.; ...

    2015-12-23

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  5. Brain activity and cognition: a connection from thermodynamics and information theory

    PubMed Central

    Collell, Guillem; Fauquet, Jordi

    2015-01-01

    The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point for our work is that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims to provide further insight into the formal relationship between cognition and neural activity. PMID:26136709

  6. Predominant information quality scheme for the essential amino acids: an information-theoretical analysis.

    PubMed

    Esquivel, Rodolfo O; Molina-Espíritu, Moyocoyani; López-Rosa, Sheila; Soriano-Correa, Catalina; Barrientos-Salcedo, Carolina; Kohout, Miroslav; Dehesa, Jesús S

    2015-08-24

    In this work we undertake a pioneering information-theoretical analysis of 18 selected amino acids extracted from a natural protein, bacteriorhodopsin (1C3W). The conformational structures of each amino acid are analyzed using various quantum chemistry methodologies at high levels of theory: HF, M062X and CISD(Full). The Shannon entropy, Fisher information and disequilibrium are determined to grasp the spatial spreading features of delocalizability, order and uniformity of the optimized structures. These three entropic measures uniquely characterize all amino acids through a predominant information-theoretic quality scheme (PIQS), which gathers all chemical families by means of three major spreading features: delocalization, narrowness and uniformity. This scheme recognizes four major chemical families: aliphatic (delocalized), aromatic (delocalized), electro-attractive (narrowed) and tiny (uniform). All chemical families recognized by the existing energy-based classifications are embraced by this entropic scheme. Finally, novel chemical patterns are shown in the information planes associated with the PIQS entropic measures. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Assessment of acquisition protocols for routine imaging of Y-90 using PET/CT

    PubMed Central

    2013-01-01

    Background Although the 0+-0+ transition of 90Zr was predicted theoretically long ago, 90Y-PET has only recently attracted growing interest for imaging radioembolization of liver tumors. The aim of this work was to determine the minimum detectable activity (MDA) of 90Y by PET imaging and the impact of time-of-flight (TOF) reconstruction on detectability and quantitative accuracy according to the lesion size. Methods The study was conducted using a Siemens Biograph® mCT with a 22 cm axial field of view. An IEC torso-shaped phantom containing five coplanar spheres was uniformly filled to achieve sphere-to-background ratios of 40:1. The phantom was imaged nine times over 14 days, with 30 min acquisitions. Sinograms were reconstructed with and without TOF information. A contrast-to-noise ratio (CNR) index was calculated using the Rose criterion, taking partial volume effects into account. The impact of reconstruction parameters on quantification accuracy, detectability, and spatial localization of the signal was investigated. Finally, six patients with hepatocellular carcinoma and four patients included in different 90Y-based radioimmunotherapy protocols were enrolled for the evaluation of the imaging parameters in a clinical situation. Results The highest CNR was achieved with one iteration for both TOF and non-TOF reconstructions. The MDA, however, was found to be lower with TOF than with non-TOF reconstruction. There was no gain from adding TOF information in terms of CNR for concentrations higher than 2 to 3 MBq mL−1, except for infra-centimetric lesions. Recovered activity was highly underestimated when a single iteration or non-TOF reconstruction was used (by 10% to 150%, depending on the lesion size). The MDA was estimated at 1 MBq mL−1 for a TOF reconstruction and infra-centimetric lesions. Images from patients treated with microspheres were clinically relevant, unlike those of patients who received systemic injections of 90Y. Conclusions Only one iteration and TOF were necessary to achieve an MDA around 1 MBq mL−1 and the most accurate localization of lesions. For precise quantification, at least three iterations gave the best performance, using TOF reconstruction and keeping an MDA of roughly 1 MBq mL−1. One and three iterations were mandatory to prevent false-positive results and for quantitative analysis of clinical data, respectively. Trial registration: IDRCB 2011-A00043-38 P101103 PMID:23414629
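    For orientation, a generic form of a contrast-to-noise ratio index with the Rose detectability criterion is sketched below; the study's actual index additionally corrects for partial volume effects, and its exact formulation is given in the original methods.

    ```latex
    % Generic CNR with the Rose criterion (illustrative form only; the paper's
    % index also accounts for partial volume effects).
    \begin{equation}
      \mathrm{CNR} = \frac{\bar{S}_{\mathrm{sphere}} - \bar{S}_{\mathrm{background}}}
                          {\sigma_{\mathrm{background}}},
      \qquad \text{a lesion is typically taken as detectable when } \mathrm{CNR} \gtrsim 3\text{--}5 .
    \end{equation}
    ```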

  8. Frequency-area distribution of earthquake-induced landslides

    NASA Astrophysics Data System (ADS)

    Tanyas, H.; Allstadt, K.; Westen, C. J. V.

    2016-12-01

    Discovering the physical explanations behind the power-law distribution of landslides can provide valuable information for quantifying triggered landslide events and, as a consequence, for understanding the relation between landslide causes and impacts in terms of the environmental settings of the affected area. In previous studies, the probability of landslide size was used for this quantification, and the resulting parameter was called the landslide magnitude (mL). The frequency-area distributions (FADs) of several landslide inventories were modelled and theoretical curves were established to identify mL for any landslide inventory. In the observed inventories, a divergence from the power-law distribution was recognized for small landslides, referred to as the rollover, and this feature was taken into account in the established model. However, these analyses are based on a relatively limited number of inventories, each with a different triggering mechanism. The existing definition of mL includes some subjectivity, since it is based on a visual comparison between the theoretical curves and the FAD of the medium and large landslides. Additionally, it introduces uncertainty due to the ambiguity in both the physical explanation of the rollover and its functional form. Here we focus on earthquake-induced landslides (EQIL) and aim to provide a rigorous method to estimate mL and the total landslide area of EQIL. We have gathered 36 EQIL inventories from around the globe. Using these inventories, we evaluate existing explanations of the rollover and propose an alternative explanation given the new data. Next, we propose a method to define the EQIL FAD curves and mL, and to estimate the total landslide area. We use the total landslide areas obtained from the inventories to compare with our estimates and to validate our methodology. The results show that we calculate landslide magnitudes more accurately than previous methods.
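    A minimal sketch of fitting the power-law tail of a frequency-area distribution above the rollover is given below; the maximum-likelihood (Hill-type) estimator and the synthetic areas are illustrative only and are not the estimation procedure developed in the abstract.

    ```python
    # Illustrative sketch: maximum-likelihood fit of the power-law exponent of a
    # landslide frequency-area distribution above a cutoff area, i.e. the tail
    # unaffected by the rollover. The input areas are synthetic.
    import numpy as np

    def powerlaw_exponent(areas_km2, a_min):
        """Hill/MLE estimator of beta for p(A) ~ A**(-beta), A >= a_min."""
        tail = np.asarray(areas_km2, dtype=float)
        tail = tail[tail >= a_min]
        beta = 1.0 + tail.size / np.sum(np.log(tail / a_min))
        return beta, tail.size

    rng = np.random.default_rng(0)
    # Synthetic Pareto-like areas with a probability-density exponent of about 2.4.
    areas = 1e-3 * (1.0 - rng.random(5000)) ** (-1.0 / 1.4)
    beta, n_tail = powerlaw_exponent(areas, a_min=1e-2)
    print(f"fitted exponent beta = {beta:.2f} from {n_tail} landslides above the cutoff")
    ```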

  9. Symptomology of ozone injury to pine foliage

    Treesearch

    Kenneth Stolte

    1996-01-01

    Symptoms of ozone injury on western pines, ranging from effects on needles to effects on portions of ecosystems, can be differentiated from symptoms induced by other natural biotic and abiotic stressors occurring in the same area. Once identified in laboratory and field studies, quantification and monitoring of these symptoms can be used to provide reliable information...

  10. Bioinformatics: Current Practice and Future Challenges for Life Science Education

    ERIC Educational Resources Information Center

    Hack, Catherine; Kendall, Gary

    2005-01-01

    It is widely predicted that the application of high-throughput technologies to the quantification and identification of biological molecules will cause a paradigm shift in the life sciences. However, if the biosciences are to evolve from a predominantly descriptive discipline to an information science, practitioners will require enhanced skills in…

  11. 78 FR 31948 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-28

    ... listed below are owned by an agency of the U.S. Government and are available for licensing in the U.S. in... coverage for companies and may also be available for licensing. FOR FURTHER INFORMATION CONTACT: Licensing... precise and reliable quantification. Currently, there is no approved drug to treat FXS. The invention...

  12. Differential Decay of Cattle-associated Fecal Indicator Bacteria and Microbial Source Tracking Markers in Fresh and Marine Water

    EPA Science Inventory

    Background: Fecal indicator bacteria (FIB) have a long history of use in the assessment of the microbial quality of recreational waters. However, quantification of FIB provides no information about the pollution source(s) and relatively little is known about their fate in the amb...

  13. Predicting fire behavior in U.S. Mediterranean ecosystems

    Treesearch

    Frank A. Albini; Earl B. Anderson

    1982-01-01

    Quantification and methods of prediction of wildland fire behavior are discussed briefly and factors of particular relevance to the prediction of fire behavior in Mediterranean ecosystems are reviewed. A computer-based system which uses relevant fuel information and current weather data to predict fire behavior is in operation in southern California. Some of the...

  14. Radio-frequency energy quantification in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Alon, Leeor

    Mapping of radio frequency (RF) energy deposition has been challenging for 50+ years, especially when scanning patients in the magnetic resonance imaging (MRI) environment. As a result, electromagnetic simulation software is often used for estimating the specific absorption rate (SAR), the rate of RF energy deposition in tissue. This thesis presents challenges associated with aligning information provided by electromagnetic simulation and MRI experiments. As a result of the limitations of simulations, experimental methods for the quantification of SAR were established. A system for quantification of the total RF energy deposition was developed for parallel transmit MRI (a system that uses multiple antennas to excite and image the body). The system is capable of monitoring and predicting channel-by-channel RF energy deposition and whole-body SAR, and of tracking potential hardware failures that occur in the transmit chain and may cause the deposition of excessive energy into patients. Similarly, we demonstrated that local RF power deposition can be mapped and predicted for parallel transmit systems based on a series of MRI temperature mapping acquisitions. As part of this work, we developed tools for optimal reconstruction of temperature maps from MRI acquisitions. The tools developed for temperature mapping paved the way for utilizing MRI as a diagnostic tool for evaluating the safety of RF/microwave emitting devices. Quantification of the RF energy was demonstrated for both MRI-compatible and non-MRI-compatible devices (such as cell phones), with the advantages of being noninvasive and of providing millimeter resolution and high accuracy.

  15. Background Signal as an in Situ Predictor of Dopamine Oxidation Potential: Improving Interpretation of Fast-Scan Cyclic Voltammetry Data.

    PubMed

    Meunier, Carl J; Roberts, James G; McCarty, Gregory S; Sombers, Leslie A

    2017-02-15

    Background-subtracted fast-scan cyclic voltammetry (FSCV) has emerged as a powerful analytical technique for monitoring subsecond molecular fluctuations in live brain tissue. Despite increasing utilization of FSCV, efforts to improve the accuracy of quantification have been limited due to the complexity of the technique and the dynamic recording environment. It is clear that variable electrode performance renders calibration necessary for accurate quantification; however, the nature of in vivo measurements can make conventional postcalibration difficult, or even impossible. Analyte-specific voltammograms and scaling factors that are critical for quantification can shift or fluctuate in vivo. This is largely due to impedance changes, and the effects of impedance on these measurements have not been characterized. We have previously reported that the background current can be used to predict electrode-specific scaling factors in situ. In this work, we employ model circuits to investigate the impact of impedance on FSCV measurements. Additionally, we take another step toward in situ electrode calibration by using the oxidation potential of quinones on the electrode surface to accurately predict the oxidation potential for dopamine at any point in an electrochemical experiment, as both are dependent on impedance. The model, validated both in adrenal slice and live brain tissue, enables information encoded in the shape of the background voltammogram to determine electrochemical parameters that are critical for accurate quantification. This improves data interpretation and provides a significant next step toward more automated methods for in vivo data analysis.

  16. Making a Traditional Study-Abroad Program Geographic: A Theoretically Informed Regional Approach

    ERIC Educational Resources Information Center

    Jokisch, Brad

    2009-01-01

    Geographers have been active in numerous focused study-abroad programs, but few have created or led language-based programs overseas. This article describes the development of a Spanish language program in Ecuador and how it was made geographic primarily through a theoretically informed regional geography course. The approach employs theoretical…

  17. Content Based Image Retrieval and Information Theory: A General Approach.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.; Barhen, Jacob

    2001-01-01

    Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…

  18. An Everyday and Theoretical Reading of "Perezhivanie" for Informing Research in Early Childhood Education

    ERIC Educational Resources Information Center

    Fleer, Marilyn

    2016-01-01

    The concept of "perezhivanie" has received increasing attention in recent years. However, a clear understanding of this term has not yet been established. Mostly what is highlighted is the need for more informed theoretical discussion. In this paper, discussions centre on what "perezhivanie" means for research in early…

  19. The Public Library User and the Charter Tourist: Two Travellers, One Analogy

    ERIC Educational Resources Information Center

    Eriksson, Catarina A. M.; Michnik, Katarina E.; Nordeborg, Yoshiko

    2013-01-01

    Introduction: A new theoretical model, relevant to library and information science, is implemented in this paper. The aim of this study is to contribute to the theoretical concepts of library and information science by introducing an ethnological model developed for investigating charter tourist styles thereby increasing our knowledge of users'…

  20. Towards Improved Student Experiences in Service Learning in Information Systems Courses

    ERIC Educational Resources Information Center

    Petkova, Olga

    2017-01-01

    The paper explores relevant past research on service-learning in Information Systems courses since 2000. One of the conclusions from this is that most of the publications are not founded on specific theoretical models and are mainly about sharing instructor or student experiences. Then several theoretical frameworks from Education and other…

  1. The Dynamics of Opportunity and Threat Management in Turbulent Environments: The Role of Information Technologies

    ERIC Educational Resources Information Center

    Park, Young Ki

    2011-01-01

    This study explains the role of information technologies in enabling organizations to successfully sense and manage opportunities and threats and achieve competitive advantage in turbulent environments. I use two approaches, a set-theoretic configurational theory approach and a variance theory approach, which are theoretically and methodologically…

  2. A Holistic Theoretical Approach to Intellectual Disability: Going beyond the Four Current Perspectives

    ERIC Educational Resources Information Center

    Schalock, Robert L.; Luckasson, Ruth; Tassé, Marc J.; Verdugo, Miguel Angel

    2018-01-01

    This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic…

  3. Determination of Inorganic Arsenic in a Wide Range of Food Matrices using Hydride Generation - Atomic Absorption Spectrometry.

    PubMed

    de la Calle, Maria B; Devesa, Vicenta; Fiamegos, Yiannis; Vélez, Dinoraz

    2017-09-01

    The European Food Safety Authority (EFSA) underlined in its Scientific Opinion on Arsenic in Food that, in order to support a sound assessment of dietary exposure to inorganic arsenic, information about the distribution of arsenic species in various food types must be generated. A method previously validated in a collaborative trial has been applied to determine inorganic arsenic (iAs) in a wide variety of food matrices, covering grains, mushrooms and food of marine origin (31 samples in total). The method is based on detection by flow injection-hydride generation-atomic absorption spectrometry of the iAs selectively extracted into chloroform after digestion of the proteins with concentrated HCl. The method is characterized by a limit of quantification of 10 µg/kg dry weight, which allowed quantification of inorganic arsenic in a large number of food matrices. Information is provided about the performance scores assigned to results obtained with this method, as reported by different laboratories in several proficiency tests. The percentage of satisfactory results obtained with the discussed method is higher than that obtained with other analytical approaches.

  4. Quantification of moving target cyber defenses

    NASA Astrophysics Data System (ADS)

    Farris, Katheryn A.; Cybenko, George

    2015-05-01

    Current network and information systems are static, making it simple for attackers to maintain an advantage. Adaptive defenses, such as Moving Target Defenses (MTD), have been developed as potential "game-changers" in an effort to increase the attacker's workload. With many new methods being developed, it is difficult to accurately quantify and compare their overall costs and effectiveness. This paper compares the tradeoffs between current approaches to the quantification of MTDs. We present results from an expert opinion survey on quantifying the overall effectiveness and the upfront and operating costs of a select set of MTD techniques. We find that gathering informed scientific opinions can be advantageous for evaluating such new technologies, as it offers a more comprehensive assessment. We end by presenting a coarse ordering of a set of MTD techniques from most to least dominant. Seven of the 23 methods rank as the more dominant techniques; five of these are variants of either address space layout randomization or instruction set randomization, and the remaining two are applicable to software and computer platforms. Among the techniques that performed the worst are those primarily aimed at network randomization.

  5. IsobariQ: software for isobaric quantitative proteomics using IPTL, iTRAQ, and TMT.

    PubMed

    Arntzen, Magnus Ø; Koehler, Christian J; Barsnes, Harald; Berven, Frode S; Treumann, Achim; Thiede, Bernd

    2011-02-04

    Isobaric peptide labeling plays an important role in relative quantitative comparisons of proteomes. Isobaric labeling techniques utilize MS/MS spectra for relative quantification, which can be either based on the relative intensities of reporter ions in the low mass region (iTRAQ and TMT) or on the relative intensities of quantification signatures throughout the spectrum due to isobaric peptide termini labeling (IPTL). Due to the increased quantitative information found in MS/MS fragment spectra generated by the recently developed IPTL approach, new software was required to extract the quantitative information. IsobariQ was specifically developed for this purpose; however, support for the reporter ion techniques iTRAQ and TMT is also included. In addition, to address recently emphasized issues about heterogeneity of variance in proteomics data sets, IsobariQ employs the statistical software package R and variance stabilizing normalization (VSN) algorithms available therein. Finally, the functionality of IsobariQ is validated with data sets of experiments using 6-plex TMT and IPTL. Notably, protein substrates resulting from cleavage by proteases can be identified as shown for caspase targets in apoptosis.

  6. Luminol-Based Chemiluminescent Signals: Clinical and Non-clinical Application and Future Uses

    PubMed Central

    Khan, Parvez; Idrees, Danish; Moxley, Michael A.; Corbett, John A.; Ahmad, Faizan; von Figura, Guido; Sly, William S.; Waheed, Abdul

    2015-01-01

    Chemiluminescence (CL) is an important method for quantification and analysis of various macromolecules. A wide range of CL agents such as luminol, hydrogen peroxide, fluorescein, dioxetanes and derivatives of oxalate, and acridinium dyes are used according to their biological specificity and utility. This review describes the application of luminol chemiluminescence (LCL) in forensic, biomedical, and clinical sciences. LCL is a very useful detection method due to its selectivity, simplicity, low cost, and high sensitivity. LCL has a dynamic range of applications, including quantification and detection of macro and micromolecules such as proteins, carbohydrates, DNA, and RNA. Luminol-based methods are used in environmental monitoring as biosensors, in the pharmaceutical industry for cellular localization and as biological tracers, and in reporter gene-based assays and several other immunoassays. Here, we also provide information about different compounds that may enhance or inhibit the LCL along with the effect of pH and concentration on LCL. This review covers most of the significant information related to the applications of luminol in different fields. PMID:24752935

  7. A 3D Scan Model and Thermal Image Data Fusion Algorithms for 3D Thermography in Medicine

    PubMed Central

    Klima, Ondrej

    2017-01-01

    Objectives At present, medical thermal imaging is still considered a merely qualitative tool, able to distinguish between physiological and nonphysiological states of the body but lacking the ability to quantify them. Such a capability would, however, help address the problem of quantification in medicine, which currently affects the entire healthcare system. Methods A generally applicable method to enhance captured 3D spatial data carrying temperature-related information is presented; in this context, all equations required for other data fusions are derived. The method can be utilized for high-density point clouds or detailed meshes at a high resolution but is also conveniently usable for large objects with sparse points. Results The benefits of the approach are experimentally demonstrated on 3D thermal scans of injured subjects. We obtained diagnostic information inaccessible via traditional methods. Conclusion Using a 3D model and thermal image data fusion allows the quantification of inflammation, facilitating more precise injury and illness diagnostics or monitoring. The technique offers a wide application potential in medicine and multiple technological domains, including electrical and mechanical engineering. PMID:29250306

  8. Arterial spin labeling in combination with a look-locker sampling strategy: inflow turbo-sampling EPI-FAIR (ITS-FAIR).

    PubMed

    Günther, M; Bock, M; Schad, L R

    2001-11-01

    Arterial spin labeling (ASL) permits quantification of tissue perfusion without the use of MR contrast agents. With standard ASL techniques such as flow-sensitive alternating inversion recovery (FAIR), the signal from arterial blood is measured at a fixed inversion delay after magnetic labeling. As no image information is sampled during this delay, FAIR measurements are inefficient and time-consuming. In this work the FAIR preparation was combined with a Look-Locker acquisition to sample not one but a series of images after each labeling pulse. This new method allows monitoring of the temporal dynamics of blood inflow. To quantify perfusion, a theoretical model for the signal dynamics during the Look-Locker readout was developed and applied. In addition, the imaging parameters of the new ITS-FAIR technique were optimized using an expression for the variance of the calculated perfusion. For the given scanner hardware the parameters were: temporal resolution 100 ms, 23 images, flip angle 25.4 degrees. In a normal-volunteer experiment with these parameters, an average perfusion value of 48.2 +/- 12.1 ml/100 g/min was measured in the brain. With the ability to obtain ITS-FAIR time series with high temporal resolution, arterial transit times in the range of -138 to 1054 ms were measured, where the nonphysical negative values were found in voxels containing large vessels. Copyright 2001 Wiley-Liss, Inc.

  9. ProteinInferencer: Confident protein identification and multiple experiment comparison for large scale proteomics projects.

    PubMed

    Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R

    2015-11-03

    Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- or large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.

  10. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    NASA Astrophysics Data System (ADS)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. Emphasis is placed on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentrations and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used interchangeably for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable for retrieving source contributions and source apportionment methods are not appropriate for evaluating the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
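    A toy example (ours, not taken from the paper) makes the nonlinearity point concrete: with a deliberately nonlinear concentration response, brute-force impacts no longer sum to the total concentration, whereas a mass-conserving apportionment sums correctly but does not predict the effect of removing a source.

    ```python
    # Toy illustration of why impacts and contributions diverge when concentrations
    # depend nonlinearly on emissions. Two sources and a hypothetical nonlinear
    # response C(E1, E2) = (E1 + E2)**2.
    def concentration(e1, e2):
        return (e1 + e2) ** 2

    e1, e2 = 3.0, 1.0
    c_total = concentration(e1, e2)                      # 16

    # Brute-force "impact" of each source: switch it off and take the difference.
    impact_1 = c_total - concentration(0.0, e2)          # 16 - 1 = 15
    impact_2 = c_total - concentration(e1, 0.0)          # 16 - 9 = 7
    print(impact_1 + impact_2)   # 22 != 16: impacts do not add up to the total

    # A mass-conserving (tagged-species-like) apportionment splits the total in
    # proportion to emitted mass: 12 and 4, which sum to 16 but say nothing about
    # what happens when a source is actually removed.
    contrib_1 = c_total * e1 / (e1 + e2)
    contrib_2 = c_total * e2 / (e1 + e2)
    print(contrib_1 + contrib_2)  # 16
    ```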

  11. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 107) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881

  12. An information theory account of cognitive control.

    PubMed

    Fan, Jin

    2014-01-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.
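    A minimal sketch of the basic quantity such an account builds on, the Shannon entropy of a task's stimulus distribution (the probabilities below are hypothetical):

    ```python
    # Shannon entropy as a measure of the uncertainty (in bits) that cognitive
    # control must resolve on each trial. Probabilities are hypothetical.
    import numpy as np

    def shannon_entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # A four-alternative task with equiprobable stimuli demands 2 bits per trial;
    # biasing the probabilities lowers the uncertainty to be resolved.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
    print(shannon_entropy([0.70, 0.10, 0.10, 0.10]))  # ~1.36 bits
    ```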

  13. Urine biomarkers informative of human kidney allograft rejection and tolerance.

    PubMed

    Nissaisorakarn, Voravech; Lee, John Richard; Lubetzky, Michelle; Suthanthiran, Manikkam

    2018-05-01

    We developed urinary cell messenger RNA (mRNA) profiling to monitor the in vivo status of human kidney allografts, based on our conceptualization that the kidney allograft may function as an in vivo flow cell sorter allowing graft-infiltrating cells access to the glomerular ultrafiltrate, and that interrogation of urinary cells is therefore informative of allograft status. For profiling urinary cells, we developed two-step preamplification-enhanced real-time quantitative PCR (RT-QPCR) assays with a customized amplicon; the preamplification compensates for the low RNA yield from urine, and the customized amplicon facilitates absolute quantification of mRNA and overcomes the inherent limitations of the relative quantification widely used in RT-QPCR assays. Herein, we review our discovery and validation of urinary cell mRNAs as noninvasive biomarkers prognostic and diagnostic of acute cellular rejection (ACR) in kidney allografts. We summarize our results reflecting the utility of urinary cell mRNA profiling for predicting reversal of ACR with anti-rejection therapy; differential diagnosis of kidney allograft dysfunction; and noninvasive diagnosis and prognosis of BK virus nephropathy. Messenger RNA profiles associated with human kidney allograft tolerance are also summarized in this review. Altogether, data supporting the idea that urinary cell mRNA profiles are informative of kidney allograft status and tolerance are reviewed in this report. Copyright © 2018. Published by Elsevier Inc.

  14. Accurate Quantification of T Cells by Measuring Loss of Germline T-Cell Receptor Loci with Generic Single Duplex Droplet Digital PCR Assays.

    PubMed

    Zoutman, Willem H; Nell, Rogier J; Versluis, Mieke; van Steenderen, Debby; Lalai, Rajshri N; Out-Luiting, Jacoba J; de Lange, Mark J; Vermeer, Maarten H; Langerak, Anton W; van der Velden, Pieter A

    2017-03-01

    Quantifying T cells accurately in a variety of tissues of benign, inflammatory, or malignant origin can be of great importance in a variety of clinical applications. Flow cytometry and immunohistochemistry are considered to be gold-standard methods for T-cell quantification. However, these methods require fresh, frozen, or fixated cells and tissue of a certain quality. In addition, conventional and droplet digital PCR (ddPCR), whether followed by deep sequencing techniques, have been used to elucidate T-cell content by focusing on rearranged T-cell receptor (TCR) genes. These approaches typically target the whole TCR repertoire, thereby supplying additional information about TCR use. We alternatively developed and validated two novel generic single duplex ddPCR assays to quantify T cells accurately by measuring loss of specific germline TCR loci and compared them with flow cytometry-based quantification. These assays target sequences between the Dδ2 and Dδ3 genes (TRD locus) and Dβ1 and Jβ1.1 genes (TRB locus) that become deleted systematically early during lymphoid differentiation. Because these ddPCR assays require small amounts of DNA instead of freshly isolated, frozen, or fixated material, initially unanalyzable (scarce) specimens can be assayed from now on, supplying valuable information about T-cell content. Our ddPCR method provides a novel and sensitive way for quantifying T cells relatively fast, accurate, and independent of the cellular context. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
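    The quantification principle lends itself to a simple back-of-the-envelope calculation, sketched below under the simplifying assumption that both alleles of the targeted germline locus are lost in T cells; the copy numbers are hypothetical ddPCR readouts, not data from the study.

    ```python
    # Illustrative calculation of the germline-loss principle: T cells delete the
    # germline TCR locus during rearrangement, so the T-cell fraction can be read
    # off as the fractional loss of that locus relative to a stably diploid
    # reference locus. Copy numbers (copies per microlitre) are hypothetical, and
    # biallelic loss in T cells is assumed for simplicity.
    germline_trd = 620.0     # intact Ddelta2-Ddelta3 germline copies
    reference    = 1000.0    # reference-locus copies (2 per cell in all cells)

    t_cell_fraction = 1.0 - germline_trd / reference
    print(f"Estimated T-cell fraction: {t_cell_fraction:.0%}")   # 38%
    ```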

  15. Evaluation of dry blood spot technique for quantification of an Anti-CD20 monoclonal antibody drug in human blood samples.

    PubMed

    Lin, Yong-Qing; Zhang, Yilu; Li, Connie; Li, Louis; Zhang, Kelley; Li, Shawn

    2012-01-01

    To evaluate the dried blood spot (DBS) technique in ELISA quantification of larger biomolecular drugs, an anti-CD20 monoclonal antibody drug was used as an example. A method for the quantification of the anti-CD20 drug in human DBS was developed and validated. The drug standard and quality control samples prepared in fresh human blood were spotted on DBS cards and then extracted. A luminescent ELISA was used for quantification of the drug from DBS samples. The assay range of the anti-CD20 drug standards in DBS was 100-2500 ng/mL. The intra-assay precision (%CV) ranged from 0.4% to 10.1%, and the accuracy (%Recovery) ranged from 77.9% to 113.9%. The inter-assay precision (%CV) ranged from 5.9% to 17.4%, and the accuracy ranged from 81.5% to 110.5%. The DBS samples diluted 500- and 50-fold yielded recoveries of 88.7% and 90.7%, respectively. The preparation of DBS under higher and lower hematocrit (53% and 35%) conditions did not affect the recovery of the drug. Furthermore, the storage stability of the anti-CD20 drug on DBS cards was tested under various conditions. It was found that the anti-CD20 drug was stable for one week in DBS stored at room temperature. However, stability was compromised in DBS stored at high humidity, high temperature (55°C), or exposed to direct daylight for a week, as well as in samples stored at room temperature and high humidity for a month. Stability did not change significantly in samples that underwent 3 freeze/thaw cycles. Our results demonstrate the successful use of the DBS technique in ELISA quantification of an anti-CD20 monoclonal antibody drug in human blood. The stability data provide information regarding sample storage and shipping for future clinical studies. It is, therefore, concluded that the DBS technique is applicable to the quantification of other large biomolecule drugs or biomarkers. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Information Theory Applied to Dolphin Whistle Vocalizations with Possible Application to SETI Signals

    NASA Astrophysics Data System (ADS)

    Doyle, Laurance R.; McCowan, Brenda; Hanser, Sean F.

    2002-01-01

    Information theory allows a quantification of the complexity of a given signaling system. We are applying information theory to dolphin whistle vocalizations, humpback whale songs, squirrel monkey chuck calls, and several other animal communication systems in order to develop a quantitative and objective way to compare the complexity of communication systems across species. Once signaling units have been correctly classified, the communication system must obey certain statistical distributions in order to contain complexity, whether it is human language, dolphin whistle vocalizations, or a system of communication signals received from an extraterrestrial source.
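    A rough sketch (ours) of the kind of quantification referred to above: given a sequence of classified signalling units, estimate the first-order Shannon entropy of the repertoire and the slope of its Zipf rank-frequency plot. The unit sequence is hypothetical.

    ```python
    # First-order entropy and Zipf slope of a classified signal-unit sequence.
    # The sequence is a hypothetical stand-in for classified whistle types.
    from collections import Counter
    import numpy as np

    units = list("ABABCABDABACBAABBACDABAB")
    counts = np.array(sorted(Counter(units).values(), reverse=True), dtype=float)
    probs = counts / counts.sum()

    entropy_bits = -np.sum(probs * np.log2(probs))    # first-order entropy of the repertoire
    ranks = np.arange(1, len(counts) + 1)
    zipf_slope = np.polyfit(np.log10(ranks), np.log10(counts), 1)[0]

    print(f"H1 = {entropy_bits:.2f} bits, Zipf slope = {zipf_slope:.2f}")
    ```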

  17. Histogram analysis for smartphone-based rapid hematocrit determination

    PubMed Central

    Jalal, Uddin M.; Kim, Sang C.; Shim, Joon S.

    2017-01-01

    A novel and rapid histogram-based analysis technique has been proposed for the colorimetric quantification of blood hematocrit. A smartphone-based “Histogram” app for the detection of hematocrit has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The developed histogram analysis is effective for automatic detection of the sample channel, including auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the analysis method is advantageous for quantifying blood hematocrit under both uniform and varying optical conditions. The rapid determination of blood hematocrit carries considerable information regarding physiological disorders, and the use of such reproducible, cost-effective, and standard techniques may effectively help with the diagnosis and prevention of a number of human diseases. PMID:28717569

  18. An MRM-based workflow for absolute quantitation of lysine-acetylated metabolic enzymes in mouse liver.

    PubMed

    Xu, Leilei; Wang, Fang; Xu, Ying; Wang, Yi; Zhang, Cuiping; Qin, Xue; Yu, Hongxiu; Yang, Pengyuan

    2015-12-07

    As a key post-translational modification mechanism, protein acetylation plays critical roles in regulating and/or coordinating cell metabolism. Acetylation is a prevalent modification process in enzymes. Protein acetylation modification occurs in sub-stoichiometric amounts; therefore extracting biologically meaningful information from these acetylation sites requires an adaptable, sensitive, specific, and robust method for their quantification. In this work, we combine immunoassays and multiple reaction monitoring-mass spectrometry (MRM-MS) technology to develop an absolute quantification for acetylation modification. With this hybrid method, we quantified the acetylation level of metabolic enzymes, which could demonstrate the regulatory mechanisms of the studied enzymes. The development of this quantitative workflow is a pivotal step for advancing our knowledge and understanding of the regulatory effects of protein acetylation in physiology and pathophysiology.

  19. Using multidimensional scaling to quantify similarity in visual search and beyond

    PubMed Central

    Godwin, Hayward J.; Fitzsimmons, Gemma; Robbins, Arryn; Menneer, Tamaryn; Goldinger, Stephen D.

    2017-01-01

    Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one’s keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem in that similarity relationships are imprecisely specified, limiting the capacity of the researcher to examine their influence adequately. In this article, we present a novel approach to overcoming this problem that combines multidimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique. PMID:26494381
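    A minimal sketch of the repurposing idea, using scikit-learn's MDS on a hypothetical dissimilarity matrix; distances between the recovered coordinates then serve as the quantified similarity used to construct or analyse search displays.

    ```python
    # Quantifying stimulus similarity via multidimensional scaling: pairwise
    # dissimilarity ratings go in, low-dimensional coordinates come out, and
    # inter-point distances serve as the similarity metric. The matrix is hypothetical.
    import numpy as np
    from sklearn.manifold import MDS

    # Symmetric dissimilarities among four stimuli (0 = identical).
    d = np.array([[0.0, 1.0, 3.0, 4.0],
                  [1.0, 0.0, 2.5, 3.5],
                  [3.0, 2.5, 0.0, 1.2],
                  [4.0, 3.5, 1.2, 0.0]])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(d)

    # Euclidean distance in MDS space between stimuli 0 and 1, usable as a
    # quantitative target-distractor similarity in a search experiment.
    print(np.linalg.norm(coords[0] - coords[1]))
    ```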

  20. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach to designing sparse kernel adaptive filters. To determine which data are useful to learn and to remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains that is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme that can drastically reduce time and space complexity without harming the performance of kernel adaptive filters. Examples of nonlinear regression, short-term chaotic time-series prediction, and long-term time-series forecasting are presented.
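
    As a rough illustration of a surprise-based sparsification rule, the sketch below assumes a Gaussian predictive distribution; the thresholds and numbers are hypothetical and are not the paper's settings.

    ```python
    # Illustrative sketch: "surprise" as the negative log-likelihood of a new
    # datum under the learner's current Gaussian predictive distribution.
    import numpy as np

    def surprise(y_new, y_pred_mean, y_pred_var):
        """Negative log predictive likelihood of a new output under a Gaussian."""
        return 0.5 * np.log(2 * np.pi * y_pred_var) \
            + 0.5 * (y_new - y_pred_mean) ** 2 / y_pred_var

    # Sparsification rule: learn only data that are informative enough, and
    # discard redundant or abnormal data (thresholds are hypothetical).
    T_ABNORMAL, T_REDUNDANT = 20.0, 0.5
    s = surprise(y_new=1.5, y_pred_mean=1.0, y_pred_var=0.04)
    if s > T_ABNORMAL:
        action = "reject as outlier"
    elif s > T_REDUNDANT:
        action = "add to dictionary (informative)"
    else:
        action = "discard (redundant)"
    print(f"surprise = {s:.2f} nats -> {action}")
    ```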

  1. Integrative analysis with ChIP-seq advances the limits of transcript quantification from RNA-seq.

    PubMed

    Liu, Peng; Sanalkumar, Rajendran; Bresnick, Emery H; Keleş, Sündüz; Dewey, Colin N

    2016-08-01

    RNA-seq is currently the technology of choice for global measurement of transcript abundances in cells. Despite its successes, isoform-level quantification remains difficult because short RNA-seq reads are often compatible with multiple alternatively spliced isoforms. Existing methods rely heavily on uniquely mapping reads, which are not available for numerous isoforms that lack regions of unique sequence. To improve quantification accuracy in such difficult cases, we developed a novel computational method, prior-enhanced RSEM (pRSEM), which uses a complementary data type in addition to RNA-seq data. We found that ChIP-seq data of RNA polymerase II and histone modifications were particularly informative in this approach. In qRT-PCR validations, pRSEM was shown to be superior to competing methods in estimating relative isoform abundances within or across conditions. Data-driven simulations suggested that pRSEM has a greatly decreased false-positive rate at the expense of a small increase in false-negative rate. In aggregate, our study demonstrates that pRSEM transforms existing capacity to precisely estimate transcript abundances, especially at the isoform level. © 2016 Liu et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Legionella detection by culture and qPCR: Comparing apples and oranges.

    PubMed

    Whiley, Harriet; Taylor, Michael

    2016-01-01

    Legionella spp. are the causative agent of Legionnaires' disease and an opportunistic pathogen of significant public health concern. Identification and quantification from environmental sources are crucial for identifying outbreak origins and providing sufficient information for risk assessment and disease prevention. Currently there is a range of methods for Legionella spp. quantification from environmental sources, but the two most widely used and accepted are culture and real-time polymerase chain reaction (qPCR). This paper provides a review of these two methods and outlines their advantages and limitations. Studies from the last 10 years which have concurrently used culture and qPCR to quantify Legionella spp. from environmental sources have been compiled. Of the 28 studies, 26 detected Legionella at a higher rate using qPCR than culture, whilst only one study detected equivalent levels of Legionella spp. using both qPCR and culture. Aggregating the environmental samples from all 28 studies, 2856/3967 (72%) tested positive for the presence of Legionella spp. using qPCR and 1331/3967 (34%) using culture. The lack of correlation between methods highlights the need to develop an acceptable standardized method for quantification that is sufficient for risk assessment and management of this human pathogen.

  3. IDAWG: Metabolic incorporation of stable isotope labels for quantitative glycomics of cultured cells

    PubMed Central

    Orlando, Ron; Lim, Jae-Min; Atwood, James A.; Angel, Peggi M.; Fang, Meng; Aoki, Kazuhiro; Alvarez-Manilla, Gerardo; Moremen, Kelley W.; York, William S.; Tiemeyer, Michael; Pierce, Michael; Dalton, Stephen; Wells, Lance

    2012-01-01

    Robust quantification is an essential component of comparative -omic strategies. In this regard, glycomics lags behind proteomics. Although various isotope-tagging and direct quantification methods have recently enhanced comparative glycan analysis, a cell culture labeling strategy that could provide for glycomics the advantages that SILAC provides for proteomics has not been described. Here we report the development of IDAWG, Isotopic Detection of Aminosugars With Glutamine, for the incorporation of differential mass tags into the glycans of cultured cells. In this method, culture media containing amide-15N-Gln is used to metabolically label cellular aminosugars with heavy nitrogen. Because the amide side chain of Gln is the sole source of nitrogen for the biosynthesis of GlcNAc, GalNAc, and sialic acid, we demonstrate that culturing mouse embryonic stem cells for 72 hours in the presence of amide-15N-Gln media results in nearly complete incorporation of 15N into N-linked and O-linked glycans. The isotopically heavy monosaccharide residues provide additional information for interpreting glycan fragmentation and also allow quantification in both full MS and MS/MS modes. Thus, IDAWG is a simple-to-implement yet powerful quantitative tool for the glycomics toolbox. PMID:19449840

  4. CEQer: a graphical tool for copy number and allelic imbalance detection from whole-exome sequencing data.

    PubMed

    Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo

    2013-01-01

    Copy number alterations (CNA) are common events occurring in leukaemias and solid tumors. Comparative Genome Hybridization (CGH) is currently the gold standard technique to analyze CNAs; however, CGH analysis requires dedicated instruments and can perform only low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for CNA/allelic-imbalance (AI) coupled analysis of exome sequencing data. By using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico-generated data; we then performed whole-exome sequencing from 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken together, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI detection generates very accurate CNA data. Therefore, we propose CEQer as an efficient, robust and user-friendly graphical tool for the identification of CNA/AI in the context of whole-exome sequencing data.
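
    The comparative idea can be sketched as follows; this is a conceptual illustration only, not CEQer's implementation, and the read counts are hypothetical.

    ```python
    # Conceptual sketch of comparative digital exon quantification: per-exon
    # read counts from a case/control pair are library-size normalized and
    # compared as a log2 ratio; sustained deviations suggest copy number change.
    import numpy as np

    case_counts = np.array([120, 130, 60, 58, 125], dtype=float)     # hypothetical
    control_counts = np.array([118, 126, 121, 119, 122], dtype=float)

    # Simplistic library-size normalization by total counts.
    case_norm = case_counts / case_counts.sum()
    control_norm = control_counts / control_counts.sum()

    # A small constant guards against zero counts in sparser data.
    log2_ratio = np.log2((case_norm + 1e-9) / (control_norm + 1e-9))
    print(np.round(log2_ratio, 2))  # the two middle exons show clearly negative
                                    # ratios relative to their neighbours (a loss signal)
    ```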

  5. Event-based analysis of free-living behaviour.

    PubMed

    Granat, Malcolm H

    2012-11-01

    The quantification of free-living physical activities is important for understanding how physical activity and sedentary behaviour impact health, and how interventions might modify free-living behaviour to enhance health. Quantification, and the terminology used, has in many ways been determined by the choice of measurement technique. The inter-related issues around measurement devices and the terminology used are explored. This paper proposes a terminology and a systematic approach for the analysis of free-living activity information using event-based activity data. The event-based approach uses a flexible hierarchical classification of events and, depending on the research question, analysis can then be undertaken on a selection of these events. The quantification of free-living behaviour is therefore the result of the analysis of the patterns of these chosen events. The application of this approach is illustrated with results from a range of published studies by our group, showing how event-based analysis provides a flexible yet robust method of addressing the research question(s) and gives a deeper insight into free-living behaviour. It is proposed that event-based analysis allows us to understand more clearly how behaviour is related to health and to produce more relevant outcome measures.

  6. Library and Information Professionals as Knowledge Engagement Specialists. Theories, Competencies and Current Educational Possibilities in Accredited Graduate Programmes

    ERIC Educational Resources Information Center

    Prado, Javier Calzada; Marzal, Miguel Angel

    2013-01-01

    Introduction: The role of library and information science professionals as knowledge facilitators is solidly grounded in the profession's theoretical foundations as much as connected with its social relevance. Knowledge science is presented in this paper as a convenient theoretical framework for this mission, and knowledge engagement…

  7. What did Erwin mean? The physics of information from the materials genomics of aperiodic crystals and water to molecular information catalysts and life.

    PubMed

    Varn, D P; Crutchfield, J P

    2016-03-13

    Erwin Schrödinger famously and presciently ascribed the vehicle transmitting the hereditary information underlying life to an 'aperiodic crystal'. We compare and contrast this, only later discovered to be stored in the linear biomolecule DNA, with the information-bearing, layered quasi-one-dimensional materials investigated by the emerging field of chaotic crystallography. Despite differences in functionality, the same information measures capture structure and novelty in both, suggesting an intimate coherence between the information character of biotic and abiotic matter: a broadly applicable physics of information. We review layered solids and consider three examples of how information- and computation-theoretic techniques are being applied to understand their structure. In particular, (i) we review recent efforts to apply new kinds of information measures to quantify disordered crystals; (ii) we discuss the structure of ice I in information-theoretic terms; and (iii) we recount recent investigations into the structure of tris(bicyclo[2.1.1]hexeno)benzene, showing how an information-theoretic analysis yields additional insight into its structure. We then illustrate a new Second Law of Thermodynamics that describes information processing in active low-dimensional materials, reviewing Maxwell's Demon and a new class of molecular devices that act as information catalysts. Lastly, we conclude by speculating on how these ideas from informational materials science may impact biology. © 2016 The Author(s).

  8. Python for Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano

    2008-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
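
    A toy example of the kind of computation involved is sketched below: a naive plug-in mutual information estimate between a discrete stimulus and a binned response, together with a standard first-order limited-sampling bias term. The data are simulated and the snippet is not the toolbox described in the article.

    ```python
    # Plug-in mutual information between a discrete stimulus and a binned
    # response, plus a first-order (occupied-bin) estimate of the sampling bias.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 200
    stimulus = rng.integers(0, 4, size=n_trials)                       # 4 stimuli
    response = np.clip(stimulus + rng.integers(-1, 2, size=n_trials), 0, 4)  # noisy code

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    joint, _, _ = np.histogram2d(stimulus, response,
                                 bins=[np.arange(5), np.arange(6)])
    mi_plugin = entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) \
        - entropy(joint.ravel())

    # First-order bias grows with the number of occupied bins and shrinks
    # with the number of trials.
    bias = (np.count_nonzero(joint) - np.count_nonzero(joint.sum(axis=1))
            - np.count_nonzero(joint.sum(axis=0)) + 1) / (2 * n_trials * np.log(2))
    print(f"I_plugin = {mi_plugin:.3f} bits, approx. bias = {bias:.3f} bits")
    ```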

  9. Quantification and Discrimination of in Vitro Regeneration Swertia nervosa at Different Growth Periods using the UPLC/UV Coupled with Chemometric Method.

    PubMed

    Li, Jie; Zhang, Ji; Zuo, Zhitian; Huang, Hengyu; Wang, Yuanzhong

    2018-05-09

    Background: Swertia nervosa (Wall. ex G. Don) C. B. Clarke, a promising traditional herbal medicine for the treatment of liver disorders, is endangered due to extensive collection and unsustainable harvesting practices. Objective: The aim of this study is to examine the diversity of metabolites (loganic acid, sweroside, swertiamarin, and gentiopicroside) across different growth stages and organs of Swertia nervosa using ultra-high-performance LC (UPLC)/UV coupled with chemometric methods. Methods: UPLC data, UV data, and fused data were treated separately to extract useful information by partial least-squares discriminant analysis (PLS-DA). Hierarchical cluster analysis (HCA), an unsupervised method, was then employed to validate the PLS-DA results. Results: The three strategies revealed different chemical information relevant to sample discrimination. UV information mainly contributed to the classification of different organs; UPLC information contributed prominently to the classification of both organs and growth periods; data fusion showed no apparent superiority over single-data analysis, although it provided useful information to differentiate leaves that could not be recognized by UPLC alone. Quantification showed that the content of swertiamarin was the highest of the four metabolites, especially in leaves at the rooted stage (19.57 ± 5.34 mg/g). We therefore speculate that interactive transformations occurred among these four metabolites, facilitated by root formation. Conclusions: This work will contribute to the exploitation of bioactive compounds of S. nervosa, as well as its large-scale propagation. Highlights: Root formation may influence the distribution and accumulation of metabolites.

  10. EPRI/NRC-RES fire human reliability analysis guidelines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan

    2010-03-01

    During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the USNRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state-of-the-art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities', which addressed fire risk for at-power operations. NUREG/CR-6850 developed high-level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire-generated conditions, building upon existing human reliability analysis (HRA) methods. This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA analysis. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.

  11. Toward theoretical understanding of the fertility preservation decision-making process: examining information processing among young women with cancer.

    PubMed

    Hershberger, Patricia E; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer

    2013-01-01

    Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. The purpose of this article is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Using a grounded theory approach, 27 women with cancer participated in individual, semistructured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by 5 dimensions within the Contemplate phase of the decision-making process framework. In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Better understanding of theoretical underpinnings surrounding women's information processes can facilitate decision support and improve clinical care.

  12. Role of information theoretic uncertainty relations in quantum theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz; ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin; Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk

    2015-04-15

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
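
    For reference, the standard textbook forms of the quantities named above are given below; the paper's generalized Rényi-entropy relations are not reproduced here.

    ```latex
    % Differential Rényi entropy, entropy power, and the Shannon-entropy
    % (Bialynicki-Birula-Mycielski) uncertainty relation for position and momentum.
    \begin{align}
      H_\alpha(X) &= \frac{1}{1-\alpha}\,\ln\!\int p(x)^{\alpha}\,dx,
          \qquad \alpha > 0,\ \alpha \neq 1,\\
      N(X) &= \frac{1}{2\pi e}\, e^{2H_1(X)},
          \qquad H_1 = \lim_{\alpha\to 1} H_\alpha \ \text{(Shannon)},\\
      H_1(x) + H_1(p) &\geq \ln(\pi e \hbar)
          \quad\Longleftrightarrow\quad N(x)\,N(p) \geq \frac{\hbar^{2}}{4}.
    \end{align}
    ```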

  13. A Theoretical Sketch of Medical Professionalism as a Normative Complex

    ERIC Educational Resources Information Center

    Holtman, Matthew C.

    2008-01-01

    Validity arguments for assessment tools intended to measure medical professionalism suffer for lack of a clear theoretical statement of what professionalism is and how it should behave. Drawing on several decades of field research addressing deviance and informal social control among physicians, a theoretical sketch of professionalism is presented…

  14. The use of information theory for the evaluation of biomarkers of aging and physiological age.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-04-01

    The present work explores the application of information theoretical measures, such as entropy and normalized mutual information, for research of biomarkers of aging. The use of information theory affords unique methodological advantages for the study of aging processes, as it allows evaluating non-linear relations between biological parameters, providing the precise quantitative strength of those relations, both for individual and multiple parameters, showing cumulative or synergistic effect. Here we illustrate those capabilities utilizing a dataset on heart disease, including diagnostic parameters routinely available to physicians. The use of information-theoretical methods, utilizing normalized mutual information, revealed the exact amount of information that various diagnostic parameters or their combinations contained about the persons' age. Based on those exact informative values for the correlation of measured parameters with age, we constructed a diagnostic rule (a decision tree) to evaluate physiological age, as compared to chronological age. The present data illustrated that younger subjects suffering from heart disease showed characteristics of people of higher age (higher physiological age). Utilizing information-theoretical measures, with additional data, it may be possible to create further clinically applicable information-theory-based markers and models for the evaluation of physiological age, its relation to age-related diseases and its potential modifications by therapeutic interventions. Copyright © 2017 Elsevier B.V. All rights reserved.
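
    A small sketch of the kind of calculation involved follows, using simulated data rather than the heart-disease dataset analyzed in the study, and assuming scikit-learn is available.

    ```python
    # Normalized mutual information between a discretized diagnostic parameter
    # and discretized age (simulated, illustrative data only).
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    rng = np.random.default_rng(1)
    age = rng.integers(30, 80, size=300)
    biomarker = 0.5 * age + rng.normal(0, 5, size=300)  # hypothetical age-related marker

    # Discretize both variables, as information-theoretic measures for
    # continuous data are often computed on binned values.
    age_bins = np.digitize(age, bins=[40, 50, 60, 70])
    marker_bins = np.digitize(biomarker, bins=np.quantile(biomarker, [0.25, 0.5, 0.75]))

    nmi = normalized_mutual_info_score(age_bins, marker_bins)
    print(f"normalized mutual information with age: {nmi:.2f}")  # value in [0, 1]
    ```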

  15. Towards quantification of vibronic coupling in photosynthetic antenna complexes

    NASA Astrophysics Data System (ADS)

    Singh, V. P.; Westberg, M.; Wang, C.; Dahlberg, P. D.; Gellen, T.; Gardiner, A. T.; Cogdell, R. J.; Engel, G. S.

    2015-06-01

    Photosynthetic antenna complexes harvest sunlight and efficiently transport energy to the reaction center, where charge separation powers biochemical energy storage. The discovery of long-lived quantum coherence during energy transfer has sparked discussion on the role of quantum coherence in energy transfer efficiency. Early works assigned the observed coherences to electronic states, and theoretical studies showed that electronic coherences could affect energy transfer efficiency by either enhancing or suppressing transfer. However, the nature of the coherences has been fiercely debated, as coherences only report the energy gap between the states that generate the coherence signals. Recent works have suggested that the coherences observed in photosynthetic antenna complexes arise either from vibrational wave packets on the ground state or, alternatively, from mixed electronic and vibrational states. Understanding the origin of the coherences is important for designing molecules for efficient light harvesting. Here, we present a direct experimental observation from a mutant of LH2 that lacks B800 chromophores to distinguish between electronic, vibrational, and vibronic coherence. We also present a minimal theoretical model to characterize the coherences both in the two limiting cases of purely vibrational and purely electronic coherence and in the intermediate, vibronic regime.

  16. A theoretical introduction to "combinatory SYBRGreen qPCR screening", a matrix-based approach for the detection of materials derived from genetically modified plants.

    PubMed

    Van den Bulcke, Marc; Lievens, Antoon; Barbau-Piednoir, Elodie; MbongoloMbella, Guillaume; Roosens, Nancy; Sneyers, Myriam; Casi, Amaya Leunda

    2010-03-01

    The detection of genetically modified (GM) materials in food and feed products is a complex multi-step analytical process involving screening, identification, and often quantification of the genetically modified organisms (GMOs) present in a sample. "Combinatory qPCR SYBRGreen screening" (CoSYPS) is a matrix-based approach for determining the presence of GM plant materials in products. The CoSYPS decision-support system (DSS) interprets the analytical results of SYBRGreen qPCR analysis based on four values: the Ct and Tm values and the LOD and LOQ for each method. A theoretical explanation of the different concepts applied in CoSYPS analysis is given (GMO Universe, "Prime number tracing", matrix/combinatory approach) and documented using the RoundUp Ready soy GTS40-3-2 as an example. By applying a limited set of SYBRGreen qPCR methods and a newly developed "prime number"-based algorithm, the subset of corresponding GMOs present in a sample can be determined. Together, these analyses provide guidance for semi-quantitative estimation of GMO presence in a food and feed product.
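
    The prime-number idea can be illustrated conceptually as follows; this sketch is not the published CoSYPS algorithm, and the screening-element names are examples only. Each element is assigned a unique prime so that the product of the primes for the elements detected in a sample identifies that combination uniquely.

    ```python
    # Conceptual illustration of prime-number encoding of screening targets
    # (not the published CoSYPS algorithm; element names are examples only).
    from math import prod

    PRIMES = {"p35S": 2, "tNOS": 3, "CTP2-CP4-EPSPS": 5, "lectin": 7}

    def sample_code(detected_elements):
        """Product of primes for all elements detected by qPCR screening."""
        return prod(PRIMES[e] for e in detected_elements)

    # A GM event is characterized by the set of elements it carries; its code
    # must divide the sample code for the event to be compatible with the result,
    # because prime factorization is unique.
    event_code = sample_code(["p35S", "tNOS", "CTP2-CP4-EPSPS", "lectin"])
    observed = sample_code(["p35S", "tNOS", "CTP2-CP4-EPSPS", "lectin"])
    print("event compatible with screening result:", observed % event_code == 0)
    ```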

  17. Quantification of Operational Risk Using A Data Mining

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    1999-01-01

    What is Data Mining? - Data Mining is the process of finding actionable information hidden in raw data. - Data Mining helps find hidden patterns, trends, and important relationships often buried in a sea of data - Typically, automated software tools based on advanced statistical analysis and data modeling technology can be utilized to automate the data mining process

  18. Characterization and elimination of undesirable protein residues in plant cell walls for enhancing lignin analysis by solution-state 2D gel-NMR methods

    USDA-ARS?s Scientific Manuscript database

    Proteins exist in every plant cell wall. Certain protein residues interfere with lignin characterization and quantification. The current solution-state 2D-NMR technique (gel-NMR) for whole plant cell wall structural profiling provides detailed information regarding cell walls and proteins. However, ...

  19. Differential Decay of Cattle-associated Fecal Indicator Bacteria and Microbial Source Tracking Markers in Fresh and Marine Water (ASM 2017 Presentation)

    EPA Science Inventory

    Background: Fecal indicator bacteria (FIB) have a long history of use in the assessment of the microbial quality of recreational waters. However, quantification of FIB provides no information about the pollution source(s) and relatively little is known about their fate in the amb...

  20. Odor and odorous chemical emissions from animal buildings: Part 5 - Correlations between odor intensities and chemical concentrations (GC-MS/O)

    USDA-ARS?s Scientific Manuscript database

    Simultaneous chemical and sensory analysis based on gas chromatography-mass spectrometry-olfactometry (GC-MS-O) of air samples from livestock operations is a very useful approach for quantification of target odorous gases and also for ranking of odorous compounds. This information can help link spec...

  1. Emergy Quantification of Educational Attainment in the U.S. and the Emergy Signature of the Nation from 1900 to 2007

    EPA Science Inventory

    The information held in the education and experience of a people is a resource that requires time to develop and a cycle of copying and replacement (i.e., the education system) to maintain. Knowledge itself grows with the expanding research component of universities; however, we ...

  2. Shaping thin film growth and microstructure pathways via plasma and deposition energy: a detailed theoretical, computational and experimental analysis.

    PubMed

    Sahu, Bibhuti Bhusan; Han, Jeon Geon; Kersten, Holger

    2017-02-15

    Understanding the science and engineering of thin films grown by plasma-assisted deposition methods with controlled growth and microstructure is a key issue in modern nanotechnology, impacting both fundamental research and technological applications. Different plasma constituents, such as electrons, ions, radical species and neutrals, play a critical role in nucleation and growth, in the resulting film microstructure, and in plasma-induced surface chemistry. The film microstructure is also closely associated with the deposition energy, which is controlled by electrons, ions, radical species and activated neutrals. Integrated studies of the fundamental physical properties that govern these plasmas seek to determine their structure and modification capabilities under specific experimental conditions; this requires the identification, determination, and quantification of the surface activity of the species in the plasma. Here, we report a detailed study of hydrogenated amorphous and crystalline silicon (c-Si:H) deposition processes to investigate the evolution of plasma parameters using a theoretical model. The deposition processes, carried out by plasma-enhanced chemical vapor deposition, are characterized by a reactive mixture of hydrogen and silane. The various contributions of energy fluxes to the substrate are then considered and modeled to investigate their role in the growth and microstructure of the deposited film. Numerous plasma diagnostic tools are used to compare the experimental data with the theoretical results. The film growth and microstructure are evaluated in light of the deposition energy flux under different operating conditions.

  3. A numerical analysis of the Born approximation for image formation modeling of differential interference contrast microscopy for human embryos

    NASA Astrophysics Data System (ADS)

    Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir

    2008-03-01

    The differential interference contrast (DIC) microscope is commonly used for the visualization of live biological specimens. Being a non-invasive modality, it enables the viewing of transparent specimens while preserving their viability. Fertility clinics often use the DIC microscope to evaluate the quality of human embryos. Towards quantification and reconstruction of the visualized specimens, an image formation model for DIC imaging is sought and the interaction of light waves with biological matter is examined. In many image formation models the light-matter interaction is expressed via the first Born approximation. The validity region of this approximation is defined by a theoretical bound that limits its use to very small specimens with low dielectric contrast. In this work the Born approximation is investigated via the Helmholtz equation, which describes the interaction between the specimen and light. A solution for the field at the lens is derived using a Gauss-Legendre quadrature formulation. This numerical scheme is considered both accurate and efficient and significantly shortens the computation time compared with integration methods that require a great amount of sampling to satisfy the Whittaker-Shannon sampling theorem. By comparing the numerical results with the theoretical values, it is shown that the theoretical bound is not directly relevant to microscopic imaging and is far too limiting. The exhaustive numerical experiments show that the Born approximation is inappropriate for modeling the visualization of thick human embryos.
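
    For orientation, the scalar Helmholtz equation and the first Born approximation can be written in their standard textbook forms; these expressions are not reproduced from the paper itself.

    ```latex
    % Scalar Helmholtz equation for a specimen with refractive index n(r), and
    % the first Born approximation to the total field, which replaces the field
    % inside the integral by the incident field u_0 and is therefore valid only
    % for small, low-contrast (weakly scattering) objects.
    \begin{align}
      \nabla^{2} u(\mathbf{r}) + k^{2} n^{2}(\mathbf{r})\, u(\mathbf{r}) &= 0,\\
      u(\mathbf{r}) \approx u_{0}(\mathbf{r})
        + k^{2}\!\int G(\mathbf{r}-\mathbf{r}')\,
          \bigl[n^{2}(\mathbf{r}') - 1\bigr]\, u_{0}(\mathbf{r}')\, d^{3}r',
      \qquad
      G(\mathbf{r}) = \frac{e^{ik|\mathbf{r}|}}{4\pi|\mathbf{r}|}.
    \end{align}
    ```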

  4. Modeling the chemistry of plasma polymerization using mass spectrometry.

    PubMed

    Ihrig, D F; Stockhaus, J; Scheide, F; Winkelhake, Oliver; Streuber, Oliver

    2003-04-01

    The goal of the project is a solvent-free painting shop. The environmental technologies laboratory is developing plasma etching and polymerization processes. Polymerized thin films serve as first-order corrosion protection and as a primer for painting. Using pure acetylene, we obtain well-formed thin films that are, however, poorly bonded. By using air as the bulk gas, it is possible to polymerize, in an acetylene plasma, well-bonded thin films that provide stable first-order corrosion protection and good primers. UV/Vis spectroscopy shows nitrogen oxide radicals in the emission spectra of pure nitrogen and air, but nitrogen oxide is fully suppressed in the presence of acetylene. IR spectroscopy shows only C=O, CH(2) and CH(3) groups and no nitrogen species. With the aid of the UV/Vis spectra and the chemistry of ozone formation, it is possible to define reactive traps and steps, molecule depletion, and processes of proton scavenging and proton loss. Using a numerical model, it is possible to evaluate these processes and to calculate theoretical mass spectra. Adjusting the theoretical mass spectra to the real measurements reveals specific channels of polymerization that are driven by radicals, especially the acetyl radical. The estimated theoretical mass spectra show the specific channels of these chemical processes. It is possible to quantify these channels; this quantification represents the mass flow through the chemical system. With respect to these chemical processes, it is possible to form an idea of the pollutant production processes.

  5. On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis

    NASA Astrophysics Data System (ADS)

    Yen, Tony Minghung

    This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high-throughput nucleic acid quantification on a chip, outside of hospital and centralized laboratory settings, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification-free, low in instrument cost, and high-throughput, but are generally less sensitive than sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress toward a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while remaining compatible with microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nanoliter droplet self-assembly workflow to produce a 50 aM concentration sensitivity, 6 orders of dynamic range, and a rapid hybridization time of under 5 minutes. The second device concept focuses on improving target copy number sensitivity instead of concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is considered to guide device optimization. Device pattern size and target copy number are optimized based on model prediction to achieve maximal hybridization efficiency. At a 100-μm pattern size, a quantum leap in detection limit of 570 copies is achieved using the black silicon array device with a self-assembled picoliter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at the 100-μm pattern size and is thus not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at the 100-μm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While the work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement this method on the microarray platform toward clinical sample testing for disease applications and future commercialization.

  6. An information theory account of cognitive control

    PubMed Central

    Fan, Jin

    2014-01-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory. PMID:25228875

  7. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings.

    PubMed

    Liaw, Siaw-Teng; Pearce, Christopher; Liyanage, Harshana; Liaw, Gladys S S; de Lusignan, Simon

    2014-01-01

    Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly 'big-data' environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine the applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  8. Using a theoretical framework to investigate whether the HIV/AIDS information needs of the AfroAIDSinfo Web portal members are met: a South African eHealth study.

    PubMed

    Van Zyl, Hendra; Kotze, Marike; Laubscher, Ria

    2014-03-28

    eHealth has been identified as a useful approach to disseminate HIV/AIDS information. Together with Consumer Health Informatics (CHI), the Web-to-Public Knowledge Transfer Model (WPKTM) has been applied as a theoretical framework to identify consumer needs for AfroAIDSinfo, a South African Web portal. As part of the CHI practice, regular eSurveys are conducted to determine whether these needs are changing and are continually being met. eSurveys show high rates of satisfaction with the content as well as the modes of delivery. The information is regarded as reliable enough to reuse, both for education and for referencing. Using CHI and the WPKTM as a theoretical framework ensures that the needs of consumers are being met and that they find the tailored methods of presenting the information agreeable. By combining ICTs and theories in eHealth interventions, this approach can be expanded to deliver information in other sectors of public health.

  9. Using a Theoretical Framework to Investigate Whether the HIV/AIDS Information Needs of the AfroAIDSinfo Web Portal Members Are Met: A South African eHealth Study

    PubMed Central

    Van Zyl, Hendra; Kotze, Marike; Laubscher, Ria

    2014-01-01

    eHealth has been identified as a useful approach to disseminate HIV/AIDS information. Together with Consumer Health Informatics (CHI), the Web-to-Public Knowledge Transfer Model (WPKTM) has been applied as a theoretical framework to identify consumer needs for AfroAIDSinfo, a South African Web portal. As part of the CHI practice, regular eSurveys are conducted to determine whether these needs are changing and are continually being met. eSurveys show high rates of satisfaction with the content as well as the modes of delivery. The information is regarded as reliable enough to reuse, both for education and for referencing. Using CHI and the WPKTM as a theoretical framework ensures that the needs of consumers are being met and that they find the tailored methods of presenting the information agreeable. By combining ICTs and theories in eHealth interventions, this approach can be expanded to deliver information in other sectors of public health. PMID:24686487

  10. Practice Evaluation Strategies Among Social Workers: Why an Evidence-Informed Dual-Process Theory Still Matters.

    PubMed

    Davis, Thomas D

    2017-01-01

    Practice evaluation strategies range in style from the formal-analytic tools of single-subject designs, rapid assessment instruments, algorithmic steps in evidence-informed practice, and computer software applications, to the informal-interactive tools of clinical supervision, consultation with colleagues, use of client feedback, and clinical experience. The purpose of this article is to provide practice researchers in social work with an evidence-informed theory that is capable of explaining both how and why social workers use practice evaluation strategies to self-monitor the effectiveness of their interventions in terms of client change. The author delineates the theoretical contours and consequences of what is called dual-process theory. Drawing on evidence-informed advances in the cognitive and social neurosciences, the author identifies among everyday social workers a theoretically stable, informal-interactive tool preference that is a cognitively necessary, sufficient, and stand-alone preference that requires neither the supplementation nor balance of formal-analytic tools. The author's delineation of dual-process theory represents a theoretical contribution in the century-old attempt to understand how and why social workers evaluate their practice the way they do.

  11. Information-theoretic decomposition of embodied and situated systems.

    PubMed

    Da Rold, Federico

    2018-07-01

    The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
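
    The two measures can be sketched with naive histogram estimators as follows; the time series and binning here are simulated and simplistic, not the estimators or robot data used in the article.

    ```python
    # Histogram-based estimates of mutual information and transfer entropy
    # between two coupled time series (illustrative only).
    import numpy as np

    def _entropy(counts):
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    def mutual_info(x, y, bins=6):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        return _entropy(joint.sum(1)) + _entropy(joint.sum(0)) - _entropy(joint.ravel())

    def transfer_entropy(src, dst, bins=6):
        """TE(src -> dst): information src_t adds about dst_{t+1} beyond dst_t."""
        d_next, d_now, s_now = dst[1:], dst[:-1], src[:-1]
        h3, _ = np.histogramdd(np.column_stack([d_next, d_now, s_now]), bins=bins)
        h_dd, _, _ = np.histogram2d(d_next, d_now, bins=bins)
        h_ds, _, _ = np.histogram2d(d_now, s_now, bins=bins)
        h_d, _ = np.histogram(d_now, bins=bins)
        return _entropy(h_dd.ravel()) + _entropy(h_ds.ravel()) \
            - _entropy(h3.ravel()) - _entropy(h_d)

    rng = np.random.default_rng(2)
    sensor = rng.normal(size=2000)
    # Motor output depends on current and previous sensor values plus noise.
    motor = 0.5 * sensor + 0.5 * np.roll(sensor, 1) + 0.2 * rng.normal(size=2000)

    print(f"I(sensor; motor)    = {mutual_info(sensor, motor):.2f} bits")
    print(f"TE(sensor -> motor) = {transfer_entropy(sensor, motor):.2f} bits")
    print(f"TE(motor -> sensor) = {transfer_entropy(motor, sensor):.2f} bits")
    ```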

  12. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    PubMed

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
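
    A simplified re-implementation of the "random chance" scenario is sketched below; the parameter values are hypothetical and are not those of the published simulations.

    ```python
    # Simulate sampling information sources at random until every code in the
    # population has been observed at least once (theoretical saturation).
    import numpy as np

    rng = np.random.default_rng(3)
    n_codes = 30
    p_observe = rng.uniform(0.05, 0.4, size=n_codes)  # per-source observation prob.

    def sample_size_until_saturation(p_observe, rng):
        """Sample sources at random until all codes have been observed once."""
        seen = np.zeros(len(p_observe), dtype=bool)
        n_sources = 0
        while not seen.all():
            n_sources += 1
            seen |= rng.random(len(p_observe)) < p_observe  # codes held by this source
        return n_sources

    runs = [sample_size_until_saturation(p_observe, rng) for _ in range(500)]
    print(f"median sample size to saturation: {np.median(runs):.0f} sources")
    ```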

  13. Quantification of Flavin-containing Monooxygenases 1, 3, and 5 in Human Liver Microsomes by UPLC-MRM-Based Targeted Quantitative Proteomics and Its Application to the Study of Ontogeny.

    PubMed

    Chen, Yao; Zane, Nicole R; Thakker, Dhiren R; Wang, Michael Zhuo

    2016-07-01

    Flavin-containing monooxygenases (FMOs) have a significant role in the metabolism of small molecule pharmaceuticals. Among the five human FMOs, FMO1, FMO3, and FMO5 are the most relevant to hepatic drug metabolism. Although age-dependent hepatic protein expression, based on immunoquantification, has been reported previously for FMO1 and FMO3, there is very little information on hepatic FMO5 protein expression. To overcome the limitations of immunoquantification, an ultra-performance liquid chromatography (UPLC)-multiple reaction monitoring (MRM)-based targeted quantitative proteomic method was developed and optimized for the quantification of FMO1, FMO3, and FMO5 in human liver microsomes (HLM). A post-in silico product ion screening process was incorporated to verify LC-MRM detection of potential signature peptides before their synthesis. The developed method was validated by correlating marker substrate activity and protein expression in a panel of adult individual donor HLM (age 39-67 years). The mean (range) protein expression of FMO3 and FMO5 was 46 (26-65) pmol/mg HLM protein and 27 (11.5-49) pmol/mg HLM protein, respectively. To demonstrate quantification of FMO1, a panel of fetal individual donor HLM (gestational age 14-20 weeks) was analyzed. The mean (range) FMO1 protein expression was 7.0 (4.9-9.7) pmol/mg HLM protein. Furthermore, the ontogenetic protein expression of FMO5 was evaluated in fetal, pediatric, and adult HLM. The quantification of FMO proteins was also compared using two different calibration standards, recombinant proteins versus synthetic signature peptides, to assess the ratio of holoprotein to total protein. In conclusion, a UPLC-MRM-based targeted quantitative proteomic method has been developed for the quantification of FMO enzymes in HLM. Copyright © 2016 by The American Society for Pharmacology and Experimental Therapeutics.
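
    The arithmetic behind this kind of MRM quantification is typically stable-isotope-dilution based; the sketch below uses illustrative numbers, assumes equal response factors for light and heavy peptides, and is not the study's calibration data (which compared recombinant-protein and synthetic-peptide standards).

    ```python
    # Generic stable-isotope-dilution arithmetic for MRM-based quantification
    # (illustrative numbers only).
    peak_area_light = 4.2e5     # signature peptide from the digested HLM sample
    peak_area_heavy = 3.5e5     # spiked stable-isotope-labeled peptide standard
    heavy_spiked_pmol = 2.0     # amount of heavy standard added to the digest
    hlm_protein_mg = 0.05       # microsomal protein digested

    # Assuming equal response factors for light and heavy peptide:
    analyte_pmol = (peak_area_light / peak_area_heavy) * heavy_spiked_pmol
    expression = analyte_pmol / hlm_protein_mg
    print(f"{expression:.0f} pmol per mg HLM protein")   # 48 pmol/mg in this example
    ```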

  14. Quantification of Flavin-containing Monooxygenases 1, 3, and 5 in Human Liver Microsomes by UPLC-MRM-Based Targeted Quantitative Proteomics and Its Application to the Study of Ontogeny

    PubMed Central

    Chen, Yao; Zane, Nicole R.; Thakker, Dhiren R.

    2016-01-01

    Flavin-containing monooxygenases (FMOs) have a significant role in the metabolism of small molecule pharmaceuticals. Among the five human FMOs, FMO1, FMO3, and FMO5 are the most relevant to hepatic drug metabolism. Although age-dependent hepatic protein expression, based on immunoquantification, has been reported previously for FMO1 and FMO3, there is very little information on hepatic FMO5 protein expression. To overcome the limitations of immunoquantification, an ultra-performance liquid chromatography (UPLC)-multiple reaction monitoring (MRM)-based targeted quantitative proteomic method was developed and optimized for the quantification of FMO1, FMO3, and FMO5 in human liver microsomes (HLM). A post-in silico product ion screening process was incorporated to verify LC-MRM detection of potential signature peptides before their synthesis. The developed method was validated by correlating marker substrate activity and protein expression in a panel of adult individual donor HLM (age 39–67 years). The mean (range) protein expression of FMO3 and FMO5 was 46 (26–65) pmol/mg HLM protein and 27 (11.5–49) pmol/mg HLM protein, respectively. To demonstrate quantification of FMO1, a panel of fetal individual donor HLM (gestational age 14–20 weeks) was analyzed. The mean (range) FMO1 protein expression was 7.0 (4.9–9.7) pmol/mg HLM protein. Furthermore, the ontogenetic protein expression of FMO5 was evaluated in fetal, pediatric, and adult HLM. The quantification of FMO proteins was also compared using two different calibration standards, recombinant proteins versus synthetic signature peptides, to assess the ratio of holoprotein to total protein. In conclusion, a UPLC-MRM-based targeted quantitative proteomic method has been developed for the quantification of FMO enzymes in HLM. PMID:26839369

  15. Automated Quantification and Integrative Analysis of 2D and 3D Mitochondrial Shape and Network Properties

    PubMed Central

    Nikolaisen, Julie; Nilsson, Linn I. H.; Pettersen, Ina K. N.; Willems, Peter H. G. M.; Lorens, James B.; Koopman, Werner J. H.; Tronstad, Karl J.

    2014-01-01

    Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. “mitochondrial dynamics”) are linked to cellular (patho)physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology, they are not applicable for thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescence protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate that 3D imaging and quantification are crucial for proper understanding of mitochondrial shape and topology in non-flat cells. In summary, we here present an integrative method for unbiased 3D quantification of mitochondrial shape and network properties in mammalian cells. PMID:24988307

  16. Simultaneous quantification of the viral antigens hemagglutinin and neuraminidase in influenza vaccines by LC-MSE.

    PubMed

    Creskey, Marybeth C; Li, Changgui; Wang, Junzhi; Girard, Michel; Lorbetskie, Barry; Gravel, Caroline; Farnsworth, Aaron; Li, Xuguang; Smith, Daryl G S; Cyr, Terry D

    2012-07-06

    Current methods for quality control of inactivated influenza vaccines prior to regulatory approval include determining the hemagglutinin (HA) content by single radial immunodiffusion (SRID), verifying neuraminidase (NA) enzymatic activity, and demonstrating that the levels of the contaminant protein ovalbumin are below a set threshold of 1 μg/dose. The SRID assays require the availability of strain-specific reference HA antigens and antibodies, the production of which is a potential rate-limiting step in vaccine development and release, particularly during a pandemic. Immune responses induced by neuraminidase also contribute to protection from infection; however, the amounts of NA antigen in influenza vaccines are currently not quantified or standardized. Here, we report a method for vaccine analysis that yields simultaneous quantification of HA and NA levels much more rapidly than conventional HA quantification techniques, while providing additional valuable information on the total protein content. Enzymatically digested vaccine proteins were analyzed by LC-MS(E), a mass spectrometric technology that allows absolute quantification of analytes, including the HA and NA antigens, other structural influenza proteins and chicken egg proteins associated with the manufacturing process. This method has potential application for increasing the accuracy of reference antigen standards and for validating label claims for HA content in formulated vaccines. It can also be used to monitor NA and chicken egg protein content in order to monitor manufacturing consistency. While this is a useful methodology with potential for broad application, we also discuss herein some of the inherent limitations of this approach and the care and caution that must be taken in its use as a tool for absolute protein quantification. The variations in HA, NA and chicken egg protein concentrations in the vaccines analyzed in this study are indicative of the challenges associated with the current manufacturing and quality control testing procedures. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  17. Using Software Simulators to Enhance the Learning of Digital Logic Design for the Information Technology Students

    ERIC Educational Resources Information Center

    Alsadoon, Abeer; Prasad, P. W. C.; Beg, Azam

    2017-01-01

    Making the students understand the theoretical concepts of digital logic design concepts is one of the major issues faced by the academics, therefore the teachers have tried different techniques to link the theoretical information to the practical knowledge. Use of software simulations is a technique for learning and practice that can be applied…

  18. Perturbation of longitudinal relaxation rate in rotating frame (PLRF) analysis for quantification of chemical exchange saturation transfer signal in a transient state.

    PubMed

    Wang, Yi; Zhang, Yaoyu; Zhao, Xuna; Wu, Bing; Gao, Jia-Hong

    2017-11-01

    To develop a novel analytical method for quantification of chemical exchange saturation transfer (CEST) in the transient state. The proposed method aims to reduce the effects of non-chemical-exchange (non-CE) parameters on the CEST signal, emphasizing the effect of chemical exchange. The difference in the longitudinal relaxation rate in the rotating frame (ΔR1ρ) was calculated based on perturbation of the Z-value by R1ρ, and a saturation-pulse-amplitude-compensated exchange-dependent relaxation rate (SPACER) was determined with a high-exchange-rate approximation. In both phantom and human subject experiments, MTRasym (representative of the traditional CEST index), ΔR1ρ, and SPACER were measured, evaluated, and compared by altering the non-CE parameters in a transient-state continuous-wave CEST sequence. In line with the theoretical expectation, our experimental data demonstrate that the effects of the non-CE parameters can be more effectively reduced using the proposed indices (ΔR1ρ and SPACER) than using the traditional CEST index (MTRasym). The proposed method allows for the chemical exchange weight to be better emphasized in the transient-state CEST signal, which is beneficial, in practice, for quantifying the CEST signal. Magn Reson Med 78:1711-1723, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
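
    The abstract does not give the ΔR1ρ or SPACER expressions, so the sketch below only illustrates the traditional comparison index, MTRasym(Δω) = Z(−Δω) − Z(+Δω), computed from a normalised Z-spectrum; the synthetic spectrum and the 3.5 ppm offset are assumptions.

```python
import numpy as np

def mtr_asym(offsets_ppm, z_values, target_ppm):
    """Traditional CEST asymmetry index MTRasym(dw) = Z(-dw) - Z(+dw),
    where Z = S_sat / S0 is the normalised Z-spectrum.  Interpolation is
    used so the +/- dw points need not be sampled exactly."""
    z_pos = np.interp(target_ppm, offsets_ppm, z_values)
    z_neg = np.interp(-target_ppm, offsets_ppm, z_values)
    return z_neg - z_pos

# Synthetic Z-spectrum: direct water saturation plus a small CEST dip at +3.5 ppm.
offsets = np.linspace(-6, 6, 121)
z = 1 - 0.8 * np.exp(-offsets**2 / 0.5) - 0.05 * np.exp(-(offsets - 3.5)**2 / 0.2)
print(f"MTRasym(3.5 ppm) = {mtr_asym(offsets, z, 3.5):.3f}")
```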

  19. A risk assessment methodology to evaluate the risk failure of managed aquifer recharge in the Mediterranean Basin

    NASA Astrophysics Data System (ADS)

    Rodríguez-Escales, Paula; Canelles, Arnau; Sanchez-Vila, Xavier; Folch, Albert; Kurtzman, Daniel; Rossetto, Rudy; Fernández-Escalante, Enrique; Lobo-Ferreira, João-Paulo; Sapiano, Manuel; San-Sebastián, Jon; Schüth, Christoph

    2018-06-01

    Managed aquifer recharge (MAR) can be affected by many risks. Those risks are related to different technical and non-technical aspects of recharge, such as water availability, water quality, legislation and social issues. Many previous works have acknowledged risks of this nature theoretically; however, their definition and quantification have not been developed. In this study, risk definition and quantification were performed by means of fault trees and probabilistic risk assessment (PRA). We defined a fault tree with 65 basic events applicable to the operation phase. We then applied this methodology to six different managed aquifer recharge sites located in the Mediterranean Basin (Portugal, Spain, Italy, Malta, and Israel). The probabilities of the basic events were defined by expert criteria, based on the knowledge of the managers of the different facilities. From that, we conclude that at all sites the experts perceived the non-technical aspects to be as important as, or even more important than, the technical aspects. Regarding the risk results, we observe that the total risk in three of the six sites was equal to or above 0.90. That would mean that these MAR facilities have a risk of failure equal to or higher than 90 % over a period of 2-6 years. The other three sites presented lower risks (75, 29, and 18 % for Malta, Menashe, and Serchio, respectively).
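
    A minimal sketch of how basic-event probabilities elicited from experts propagate through OR/AND gates of a fault tree to a top-event failure probability, assuming independent events; the tree and the probabilities below are hypothetical and far smaller than the 65-event tree used in the study.

```python
from functools import reduce

def or_gate(probs):
    """P(at least one event occurs), assuming independent basic events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def and_gate(probs):
    """P(all events occur), assuming independence."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical expert-elicited probabilities for a tiny illustrative tree:
# top event "MAR failure" = technical branch OR non-technical branch.
technical = or_gate([0.10, 0.05, 0.08])        # e.g. clogging, pump failure, water quality
non_technical = or_gate([0.20, 0.15])          # e.g. legal issues, lack of funding
top_event = or_gate([technical, non_technical])
print(f"P(failure) = {top_event:.2f}")
```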

  20. The making of a population: Challenges, implications, and consequences of the quantification of social difference.

    PubMed

    Cruz, Taylor M

    2017-02-01

    How do we make a difference? This paper traces the connections made between quantified knowledge, population health, and social justice by examining the efforts of population scientists to assess sexuality as a point of difference within population-based data systems, including on national health and social surveys, electronic medical records, and the Census. Population scientists emphasize the importance of measuring social difference in order to identify and remedy structural disadvantage. This evaluation requires the assessment of difference and the comparison of distinct groups across standardized outcome measures. In quantifying social difference, however, population scientists obscure or minimize several difficulties in creating comparable populations. I explore some of these challenges by highlighting three central tensions: the separation of difference from other aspects and categories of social experience, the reduction of difference through the use of one over several possible measures, and the enactment of difference as quantified knowledge loops back into society. As a theoretical inquiry into the form of social difference as it is conceptualized, operationalized, and materialized across the science-society nexus, this paper identifies the various commitments made during processes of scientific evaluation. By attending to the values and priorities that exist within and through practices of quantification, I aim to address the problem of measuring social difference as it pertains to the issues of social justice and health equity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, including the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further summarize two cases according to whether the timing sequence between probe injection and the sampling starting time is missing. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  2. Quantification of surface charge density and its effect on boundary slip.

    PubMed

    Jing, Dalei; Bhushan, Bharat

    2013-06-11

    Reduction of fluid drag is important in micro-/nanofluidic systems. Surface charge and boundary slip can affect fluid drag, and surface charge is also believed to affect boundary slip. The quantification of surface charge and boundary slip at a solid-liquid interface has been widely studied, but there is a lack of understanding of the effect of surface charge on boundary slip. In this paper, the surface charge density of borosilicate glass and octadecyltrichlorosilane (OTS) surfaces immersed in saline solutions with two ionic concentrations and in deionized (DI) water with different pH values and electric field values is quantified by fitting experimental atomic force microscopy (AFM) electrostatic force data with a theoretical model relating the surface charge density and electrostatic force. Results show that pH and electric field can affect the surface charge density of glass and OTS surfaces immersed in saline solutions and DI water. The mechanisms of the effect of pH and electric field on the surface charge density are discussed. The slip length of the OTS surface immersed in saline solutions with two ionic concentrations and in DI water with different pH values and electric field values is measured, and their effects on the slip length are analyzed from the point of view of surface charge. Results show that a larger absolute value of surface charge density leads to a smaller slip length for the OTS surface.
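
    A hedged sketch of the fitting step described above: a simplified, exponentially screened electrostatic force model is fitted to synthetic AFM force-distance data with scipy. The single-exponential form and its parameters are assumptions of this sketch, not the theoretical model used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def electrostatic_force(d_nm, amplitude_nN, debye_nm):
    """Simplified double-layer force between tip and surface:
    F(d) = A * exp(-d / lambda_D).  The amplitude A lumps together the tip
    and sample surface charge densities and the geometry (an assumption of
    this sketch, not the paper's full model)."""
    return amplitude_nN * np.exp(-d_nm / debye_nm)

# Synthetic force-distance data (separation in nm, force in nN) with noise.
rng = np.random.default_rng(1)
d = np.linspace(2, 60, 80)
f = electrostatic_force(d, amplitude_nN=1.5, debye_nm=10.0) + rng.normal(0, 0.02, d.size)

popt, pcov = curve_fit(electrostatic_force, d, f, p0=[1.0, 5.0])
print(f"fitted amplitude = {popt[0]:.2f} nN, Debye length = {popt[1]:.1f} nm")
```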

  3. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and the LOQ schemas already present in the literature, and is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
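
    A sketch of a signal-domain, Currie-style LOQ computed with a two-component variance model σ(c)² = σ0² + (k·slope·c)², i.e. a constant instrumental term plus a proportional non-instrumental term; this specific variance form and the calibration numbers are assumptions of the sketch, not the paper's regression.

```python
import numpy as np

def loq_two_component(slope, sigma0, k, kq=10.0, max_iter=50):
    """Signal-domain, Currie-style LOQ with a two-component variance model
    sigma(c)^2 = sigma0^2 + (k * slope * c)^2, where sigma0 is the constant
    instrumental noise (signal units) and k a proportional non-instrumental
    relative error.  Solved iteratively because the variance depends on the
    concentration itself.  kq = 10 is the usual quantification multiplier;
    the variance form is an assumption of this sketch."""
    c = kq * sigma0 / slope                     # first guess: instrumental-only LOQ
    for _ in range(max_iter):
        sigma_c = np.sqrt(sigma0**2 + (k * slope * c) ** 2)
        c_new = kq * sigma_c / slope
        if abs(c_new - c) < 1e-12:
            break
        c = c_new
    return c

# Illustrative calibration: slope 250 counts per ng/L, sigma0 = 12 counts,
# 3 % proportional non-instrumental error.
print(f"LOQ ≈ {loq_two_component(slope=250.0, sigma0=12.0, k=0.03):.3f} ng/L")
```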

  4. The application of information theory for the research of aging and aging-related diseases.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
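
    A minimal example of the kind of mutual-information estimate such studies rely on, computed from a binned joint histogram of two continuous variables; the data and bin count are synthetic.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Mutual information (in bits) between two continuous variables after
    discretization into equal-width bins - the kind of non-linear association
    measure used in information-theoretic ageing studies."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))

# Synthetic example: a biomarker non-linearly related to age vs an unrelated one.
rng = np.random.default_rng(2)
age = rng.uniform(20, 90, 2000)
biomarker = np.sin(age / 15.0) + rng.normal(0, 0.3, age.size)   # non-linear dependence
noise_marker = rng.normal(0, 1.0, age.size)                     # no dependence
print(f"MI(age, biomarker)    = {mutual_information(age, biomarker):.3f} bits")
print(f"MI(age, noise marker) = {mutual_information(age, noise_marker):.3f} bits")
```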

  5. Prediction of autosomal STR typing success in ancient and Second World War bone samples.

    PubMed

    Zupanič Pajnič, Irena; Zupanc, Tomaž; Balažic, Jože; Geršak, Živa Miriam; Stojković, Oliver; Skadrić, Ivan; Črešnar, Matija

    2017-03-01

    Human-specific quantitative PCR (qPCR) has been developed for forensic use in the last 10 years and is the preferred DNA quantification technique, since it is very accurate, sensitive, objective, time-effective and automatable. The amount of information that can be gleaned from a single quantification reaction using commercially available quantification kits has increased from the quantity of nuclear DNA to the amount of male DNA, the presence of inhibitors and, most recently, the degree of DNA degradation. In skeletal remains from disaster victims, missing persons and war conflict victims, the DNA is usually degraded. Therefore the new commercial qPCR kits able to assess the degree of degradation are potentially able to predict the success of downstream short tandem repeat (STR) typing. The goal of this study was to verify the quantification step using the PowerQuant kit with regard to its suitability as a screening method for autosomal STR typing success on ancient and Second World War (WWII) skeletal remains. We analysed 60 skeletons excavated from five archaeological sites and four WWII mass graves in Slovenia. The bones were cleaned, surface contamination was removed and the bones were ground to a powder. Genomic DNA was obtained from 0.5 g of bone powder after total demineralization. The DNA was purified using a Biorobot EZ1 device. Following PowerQuant quantification, DNA samples were subjected to autosomal STR amplification using the NGM kit. Up to 2.51 ng DNA/g of powder was extracted. No inhibition was detected in any of the bones analysed. 82% of the WWII bones gave full profiles, while 73% of the ancient bones gave profiles not suitable for interpretation. Four bone extracts yielded no detectable amplification or zero quantification results, and no profiles were obtained from any of them. Full or useful partial profiles were produced only from bone extracts in which both the short autosomal (Auto) and long degradation (Deg) PowerQuant targets were detected. It is concluded that STR typing of old bones after quantification with the PowerQuant kit should be performed only when both Auto and Deg targets are detected simultaneously, irrespective of the [Auto]/[Deg] ratio. Prediction of STR typing success could be made according to successful amplification of the Deg fragment. The PowerQuant kit is capable of identifying bone DNA samples that will not yield useful STR profiles using the NGM kit, and it can be used as a predictor of autosomal STR typing success of bone extracts obtained from ancient and WWII skeletal remains. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Main-channel slopes of selected streams in Iowa for estimation of flood-frequency discharges

    USGS Publications Warehouse

    Eash, David A.

    2003-01-01

    This report describes a statewide study conducted to develop main-channel slope (MCS) curves for 138 selected streams in Iowa with drainage areas greater than 100 square miles. MCS values determined from the curves can be used in regression equations for estimating flood-frequency discharges. Multivariable regression equations previously developed for two of the three hydrologic regions defined for Iowa require the measurement of MCS. Main-channel slope is a difficult measurement to obtain for large streams using 1:24,000-scale topographic maps. The curves developed in this report provide a simplified method for determining MCS values for sites located along large streams in Iowa within hydrologic Regions 2 and 3. The curves were developed using MCS values quantified for 2,058 selected sites along 138 selected streams in Iowa. A geographic information system (GIS) technique and 1:24,000-scale topographic data were used to quantify MCS values for the stream sites. The sites were selected at about 5-mile intervals along the streams. River miles were quantified for each stream site using a GIS program. Data points for river-mile and MCS values were plotted and a best-fit curve was developed for each stream. An adjustment was applied to all 138 curves to compensate for differences in MCS values between manual measurements and GIS quantifications. The multivariable equations for Regions 2 and 3 were developed using manual measurements of MCS. A comparison of manual measurements and GIS quantifications of MCS indicates that manual measurements typically produce greater values of MCS than GIS quantifications. Median differences between manual measurements and GIS quantifications of MCS are 14.8 and 17.7 percent for Regions 2 and 3, respectively. Comparisons of percentage differences between flood-frequency discharges calculated using MCS values from manual measurements and GIS quantifications indicate that use of GIS values of MCS for Region 3 substantially underestimates flood discharges. Mean and median percentage differences for 2- to 500-year recurrence-interval flood discharges ranged from 5.0 to 5.3 and 4.3 to 4.5 percent, respectively, for Region 2, and ranged from 18.3 to 27.1 and 12.3 to 17.3 percent for Region 3. The MCS curves developed from GIS quantifications were adjusted by 14.8 percent for streams located in Region 2 and by 17.7 percent for streams located in Region 3. Comparisons of percentage differences between flood discharges calculated using MCS values from manual measurements and adjusted GIS quantifications for Regions 2 and 3 indicate that the flood-discharge estimates are comparable. For Region 2, mean percentage differences for 2- to 500-year recurrence-interval flood discharges ranged between 0.6 and 0.8 percent and median differences were 0.0 percent. For Region 3, mean and median differences ranged between 5.4 to 8.4 and 0.0 to 0.3 percent, respectively. A list of selected stream sites presented with each curve provides information about the sites, including river miles, drainage areas, the locations of U.S. Geological Survey streamflow-gaging stations, and the locations of streams crossing hydrologic region boundaries or the Des Moines Lobe landform region boundary. Two examples are presented for determining river-mile and MCS values, and two techniques are presented for computing flood-frequency discharges.
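
    A tiny sketch of applying the reported adjustment, assuming it is a simple multiplicative scaling of GIS-derived MCS values by the median differences of 14.8 percent (Region 2) and 17.7 percent (Region 3).

```python
# Assumed multiplicative form of the adjustment described in the report:
# manual measurements exceed GIS quantifications by a median of 14.8 % (Region 2)
# and 17.7 % (Region 3), so GIS-derived MCS values are scaled up accordingly.
MCS_ADJUSTMENT = {2: 1.148, 3: 1.177}

def adjust_mcs(mcs_gis_ft_per_mi, region):
    """Adjust a GIS-derived main-channel slope (ft/mi) for use in the
    Region 2 or Region 3 flood-frequency regression equations."""
    return mcs_gis_ft_per_mi * MCS_ADJUSTMENT[region]

print(adjust_mcs(4.2, region=3))   # hypothetical GIS-derived slope value
```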

  7. Developing a targeted, theory-informed implementation intervention using two theoretical frameworks to address health professional and organisational factors: a case study to improve the management of mild traumatic brain injury in the emergency department.

    PubMed

    Tavender, Emma J; Bosch, Marije; Gruen, Russell L; Green, Sally E; Michie, Susan; Brennan, Sue E; Francis, Jill J; Ponsford, Jennie L; Knott, Jonathan C; Meares, Sue; Smyth, Tracy; O'Connor, Denise A

    2015-05-25

    Despite the availability of evidence-based guidelines for the management of mild traumatic brain injury in the emergency department (ED), variations in practice exist. Interventions designed to implement recommended behaviours can reduce this variation. Using theory to inform intervention development is advocated; however, there is no consensus on how to select or apply theory. Integrative theoretical frameworks, based on syntheses of theories and theoretical constructs relevant to implementation, have the potential to assist in the intervention development process. This paper describes the process of applying two theoretical frameworks to investigate the factors influencing recommended behaviours and the choice of behaviour change techniques and modes of delivery for an implementation intervention. A stepped approach was followed: (i) identification of locally applicable and actionable evidence-based recommendations as targets for change, (ii) selection and use of two theoretical frameworks for identifying barriers to and enablers of change (Theoretical Domains Framework and Model of Diffusion of Innovations in Service Organisations) and (iii) identification and operationalisation of intervention components (behaviour change techniques and modes of delivery) to address the barriers and enhance the enablers, informed by theory, evidence and feasibility/acceptability considerations. We illustrate this process in relation to one recommendation, prospective assessment of post-traumatic amnesia (PTA) by ED staff using a validated tool. Four recommendations for managing mild traumatic brain injury were targeted with the intervention. The intervention targeting the PTA recommendation consisted of 14 behaviour change techniques and addressed 6 theoretical domains and 5 organisational domains. The mode of delivery was informed by six Cochrane reviews. It was delivered via five intervention components: (i) local stakeholder meetings, (ii) identification of local opinion leader teams, (iii) a train-the-trainer workshop for appointed local opinion leaders, (iv) local training workshops for delivery by trained local opinion leaders and (v) provision of tools and materials to prompt recommended behaviours. Two theoretical frameworks were used in a complementary manner to inform intervention development in managing mild traumatic brain injury in the ED. The effectiveness and cost-effectiveness of the developed intervention are being evaluated in a cluster randomised trial, part of the Neurotrauma Evidence Translation (NET) program.

  8. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling, or compressed sensing (CS), works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
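
    A self-contained compressed-sensing illustration (not the paper's CS-radar model): a sparse scene is measured with a random Gaussian matrix at a sub-Nyquist rate and reconstructed with orthogonal matching pursuit; the dimensions, sparsity and noise level are arbitrary choices.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily recover a `sparsity`-sparse x
    from measurements y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# Sparse "scene" of n = 256 cells with k = 5 scatterers, m = 64 random measurements.
rng = np.random.default_rng(3)
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x + rng.normal(0, 0.01, m)          # noisy sub-Nyquist measurements
x_rec = omp(A, y, sparsity=k)
print(f"relative reconstruction error = {np.linalg.norm(x_rec - x) / np.linalg.norm(x):.3e}")
```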

  9. Large differences in land use emission quantifications implied by definition discrepancies

    NASA Astrophysics Data System (ADS)

    Stocker, B. D.; Joos, F.

    2015-03-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review the conceptual differences of eLUC quantification methods and apply an Earth System Model to demonstrate that what is claimed to represent total eLUC differs by up to ~20% when quantified from ESM vs. offline vegetation models. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies and global carbon budget accountings should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.

  10. Quantifying differences in land use emission estimates implied by definition discrepancies

    NASA Astrophysics Data System (ADS)

    Stocker, B. D.; Joos, F.

    2015-11-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.

  11. Recent developments in capabilities for analysing chlorinated paraffins in environmental matrices: A review.

    PubMed

    van Mourik, Louise M; Leonards, Pim E G; Gaus, Caroline; de Boer, Jacob

    2015-10-01

    Concerns about the high production volumes, persistency, bioaccumulation potential and toxicity of chlorinated paraffin (CP) mixtures, especially short-chain CPs (SCCPs), are rising. However, information on their levels and fate in the environment is still insufficient, impeding international classifications and regulations. This knowledge gap is mainly due to the difficulties that arise with CP analysis, in particular the chromatographic separation within CPs and between CPs and other compounds. No fully validated routine analytical method is available yet and only semi-quantitative analysis is possible, although the number of studies reporting new and improved methods has rapidly increased since 2010. Better cleanup procedures that remove interfering compounds, and new instrumental techniques that distinguish between medium-chain CPs (MCCPs) and SCCPs, have been developed. While gas chromatography coupled to electron capture negative ionisation mass spectrometry (GC/ECNI-MS) remains the most commonly applied technique, novel and promising use of high-resolution time-of-flight MS (TOF-MS) has also been reported. We expect that recent developments in high-resolution TOF-MS and Orbitrap technologies will further improve the detection of CPs, including long-chain CPs (LCCPs), and the group separation and quantification of CP homologues. Also, new CP quantification methods have emerged, including the use of mathematical algorithms, multiple linear regression and principal component analysis. These quantification advancements are also reflected in considerably improved interlaboratory agreements since 2010. Analysis of lower chlorinated paraffins (

  12. Simultaneous quantification of protein phosphorylation sites using liquid chromatography-tandem mass spectrometry-based targeted proteomics: a linear algebra approach for isobaric phosphopeptides.

    PubMed

    Xu, Feifei; Yang, Ting; Sheng, Yuan; Zhong, Ting; Yang, Mi; Chen, Yun

    2014-12-05

    As one of the most studied post-translational modifications (PTM), protein phosphorylation plays an essential role in almost all cellular processes. Current methods are able to predict and determine thousands of phosphorylation sites, whereas stoichiometric quantification of these sites is still challenging. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS)-based targeted proteomics is emerging as a promising technique for site-specific quantification of protein phosphorylation using proteolytic peptides as surrogates of proteins. However, several issues may limit its application, one of which relates to the phosphopeptides with different phosphorylation sites and the same mass (i.e., isobaric phosphopeptides). While employment of site-specific product ions allows for these isobaric phosphopeptides to be distinguished and quantified, site-specific product ions are often absent or weak in tandem mass spectra. In this study, linear algebra algorithms were employed as an add-on to targeted proteomics to retrieve information on individual phosphopeptides from their common spectra. To achieve this simultaneous quantification, a LC-MS/MS-based targeted proteomics assay was first developed and validated for each phosphopeptide. Given the slope and intercept of calibration curves of phosphopeptides in each transition, linear algebraic equations were developed. Using a series of mock mixtures prepared with varying concentrations of each phosphopeptide, the reliability of the approach to quantify isobaric phosphopeptides containing multiple phosphorylation sites (≥ 2) was discussed. Finally, we applied this approach to determine the phosphorylation stoichiometry of heat shock protein 27 (HSP27) at Ser78 and Ser82 in breast cancer cells and tissue samples.
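
    A sketch of the linear-algebra step for two isobaric phosphopeptides monitored in two shared transitions, assuming their responses add and that each transition signal is slope × concentration plus an intercept from the calibration curves; all numbers are illustrative, not from the study.

```python
import numpy as np

# Calibration parameters (illustrative values): rows = the two shared transitions,
# columns = the two isobaric phosphopeptides.
# Modelling assumption: signal_j = sum_i slope[j, i] * c_i + intercept[j],
# i.e. the responses of the co-eluting isobaric peptides are additive.
slopes = np.array([[4.1e3, 0.6e3],      # transition 1 (area per fmol)
                   [0.9e3, 3.7e3]])     # transition 2 (area per fmol)
intercepts = np.array([120.0, 95.0])    # combined intercepts per transition

measured = np.array([2.35e4, 1.88e4])   # observed peak areas in the mixture

concentrations = np.linalg.solve(slopes, measured - intercepts)
print(f"phosphopeptide A ≈ {concentrations[0]:.2f} fmol, "
      f"phosphopeptide B ≈ {concentrations[1]:.2f} fmol")
```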

  13. Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone

    PubMed Central

    Zehentmeier, Sandra; Cseresnyes, Zoltan; Escribano Navarro, Juan; Niesner, Raluca A.; Hauser, Anja E.

    2015-01-01

    Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool which is suitable for testing this hypothesis in the case of hematopoietic as well as stromal cells, is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data. PMID:25938636

  14. Qualitative and quantitative analysis of monomers in polyesters for food contact materials.

    PubMed

    Brenz, Fabrian; Linke, Susanne; Simat, Thomas

    2017-02-01

    Polyesters (PESs) are gaining more importance on the food contact material (FCM) market and the variety of properties and applications is expected to be wide. In order to acquire the desired properties manufacturers can combine several FCM-approved polyvalent carboxylic acids (PCAs) and polyols as monomers. However, information about the qualitative and quantitative composition of FCM articles is often limited. The method presented here describes the analysis of PESs with the identification and quantification of 25 PES monomers (10 PCA, 15 polyols) by HPLC with diode array detection (HPLC-DAD) and GC-MS after alkaline hydrolysis. Accurate identification and quantification were demonstrated by the analysis of seven different FCM articles made of PESs. The results explained between 97.2% and 103.4% w/w of the polymer composition whilst showing equal molar amounts of PCA and polyols. Quantification proved to be precise and sensitive with coefficients of variation (CVs) below 6.0% for PES samples with monomer concentrations typically ranging from 0.02% to 75% w/w. The analysis of 15 PES samples for the FCM market revealed the presence of five different PCAs and 11 different polyols (main monomers, co-monomers, non-intentionally added substances (NIAS)) showing the wide variety of monomers in modern PESs. The presented method provides a useful tool for commercial, state and research laboratories as well as for producers and distributors facing the task of FCM risk assessment. It can be applied for the identification and quantification of migrating monomers and the prediction of oligomer compositions from the identified monomers, respectively.

  15. The smooth entropy formalism for von Neumann algebras

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Furrer, Fabian; Scholz, Volkher B.

    2016-01-01

    We discuss information-theoretic concepts on infinite-dimensional quantum systems. In particular, we lift the smooth entropy formalism as introduced by Renner and collaborators for finite-dimensional systems to von Neumann algebras. For the smooth conditional min- and max-entropy, we recover similar characterizing properties and information-theoretic operational interpretations as in the finite-dimensional case. We generalize the entropic uncertainty relation with quantum side information of Tomamichel and Renner and discuss applications to quantum cryptography. In particular, we prove the possibility to perform privacy amplification and classical data compression with quantum side information modeled by a von Neumann algebra.

  16. The smooth entropy formalism for von Neumann algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berta, Mario, E-mail: berta@caltech.edu; Furrer, Fabian, E-mail: furrer@eve.phys.s.u-tokyo.ac.jp; Scholz, Volkher B., E-mail: scholz@phys.ethz.ch

    2016-01-15

    We discuss information-theoretic concepts on infinite-dimensional quantum systems. In particular, we lift the smooth entropy formalism as introduced by Renner and collaborators for finite-dimensional systems to von Neumann algebras. For the smooth conditional min- and max-entropy, we recover similar characterizing properties and information-theoretic operational interpretations as in the finite-dimensional case. We generalize the entropic uncertainty relation with quantum side information of Tomamichel and Renner and discuss applications to quantum cryptography. In particular, we prove the possibility to perform privacy amplification and classical data compression with quantum side information modeled by a von Neumann algebra.

  17. Quantification of Liver Fat in the Presence of Iron Overload

    PubMed Central

    Horng, Debra E.; Hernando, Diego; Reeder, Scott B.

    2017-01-01

    Purpose To evaluate the accuracy of R2* models (1/T2* = R2*) for chemical shift-encoded magnetic resonance imaging (CSE-MRI)-based proton density fat-fraction (PDFF) quantification in patients with fatty liver and iron overload, using MR spectroscopy (MRS) as the reference standard. Materials and Methods Two Monte Carlo simulations were implemented to compare the root-mean-squared-error (RMSE) performance of single-R2* and dual-R2* correction in a theoretical liver environment with high iron. Fatty liver was defined as hepatic PDFF >5.6% based on MRS; only subjects with fatty liver were considered for analyses involving fat. From a group of 40 patients with known/suspected iron overload, nine patients were identified at 1.5T, and 13 at 3.0T with fatty liver. MRS linewidth measurements were used to estimate R2* values for water and fat peaks. PDFF was measured from CSE-MRI data using single-R2* and dual-R2* correction with magnitude and complex fitting. Results Spectroscopy-based R2* analysis demonstrated that the R2* of water and fat remain close in value, both increasing as iron overload increases: linear regression between R2*W and R2*F resulted in slope = 0.95 [0.79–1.12] (95% limits of agreement) at 1.5T and slope = 0.76 [0.49–1.03] at 3.0T. MRI-PDFF using dual-R2* correction had severe artifacts. MRI-PDFF using single-R2* correction had good agreement with MRS-PDFF: Bland–Altman analysis resulted in −0.7% (bias) ± 2.9% (95% limits of agreement) for magnitude-fit and −1.3% ± 4.3% for complex-fit at 1.5T, and −1.5% ± 8.4% for magnitude-fit and −2.2% ± 9.6% for complex-fit at 3.0T. Conclusion Single-R2* modeling enables accurate PDFF quantification, even in patients with iron overload. PMID:27405703
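
    For illustration only, the sketch below fits a single-R2*, single-fat-peak magnitude signal model to synthetic multi-echo data and reports PDFF and R2*; the published method uses a multipeak fat spectrum and both magnitude and complex fitting, and the fat-water shift used here (about -217 Hz at 1.5 T) is an assumption of the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

DELTA_F_HZ = -217.0   # assumed single-peak fat-water shift at 1.5 T (~ -3.4 ppm)

def magnitude_signal(te_s, water, fat, r2star):
    """Single-R2*, single-fat-peak magnitude model:
    |W + F * exp(i*2*pi*df*TE)| * exp(-R2* * TE).
    The published method uses a multipeak fat spectrum; this is a simplification."""
    complex_sig = water + fat * np.exp(2j * np.pi * DELTA_F_HZ * te_s)
    return np.abs(complex_sig) * np.exp(-r2star * te_s)

# Synthetic 6-echo acquisition: 30 % fat fraction, heavy iron (R2* = 200 1/s).
te = 1.2e-3 + 2.0e-3 * np.arange(6)
rng = np.random.default_rng(4)
signal = magnitude_signal(te, water=70.0, fat=30.0, r2star=200.0) + rng.normal(0, 0.5, te.size)

popt, _ = curve_fit(magnitude_signal, te, signal, p0=[50.0, 20.0, 100.0],
                    bounds=([0, 0, 0], [np.inf, np.inf, 1000.0]))
water_hat, fat_hat, r2star_hat = popt
print(f"PDFF ≈ {100 * fat_hat / (water_hat + fat_hat):.1f} %, R2* ≈ {r2star_hat:.0f} 1/s")
```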

  18. Quantification of functional genes from procaryotes in soil by PCR.

    PubMed

    Sharma, Shilpi; Radl, Viviane; Hai, Brigitte; Kloos, Karin; Fuka, Mirna Mrkonjic; Engel, Marion; Schauss, Kristina; Schloter, Michael

    2007-03-01

    Controlling turnover processes and fluxes in soils and other environments requires information about the gene pool and the possibilities for its in situ induction. Therefore, in recent years there has been a growing interest in genes and transcripts coding for metabolic enzymes. Besides questions addressing redundancy and diversity, more and more attention is given to the abundance of specific DNA and mRNA in the different habitats. This review describes several PCR techniques that are suitable for quantification of functional genes and transcripts, such as MPN-PCR, competitive PCR and real-time PCR. The advantages and disadvantages of these methods are discussed. In addition, the problems of quantitative extraction of nucleic acids and of substances that inhibit the polymerase are described. Finally, some examples from recent papers are given to demonstrate the applicability and usefulness of the different approaches.

  19. Recurrence Quantification of Fractal Structures

    PubMed Central

    Webber, Charles L.

    2012-01-01

    By definition, fractal structures possess recurrent patterns. At different levels repeating patterns can be visualized at higher magnifications. The purpose of this chapter is threefold. First, general characteristics of dynamical systems are addressed from a theoretical mathematical perspective. Second, qualitative and quantitative recurrence analyses are reviewed in brief, but the reader is directed to other sources for explicit details. Third, example mathematical systems that generate strange attractors are explicitly defined, giving the reader the ability to reproduce the rich dynamics of continuous chaotic flows or discrete chaotic iterations. The challenge is then posited for the reader to study for themselves the recurrent structuring of these different dynamics. With a firm appreciation of the power of recurrence analysis, the reader will be prepared to turn their sights on real-world systems (physiological, psychological, mechanical, etc.). PMID:23060808
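
    A minimal, self-contained example in the spirit of the chapter: generate a chaotic logistic-map series, build its recurrence matrix, and report the recurrence rate; the recurrence threshold is an arbitrary choice.

```python
import numpy as np

def recurrence_matrix(x, threshold):
    """Binary recurrence matrix R[i, j] = 1 when |x_i - x_j| < threshold."""
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < threshold).astype(int)

# Chaotic logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.
r, n = 4.0, 500
x = np.empty(n)
x[0] = 0.4
for i in range(n - 1):
    x[i + 1] = r * x[i] * (1.0 - x[i])

R = recurrence_matrix(x, threshold=0.05)         # threshold is an illustrative choice
recurrence_rate = (R.sum() - n) / (n * (n - 1))  # exclude the trivial main diagonal
print(f"recurrence rate = {recurrence_rate:.3f}")
```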

  20. Quantification of the multi-streaming effect in redshift space distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Oh, Minji; Zhang, Pengjie, E-mail: yizheng@kasi.re.kr, E-mail: zhangpj@sjtu.edu.cn, E-mail: minjioh@kasi.re.kr

    Both multi-streaming (random motion) and bulk motion cause the Finger-of-God (FoG) effect in redshift space distortion (RSD). We present a direct measurement of the multi-streaming effect in RSD from simulations, proving that it induces an additional, non-negligible FoG damping of the redshift space density power spectrum. We show that, once the multi-streaming effect is included, the RSD modelling is significantly improved. We also provide a theoretical explanation of the measured effect based on the halo model, including a fitting formula with one to two free parameters. The improved understanding of FoG helps break the fσ8-σv degeneracy in RSD cosmology, and has the potential to significantly improve cosmological constraints.
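
    The paper's halo-model fitting formula is not reproduced in the abstract; the sketch below instead shows the standard dispersion model, a Kaiser boost damped by a Lorentzian FoG factor, to illustrate how a velocity-dispersion parameter suppresses small-scale redshift-space power.

```python
import numpy as np

def redshift_space_pk(k, mu, p_real, f, b, sigma_v):
    """Standard dispersion model (not the paper's halo-model formula):
    P_s(k, mu) = (b + f * mu^2)^2 * P(k) / (1 + (k * mu * sigma_v)^2 / 2),
    i.e. a Kaiser boost damped by a Lorentzian Finger-of-God factor."""
    kaiser = (b + f * mu**2) ** 2
    fog = 1.0 / (1.0 + 0.5 * (k * mu * sigma_v) ** 2)
    return kaiser * fog * p_real

# Toy real-space power spectrum and parameters (all values illustrative).
k = np.logspace(-2, 0, 50)              # h/Mpc
p_real = 1.0e4 * k / (1.0 + (k / 0.1) ** 3)
ratio = redshift_space_pk(k, mu=0.8, p_real=p_real, f=0.5, b=1.5, sigma_v=4.0) / p_real
print(ratio[::10])                      # boost at large scales, damping at small scales
```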

  1. Experimental evidence links volcanic particle characteristics to pyroclastic flow hazard

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico M.; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-06-01

    Pyroclastic flows represent the most hazardous events of explosive volcanism, one striking example being the famous historical eruption of Vesuvius that destroyed Pompeii (AD 79). Much of our knowledge of the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, including the particles contained in pyroclastic deposits, but the deposit characteristics are rarely used for quantifying the destructive potential of pyroclastic flows. By means of experiments, we validate a model that is based on data from pyroclastic deposits. The model allows the reconstruction of the current's fluid-dynamic behaviour. Model results are consistent with measured values of dynamic pressure in the experiments, and allow the quantification of the damage potential of pyroclastic flows.

  2. Thermal rectification in mass-graded next-nearest-neighbor Fermi-Pasta-Ulam lattices

    NASA Astrophysics Data System (ADS)

    Romero-Bastida, M.; Miranda-Peña, Jorge-Orlando; López, Juan M.

    2017-03-01

    We study the thermal rectification efficiency, i.e., quantification of asymmetric heat flow, of a one-dimensional mass-graded anharmonic oscillator Fermi-Pasta-Ulam lattice both with nearest-neighbor (NN) and next-nearest-neighbor (NNN) interactions. The system presents a maximum rectification efficiency for a very precise value of the parameter that controls the coupling strength of the NNN interactions, which also optimizes the rectification figure when its dependence on mass asymmetry and temperature differences is considered. The origin of the enhanced rectification is the asymmetric local heat flow response as the heat reservoirs are swapped when a finely tuned NNN contribution is taken into account. A simple theoretical analysis gives an estimate of the optimal NNN coupling in excellent agreement with our simulation results.

  3. ⁹⁰Y-PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: ⁹⁰Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with ⁹⁰Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging ⁹⁰Y and to compare experimental results with clinical data from two types of commercially available ⁹⁰Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph microcomputed tomography (mCT) (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of ⁹⁰Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a ⁹⁰Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, 32 patient data sets (8 treated with Therasphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and the activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of ⁹⁰Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was limited as long as ordinary Poisson ordered subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view is in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.
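
    As a small side illustration of the decay correction mentioned in the results (not part of the reconstruction itself), the sketch below computes a factor that refers counts accumulated during a long ⁹⁰Y acquisition back to a reference time, using the approximate 64.1 h physical half-life of ⁹⁰Y.

```python
import numpy as np

HALF_LIFE_H = 64.1           # approximate physical half-life of 90Y in hours
DECAY_CONST = np.log(2) / HALF_LIFE_H

def decay_correction_factor(acq_start_h, acq_duration_h):
    """Factor that refers counts accumulated over an acquisition of length
    `acq_duration_h`, started `acq_start_h` hours after the reference time
    (e.g. infusion), back to that reference time, accounting for decay both
    before and during the scan."""
    decay_to_start = np.exp(-DECAY_CONST * acq_start_h)
    # average activity during the scan, relative to the activity at scan start
    in_scan_average = (1.0 - np.exp(-DECAY_CONST * acq_duration_h)) / (DECAY_CONST * acq_duration_h)
    return 1.0 / (decay_to_start * in_scan_average)

# Example: a 30 min acquisition started 24 h after the reference time.
print(f"correction factor = {decay_correction_factor(24.0, 0.5):.3f}")
```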

  4. Performance of different reflectance and diffuse optical imaging tomographic approaches in fluorescence molecular imaging of small animals

    NASA Astrophysics Data System (ADS)

    Dinten, Jean-Marc; Petié, Philippe; da Silva, Anabela; Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Laidevant, Aurélie; Rizo, Philippe

    2006-03-01

    Optical imaging of fluorescent probes is an essential tool for the investigation of molecular events in small animals for drug development. In order to obtain localization and quantification information for fluorescent labels, CEA-LETI has developed efficient approaches in classical reflectance imaging as well as in diffuse optical tomographic imaging with continuous and temporal signals. This paper presents an overview of the different approaches investigated and their performances. High-quality fluorescence reflectance imaging is obtained thanks to the development of an original "multiple wavelengths" system. The uniformity of the excitation light surface area is better than 15%. Combined with the use of adapted fluorescent probes, this system enables accurate detection of pathological tissues, such as nodules, beneath the animal's observed area. Performances for the detection of ovarian nodules in a nude mouse are shown. In order to investigate deeper inside animals and obtain 3D localization, diffuse optical tomography systems are being developed for both slab and cylindrical geometries. For these two geometries, our reconstruction algorithms are based on an analytical expression of light diffusion. Thanks to an accurate introduction of the light/matter interaction process in the algorithms, high-quality reconstructions of tumors in mice have been obtained. Reconstructions of lung tumors in mice are presented. By the use of temporal diffuse optical imaging, localization and quantification performances can be improved at the price of a more sophisticated acquisition system and more elaborate information processing methods. Such a system, based on a pulsed laser diode and a time-correlated single photon counting system, has been set up. Performances of this system for localization and quantification of fluorescent probes are presented.

  5. Morphomics: An integral part of systems biology of the human placenta.

    PubMed

    Mayhew, T M

    2015-04-01

    The placenta is a transient organ the functioning of which has health consequences far beyond the embryo/fetus. Understanding the biology of any system (organ, organism, single cell, etc) requires a comprehensive and inclusive approach which embraces all the biomedical disciplines and 'omic' technologies and then integrates information obtained from all of them. Among the latest 'omics' is morphomics. The terms morphome and morphomics have been applied incoherently in biology and biomedicine but, recently, they have been given clear and widescale definitions. Morphomics is placed in the context of other 'omics' and its pertinent technologies and tools for sampling and quantitation are reviewed. Emphasis is accorded to the importance of random sampling principles in systems biology and the value of combining 3D quantification with alternative imaging techniques to advance knowledge and understanding of the human placental morphome. By analogy to other 'omes', the morphome is the totality of morphological features within a system and morphomics is the systematic study of those structures. Information about structure is required at multiple levels of resolution in order to understand better the processes by which a given system alters with time, experimental treatment or environmental insult. Therefore, morphomics research includes all imaging techniques at all levels of achievable resolution from gross anatomy and medical imaging, via optical and electron microscopy, to molecular characterisation. Quantification is an important element of all 'omics' studies and, because biological systems exist and operate in 3-dimensional (3D) space, precise descriptions of form, content and spatial relationships require the quantification of structure in 3D. These considerations are relevant to future study contributions to the Human Placenta Project. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Quantification of multiple gene expression in individual cells.

    PubMed

    Peixoto, António; Monteiro, Marta; Rocha, Benedita; Veiga-Fernandes, Henrique

    2004-10-01

    Quantitative gene expression analysis aims to define the gene expression patterns determining cell behavior. So far, these assessments can only be performed at the population level. Therefore, they determine the average gene expression within a population, overlooking possible cell-to-cell heterogeneity that could lead to different cell behaviors/cell fates. Understanding individual cell behavior requires multiple gene expression analyses of single cells, and may be fundamental for the understanding of all types of biological events and/or differentiation processes. We here describe a new reverse transcription-polymerase chain reaction (RT-PCR) approach allowing the simultaneous quantification of the expression of 20 genes in the same single cell. This method has broad application, in different species and for any type of gene combination. RT efficiency is evaluated. Uniform and maximized amplification conditions for all genes are provided. Abundance relationships are maintained, allowing the precise quantification of the absolute number of mRNA molecules per cell, ranging from 2 to 1.28 × 10⁹ for each individual gene. We evaluated the impact of this approach on functional genetic read-outs by studying an apparently homogeneous population (monoclonal T cells recovered 4 d after antigen stimulation), using either this method or conventional real-time RT-PCR. Single-cell studies revealed considerable cell-to-cell variation: not all T cells expressed all individual genes. Gene coexpression patterns were very heterogeneous. mRNA copy numbers varied between different transcripts and in different cells. As a consequence, this single-cell assay introduces new and fundamental information regarding functional genomic read-outs. By comparison, we also show that conventional quantitative assays determining population averages supply insufficient information, and may even be highly misleading.

  7. Quantification of Confocal Images Using LabVIEW for Tissue Engineering Applications

    PubMed Central

    Sfakis, Lauren; Kamaldinov, Tim; Larsen, Melinda; Castracane, James

    2016-01-01

    Quantifying confocal images to enable location of specific proteins of interest in three-dimensional (3D) is important for many tissue engineering (TE) applications. Quantification of protein localization is essential for evaluation of specific scaffold constructs for cell growth and differentiation for application in TE and tissue regeneration strategies. Although obtaining information regarding protein expression levels is important, the location of proteins within cells grown on scaffolds is often the key to evaluating scaffold efficacy. Functional epithelial cell monolayers must be organized with apicobasal polarity with proteins specifically localized to the apical or basolateral regions of cells in many organs. In this work, a customized program was developed using the LabVIEW platform to quantify protein positions in Z-stacks of confocal images of epithelial cell monolayers. The program's functionality is demonstrated through salivary gland TE, since functional salivary epithelial cells must correctly orient many proteins on the apical and basolateral membranes. Bio-LabVIEW Image Matrix Evaluation (Bio-LIME) takes 3D information collected from confocal Z-stack images and processes the fluorescence at each pixel to determine cell heights, nuclei heights, nuclei widths, protein localization, and cell count. As a demonstration of its utility, Bio-LIME was used to quantify the 3D location of the Zonula occludens-1 protein contained within tight junctions and its change in 3D position in response to chemical modification of the scaffold with laminin. Additionally, Bio-LIME was used to demonstrate that there is no advantage of sub-100 nm poly lactic-co-glycolic acid nanofibers over 250 nm fibers for epithelial apicobasal polarization. Bio-LIME will be broadly applicable for quantification of proteins in 3D that are grown in many different contexts. PMID:27758134

  8. Quantification of Confocal Images Using LabVIEW for Tissue Engineering Applications.

    PubMed

    Sfakis, Lauren; Kamaldinov, Tim; Larsen, Melinda; Castracane, James; Khmaladze, Alexander

    2016-11-01

    Quantifying confocal images to locate specific proteins of interest in three dimensions (3D) is important for many tissue engineering (TE) applications. Quantification of protein localization is essential for evaluation of specific scaffold constructs for cell growth and differentiation for application in TE and tissue regeneration strategies. Although obtaining information regarding protein expression levels is important, the location of proteins within cells grown on scaffolds is often the key to evaluating scaffold efficacy. Functional epithelial cell monolayers must be organized with apicobasal polarity with proteins specifically localized to the apical or basolateral regions of cells in many organs. In this work, a customized program was developed using the LabVIEW platform to quantify protein positions in Z-stacks of confocal images of epithelial cell monolayers. The program's functionality is demonstrated through salivary gland TE, since functional salivary epithelial cells must correctly orient many proteins on the apical and basolateral membranes. Bio-LabVIEW Image Matrix Evaluation (Bio-LIME) takes 3D information collected from confocal Z-stack images and processes the fluorescence at each pixel to determine cell heights, nuclei heights, nuclei widths, protein localization, and cell count. As a demonstration of its utility, Bio-LIME was used to quantify the 3D location of the Zonula occludens-1 protein contained within tight junctions and its change in 3D position in response to chemical modification of the scaffold with laminin. Additionally, Bio-LIME was used to demonstrate that there is no advantage of sub-100 nm poly(lactic-co-glycolic acid) nanofibers over 250 nm fibers for epithelial apicobasal polarization. Bio-LIME will be broadly applicable for quantification of proteins in 3D that are grown in many different contexts.
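
    A minimal Python sketch (not the authors' LabVIEW code) of the per-pixel Z-profile idea described above: the Z slice holding each pixel's brightest voxel is taken as that pixel's protein height, from which apical versus basolateral localization can be summarized. The array layout, the z_step_um spacing, and the intensity threshold are assumptions made for illustration.

    ```python
    import numpy as np

    def protein_height_map(stack, z_step_um=0.5, min_intensity=50):
        """stack: 3-D array (Z, Y, X) of fluorescence intensities for one channel."""
        peak_index = np.argmax(stack, axis=0)             # Z slice of the brightest voxel per XY pixel
        peak_value = np.max(stack, axis=0)
        height_um = peak_index * z_step_um                # convert slice index to height above the scaffold
        return np.where(peak_value >= min_intensity, height_um, np.nan)

    # Toy usage: fraction of above-background pixels whose signal peaks in the apical half
    stack = np.random.default_rng(0).poisson(120, size=(20, 128, 128)).astype(float)
    heights = protein_height_map(stack)
    valid = ~np.isnan(heights)
    apical_fraction = np.mean(heights[valid] > 0.5 * 20 * 0.5)   # half of 20 slices at 0.5 um per slice
    print(f"apical fraction: {apical_fraction:.2f}")
    ```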

  9. Information theoretic analysis of canny edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus, the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting initial conditions of the visual communication system as constant, different edge detection algorithms could be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
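
    Because the Canny operator is non-linear and shift-variant, its effective transfer characteristics have to be estimated empirically, as the abstract notes. The sketch below illustrates one way such a "power spectral density" could be approximated by Monte Carlo simulation: average the output periodograms over many random input realizations. The simple gradient-magnitude operator is only a stand-in for a real Canny implementation, and the random scene model is an assumption for illustration.

    ```python
    import numpy as np

    def edge_operator(img):
        # Placeholder for the Canny edge map; substitute a real implementation if available.
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    def empirical_psd(n_trials=200, size=128, seed=0):
        rng = np.random.default_rng(seed)
        acc = np.zeros((size, size))
        for _ in range(n_trials):
            scene = rng.normal(size=(size, size))      # random radiance-field realization
            acc += np.abs(np.fft.fft2(edge_operator(scene))) ** 2
        return acc / n_trials                          # averaged output periodogram

    psd = empirical_psd()
    print(psd.shape, float(psd.mean()))
    ```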

  10. Proceedings of the Meeting of the Study Committee of International Federation for Documentation "Research on the Theoretical Basis of Information" (Moscow, 24-26 February 1970).

    ERIC Educational Resources Information Center

    All-Union Inst. for Scientific and Technical Information, Moscow (USSR).

    Reports given before the Committee on "Research on the Theoretical Basis of Information" of the International Federation for Documentation (FID/RI) are presented unaltered and unabridged in English or in Russian -- the language of their presentation. Each report is accompanied by an English or Russian resume. Generally, only original…

  11. Information Diffusion in Facebook-Like Social Networks Under Information Overload

    NASA Astrophysics Data System (ADS)

    Li, Pei; Xing, Kai; Wang, Dapeng; Zhang, Xin; Wang, Hui

    2013-07-01

    Research on social networks has received remarkable attention, since many people use social networks to broadcast information and stay connected with their friends. However, due to the information overload in social networks, it becomes increasingly difficult for users to find useful information. This paper takes Facebook-like social networks into account, and models the process of information diffusion under information overload. The term view scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated is proposed to characterize the information diffusion efficiency. Through theoretical analysis, we find that factors such as network structure and view scope number have no impact on the information diffusion efficiency, which is a surprising result. To verify these results, we conduct simulations; the simulation results agree closely with the theoretical analysis.

  12. Information theoretical assessment of image gathering and coding for digital restoration

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.

    1990-01-01

    The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.

  13. Spreading dynamics of an e-commerce preferential information model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Wan, Chen; Li, Tao; Guan, Zhi-Hong; Wang, Yuanmei; Liu, Xiongding

    2017-02-01

    In order to study the influence of the preferential degree and the heterogeneity of underlying networks on the spread of preferential e-commerce information, we propose a novel susceptible-infected-beneficial model based on scale-free networks. The spreading dynamics of the preferential information are analyzed in detail using the mean-field theory. We determine the basic reproductive number and equilibria. The theoretical analysis indicates that the basic reproductive number depends mainly on the preferential degree and the topology of the underlying networks. We prove the global stability of the information-elimination equilibrium. The permanence of preferential information and the global attractivity of the information-prevailing equilibrium are also studied in detail. Some numerical simulations are presented to verify the theoretical results.

  14. Does clear-cut harvesting accelerate initial wood decomposition? A five-year study with standard wood material

    Treesearch

    L. Finer; M. Jurgensen; M. Palviainen; S. Piirainen; Deborah Page-Dumroese

    2016-01-01

    Coarse woody debris (CWD) serves a variety of ecological functions in forests, and the understanding of its decomposition is needed for estimating changes in CWD-dependent forest biodiversity, and for the quantification of forest ecosystem carbon and nutrient pools and fluxes. Boreal forests are often intensively managed, so information is needed on the effects of...

  15. 78 FR 60886 - Submission for OMB Review; 30-day Comment Request; Quantification of Behavioral and Physiological...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... experimental manipulations. The findings will provide valuable information concerning the utility and... [Respondent burden table, flattened in this excerpt:] Driving Survey | Adults | 1 | 15/60 | 18; Realism Survey | Adults | 1 | 3/60 | 4; Sleep and Intake Questionnaire | Adults | 2 | 3/60 | 7; Stanford Sleepiness Scale | Adults | 72 | 6 | 1/60 | 7; Wellness Survey | Adults | 2 | 2/60 | 5; Dosing/Driving...

  16. Use of Landsat and environmental satellite data in evapotranspiration estimation from a wildland area

    NASA Technical Reports Server (NTRS)

    Khorram, S.; Smith, H. G.

    1979-01-01

    A remote sensing-aided procedure was applied to the watershed-wide estimation of water loss to the atmosphere (evapotranspiration, ET). The approach involved a spatially referenced databank based on both remotely sensed and ground-acquired information. Physical models for both estimation of ET and quantification of input parameters are specified, and results of the investigation are outlined.

  17. Applying a World-City Network Approach to Globalizing Higher Education: Conceptualization, Data Collection and the Lists of World Cities

    ERIC Educational Resources Information Center

    Chow, Alice S. Y.; Loo, Becky P. Y.

    2015-01-01

    Both the commercial and education sectors experience an increase in inter-city exchanges in the forms of goods, capital, commands, people and information/knowledge under globalization. The quantification of flows and structural relations among cities in globalizing education are under-researched compared to the well-established world/global cities…

  18. Formative Information Using Student Growth Percentiles for the Quantification of English Language Learners' Progress in Language Acquisition

    ERIC Educational Resources Information Center

    Taherbhai, Husein; Seo, Daeryong; O'Malley, Kimberly

    2014-01-01

    English language learners (ELLs) are the fastest growing subgroup in American schools. These students, by a provision in the reauthorization of the Elementary and Secondary Education Act, are to be supported in their quest for language proficiency through the creation of systems that more effectively measure ELLs' progress across years. In…

  19. Quantifying understorey vegetation in the US Lake States: a proposed framework to inform regional forest carbon stocks

    Treesearch

    Matthew B. Russell; Anthony W. D' Amato; Bethany K. Schulz; Christopher W. Woodall; Grant M. Domke; John B. Bradford

    2014-01-01

    The contribution of understorey vegetation (UVEG) to forest ecosystem biomass and carbon (C) across diverse forest types has, to date, eluded quantification at regional and national scales. Efforts to quantify UVEG C have been limited to field-intensive studies or broad-scale modelling approaches lacking field measurements. Although large-scale inventories of UVEG C are...

  20. Toward theoretical understanding of the fertility preservation decision-making process: Examining information processing among young women with cancer

    PubMed Central

    Hershberger, Patricia E.; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer

    2014-01-01

    Background Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. Objective The purpose of this paper is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Methods Using a grounded theory approach, 27 women with cancer participated in individual, semi-structured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by five dimensions within the Contemplate phase of the decision-making process framework. Results In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Conclusion Better understanding of theoretical underpinnings surrounding women’s information processes can facilitate decision support and improve clinical care. PMID:24552086

  1. Judging nursing information on the WWW: a theoretical understanding.

    PubMed

    Cader, Raffik; Campbell, Steve; Watson, Don

    2009-09-01

    This paper is a report of a study of the judgement processes nurses use when evaluating World Wide Web information related to nursing practice. The World Wide Web has increased the global accessibility of online health information. However, the variable nature of the quality of World Wide Web information and its perceived level of reliability may lead to misinformation. This makes demands on healthcare professionals, and on nurses in particular, to ensure that health information of reliable quality is selected for use in practice. A grounded theory approach was adopted. Semi-structured interviews and focus groups were used to collect data, between 2004 and 2005, from 20 nurses undertaking a postqualification graduate course at a university and 13 nurses from a local hospital in the United Kingdom. A theoretical framework emerged that gave insight into the judgement process nurses use when evaluating World Wide Web information. Participants broke the judgement process down into specific tasks. In addition, they used tacit, process and propositional knowledge and intuition, quasi-rational cognition and analysis to undertake these tasks. World Wide Web information cues, time available and nurses' critical skills were influencing factors in their judgement process. Addressing the issue of quality and reliability associated with World Wide Web information is a global challenge. This theoretical framework could contribute towards meeting this challenge.

  2. Three-dimensional Hessian matrix-based quantitative vascular imaging of rat iris with optical-resolution photoacoustic microscopy in vivo

    NASA Astrophysics Data System (ADS)

    Zhao, Huangxuan; Wang, Guangsong; Lin, Riqiang; Gong, Xiaojing; Song, Liang; Li, Tan; Wang, Wenjia; Zhang, Kunya; Qian, Xiuqing; Zhang, Haixia; Li, Lin; Liu, Zhicheng; Liu, Chengbo

    2018-04-01

    For the diagnosis and evaluation of ophthalmic diseases, imaging and quantitative characterization of vasculature in the iris are very important. The recently developed photoacoustic imaging, which is ultrasensitive in imaging endogenous hemoglobin molecules, provides a highly efficient label-free method for imaging blood vasculature in the iris. However, the development of advanced vascular quantification algorithms is still needed to enable accurate characterization of the underlying vasculature. We have developed a vascular information quantification algorithm by adopting a three-dimensional (3-D) Hessian matrix and applied it to process iris vasculature images obtained with a custom-built optical-resolution photoacoustic imaging system (OR-PAM). For the first time, we demonstrate in vivo 3-D vascular structures of a rat iris with a label-free imaging method and also accurately extract quantitative vascular information, such as vessel diameter, vascular density, and vascular tortuosity. Our results indicate that the developed algorithm is capable of quantifying the vasculature in the 3-D photoacoustic images of the iris in vivo, thus enhancing the diagnostic capability of the OR-PAM system for vascular-related ophthalmic diseases.
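
    A hedged numpy/scipy sketch of a Hessian-based vessel filter in the spirit of the algorithm described above (the authors' exact filter and parameters are not given in the abstract): tubular structures produce one near-zero and two strongly negative eigenvalues of the scale-space Hessian, which can be turned into a crude vesselness score.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hessian_eigenvalues(volume, sigma=2.0):
        """Eigenvalues of the Gaussian-smoothed Hessian at every voxel, sorted by magnitude."""
        n = volume.ndim
        H = np.empty(volume.shape + (n, n))
        for i in range(n):
            for j in range(n):
                order = [0] * n
                order[i] += 1
                order[j] += 1
                H[..., i, j] = gaussian_filter(volume, sigma, order=order)
        eig = np.linalg.eigvalsh(H)                              # ascending eigenvalues
        idx = np.argsort(np.abs(eig), axis=-1)
        return np.take_along_axis(eig, idx, axis=-1)

    def simple_vesselness(volume, sigma=2.0):
        l1, l2, l3 = np.moveaxis(hessian_eigenvalues(volume, sigma), -1, 0)
        score = np.zeros(volume.shape)
        tube = (l2 < 0) & (l3 < 0)                               # bright tube on dark background
        score[tube] = np.sqrt(np.abs(l2[tube] * l3[tube]))       # crude tubularity measure
        return score

    # Toy usage: a single bright line along the first axis is flagged as vessel-like
    volume = np.zeros((40, 40, 40))
    volume[:, 20, 20] = 1.0
    print(simple_vesselness(volume).max() > 0)
    ```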

  3. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical for human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. Then, the data collection framework and process is described and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. Furthermore, these challenges reflect the data needs specific to IDHEAS. More importantly, they also represent the general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
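
    As a purely illustrative sketch of the decision-tree quantification structure mentioned above, the snippet below walks a nested tree of context factors down to an HEP leaf. The factor names and probability values are hypothetical placeholders, not the IDHEAS trees themselves.

    ```python
    from typing import Union

    Tree = Union[float, dict]

    # Hypothetical tree for one crew failure mode (CFM); factors and HEPs are invented.
    example_cfm_tree: Tree = {
        "time_pressure": {
            "high": {"procedures_adequate": {"yes": 1e-2, "no": 1e-1}},
            "low": {"procedures_adequate": {"yes": 1e-3, "no": 1e-2}},
        }
    }

    def lookup_hep(tree: Tree, context: dict) -> float:
        """Follow the branches selected by the analyzed context until an HEP leaf is reached."""
        while isinstance(tree, dict):
            factor = next(iter(tree))                 # factor queried at this level of the tree
            tree = tree[factor][context[factor]]
        return tree

    print(lookup_hep(example_cfm_tree, {"time_pressure": "high", "procedures_adequate": "no"}))  # 0.1
    ```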

  4. An expert system for the quantification of fault rates in construction fall accidents.

    PubMed

    Talat Birgonul, M; Dikmen, Irem; Budayan, Cenk; Demirel, Tuncay

    2016-01-01

    Expert witness reports, prepared with the aim of quantifying fault rates among parties, play an important role in a court's final decision. However, conflicting fault rates assigned by different expert witness boards lead to iterative objections raised by the related parties. This unfavorable situation mainly originates due to the subjectivity of expert judgments and unavailability of objective information about the causes of accidents. As a solution to this shortcoming, a rule-based expert system, DsSafe, was developed for the quantification of fault rates in construction fall accidents. The aim of developing DsSafe is to decrease the subjectivity inherent in expert witness reports. Eighty-four inspection reports prepared by the official and authorized inspectors were examined and root causes of construction fall accidents in Turkey were identified. Using this information, an evaluation form was designed and submitted to the experts. Experts were asked to evaluate the importance level of the factors that govern fall accidents and determine the fault rates under different scenarios. Based on expert judgments, a rule-based expert system was developed. The accuracy and reliability of DsSafe were tested with real data as obtained from finalized court cases. DsSafe gives satisfactory results.

  5. Detection, location, and quantification of structural damage by neural-net-processed moiré profilometry

    NASA Astrophysics Data System (ADS)

    Grossman, Barry G.; Gonzalez, Frank S.; Blatt, Joel H.; Hooker, Jeffery A.

    1992-03-01

    The development of efficient high speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems such as ones used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: the reduction of nonuseful information present in the video image and automatic recognition and quantification of extent and location of damage. Artificial neural network processed moire profilometry appears to be a promising technique to accomplish this. Real time video moire techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moire images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moire images. The techniques employed and the results obtained are discussed.

  6. An online sleep apnea detection method based on recurrence quantification analysis.

    PubMed

    Nguyen, Hoa Dinh; Wilkins, Brek A; Cheng, Qi; Benjamin, Bruce Allen

    2014-07-01

    This paper introduces an online sleep apnea detection method based on heart rate complexity as measured by recurrence quantification analysis (RQA) statistics of heart rate variability (HRV) data. RQA statistics can capture nonlinear dynamics of a complex cardiorespiratory system during obstructive sleep apnea. In order to obtain a more robust measurement of the nonstationarity of the cardiorespiratory system, we use several fixed-amount-of-nearest-neighbors thresholds for the recurrence plot calculation. We integrate a feature selection algorithm based on conditional mutual information to select the most informative RQA features for classification, and hence, to speed up the real-time classification process without degrading the performance of the system. Two types of binary classifiers, i.e., support vector machine and neural network, are used to differentiate apnea from normal sleep. A soft decision fusion rule is developed to combine the results of these classifiers in order to improve the classification performance of the whole system. Experimental results show that our proposed method achieves better classification results compared with the previous recurrence analysis-based approach. We also show that our method is flexible and a strong candidate for a real efficient sleep apnea detection system.
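
    A compact sketch of the recurrence quantification step on an RR-interval series, using a fixed number of nearest neighbours per column as the recurrence criterion, in the spirit of the method above. The embedding (1-D for brevity), neighbour count, and minimum line length are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def recurrence_plot_fan(x, k=10):
        """Recurrence matrix where each column marks its k nearest neighbours (fixed-amount thresholding)."""
        d = np.abs(x[:, None] - x[None, :])          # pairwise distances (1-D embedding for brevity)
        rp = np.zeros_like(d, dtype=bool)
        nn = np.argsort(d, axis=0)[:k]               # k closest points for every column
        np.put_along_axis(rp, nn, True, axis=0)
        return rp

    def recurrence_rate(rp):
        return rp.mean()

    def determinism(rp, lmin=2):
        """Fraction of recurrent points that lie on diagonal lines of length >= lmin."""
        n = rp.shape[0]
        on_lines = 0
        for offset in range(-(n - 1), n):
            diag = np.diagonal(rp, offset).astype(int)
            run = 0
            for v in np.append(diag, 0):             # trailing 0 flushes the last run
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        on_lines += run
                    run = 0
        return on_lines / max(rp.sum(), 1)

    rr = np.random.default_rng(1).normal(0.8, 0.05, 300)    # synthetic RR intervals (s)
    rp = recurrence_plot_fan(rr, k=15)
    print(recurrence_rate(rp), determinism(rp))
    ```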

  7. Hop limited epidemic-like information spreading in mobile social networks with selfish nodes

    NASA Astrophysics Data System (ADS)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2013-07-01

    Similar to epidemics, information can be transmitted directly among users in mobile social networks. Different from epidemics, we can control the spreading process by adjusting the corresponding parameters (e.g., hop count) directly. This paper proposes a theoretical model to evaluate the performance of an epidemic-like spreading algorithm, in which the maximal hop count of the information is limited. In addition, our model can be used to evaluate the impact of users’ selfish behavior. Simulations show the accuracy of our theoretical model. Numerical results show that the information hop count can have an important impact. In addition, the impact of selfish behavior is related to the information hop count.

  8. Sound-diffracting flap in the ear of a bat generates spatial information.

    PubMed

    Müller, Rolf; Lu, Hongwang; Buck, John R

    2008-03-14

    Sound diffraction by the mammalian ear generates source-direction information. We have obtained an immediate quantification of this information from numerical predictions. We demonstrate the power of our approach by showing that a small flap in a bat's pinna generates useful information over a large set of directions in a central band of frequencies: presence of the flap more than doubled the solid angle with direction information above a given threshold. From the workings of the employed information measure, the Cramér-Rao lower bound, we can explain how physical shape is linked to sensory information via a strong sidelobe with frequency-dependent orientation in the directivity pattern. This method could be applied to any other mammal species with pinnae to quantify the relative importance of pinna structures' contributions to directional information and to facilitate interspecific comparisons of pinna directivity patterns.
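
    A small sketch of the information measure named above, under simplifying assumptions not taken from the paper: if a single received amplitude follows the pinna directivity pattern D(theta) plus Gaussian noise, the Fisher information about source direction is (dD/dtheta)^2 / sigma^2 and the Cramér-Rao lower bound is its inverse. The beampattern below is synthetic, not a measured pinna.

    ```python
    import numpy as np

    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    directivity = np.cos(theta) ** 2 + 0.3 * np.cos(6 * theta) ** 2   # toy pattern with a sidelobe
    sigma = 0.05                                                      # receiver noise standard deviation

    slope = np.gradient(directivity, theta)
    fisher_info = (slope / sigma) ** 2
    crlb_deg = np.degrees(np.sqrt(1.0 / np.maximum(fisher_info, 1e-12)))

    usable = np.mean(crlb_deg < 5.0)     # fraction of directions resolvable to better than 5 degrees
    print(f"fraction of directions with CRLB < 5 deg: {usable:.2f}")
    ```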

  9. Analysis of host-cell proteins in biotherapeutic proteins by comprehensive online two-dimensional liquid chromatography/mass spectrometry

    PubMed Central

    Xenopoulos, Alex; Fadgen, Keith; Murphy, Jim; Skilton, St. John; Prentice, Holly; Stapels, Martha

    2012-01-01

    Assays for identification and quantification of host-cell proteins (HCPs) in biotherapeutic proteins over 5 orders of magnitude in concentration are presented. The HCP assays consist of two types: HCP identification using comprehensive online two-dimensional liquid chromatography coupled with high resolution mass spectrometry (2D-LC/MS), followed by high-throughput HCP quantification by liquid chromatography, multiple reaction monitoring (LC-MRM). The former is described as a “discovery” assay, the latter as a “monitoring” assay. Purified biotherapeutic proteins (e.g., monoclonal antibodies) were digested with trypsin after reduction and alkylation, and the digests were fractionated using reversed-phase (RP) chromatography at high pH (pH 10) by a step gradient in the first dimension, followed by a high-resolution separation at low pH (pH 2.5) in the second dimension. As peptides eluted from the second dimension, a quadrupole time-of-flight mass spectrometer was used to detect the peptides and their fragments simultaneously by alternating the collision cell energy between a low and an elevated energy (MSE methodology). The MSE data was used to identify and quantify the proteins in the mixture using a proven label-free quantification technique (“Hi3” method). The same data set was mined to subsequently develop target peptides and transitions for monitoring the concentration of selected HCPs on a triple quadrupole mass spectrometer in a high-throughput manner (20 min LC-MRM analysis). This analytical methodology was applied to the identification and quantification of low-abundance HCPs in six samples of PTG1, a recombinant chimeric anti-phosphotyrosine monoclonal antibody (mAb). Thirty three HCPs were identified in total from the PTG1 samples among which 21 HCP isoforms were selected for MRM monitoring. The absolute quantification of three selected HCPs was undertaken on two different LC-MRM platforms after spiking isotopically labeled peptides in the samples. Finally, the MRM quantitation results were compared with TOF-based quantification based on the Hi3 peptides, and the TOF and MRM data sets correlated reasonably well. The results show that the assays provide detailed valuable information to understand the relative contributions of purification schemes to the nature and concentrations of HCP impurities in biopharmaceutical samples, and the assays can be used as generic methods for HCP analysis in the biopharmaceutical industry. PMID:22327428
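
    A hedged sketch of the "Hi3" label-free estimate referred to above: a protein's amount is approximated by the mean MS signal of its three most intense peptides, scaled against the Hi3 response of a spiked standard protein of known amount. All intensities and the 50 fmol standard amount below are invented for illustration.

    ```python
    import numpy as np

    def hi3_amount(peptide_intensities, standard_hi3_signal, standard_fmol):
        top3 = np.sort(np.asarray(peptide_intensities))[-3:]      # three most intense peptides
        return top3.mean() / standard_hi3_signal * standard_fmol

    standard_peptides = [9.1e5, 8.7e5, 8.2e5, 3.0e5]
    standard_signal = np.sort(standard_peptides)[-3:].mean()      # Hi3 response of a 50 fmol standard

    hcp_peptides = [4.1e3, 3.3e3, 2.8e3, 1.1e3, 9.0e2]
    print(f"estimated HCP amount: {hi3_amount(hcp_peptides, standard_signal, 50.0):.2f} fmol")
    ```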

  10. Confidence in outcome estimates from systematic reviews used in informed consent.

    PubMed

    Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette

    2016-12-01

    Evidence-based dentistry now guides informed consent in which clinicians are obliged to provide patients with the most current, best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care, treatment plans. Yet, clinicians require that the estimates provided from systematic reviews be verified as to their validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or a checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal, oral health care, treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Quantification of water resources uncertainties in the Luvuvhu sub-basin of the Limpopo river basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, N.; Hughes, D.; Kapangaziwiri, E.; Mwenge Kahinda, J.; Mvandaba, V.

    2018-06-01

    In the absence of historical observed data, models are generally used to describe the different hydrological processes and generate data and information that will inform management and policy decision making. Ideally, any hydrological model should be based on a sound conceptual understanding of the processes in the basin and be backed by quantitative information for the parameterization of the model. However, these data are often inadequate in many sub-basins, necessitating the incorporation of the uncertainty related to the estimation process. This paper reports on the impact of the uncertainty related to the parameterization of the Pitman monthly model and water use data on the estimates of the water resources of the Luvuvhu, a sub-basin of the Limpopo river basin. The study reviews existing information sources associated with the quantification of water balance components and gives an update of water resources of the sub-basin. The flows generated by the model at the outlet of the basin were between 44.03 Mm3 and 45.48 Mm3 per month when incorporating ±20% uncertainty in the main physical runoff-generating parameters. The total predictive uncertainty of the model increased when water use data such as small farm and large reservoirs and irrigation were included. The dam capacity data was considered at an average of 62% uncertainty mainly as a result of the large differences between the available information in the national water resources database and that digitised from satellite imagery. Water used by irrigated crops was estimated with an average of about 50% uncertainty. The mean simulated monthly flows were between 38.57 Mm3 and 54.83 Mm3 after the water use uncertainty was added. However, it is expected that the uncertainty could be reduced by using higher resolution remote sensing imagery.

  12. Satellite data driven modeling system for predicting air quality and visibility during wildfire and prescribed burn events

    NASA Astrophysics Data System (ADS)

    Nair, U. S.; Keiser, K.; Wu, Y.; Maskey, M.; Berendes, D.; Glass, P.; Dhakal, A.; Christopher, S. A.

    2012-12-01

    The Alabama Forestry Commission (AFC) is responsible for wildfire control and also prescribed burn management in the state of Alabama. Visibility and air quality degradation resulting from smoke are two pieces of information that are crucial for this activity. Currently the tools available to AFC are the dispersion index available from the National Weather Service and also surface smoke concentrations. The former provides broad guidance for prescribed burning activities but does not provide specific information regarding smoke transport, areas affected and quantification of air quality and visibility degradation. While the NOAA operational air quality guidance includes surface smoke concentrations from existing fire events, it does not account for contributions from background aerosols, which are important for the southeastern region including Alabama. Also lacking is the quantification of visibility. The University of Alabama in Huntsville has developed a state-of-the-art integrated modeling system to address these concerns. This system is based on the Community Air Quality Modeling System (CMAQ); it ingests satellite-derived smoke emissions and assimilates NASA MODIS-derived aerosol optical thickness. In addition, this operational modeling system also simulates the impact of potential prescribed burn events based on location information derived from the AFC prescribed burn permit database. A Lagrangian model is used to simulate smoke plumes for the prescribed burn requests. The combined air quality and visibility degradation resulting from these smoke plumes and background aerosols is computed and the information is made available through a web based decision support system utilizing open source GIS components. This system provides information regarding intersections between highways and other critical facilities such as old age homes, hospitals and schools. The system also includes satellite detected fire locations and other satellite derived datasets relevant for fire and smoke management.

  13. Extraction of spatiotemporal response information from sorption-based cross-reactive sensor arrays for the identification and quantification of analyte mixtures

    NASA Astrophysics Data System (ADS)

    Woodka, Marc D.; Brunschwig, Bruce S.; Lewis, Nathan S.

    2008-03-01

    Linear sensor arrays made from small molecule/carbon black composite chemiresistors placed in a low headspace volume chamber, with vapor delivered at low flow rates, allowed for the extraction of chemical information that significantly increased the ability of the sensor arrays to identify vapor mixture components and to quantify their concentrations. Each sensor sorbed vapors from the gas stream to various degrees. Similar to gas chromatography, species having high vapor pressures were separated from species having low vapor pressures. Instead of producing typical sensor responses representative of thermodynamic equilibrium between each sensor and an unchanging vapor phase, sensor responses varied depending on the position of the sensor in the chamber and the time from the beginning of the analyte exposure. This spatiotemporal (ST) array response provided information that was a function of time as well as of the position of the sensor in the chamber. The responses to pure analytes and to multi-component analyte mixtures comprised of hexane, decane, ethyl acetate, chlorobenzene, ethanol, and/or butanol, were recorded along each of the sensor arrays. Use of a non-negative least squares (NNLS) method for analysis of the ST data enabled the correct identification and quantification of the composition of 2-, 3-, 4- and 5-component mixtures from arrays using only 4 chemically different sorbent films and sensor training on pure vapors only. In contrast, when traditional time- and position-independent sensor response information was used, significant errors in mixture identification were observed. The ability to correctly identify and quantify constituent components of vapor mixtures through the use of such ST information significantly expands the capabilities of such broadly cross-reactive arrays of sensors.
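
    A minimal sketch of the non-negative least squares step described above: the flattened spatiotemporal response to a mixture is modelled as a non-negative combination of the pure-analyte response patterns learned in training, and scipy's NNLS solver recovers the component concentrations. The response matrix here is random stand-in data, not sensor measurements.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_features = 4 * 60                                    # 4 sensors x 60 time points, flattened
    pure_patterns = rng.random((n_features, 6))            # columns: one pure-vapor ST pattern per analyte
    true_conc = np.array([0.5, 0.0, 1.2, 0.0, 0.3, 0.0])   # a 3-component mixture
    mixture_response = pure_patterns @ true_conc + rng.normal(0, 0.01, n_features)

    est_conc, residual = nnls(pure_patterns, mixture_response)
    print(np.round(est_conc, 2))                           # non-zero entries identify the mixture components
    ```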

  14. SU-E-J-155: Utilizing Varian TrueBeam Developer Mode for the Quantification of Mechanical Limits and the Simulation of 4D Respiratory Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moseley, D; Dave, M

    Purpose: Use Varian TrueBeam Developer mode to quantify the mechanical limits of the couch and to simulate 4D respiratory motion. Methods: An in-house MATLAB based GUI was created to make the BEAM XML files. The couch was moved in a triangular wave in the S/I direction with varying amplitudes (1mm, 5mm, 10mm, and 50mm) and periods (3s, 6s, and 9s). The periods were determined by specifying the speed. The theoretical positions were compared to the values recorded by the machine at 50 Hz. HD videos were taken for certain tests as external validation. 4D respiratory motion was simulated by an A/P MV beam being delivered while the couch moved in an elliptical manner. The ellipse had a major axis of 2 cm (S/I) and a minor axis of 1 cm (A/P). Results: The path planned by the TrueBeam deviated from the theoretical triangular form as the speed increased. Deviations were noticed starting at a speed of 3.33 cm/s (50mm amplitude, 6s period). The greatest deviation occurred in the 50mm-3s sequence with a correlation value of −0.13 and a 27% time increase; the plan essentially became out of phase. Excluding these two, the plans had correlation values of 0.99. The elliptical sequence effectively simulated a respiratory pattern with a period of 6s. The period could be controlled by changing the speeds or the dose rate. Conclusion: The work first shows the quantification of the mechanical limits of the couch and the speeds at which the proposed plans begin to deviate. These limits must be kept in mind when programming other couch sequences. The methodology can be used to quantify the limits of other axes. Furthermore, the work shows the possibility of creating 4D respiratory simulations without using specialized phantoms or motion-platforms. This can be further developed to program patient-specific breathing patterns.
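
    A synthetic-data sketch of the comparison described above: build the commanded triangular S/I trajectory, imitate a 50 Hz machine log of it, and report the correlation coefficient between the two. The 5% time stretch and the noise level are arbitrary assumptions standing in for real trajectory logs.

    ```python
    import numpy as np

    fs = 50.0                                            # machine log rate (Hz)
    t = np.arange(0, 18, 1 / fs)                         # three 6 s periods

    def triangle_wave(t, amplitude_mm, period_s):
        phase = (t / period_s) % 1.0
        return amplitude_mm * (1.0 - 4.0 * np.abs(phase - 0.5))

    planned = triangle_wave(t, amplitude_mm=50, period_s=6)
    # Stand-in for the trajectory log: the planned path stretched 5% in time plus position jitter
    recorded = np.interp(t, 1.05 * t, planned) + np.random.default_rng(0).normal(0, 0.2, t.size)

    corr = np.corrcoef(planned, recorded)[0, 1]
    print(f"correlation between planned and recorded S/I positions: {corr:.3f}")
    ```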

  15. Relative equilibrium plot improves graphical analysis and allows bias correction of standardized uptake value ratio in quantitative 11C-PiB PET studies.

    PubMed

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M; Wong, Dean F

    2012-04-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVRs) in ligand-receptor dynamic PET studies. The objective of this study was to use a recently developed relative equilibrium-based graphical (RE) plot method to improve and simplify the 2 commonly used methods for quantification of (11)C-Pittsburgh compound B ((11)C-PiB) PET. The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight (11)C-PiB dynamic PET scans (66 from controls and 12 from participants with mild cognitive impairment [MCI] from the Baltimore Longitudinal Study of Aging) were acquired over 90 min. Regions of interest (ROIs) were defined on coregistered MR images. Both the ROI and the pixelwise time-activity curves were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI time-activity curves were used as a reference for comparison of DVR estimates. Results from the theoretic analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI time-activity curves. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, and cingulate regions and the striatum were underestimated by the Logan plot (controls, 4%-12%; MCI, 9%-16%) and overestimated by the SUVR (controls, 8%-16%; MCI, 16%-24%). This bias was higher in the MCI group than in controls (P < 0.01) but was not present when data were analyzed using either the RE plot or the bcSUVR. The RE plot improves pixelwise quantification of (11)C-PiB dynamic PET, compared with the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates than the SUVR does. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of (11)C-PiB studies.

  16. Relative equilibrium plot improves graphical analysis and allows bias correction of SUVR in quantitative [11C]PiB PET studies

    PubMed Central

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M.; Wong, Dean F.

    2012-01-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVR) in ligand-receptor dynamic PET studies. The objective of this study is to use a recently developed relative equilibrium-based graphical plot (RE plot) method to improve and simplify the two commonly used methods for quantification of [11C]PiB PET. Methods The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight [11C]PiB dynamic PET scans (66 from controls and 12 from mildly cognitively impaired participants (MCI) from the Baltimore Longitudinal Study of Aging (BLSA)) were acquired over 90 minutes. Regions of interest (ROIs) were defined on coregistered MRIs. Both the ROI and pixelwise time activity curves (TACs) were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI TACs were used as a reference for comparison of DVR estimates. Results Results from the theoretical analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI TACs. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, cingulate regions, and the striatum were underestimated by the Logan plot (controls 4 – 12%; MCI 9 – 16%) and overestimated by the SUVR (controls 8 – 16%; MCI 16 – 24%). This bias was higher in the MCI group than in controls (p < 0.01) but was not present when data were analyzed using either the RE plot or the bcSUVR. Conclusion The RE plot improves pixel-wise quantification of [11C]PiB dynamic PET compared to the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates compared to SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of [11C]PiB studies. PMID:22414634

  17. Expedited quantification of mutant ribosomal RNA by binary deoxyribozyme (BiDz) sensors.

    PubMed

    Gerasimova, Yulia V; Yakovchuk, Petro; Dedkova, Larisa M; Hecht, Sidney M; Kolpashchikov, Dmitry M

    2015-10-01

    Mutations in ribosomal RNA (rRNA) have traditionally been detected by the primer extension assay, which is a tedious and multistage procedure. Here, we describe a simple and straightforward fluorescence assay based on binary deoxyribozyme (BiDz) sensors. The assay uses two short DNA oligonucleotides that hybridize specifically to adjacent fragments of rRNA, one of which contains a mutation site. This hybridization results in the formation of a deoxyribozyme catalytic core that produces the fluorescent signal and amplifies it due to multiple rounds of catalytic action. This assay enables us to expedite semi-quantification of mutant rRNA content in cell cultures starting from whole cells, which provides information useful for optimization of culture preparation prior to ribosome isolation. The method requires less than a microliter of a standard Escherichia coli cell culture and decreases analysis time from several days (for primer extension assay) to 1.5 h with hands-on time of ∼10 min. It is sensitive to single-nucleotide mutations. The new assay simplifies the preliminary analysis of RNA samples and cells in molecular biology and cloning experiments and is promising in other applications where fast detection/quantification of specific RNA is required. © 2015 Gerasimova et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  18. A Novel Computer-Assisted Approach to evaluate Multicellular Tumor Spheroid Invasion Assay

    PubMed Central

    Cisneros Castillo, Liliana R.; Oancea, Andrei-Dumitru; Stüllein, Christian; Régnier-Vigouroux, Anne

    2016-01-01

    Multicellular tumor spheroids (MCTSs) embedded in a matrix are re-emerging as a powerful alternative to monolayer-based cultures. The primary information gained from a three-dimensional model is the invasiveness of treatment-exposed MCTSs, assessed through the acquisition of light microscopy images. The amount and complexity of the acquired data and the bias arising from their manual analysis are disadvantages that call for an automated, high-throughput analysis. We present a universal algorithm we developed with the scope of being robust enough to handle images of various qualities and various invasion profiles. The novelty and strength of our algorithm lie in: the introduction of a multi-step segmentation flow, where each step is optimized for each specific MCTS area (core, halo, and periphery); the quantification through the density of the two-dimensional representation of a three-dimensional object. The latter offers a fine-grained differentiation of invasive profiles, facilitating a quantification independent of cell lines and experimental setups. Progression of density from the core towards the edges influences the resulting density map, thus providing a measure that no longer depends solely on the area of the MCTS but also reflects its invasiveness. In sum, we propose a new method in which the concept of quantification of MCTS invasion is completely re-thought. PMID:27731418

  19. Magnetic Particle Spectroscopy Reveals Dynamic Changes in the Magnetic Behavior of Very Small Superparamagnetic Iron Oxide Nanoparticles During Cellular Uptake and Enables Determination of Cell-Labeling Efficacy.

    PubMed

    Poller, Wolfram C; Löwa, Norbert; Wiekhorst, Frank; Taupitz, Matthias; Wagner, Susanne; Möller, Konstantin; Baumann, Gert; Stangl, Verena; Trahms, Lutz; Ludwig, Antje

    2016-02-01

    In vivo tracking of nanoparticle-labeled cells by magnetic resonance imaging (MRI) crucially depends on accurate determination of cell-labeling efficacy prior to transplantation. Here, we analyzed the feasibility and accuracy of magnetic particle spectroscopy (MPS) for estimation of cell-labeling efficacy in living THP-1 cells incubated with very small superparamagnetic iron oxide nanoparticles (VSOP). Cell viability and proliferation capacity were not affected by the MPS measurement procedure. In VSOP samples without cell contact, MPS enabled highly accurate quantification. In contrast, MPS constantly overestimated the amount of cell associated and internalized VSOP. Analyses of the MPS spectrum shape expressed as harmonic ratio A₅/A₃ revealed distinct changes in the magnetic behavior of VSOP in response to cellular uptake. These changes were proportional to the deviation between MPS and actual iron amount, therefore allowing for adjusted iron quantification. Transmission electron microscopy provided visual evidence that changes in the magnetic properties correlated with cell surface interaction of VSOP as well as with alterations of particle structure and arrangement during the phagocytic process. Altogether, A₅/A₃-adjusted MPS enables highly accurate, cell-preserving VSOP quantification and furthermore provides information on the magnetic characteristics of internalized VSOP.
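
    An illustrative sketch of how a harmonic shape ratio such as the A5/A3 measure used above can be computed: Fourier transform one drive period of the induced MPS signal and take the ratio of the fifth to the third harmonic amplitude. The Langevin magnetization model, drive frequency, and sampling rate are assumptions for the example, not the instrument's parameters.

    ```python
    import numpy as np

    f_drive = 25e3                            # drive-field frequency (Hz), assumed
    fs = 2.5e6                                # sampling rate (Hz), assumed
    t = np.arange(0, 1 / f_drive, 1 / fs)     # exactly one drive period
    drive = np.sin(2 * np.pi * f_drive * t)

    def langevin(x):
        x = np.where(np.abs(x) < 1e-8, 1e-8, x)      # avoid division by zero near x = 0
        return 1.0 / np.tanh(x) - 1.0 / x

    signal = np.gradient(langevin(8 * drive), t)     # induced voltage ~ dM/dt for a toy particle model

    spectrum = np.abs(np.fft.rfft(signal))
    A3, A5 = spectrum[3], spectrum[5]                # 3rd and 5th harmonics of the drive frequency
    print(f"A5/A3 = {A5 / A3:.3f}")
    ```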

  20. MPN- and Real-Time-Based PCR Methods for the Quantification of Alkane Monooxygenase Homologous Genes (alkB) in Environmental Samples

    NASA Astrophysics Data System (ADS)

    Pérez-de-Mora, Alfredo; Schulz, Stephan; Schloter, Michael

    Hydrocarbons are major contaminants of soil ecosystems as a result of uncontrolled oil spills and wastes disposal into the environment. Ecological risk assessment and remediation of affected sites is often constrained due to lack of suitable prognostic and diagnostic tools that provide information of abiotic-biotic interactions occurring between contaminants and biological targets. Therefore, the identification and quantification of genes involved in the degradation of hydrocarbons may play a crucial role for evaluating the natural attenuation potential of contaminated sites and the development of successful bioremediation strategies. Besides other gene clusters, the alk operon has been identified as a major player for alkane degradation in different soils. An oxygenase gene (alkB) codes for the initial step of the degradation of aliphatic alkanes under aerobic conditions. In this work, we present an MPN- and a real-time PCR method for the quantification of the bacterial gene alkB (coding for rubredoxin-dependent alkane monooxygenase) in environmental samples. Both approaches enable a rapid culture-independent screening of the alkB gene in the environment, which can be used to assess the intrinsic natural attenuation potential of a site or to follow up the on-going progress of bioremediation assays.
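
    A short sketch of the real-time PCR quantification step: convert measured Ct values into alkB gene copy numbers through a standard curve fitted to serial dilutions of a plasmid standard. The Ct values and dilution series below are invented illustration data, not results from the study.

    ```python
    import numpy as np

    std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])       # plasmid standard dilutions (copies per reaction)
    std_ct = np.array([14.1, 17.5, 20.9, 24.4, 27.8])      # measured Ct of the standards

    slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
    efficiency = 10 ** (-1 / slope) - 1                    # amplification efficiency from the slope

    def copies_from_ct(ct):
        return 10 ** ((ct - intercept) / slope)

    sample_ct = 22.3
    print(f"efficiency: {efficiency:.2f}, alkB copies per reaction: {copies_from_ct(sample_ct):.3g}")
    ```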

  1. Synchronized arousal between performers and related spectators in a fire-walking ritual

    PubMed Central

    Konvalinka, Ivana; Xygalatas, Dimitris; Bulbulia, Joseph; Schjødt, Uffe; Jegindø, Else-Marie; Wallot, Sebastian; Van Orden, Guy; Roepstorff, Andreas

    2011-01-01

    Collective rituals are present in all known societies, but their function is a matter of long-standing debates. Field observations suggest that they may enhance social cohesion and that their effects are not limited to those actively performing but affect the audience as well. Here we show physiological effects of synchronized arousal in a Spanish fire-walking ritual, between active participants and related spectators, but not participants and other members of the audience. We assessed arousal by heart rate dynamics and applied nonlinear mathematical analysis to heart rate data obtained from 38 participants. We compared synchronized arousal between fire-walkers and spectators. For this comparison, we used recurrence quantification analysis on individual data and cross-recurrence quantification analysis on pairs of participants' data. These methods identified fine-grained commonalities of arousal during the 30-min ritual between fire-walkers and related spectators but not unrelated spectators. This indicates that the mediating mechanism may be informational, because participants and related observers had very different bodily behavior. This study demonstrates that a collective ritual may evoke synchronized arousal over time between active participants and bystanders. It links field observations to a physiological basis and offers a unique approach for the quantification of social effects on human physiology during real-world interactions. PMID:21536887

  2. Identification and Quantification of N-Acyl Homoserine Lactones Involved in Bacterial Communication by Small-Scale Synthesis of Internal Standards and Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry.

    PubMed

    Leipert, Jan; Treitz, Christian; Leippe, Matthias; Tholey, Andreas

    2017-12-01

    N-acyl homoserine lactones (AHL) are small signal molecules involved in the quorum sensing of many gram-negative bacteria, and play an important role in biofilm formation and pathogenesis. Present analytical methods for identification and quantification of AHL require time-consuming sample preparation steps and are hampered by the lack of appropriate standards. Aiming at a fast and straightforward method for AHL analytics, we investigated the applicability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Suitable MALDI matrices, including crystalline and ionic liquid matrices, were tested and the fragmentation of different AHL in collision-induced dissociation MS/MS was studied, providing information about characteristic marker fragment ions. Employing small-scale synthesis protocols, we established a versatile and cost-efficient procedure for fast generation of isotope-labeled AHL standards, which can be used without extensive purification and yielded accurate standard curves. Quantitative analysis was possible in the low pico-molar range, with lower limits of quantification ranging from 1 to 5 pmol for different AHL. The developed methodology was successfully applied in a quantitative MALDI MS analysis of low-volume culture supernatants of Pseudomonas aeruginosa.

  3. Spectral Analysis of Dynamic PET Studies: A Review of 20 Years of Method Developments and Applications.

    PubMed

    Veronese, Mattia; Rizzo, Gaia; Bertoldo, Alessandra; Turkheimer, Federico E

    2016-01-01

    In Positron Emission Tomography (PET), spectral analysis (SA) allows the quantification of dynamic data by relating the radioactivity measured by the scanner in time to the underlying physiological processes of the system under investigation. Among the different approaches for the quantification of PET data, SA is based on the linear solution of the Laplace transform inversion, whereby the measured arterial and tissue time-activity curves of a radiotracer are used to calculate the input response function of the tissue. In recent years SA has been used with a large number of PET tracers in brain and nonbrain applications, demonstrating that it is a very flexible and robust method for PET data analysis. Unlike the most common PET quantification approaches, which adopt standard nonlinear estimation of compartmental models or some linear simplifications, SA can be applied without defining any specific model configuration and has demonstrated very good sensitivity to the underlying kinetics. This characteristic makes it useful as an investigative tool especially for the analysis of novel PET tracers. The purpose of this work is to offer an overview of SA, to discuss advantages and limitations of the methodology, and to inform about its applications in the PET field.
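
    A compact sketch of the spectral analysis idea summarized above: the tissue time-activity curve is modelled as the plasma input convolved with decaying exponentials drawn from a fixed logarithmic grid, and the non-negative coefficients (the "spectrum") are found by NNLS. The input function, grid, and noise level are synthetic assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    t = np.linspace(0, 90, 181)                               # minutes
    cp = t * np.exp(-t / 3.0)                                 # synthetic plasma input function
    betas = np.logspace(-3, 0, 40)                            # grid of spectral components (1/min)

    def convolve_input(cp, t, beta):
        dt = t[1] - t[0]
        return np.convolve(cp, np.exp(-beta * t))[: t.size] * dt

    A = np.column_stack([convolve_input(cp, t, b) for b in betas])
    true_tissue = 0.02 * convolve_input(cp, t, 0.01) + 0.05 * convolve_input(cp, t, 0.3)
    measured = true_tissue + np.random.default_rng(4).normal(0, 0.01, t.size)

    alpha, _ = nnls(A, measured)                              # non-negative spectrum of kinetic components
    print("non-zero components at beta =", betas[alpha > 1e-6])
    ```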

  4. Reliable quantification of phthalates in environmental matrices (air, water, sludge, sediment and soil): a review.

    PubMed

    Net, Sopheak; Delmont, Anne; Sempéré, Richard; Paluselli, Andrea; Ouddane, Baghdad

    2015-05-15

    Because of their widespread application, phthalates or phthalic acid esters (PAEs) are ubiquitous in the environment. Their presence has attracted considerable attention due to their potential impacts on ecosystem functioning and on public health, so their quantification has become a necessity. Various extraction procedures as well as gas/liquid chromatography and mass spectrometry detection techniques have been found suitable for reliable detection of such compounds. However, PAEs are ubiquitous in the laboratory environment, including ambient air, reagents, sampling equipment, and various analytical devices, which makes the analysis of real samples with a low PAE background difficult. Therefore, accurate PAE analysis in environmental matrices is a challenging task. This paper reviews the extensive literature data on the techniques for PAE quantification in natural media. Sampling, sample extraction/pretreatment and detection for quantifying PAEs in different environmental matrices (air, water, sludge, sediment and soil) have been reviewed and compared. The concept of "green analytical chemistry" for PAE determination is also discussed. Moreover, useful information about material preparation and the procedures of quality control and quality assurance is presented to overcome the problems of sample contamination and those encountered due to matrix effects, in order to avoid overestimating PAE concentrations in the environment. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Identification and Quantification of N-Acyl Homoserine Lactones Involved in Bacterial Communication by Small-Scale Synthesis of Internal Standards and Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Leipert, Jan; Treitz, Christian; Leippe, Matthias; Tholey, Andreas

    2017-12-01

    N-acyl homoserine lactones (AHL) are small signal molecules involved in the quorum sensing of many gram-negative bacteria, and play an important role in biofilm formation and pathogenesis. Present analytical methods for identification and quantification of AHL require time-consuming sample preparation steps and are hampered by the lack of appropriate standards. Aiming at a fast and straightforward method for AHL analytics, we investigated the applicability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Suitable MALDI matrices, including crystalline and ionic liquid matrices, were tested and the fragmentation of different AHL in collision-induced dissociation MS/MS was studied, providing information about characteristic marker fragment ions. Employing small-scale synthesis protocols, we established a versatile and cost-efficient procedure for fast generation of isotope-labeled AHL standards, which can be used without extensive purification and yielded accurate standard curves. Quantitative analysis was possible in the low pico-molar range, with lower limits of quantification ranging from 1 to 5 pmol for different AHL. The developed methodology was successfully applied in a quantitative MALDI MS analysis of low-volume culture supernatants of Pseudomonas aeruginosa.

  6. CEQer: A Graphical Tool for Copy Number and Allelic Imbalance Detection from Whole-Exome Sequencing Data

    PubMed Central

    Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo

    2013-01-01

    Copy number alterations (CNA) are common events occurring in leukaemias and solid tumors. Comparative Genome Hybridization (CGH) is currently the gold standard technique for analyzing CNAs; however, CGH analysis requires dedicated instruments and is only able to perform low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for the coupled CNA/allelic-imbalance (AI) analysis of exome sequencing data. Using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico generated data; we then performed whole-exome sequencing of 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken globally, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI detection generates very accurate CNA data. We therefore propose CEQer as an efficient, robust and user-friendly graphical tool for the identification of CNA/AI in whole-exome sequencing data. PMID:24124457
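
    As a minimal sketch (not CEQer's actual code) of what a comparative digital exonic quantification can look like: per-exon read counts from a case and its matched control are depth-normalized and compared as log2 ratios, and exons that deviate strongly from zero suggest copy-number change. The pseudocount and thresholds below are assumptions of this illustration.

```python
# Toy comparative digital exon quantification from case/control read counts.
import numpy as np

def exon_log2_ratios(case_counts, control_counts, pseudocount=1.0):
    case = np.asarray(case_counts, dtype=float) + pseudocount
    ctrl = np.asarray(control_counts, dtype=float) + pseudocount
    # Normalize by total counts so differences in sequencing depth cancel out.
    return np.log2((case / case.sum()) / (ctrl / ctrl.sum()))

ratios = exon_log2_ratios([120, 95, 30, 210], [110, 100, 60, 205])
gained = ratios > 0.58    # roughly a 1.5x gain, an illustrative cut-off
lost = ratios < -0.58     # roughly a single-copy loss, likewise illustrative
```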

  7. Adaptive selection and validation of models of complex systems in the presence of uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell-Maupin, Kathryn; Oden, J. T.

    This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.

  8. Adaptive selection and validation of models of complex systems in the presence of uncertainty

    DOE PAGES

    Farrell-Maupin, Kathryn; Oden, J. T.

    2017-08-01

    This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
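
    For reference, the two information-theoretic criteria named above are conventionally defined as follows, with $k$ the number of model parameters, $n$ the number of observations and $\hat{L}$ the maximized likelihood (standard textbook forms, not formulas quoted from the report):

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}.
```

    Both penalize model complexity through $k$, which is what allows them to stand in for Bayesian plausibilities in an Occam-style selection loop.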

  9. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    PubMed Central

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios for sampling information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximum information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimal information scenario. PMID:28746358
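
    The "random chance" scenario lends itself to a very small simulation. The sketch below is an illustration under assumptions of our own choosing (number of codes, a single observation probability per code), not the paper's simulation code; it returns the sample size at which every code has been seen at least once.

```python
# Toy simulation of sampling until theoretical saturation under "random chance".
import numpy as np

rng = np.random.default_rng(0)

def sample_size_to_saturation(n_codes=30, p_observe=0.2, max_steps=10_000):
    seen = np.zeros(n_codes, dtype=bool)
    for step in range(1, max_steps + 1):
        seen |= rng.random(n_codes) < p_observe   # codes revealed by this source
        if seen.all():
            return step                            # saturation reached
    return max_steps

sizes = [sample_size_to_saturation() for _ in range(1000)]
print(np.mean(sizes))   # mean sample size needed to observe every code once
```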

  10. The capacity to transmit classical information via black holes

    NASA Astrophysics Data System (ADS)

    Adami, Christoph; Ver Steeg, Greg

    2013-03-01

    One of the most vexing problems in theoretical physics is the relationship between quantum mechanics and gravity. According to an argument originally by Hawking, a black hole must destroy any information that is incident on it because the only radiation that a black hole releases during its evaporation (the Hawking radiation) is precisely thermal. Surprisingly, this claim has never been investigated within a quantum information-theoretic framework, where the black hole is treated as a quantum channel to transmit classical information. We calculate the capacity of the quantum black hole channel to transmit classical information (the Holevo capacity) within curved-space quantum field theory, and show that the information carried by late-time particles sent into a black hole can be recovered with arbitrary accuracy, from the signature left behind by the stimulated emission of radiation that must accompany any absorption event. We also show that this stimulated emission turns the black hole into an almost-optimal quantum cloning machine, where the violation of the no-cloning theorem is ensured by the noise provided by the Hawking radiation. Thus, rather than threatening the consistency of theoretical physics, Hawking radiation manages to save it instead.
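
    The Holevo capacity referred to above is conventionally obtained by maximizing the Holevo quantity over input ensembles (the standard definition, stated here for orientation rather than taken from the paper):

```latex
\chi(\mathcal{N}) \;=\; \max_{\{p_i,\rho_i\}}\left[\, S\!\Big(\sum_i p_i\,\mathcal{N}(\rho_i)\Big) \;-\; \sum_i p_i\, S\big(\mathcal{N}(\rho_i)\big) \right],
```

    where $\mathcal{N}$ is the channel (here, the black hole) and $S$ the von Neumann entropy.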

  11. Health information systems: a survey of frameworks for developing countries.

    PubMed

    Marcelo, A B

    2010-01-01

    The objective of this paper is to perform a survey of excellent research on health information systems (HIS) analysis and design, and their underlying theoretical frameworks. It classifies these frameworks along major themes, and analyzes the different approaches to HIS development that are practical in resource-constrained environments. Literature review based on PubMed citations and conference proceedings, as well as Internet searches on information systems in general, and health information systems in particular. The field of health information systems development has been studied extensively. Despite this, failed implementations are still common. Theoretical frameworks for HIS development are available that can guide implementers. As awareness, acceptance, and demand for health information systems increase globally, the variety of approaches and strategies will also follow. For developing countries with scarce resources, a trial-and-error approach can be very costly. Lessons from the successes and failures of initial HIS implementations have been abstracted into theoretical frameworks. These frameworks organize complex HIS concepts into methodologies that standardize techniques in implementation. As globalization continues to impact healthcare in the developing world, demand for more responsive health systems will become urgent. More comprehensive frameworks and practical tools to guide HIS implementers will be imperative.

  12. Multiplex Droplet Digital PCR Protocols for Quantification of GM Maize Events.

    PubMed

    Dobnik, David; Spilsberg, Bjørn; Bogožalec Košir, Alexandra; Štebih, Dejan; Morisset, Dany; Holst-Jensen, Arne; Žel, Jana

    2018-01-01

    The standard-curve based simplex quantitative polymerase chain reaction (qPCR) has been the gold standard for DNA target quantification for more than a decade. The large and growing number of individual analyses needed to test for genetically modified organisms (GMOs) is reducing the cost-effectiveness of qPCR. Droplet digital PCR (ddPCR) enables absolute quantification without standard curves, avoids the amplification efficiency bias observed with qPCR, allows more accurate estimations at low target copy numbers and, in combination with multiplexing, significantly improves cost efficiency. Here we describe two protocols for multiplex quantification of GM maize events: (1) nondiscriminating, with multiplex quantification of targets as a group (12 GM maize lines) and (2) discriminating, with multiplex quantification of individual targets (events). The first enables the quantification of twelve European Union authorized GM maize events as a group with only two assays, but does not permit determination of the individual events present. The second protocol enables the quantification of four individual targets (three GM events and one endogene) in a single reaction. Both protocols can be modified for quantification of any other DNA target.
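
    How ddPCR achieves absolute quantification without standard curves can be illustrated with the standard Poisson correction (a generic calculation, not code from the protocols above): the fraction of positive droplets gives the mean number of target copies per droplet, and dividing by the droplet volume gives a concentration. The droplet volume used below is an assumed typical value.

```python
# Standard Poisson estimate of target concentration from droplet counts.
import math

def ddpcr_copies_per_ul(positive, total, droplet_volume_nl=0.85):
    """Estimate target copies per microliter of reaction from droplet counts."""
    if positive >= total:
        raise ValueError("all droplets positive: above the dynamic range")
    lam = -math.log(1.0 - positive / total)   # mean copies per droplet (Poisson)
    return lam / (droplet_volume_nl * 1e-3)   # nL -> uL conversion

print(ddpcr_copies_per_ul(positive=4200, total=18000))   # ~310 copies/uL
```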

  13. Physical-layer encryption on the public internet: A stochastic approach to the Kish-Sethuraman cipher

    NASA Astrophysics Data System (ADS)

    Gunn, Lachlan J.; Chappell, James M.; Allison, Andrew; Abbott, Derek

    2014-09-01

    While information-theoretic security is often associated with the one-time pad and quantum key distribution, noisy transport media leave room for classical techniques and even covert operation. Transit times across the public internet exhibit a degree of randomness, and cannot be determined noiselessly by an eavesdropper. We demonstrate the use of these measurements for information-theoretically secure communication over the public internet.

  14. Spiked Models of Large Dimensional Random Matrices Applied to Wireless Communications and Array Signal Processing

    DTIC Science & Technology

    2013-12-14

    [Abstract not available; the record's description is garbled in extraction. Recoverable fragments refer to spiked population covariance matrix models applied to array signal processing, a central limit theorem (CLT) for linear statistics of sample covariance matrices, and citations to work by W. Hachem, M. Kharouf, J. Najim and J. W. Silverstein on "A CLT for information-theoretic statistics" (2012, article 1150004) and on multi-source power estimation (2010).]

  15. Effects of microhabitat and land use on stream salamander abundance in the southwest Virginia coalfields

    USGS Publications Warehouse

    Sweeten, Sara E.; Ford, W. Mark

    2015-01-01

    Large-scale land uses such as residential wastewater discharge and coal mining practices, particularly surface coal extraction and associated valley fills, are of particular ecological concern in central Appalachia. Identification and quantification of both alterations across scales are a necessary first step to mitigate negative consequences to biota. In central Appalachian headwater streams lacking fish, salamanders are the dominant and most abundant vertebrate predators, filling a significant intermediate trophic role. Stream salamander species are considered sensitive to aquatic stressors and environmental alterations, and past research has shown linkages between microhabitat parameters, large-scale land uses such as urbanization and logging, and salamander abundances. However, little is known about these linkages in the coalfields of central Appalachia. In the summer of 2013, we visited 70 sites (sampled three times each) in the southwest Virginia coalfields to survey salamanders and quantify stream and riparian microhabitat parameters. Using an information-theoretic framework, we compared the effects of microhabitat and large-scale land use on salamander abundances. Our findings indicate that dusky salamander (Desmognathus spp.) abundances are more strongly correlated with microhabitat parameters, such as canopy cover, than with subwatershed land uses. Brook salamander (Eurycea spp.) abundances show strong negative associations with suspended sediments and stream substrate embeddedness. Neither Desmognathus spp. nor Eurycea spp. abundances were influenced by water conductivity. These findings suggest that protection or restoration of riparian habitats and erosion control are important conservation components for maintaining stream salamanders in the mined landscapes of central Appalachia.

  16. Reproducible Tissue Homogenization and Protein Extraction for Quantitative Proteomics Using MicroPestle-Assisted Pressure-Cycling Technology.

    PubMed

    Shao, Shiying; Guo, Tiannan; Gross, Vera; Lazarev, Alexander; Koh, Ching Chiek; Gillessen, Silke; Joerger, Markus; Jochum, Wolfram; Aebersold, Ruedi

    2016-06-03

    The reproducible and efficient extraction of proteins from biopsy samples for quantitative analysis is a critical step in biomarker and translational research. Recently, we described a method consisting of pressure-cycling technology (PCT) and sequential windowed acquisition of all theoretical fragment ions-mass spectrometry (SWATH-MS) for the rapid quantification of thousands of proteins from biopsy-size tissue samples. As an improvement to the method, we have incorporated the PCT-MicroPestle into the PCT-SWATH workflow. The PCT-MicroPestle is a novel, miniaturized, disposable mechanical tissue homogenizer that fits directly into the microTube sample container. We optimized the pressure-cycling conditions for tissue lysis with the PCT-MicroPestle and benchmarked the performance of the system against the conventional PCT-MicroCap method using mouse liver, heart, brain, and human kidney tissues as test samples. The data indicate that digestion of the PCT-MicroPestle-extracted proteins yielded 20-40% more MS-ready peptide mass from all tissues tested, with reproducibility comparable to the conventional PCT method. Subsequent SWATH-MS analysis identified a higher number of biologically informative proteins from a given sample. In conclusion, we have developed a new device that can be seamlessly integrated into the PCT-SWATH workflow, leading to increased sample throughput and improved reproducibility at both the protein extraction and proteomic analysis levels when applied to the quantitative proteomic analysis of biopsy-level samples.

  17. Quantification of soil structure based on Minkowski functions

    NASA Astrophysics Data System (ADS)

    Vogel, H.-J.; Weller, U.; Schlüter, S.

    2010-10-01

    The structure of soils and other geologic media is a complex three-dimensional object. Most of the physical material properties including mechanical and hydraulic characteristics are immediately linked to the structure given by the pore space and its spatial distribution. It is an old dream and still a formidable challenge to relate structural features of porous media to their functional properties. Using tomographic techniques, soil structure can be directly observed at a range of spatial scales. In this paper we present a scale-invariant concept to quantify complex structures based on a limited set of meaningful morphological functions. They are based on d+1 Minkowski functionals as defined for d-dimensional bodies. These basic quantities are determined as a function of pore size or aggregate size obtained by filter procedures using mathematical morphology. The resulting Minkowski functions provide valuable information on the size of pores and aggregates, the pore surface area and the pore topology having the potential to be linked to physical properties. The theoretical background and the related algorithms are presented and the approach is demonstrated for the pore structure of an arable soil and the pore structure of a sand both obtained by X-ray micro-tomography. We also analyze the fundamental problem of limited resolution which is critical for any attempt to quantify structural features at any scale using samples of different size recorded at different resolutions. The results demonstrate that objects smaller than 5 voxels are critical for quantitative analysis.
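
    In three dimensions, the $d+1=4$ Minkowski functionals mentioned above correspond, up to normalization conventions, to the volume, surface area, integral of mean curvature and Euler characteristic of the pore space $X$; the Minkowski functions are these functionals evaluated after morphological opening of $X$ with a structuring element $B_r$ of increasing radius $r$ (standard definitions, stated here for orientation rather than quoted from the paper):

```latex
M_0(X) = V(X), \qquad
M_1(X) = \int_{\partial X} \mathrm{d}S, \qquad
M_2(X) = \int_{\partial X} \tfrac{1}{2}\Big(\tfrac{1}{r_1}+\tfrac{1}{r_2}\Big)\,\mathrm{d}S, \qquad
M_3(X) = \chi(X), \qquad
M_i(r) = M_i\big(X \circ B_r\big).
```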

  18. Identification and quantification of flavonoids and chromones in Baeckea frutescens by using HPLC coupled with diode-array detection and quadrupole time-of-flight mass spectrometry.

    PubMed

    Jia, Bei-Xi; Huangfu, Qian-Qian; Ren, Feng-Xiao; Jia, Lu; Zhang, Yan-Bing; Liu, Hong-Min; Yang, Jie; Wang, Qiang

    2015-01-01

    This article marks the first report on high-performance liquid chromatography (HPLC) coupled with diode-array detection (DAD) and quadrupole time-of-flight mass spectrometry (Q-TOF/MS) for the identification and quantification of the main bioactive constituents in Baeckea frutescens. In total, 24 compounds were identified or tentatively characterised based on their retention behaviours, UV profiles and MS fragment information. Furthermore, a validated method with good linearity, sensitivity, precision, stability, repeatability and accuracy was successfully applied for the simultaneous determination of five flavonoids and one chromone in different plant parts of B. frutescens collected at different harvest times, and their dynamic contents revealed the appropriate harvest times. The established HPLC-DAD-Q-TOF/MS method using multiple bioactive markers proved to be a valid strategy for the quality evaluation of both raw materials and related products of B. frutescens.

  19. Tracking ink composition on Herculaneum papyrus scrolls: quantification and speciation of lead by X-ray based techniques and Monte Carlo simulations.

    PubMed

    Tack, Pieter; Cotte, Marine; Bauters, Stephen; Brun, Emmanuel; Banerjee, Dipanjan; Bras, Wim; Ferrero, Claudio; Delattre, Daniel; Mocella, Vito; Vincze, Laszlo

    2016-02-08

    The writing in carbonized Herculaneum scrolls, covered and preserved by the pyroclastic events of the Vesuvius in 79 AD, was recently revealed using X-ray phase-contrast tomography, without the need of unrolling the sensitive scrolls. Unfortunately, some of the text is difficult to read due to the interference of the papyrus fibers crossing the written text vertically and horizontally. Recently, lead was found as an elemental constituent in the writing, rendering the text more clearly readable when monitoring the lead X-ray fluorescence signal. Here, several hypotheses are postulated for the origin and state of lead in the papyrus writing. Multi-scale X-ray fluorescence micro-imaging, Monte Carlo quantification and X-ray absorption microspectroscopy experiments are used to provide additional information on the ink composition, in an attempt to determine the origin of the lead in the Herculaneum scrolls and validate the postulated hypotheses.

  20. A subsystem identification method based on the path concept with coupling strength estimation

    NASA Astrophysics Data System (ADS)

    Magrans, Francesc Xavier; Poblet-Puig, Jordi; Rodríguez-Ferran, Antonio

    2018-02-01

    For complex geometries, the definition of the subsystems is not a straightforward task. We present here a subsystem identification method based on the direct transfer matrix, which represents the first-order paths. The key ingredient is a cluster analysis of the rows of the powers of the transfer matrix. These powers represent high-order paths in the system and are more affected by damping than low-order paths. Once subsystems are identified, the proposed approach also provides a quantification of the degree of coupling between subsystems. This information is relevant for deciding whether a subsystem may be analysed in a computer model, or measured in the laboratory, independently of the remaining subsystems. The two features (subsystem identification and quantification of the degree of coupling) are illustrated by means of numerical examples: plates coupled by means of springs and rooms connected by a cavity.
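
    A hedged sketch of the clustering step described above (toy numbers, not the authors' code): rows of a power of the direct transfer matrix, which represent high-order paths, are grouped with k-means, and rows falling in the same cluster are assigned to the same subsystem. The matrix, the chosen power and the number of clusters are illustrative.

```python
# Subsystem identification by clustering rows of a power of the transfer matrix.
import numpy as np
from numpy.linalg import matrix_power
from sklearn.cluster import KMeans

T = np.array([[0.0, 0.8, 0.1, 0.0],
              [0.7, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.9],
              [0.0, 0.1, 0.8, 0.0]])   # toy direct transfer (first-order path) matrix

high_order = matrix_power(T, 6)         # sixth-order paths, more affected by damping
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(high_order)
print(labels)                           # e.g. [0 0 1 1]: two identified subsystems
```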

  1. Targeted profiling of hydrophilic constituents of royal jelly by hydrophilic interaction liquid chromatography-tandem mass spectrometry.

    PubMed

    Pina, Athanasia; Begou, Olga; Kanelis, Dimitris; Gika, Helen; Kalogiannis, Stavros; Tananaki, Chrysoula; Theodoridis, Georgios; Zotou, Anastasia

    2018-01-05

    In the present work a Hydrophilic Interaction Liquid Chromatography-tandem Mass Spectrometry (HILIC-MS/MS) method was developed for the efficient separation and quantification of a large number of small polar bioactive molecules in royal jelly. The method was validated and provided satisfactory detection sensitivity for 88 components. Quantification proved to be precise for 64 components, exhibiting good linearity, recoveries above 90% for the majority of analytes, and intra- and inter-day precision from 0.14 to 20% RSD. Analysis of 125 fresh royal jelly samples of Greek origin provided useful information on royal jelly's hydrophilic bioactive components, revealing lysine, ribose, proline, melezitose and glutamic acid to be in high abundance. In addition, the occurrence of 18 hydrophilic nutrients that have not previously been reported as royal jelly constituents is shown. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Preclinical Biokinetic Modelling of Tc-99m Radiopharmaceuticals Obtained from Semi-Automatic Image Processing.

    PubMed

    Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice

    2017-01-01

    The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from the quantification of image radiation activity in murine models. The radioisotopic images were obtained with a CCD (charge-coupled device) camera coupled to an ultrathin phosphorous screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consists of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The resulting biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies, confirming the contribution of the semi-automatic image processing technique developed in this study.

  3. Reach Envelope and Field of Vision Quantification in Mark III Space Suit Using Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Abercromby, Andrew F. J.; Thaxton, Sherry S.; Onady, Elizabeth A.; Rajulu, Sudhakar L.

    2006-01-01

    The Science Crew Operations and Utility Testbed (SCOUT) project is focused on the development of a rover vehicle that can be utilized by two crewmembers during extravehicular activities (EVAs) on the Moon and Mars. The current SCOUT vehicle can transport two suited astronauts riding in open cockpit seats. Among the aspects currently being developed is the cockpit design and layout. This process includes the identification of possible locations for a socket to which a crewmember could connect a portable life support system (PLSS) for recharging power, air, and cooling while seated in the vehicle. The spaces in which controls and connectors may be situated within the vehicle are constrained by the reach and vision capabilities of the suited crewmembers. Accordingly, quantification of the volumes within which suited crewmembers can both see and reach relative to the vehicle represents important information for the design process.

  4. Hair as an alternative matrix in bioanalysis.

    PubMed

    Barbosa, Joana; Faria, Juliana; Carvalho, Félix; Pedro, Madalena; Queirós, Odília; Moreira, Roxana; Dinis-Oliveira, Ricardo Jorge

    2013-04-01

    Alternative matrices are steadily gaining recognition as biological samples for toxicological analyses. Hair presents many advantages over traditional matrices, such as urine and blood, since it provides retrospective information regarding drug exposure, can distinguish between chronic and acute or recent drug use by segmental analysis, is easy to obtain, and has considerable stability for long periods of time. For this reason, it has been employed in a wide variety of contexts, namely to evaluate workplace drug exposure, drug-facilitated sexual assault, pre-natal drug exposure, anti-doping control, pharmacological monitoring and alcohol abuse. In this article, issues concerning hair structure, collection, storage and analysis are reviewed. The mechanisms of drug incorporation into hair are briefly discussed. Analytical techniques for simultaneous drug quantification in hair are addressed. Finally, representative examples of drug quantification using hair are summarized, emphasizing its potentialities and limitations as an alternative biological matrix for toxicological analyses.

  5. Quantification of sound instability in embouchure tremor based on the time-varying fundamental frequency.

    PubMed

    Lee, André; Voget, Jakob; Furuya, Shinichi; Morise, Masanori; Altenmüller, Eckart

    2016-05-01

    Task-specific tremor in musicians is an involuntary oscillating muscular activity, mostly of the hand or the embouchure, that predominantly occurs while playing the instrument. In contrast to arm or hand tremors, which have been examined and objectified based on movement kinematics and muscular activity, embouchure tremor has not yet been investigated. To quantify and describe embouchure tremor, we analysed sound production and investigated the fluctuation of the time-varying fundamental frequency of sustained notes. A comparison between patients with embouchure tremor and healthy controls showed a significantly higher fluctuation of the fundamental frequency for the patients in the high pitch register, with tremor frequencies ranging between 3 and 8 Hz. The present findings first provide further information about a scarcely described movement disorder, and second further evaluate a new quantification method for embouchure tremor that has recently been established for embouchure dystonia.

  6. Estimation of construction and demolition waste volume generation in new residential buildings in Spain.

    PubMed

    Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César

    2012-02-01

    The management planning of construction and demolition (C&D) waste currently relies on a single indicator, which does not provide sufficiently detailed information. Therefore, other innovative and more precise indicators should be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemized by type of waste and construction stage. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.

  7. Quantification of chemical elements in blood of patients affected by multiple sclerosis.

    PubMed

    Forte, Giovanni; Visconti, Andrea; Santucci, Simone; Ghazaryan, Anna; Figà-Talamanca, Lorenzo; Cannoni, Stefania; Bocca, Beatrice; Pino, Anna; Violante, Nicola; Alimonti, Alessandro; Salvetti, Marco; Ristori, Giovanni

    2005-01-01

    Although some studies have suggested a link between exposure to trace elements and the development of multiple sclerosis (MS), clear information on their role in the aetiology of MS is still lacking. In this study the concentrations of Al, Ba, Be, Bi, Ca, Cd, Co, Cr, Cu, Fe, Hg, Li, Mg, Mn, Mo, Ni, Pb, Sb, Si, Sn, Sr, Tl, V, W, Zn and Zr were determined in the blood of 60 patients with MS and 60 controls. Quantifications were performed by inductively coupled plasma (ICP) atomic emission spectrometry and sector field ICP mass spectrometry. When the two groups were compared, increased levels of Co, Cu and Ni and decreased levels of Be, Fe, Hg, Mg, Mo, Pb and Zn were observed in the blood of patients. In addition, discriminant analysis showed that Cu, Be, Hg, Co and Mo were able to discriminate between MS patients and controls (92.5% of cases correctly classified).

  8. Theoretical aspects of cellular decision-making and information-processing.

    PubMed

    Kobayashi, Tetsuya J; Kamimura, Atsushi

    2012-01-01

    Microscopic biological processes have extraordinary complexity and variety at the sub-cellular, intra-cellular, and multi-cellular levels. In dealing with such complex phenomena, conceptual and theoretical frameworks are crucial, because they enable us to understand seemingly different intra- and inter-cellular phenomena from unified viewpoints. Decision-making is one such concept that has attracted much attention recently. Since many cellular behaviors can be regarded as processes of taking specific actions in response to external stimuli, decision-making can cover, and has been used to explain, a broad range of different cellular phenomena [Balázsi et al. (Cell 144(6):910, 2011), Zeng et al. (Cell 141(4):682, 2010)]. Decision-making is also closely related to cellular information-processing, because appropriate decisions cannot be made without exploiting the information that the external stimuli contain. The efficiency of information transduction and processing by intra-cellular networks determines the amount of information obtained, which in turn limits the efficiency of subsequent decision-making. Furthermore, information-processing itself can serve as another concept that is crucial for understanding biological processes other than decision-making. In this work, we review recent theoretical developments on cellular decision-making and information-processing by focusing on the relation between these two concepts.

  9. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
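
    The "single-spike information" used by MID is commonly written as the Kullback-Leibler divergence between the spike-triggered and raw stimulus distributions (a standard form, given for orientation); the equivalence stated above is that its empirical estimate matches the normalized LNP log-likelihood per spike up to a constant:

```latex
I_{\mathrm{ss}} \;=\; \int p(\mathbf{x}\mid \mathrm{spike})\,
\log_2\!\frac{p(\mathbf{x}\mid \mathrm{spike})}{p(\mathbf{x})}\,\mathrm{d}\mathbf{x}.
```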

  10. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis, denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis), are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface, for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches each. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, and the Jeffreys information measure is introduced for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing-value processing, dataset partitioning, and browsing. Moreover, single-parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. A graphical representation of Shannon's distribution for MD-calculating software is also provided.
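
    Not IMMAN itself, but a small illustration of two of the supervised measures named above, computed for an already-discretized feature against class labels; entropies are in bits and the toy data are arbitrary.

```python
# Information gain and symmetrical uncertainty for discrete feature/label pairs.
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """IG(Y; X) = H(Y) - H(Y | X) for discrete x and y."""
    x, y = np.asarray(x), np.asarray(y)
    h_y_given_x = sum(np.mean(x == v) * entropy(y[x == v]) for v in np.unique(x))
    return entropy(y) - h_y_given_x

def symmetrical_uncertainty(x, y):
    """SU = 2 * IG / (H(X) + H(Y)), normalized to [0, 1]."""
    return 2.0 * information_gain(x, y) / (entropy(x) + entropy(y))

x = [0, 0, 1, 1, 2, 2, 2, 0]                  # discretized descriptor values
y = ["a", "a", "b", "b", "b", "a", "b", "a"]  # class labels
print(information_gain(x, y), symmetrical_uncertainty(x, y))
```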

  11. Information-theoretic signatures of biodiversity in the barcoding gene.

    PubMed

    Barbosa, Valmir C

    2018-08-14

    Analyzing the information content of DNA, though holding the promise to help quantify how the processes of evolution have led to information gain throughout the ages, has remained an elusive goal. Paradoxically, one of the main reasons for this has been precisely the great diversity of life on the planet: if on the one hand this diversity is a rich source of data for information-content analysis, on the other hand there is so much variation as to make the task unmanageable. During the past decade or so, however, succinct fragments of the COI mitochondrial gene, which is present in all animal phyla and in a few others, have been shown to be useful for species identification through DNA barcoding. A few million such fragments are now publicly available through the BOLD systems initiative, thus providing an unprecedented opportunity for relatively comprehensive information-theoretic analyses of DNA to be attempted. Here we show how a generalized form of total correlation can yield distinctive information-theoretic descriptors of the phyla represented in those fragments. In order to illustrate the potential of this analysis to provide new insight into the evolution of species, we performed principal component analysis on standardized versions of the said descriptors for 23 phyla. Surprisingly, we found that, though based solely on the species represented in the data, the first principal component correlates strongly with the natural logarithm of the number of all known living species for those phyla. The new descriptors thus constitute clear information-theoretic signatures of the processes whereby evolution has given rise to current biodiversity, which suggests their potential usefulness in further related studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
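
    The baseline of the generalized measure used above is the total correlation of $n$ random variables, i.e. the difference between the sum of the marginal entropies and the joint entropy (standard definition, stated for orientation):

```latex
C(X_1,\dots,X_n) \;=\; \sum_{i=1}^{n} H(X_i) \;-\; H(X_1,\dots,X_n).
```

    It vanishes when the variables are statistically independent and grows with the amount of shared structure among them.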

  12. Thermodynamic description of non-Markovian information flux of nonequilibrium open quantum systems

    NASA Astrophysics Data System (ADS)

    Chen, Hong-Bin; Chen, Guang-Yin; Chen, Yueh-Nan

    2017-12-01

    One of the fundamental issues in the field of open quantum systems is the classification and quantification of non-Markovianity. In the context of quantity-based measures of non-Markovianity, the intuition of non-Markovianity in terms of information backflow is widely discussed. However, it is not easy to characterize the information flux for a given system state and show its connection to non-Markovianity. Here, using concepts from thermodynamics and information theory, we discuss a potential definition of the information flux of an open quantum system, valid for static environments. We present a simple protocol to show how a system attempts to share information with its environment and how it builds up system-environment correlations. We also show that the information returned from the correlations characterizes the non-Markovianity and a hierarchy of indivisibility of the system dynamics.
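
    System-environment correlations of the kind discussed above are commonly quantified by the quantum mutual information (a standard quantity, not necessarily the paper's exact definition of the information flux):

```latex
\mathcal{I}(S\!:\!E) \;=\; S(\rho_S) + S(\rho_E) - S(\rho_{SE}),
```

    with $S$ the von Neumann entropy and $\rho_S$, $\rho_E$ the reduced states of the system and environment.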

  13. Density functional reactivity theory study of SN2 reactions from the information-theoretic perspective.

    PubMed

    Wu, Zemin; Rong, Chunying; Lu, Tian; Ayers, Paul W; Liu, Shubin

    2015-10-28

    As a continuation of our recent efforts to quantify chemical reactivity with quantities from the information-theoretic approach within the framework of density functional reactivity theory, the effectiveness of applying these quantities to quantify electrophilicity for the bimolecular nucleophilic substitution (SN2) reactions in both gas phase and aqueous solvent is presented in this work. We examined a total of 21 self-exchange SN2 reactions for the compound with the general chemical formula of R1R2R3C-F, where R1, R2, and R3 represent substituting alkyl groups such as -H, -CH3, -C2H5, -C3H7, and -C4H9 in both gas and solvent phases. Our findings confirm that scaling properties for information-theoretic quantities found elsewhere are still valid. It has also been verified that the barrier height has the strongest correlation with the electrostatic interaction, but the contributions from the exchange-correlation and steric effects, though less significant, are indispensable. We additionally unveiled that the barrier height of these SN2 reactions can reliably be predicted not only by the Hirshfeld charge and information gain at the regioselective carbon atom, as previously reported by us for other systems, but also by other information-theoretic descriptors such as Shannon entropy, Fisher information, and Ghosh-Berkowitz-Parr entropy on the same atom. These new findings provide further insights for the better understanding of the factors impacting the chemical reactivity of this vastly important category of chemical transformations.
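
    The density-based descriptors named above are conventionally defined from the electron density $\rho(\mathbf{r})$ and a reference (e.g. promolecular) density $\rho_0(\mathbf{r})$; the standard forms used in the density functional reactivity theory literature are, for orientation (not quoted from the paper):

```latex
S_S = -\int \rho(\mathbf{r})\,\ln\rho(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad
I_F = \int \frac{|\nabla\rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\,\mathrm{d}\mathbf{r}, \qquad
I_G = \int \rho(\mathbf{r})\,\ln\frac{\rho(\mathbf{r})}{\rho_0(\mathbf{r})}\,\mathrm{d}\mathbf{r},
```

    corresponding to the Shannon entropy, the Fisher information and the information gain (Kullback-Leibler divergence) of the density, respectively; atomic contributions such as those on the regioselective carbon are obtained by restricting the integrals to an atomic partition, e.g. Hirshfeld's.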

  14. Theoretical information measurement in nonrelativistic time-dependent approach

    NASA Astrophysics Data System (ADS)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2018-02-01

    The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the first two states n = 0, 1 and obtain the position Sx(t) and momentum Sp(t) Shannon entropies as well as the Fisher information Ix(t) in position space and Ip(t) in momentum space. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs(x,t) and γs(p,t), as well as the probability densities ρ(x,t) and γ(p,t) for time-dependent states, are demonstrated. We establish a general relation between the variance and the Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0, 1.
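
    With the position and momentum densities ρ(x,t) and γ(p,t) introduced above, the entropies and the Bialynicki-Birula-Mycielski (BBM) bound take their usual one-dimensional forms (standard expressions, given for orientation):

```latex
S_x(t) = -\int \rho(x,t)\,\ln\rho(x,t)\,\mathrm{d}x, \qquad
S_p(t) = -\int \gamma(p,t)\,\ln\gamma(p,t)\,\mathrm{d}p, \qquad
S_x(t) + S_p(t) \;\ge\; 1 + \ln\pi .
```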

  15. Absolute quantification of Dehalococcoides proteins: enzyme bioindicators of chlorinated ethene dehalorespiration.

    PubMed

    Werner, Jeffrey J; Ptak, A Celeste; Rahm, Brian G; Zhang, Sheng; Richardson, Ruth E

    2009-10-01

    The quantification of trace proteins in complex environmental samples and mixed microbial communities would be a valuable monitoring tool in countless applications, including the bioremediation of groundwater contaminated with chlorinated solvents. Measuring the concentrations of specific proteins provides unique information about the activity and physiological state of organisms in a sample. We developed sensitive (<5 fmol), selective bioindicator assays for the absolute quantification of select proteins used by Dehalococcoides spp. when reducing carbon atoms in the common pollutants trichloroethene (TCE) and tetrachloroethene (PCE). From complex whole-sample digests of two different dechlorinating mixed communities, we monitored the chromatographic peaks of selected tryptic peptides chosen to represent 19 specific Dehalococcoides proteins. This was accomplished with multiple-reaction monitoring (MRM) assays using nano-liquid chromatography-tandem mass spectrometry (nLC-MS/MS), which provided the selectivity, sensitivity and reproducibility required to quantify Dehalococcoides proteins in complex samples. We observed reproducible peak areas (average CV = 0.14 over 4 days, n = 3) and linear responses in standard curves (n = 5, R² > 0.98) using synthetic peptide standards spiked into a background matrix of sediment peptides. We detected and quantified TCE reductive dehalogenase (TceA) at 7.6 ± 1.7 × 10³ proteins cell⁻¹ in the KB1 bioaugmentation culture, previously thought to be lacking TceA. Fragmentation data from MS/MS shotgun proteomics experiments were helpful in developing the MRM targets. Similar shotgun proteomics data are emerging in labs around the world for many environmentally relevant microbial proteins, and these data are a valuable resource for the future development of MRM assays. We expect targeted peptide quantification in environmental samples to be a useful tool in environmental monitoring.

  16. Isotope-coded ESI-enhancing derivatization reagents for differential analysis, quantification and profiling of metabolites in biological samples by LC/MS: A review.

    PubMed

    Higashi, Tatsuya; Ogawa, Shoujiro

    2016-10-25

    The analysis of the qualitative and quantitative changes of metabolites in body fluids and tissues yields valuable information for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-(tandem) mass spectrometry [LC/ESI-MS(/MS)] has been widely used for these purposes due to the high separation capability of LC, the broad coverage of ESI for various compounds and the high specificity of MS(/MS). However, two major problems remain to be solved for biological sample analysis: lack of sensitivity and the limited availability of stable isotope-labeled analogues (internal standards, ISs) for most metabolites. Stable isotope-coded derivatization (ICD) can address both problems. In ICD, different isotope-coded moieties are introduced into the metabolites, and one of the resulting derivatives can serve as the IS, which minimizes matrix effects. Furthermore, the derivatization can improve the ESI efficiency, the fragmentation properties in MS/MS and the chromatographic behavior of the metabolites, leading to high sensitivity and specificity in the various detection modes. Against this background, this article reviews recently reported isotope-coded ESI-enhancing derivatization (ICEED) reagents, which are key components of ICD-based LC/MS(/MS) studies, and their applications to the detection, identification, quantification and profiling of metabolites in human and animal samples. LC/MS(/MS) using ICEED reagents is a powerful method especially for the differential analysis (relative quantification) of metabolites in two comparative samples, the simultaneous quantification of multiple metabolites whose stable isotope-labeled ISs are not available, and submetabolome profiling. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. On the complex quantification of risk: systems-based perspective on terrorism.

    PubMed

    Haimes, Yacov Y

    2011-08-01

    This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.

  18. Game Theory and Uncertainty Quantification for Cyber Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna

    Cyber-system defenders face the challenging task of protecting critical assets and information continually against multiple types of malicious attackers. Defenders typically operate within resource constraints while attackers operate at relatively low costs. As a result, design and development of resilient cyber-systems that can support mission goals under attack while accounting for the dynamics between attackers and defenders is an important research problem.

  19. Quantifying legacies of clearcut on carbon fluxes and biomass carbon stock in northern temperate forests

    Treesearch

    W. Wang; J. Xiao; S. V. Ollinger; J. Chen; A. Noormets

    2014-01-01

    Stand-replacing disturbances, including harvests, have substantial impacts on forest carbon (C) fluxes and stocks. The quantification and simulation of these effects are essential for better understanding forest C dynamics and informing forest management in the context of global change. We evaluated the process-based forest ecosystem model, PnET-CN, for how well and by...

  20. Suitability of frequency modulated thermal wave imaging for skin cancer detection-A theoretical prediction.

    PubMed

    Bhowmik, Arka; Repaka, Ramjee; Mulaveesala, Ravibabu; Mishra, Subhash C

    2015-07-01

    A theoretical study on the quantification of the surface thermal response of cancerous human skin using the frequency modulated thermal wave imaging (FMTWI) technique is presented in this article. For the first time, the use of the FMTWI technique for the detection and differentiation of skin cancer is demonstrated. A three-dimensional multilayered skin model has been considered, with counter-current blood vessels in the individual skin layers and different stages of cancerous lesions, based on geometrical, thermal and physical parameters available in the literature. Transient surface thermal responses of melanoma during FMTWI of skin cancer have been obtained by integrating the heat transfer model for biological tissue with the flow model for blood vessels. The numerical results show that blood flow in the subsurface region substantially alters the surface thermal response of the human skin. This alteration reduces the performance of the thermal imaging technique during the thermal evaluation of the earliest melanoma stages (small volume) compared to relatively large volumes. Based on this theoretical study, the method is predicted to be suitable for the detection and differentiation of melanoma of comparatively large volume rather than of the earliest development stages (small volume). The study also performed phase-based image analysis of the raw thermograms to resolve the different stages of melanoma volume. The phase images were found to individuate the different development stages of melanoma more clearly than the raw thermograms. Copyright © 2015 Elsevier Ltd. All rights reserved.
