Science.gov

Sample records for accurate quantitative description

  1. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  2. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
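    As a concrete illustration of an abundance statistic that estimates a meaningful community parameter, the sketch below (our own, not from the paper) normalizes read counts by genome length before computing relative abundances, so that values approximate relative cell counts and are comparable across samples:

```python
def relative_cell_abundance(read_counts, genome_lengths):
    """Convert raw read counts into length-normalized relative abundances.

    Raw read counts over-weight long genomes; dividing by genome length gives
    a per-genome coverage proportional to cell count, which is then renormalized.
    """
    coverage = {g: read_counts[g] / genome_lengths[g] for g in read_counts}
    total = sum(coverage.values())
    return {g: c / total for g, c in coverage.items()}

# Equal read counts, but genome B is twice as long, so A is twice as abundant.
abund = relative_cell_abundance({"A": 1000, "B": 1000},
                                {"A": 2_000_000, "B": 4_000_000})
```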

  3. Accurate Variational Description of Adiabatic Quantum Optimization

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Bauer, Bela; Troyer, Matthias

    Adiabatic quantum optimization (AQO) is a quantum computing protocol in which a system is driven by a time-dependent Hamiltonian. The initial Hamiltonian has an easily prepared ground state and the final Hamiltonian encodes some desired optimization problem. An adiabatic time evolution then yields a solution to the optimization problem. Several challenges emerge in the theoretical description of this protocol: on the one hand, the exact simulation of quantum dynamics is exponentially complex in the size of the optimization problem; on the other hand, approximate approaches such as tensor network states (TNS) are limited to small instances by the amount of entanglement that can be encoded. I will present here an extension of the time-dependent Variational Monte Carlo approach to problems in AQO. This approach is based on a general class of (Jastrow-Feenberg) entangled states, whose parameters are evolved in time according to a stochastic variational principle. We demonstrate this approach for optimization problems of the Ising spin-glass type. Very good accuracy is achieved when compared with exact time-dependent TNS on small instances. We then apply this approach to larger problems and discuss the efficiency of the quantum annealing scheme in comparison with its classical counterpart.
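    A Jastrow-Feenberg-type variational state for N Ising spins σᵢ = ±1 can be written schematically as (our notation, not necessarily the authors'):

```latex
\Psi(\sigma_1,\dots,\sigma_N) \;=\; \exp\Big( \sum_i a_i\,\sigma_i \;+\; \sum_{i<j} b_{ij}\,\sigma_i\sigma_j \Big),
```

    where the variational parameters a_i and b_{ij} are the quantities evolved in time according to the stochastic variational principle.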

  4. A new and accurate continuum description of moving fronts

    NASA Astrophysics Data System (ADS)

    Johnston, S. T.; Baker, R. E.; Simpson, M. J.

    2017-03-01

    Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
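    A minimal sketch of the kind of lattice-based random walk that such continuum descriptions approximate: one dimension, with crowding (simple exclusion), movement and birth. The parameters and update rules here are illustrative only, not the authors' model:

```python
import random

def simulate_front(length=200, init_occupied=20, steps=100, p_prolif=0.1, seed=1):
    """1D lattice random walk with crowding: each agent picks a random
    neighbouring site; the attempt fails if that site is occupied. With
    probability p_prolif a daughter is placed there instead of moving.
    No death events in this sketch, so the population can only grow."""
    rng = random.Random(seed)
    lattice = [True] * init_occupied + [False] * (length - init_occupied)
    for _ in range(steps):
        agents = [i for i, occ in enumerate(lattice) if occ]
        rng.shuffle(agents)                     # random sequential update
        for i in agents:
            if not lattice[i]:
                continue                        # site vacated earlier this sweep
            j = i + rng.choice([-1, 1])
            if 0 <= j < length and not lattice[j]:
                if rng.random() < p_prolif:
                    lattice[j] = True                     # birth (crowding-limited)
                else:
                    lattice[i], lattice[j] = False, True  # movement

    return lattice

front = simulate_front()
```

    Averaging many such realisations is exactly the expensive step that mean-field and corrected mean-field continuum descriptions try to avoid.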

  5. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
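    In this spirit, efficiency as robust asymptotic price-value equality can be stated schematically (in our notation, not necessarily the paper's) as:

```latex
\lim_{t \to \infty} \big( p(t) - v(t) \big) \;=\; 0,
```

    for the market price process p and every admissible asset-value process v, with the convergence required to be robust to perturbations of the model.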

  6. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  7. Quantitative proteomic analysis by accurate mass retention time pairs.

    PubMed

    Silva, Jeffrey C; Denny, Richard; Dorschel, Craig A; Gorenstein, Marc; Kass, Ignatius J; Li, Guo-Zhong; McKenna, Therese; Nold, Michael J; Richardson, Keith; Young, Phillip; Geromanos, Scott

    2005-04-01

    Current methodologies for protein quantitation include 2-dimensional gel electrophoresis techniques, metabolic labeling, and stable isotope labeling methods, to name only a few. The current literature illustrates both pros and cons for each of these methodologies. In keeping with the teachings of William of Ockham ("with all things being equal, the simplest solution tends to be correct"), a simple LC/MS-based methodology is presented that allows relative changes in abundance of proteins in highly complex mixtures to be determined. Utilizing a reproducible chromatographic separations system along with the high mass resolution and mass accuracy of an orthogonal time-of-flight mass spectrometer, the quantitative comparison of tens of thousands of ions emanating from identically prepared control and experimental samples can be made. Using this configuration, we can determine the change in relative abundance of a small number of ions between the two conditions solely by accurate mass and retention time. Employing standard operating procedures for both sample preparation and ESI-mass spectrometry, one typically obtains mass precision under 5 ppm and quantitative variation between 10 and 15%. The principal focus of this paper is to demonstrate the quantitative aspects of the methodology, followed by a discussion of the associated, complementary qualitative capabilities.
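    The core idea — pairing ions across runs solely by accurate mass and retention time — can be sketched as follows. The tolerances and the (mass, RT, intensity) feature tuples are illustrative, not the authors' implementation:

```python
def ppm_diff(m1, m2):
    """Mass difference in parts per million."""
    return abs(m1 - m2) / m1 * 1e6

def match_features(run_a, run_b, ppm_tol=5.0, rt_tol=0.5):
    """Pair ions between two LC/MS runs by accurate mass (ppm tolerance) and
    retention time (minutes). Each feature is a (mass, rt, intensity) tuple;
    returns (feature_a, feature_b, intensity_ratio) for every match."""
    pairs = []
    for mass_a, rt_a, int_a in run_a:
        for mass_b, rt_b, int_b in run_b:
            if ppm_diff(mass_a, mass_b) <= ppm_tol and abs(rt_a - rt_b) <= rt_tol:
                pairs.append(((mass_a, rt_a), (mass_b, rt_b), int_b / int_a))
    return pairs

# Hypothetical feature lists: only the first ion pair agrees in both mass and RT.
control = [(785.8421, 23.4, 1.0e5), (612.3302, 30.1, 4.0e4)]
treated = [(785.8405, 23.6, 1.5e5), (998.4511, 12.0, 2.0e4)]
matches = match_features(control, treated)
```

    A production implementation would index features (e.g. by mass bins) rather than compare all pairs, but the matching criterion is the same.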

  8. A Quantitative Description of FBI Public Relations.

    ERIC Educational Resources Information Center

    Gibson, Dirk C.

    1997-01-01

    States that the Federal Bureau of Investigation (FBI) had the most successful media relations program of all government agencies from the 1930s to the 1980s. Uses quantitative analysis to show why those media efforts were successful. Identifies themes that typified the verbal component of FBI publicity and the broad spectrum of mass communication…

  9. From information theory to quantitative description of steric effects.

    PubMed

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply the information theory descriptors for investigating the electronic structure theory of various systems. In the present study, the information theoretic quantities, such as Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description for one of the most widely used concepts in chemistry, namely the steric effects. Taking the experimental steric scales for the different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and theoretical values of steric energies calculated from information theory functionals. Perusing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for this purpose. The usefulness of considering the contributions of functional-group steric energies and geometries, as well as of dissecting the effects of both global and local information measures simultaneously, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the capacity of information theory to help theoreticians and experimentalists interpret different problems in real systems.
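    As a toy illustration of one of the quantities named above, the Shannon entropy of a discretized, normalized shape function σ(x) can be computed as S = -∫ σ(x) ln σ(x) dx; for a unit-variance Gaussian this integral has the closed form ½ ln(2πe):

```python
import math

def shannon_entropy(density, dx):
    """S = -∫ σ(x) ln σ(x) dx for a density sampled on a uniform 1D grid;
    the density is first normalized to unit integral (a shape function)."""
    total = sum(density) * dx
    sigma = [d / total for d in density]
    return -sum(s * math.log(s) * dx for s in sigma if s > 0)

# Gaussian shape function on [-10, 10]; analytic result is 0.5*ln(2*pi*e).
xs = [i * 0.01 - 10 for i in range(2001)]
rho = [math.exp(-x * x / 2) for x in xs]
S = shannon_entropy(rho, 0.01)
```

    In the molecular setting the integral runs over the 3D electron density or shape function rather than a 1D grid, but the functional form is the same.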

  10. Accurate description of argon and water adsorption on surfaces of graphene-based carbon allotropes.

    PubMed

    Kysilka, Jiří; Rubeš, Miroslav; Grajciar, Lukáš; Nachtigall, Petr; Bludský, Ota

    2011-10-20

    Accurate interaction energies of nonpolar (argon) and polar (water) adsorbates with graphene-based carbon allotropes were calculated by means of a combined density functional theory (DFT)-ab initio computational scheme. The calculated interaction energy of argon with graphite (-9.7 kJ mol(-1)) is in excellent agreement with the available experimental data. The calculated interaction energy of water with graphene and graphite is -12.8 and -14.6 kJ mol(-1), respectively. The accuracy of combined DFT-ab initio methods is discussed in detail based on a comparison with the highly precise interaction energies of argon and water with coronene obtained at the coupled-cluster CCSD(T) level extrapolated to the complete basis set (CBS) limit. A new strategy for a reliable estimate of the CBS limit is proposed for systems where numerical instabilities occur owing to basis-set near-linear dependence. The most accurate estimate of the argon and water interaction with coronene (-8.1 and -14.0 kJ mol(-1), respectively) is compared with the results of other methods used for the accurate description of weak intermolecular interactions.
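    One common CBS estimate is the Helgaker-type two-point X⁻³ extrapolation of correlation energies, shown here for orientation; the paper proposes its own strategy for cases where basis-set near-linear dependence causes numerical instabilities:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point X**-3 extrapolation of correlation energies obtained with
    basis sets of cardinal numbers x < y (e.g. 3 = TZ, 4 = QZ):
        E_CBS = (y**3 * E_y - x**3 * E_x) / (y**3 - x**3)
    """
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Hypothetical CCSD(T) correlation energies (hartree) in TZ and QZ bases.
e_cbs = cbs_two_point(-0.350, 3, -0.360, 4)
```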

  11. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads

    PubMed Central

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-01-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm that uses the whole-read information for mapping; high sensitivity and low ambiguity are achieved by using short, non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets. PMID:22379138
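    The seed-based idea can be sketched as follows: exact hits of non-overlapping seeds nominate candidate alignment positions, which are then verified against the whole read under a mismatch allowance. This is a toy sketch without indel handling or the hotspot/Smith-Waterman refinement the paper describes:

```python
def seed_positions(read, genome, seed_len=8):
    """Candidate alignment start positions from exact matches of
    non-overlapping seeds taken along the read."""
    candidates = set()
    for offset in range(0, len(read) - seed_len + 1, seed_len):
        seed = read[offset:offset + seed_len]
        start = genome.find(seed)
        while start != -1:
            candidates.add(start - offset)   # where the whole read would start
            start = genome.find(seed, start + 1)
    return candidates

def map_read(read, genome, max_mismatch=2, seed_len=8):
    """Verify each candidate position with a full-read mismatch count."""
    hits = []
    for pos in seed_positions(read, genome, seed_len):
        if 0 <= pos <= len(genome) - len(read):
            mism = sum(a != b for a, b in zip(read, genome[pos:pos + len(read)]))
            if mism <= max_mismatch:
                hits.append((pos, mism))
    return sorted(hits)

genome = "ACGTACGTTTGCAGGCTAACGTTAGC"   # toy reference
hits = map_read("TTGCAGGCTAACGTTA", genome)  # → [(8, 0)]
```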

  12. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads.

    PubMed

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-06-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm that uses the whole-read information for mapping; high sensitivity and low ambiguity are achieved by using short, non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets.

  13. Metal cutting simulation of 4340 steel using an accurate mechanical description of material strength and fracture

    SciTech Connect

    Maudlin, P.J.; Stout, M.G.

    1996-09-01

    Strength and fracture constitutive relationships containing strain-rate dependence and thermal softening are important for accurate simulation of metal cutting. The mechanical behavior of a hardened 4340 steel was characterized using the von Mises yield function, the Mechanical Threshold Stress model and the Johnson-Cook fracture model. This constitutive description was implemented into the explicit Lagrangian FEM continuum-mechanics code EPIC, and orthogonal plane-strain metal cutting calculations were performed. Heat conduction and friction at the tool-workpiece interface were included in the simulations. These transient calculations were advanced in time until steady-state machining behavior (force) was realized. Experimental cutting force data (cutting and thrust forces) were measured for a planing operation and compared to the calculations. 13 refs., 6 figs.

  14. Accurate electronic-structure description of Mn complexes: a GGA+U approach

    NASA Astrophysics Data System (ADS)

    Li, Elise Y.; Kulik, Heather; Marzari, Nicola

    2008-03-01

    Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics in transition-metal complexes. We will focus here on Mn complexes, where many aspects of the molecular structure and the reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and the manganese catalase (MC). We apply a self-consistent GGA+U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometry, the electronic and the magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n = 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1] H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett., 2006, 97, 103001
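    For orientation, in the rotationally invariant (Dudarev-type) formulation commonly used in such calculations, the Hubbard correction takes the form below; the self-consistent scheme of Ref. [1] additionally determines U from linear response rather than treating it as an empirical parameter:

```latex
E_{\mathrm{GGA}+U} \;=\; E_{\mathrm{GGA}} \;+\; \sum_{I} \frac{U_I}{2}\, \mathrm{Tr}\!\left[ n^{I} \left( 1 - n^{I} \right) \right],
```

    where n^I is the occupation matrix of the localized manifold (here the Mn 3d states) on site I; the correction penalizes fractional occupations and thereby corrects the spin-resolved energetics.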

  15. A General Pairwise Interaction Model Provides an Accurate Description of In Vivo Transcription Factor Binding Sites

    PubMed Central

    Santolini, Marc; Mora, Thierry; Hakim, Vincent

    2014-01-01

    The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
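    Schematically (our notation), the PIM assigns each candidate binding-site sequence s an energy that is additive in single-nucleotide terms, as in a PWM, plus additive pairwise terms:

```latex
E(s) \;=\; -\sum_{i} h_i(s_i) \;-\; \sum_{i<j} J_{ij}(s_i, s_j),
```

    with the PWM model recovered exactly when all pairwise couplings J_{ij} vanish; maximum entropy fixes the h and J so that the model reproduces the observed single-site and pairwise nucleotide statistics and nothing more.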

  16. Molecular acidity: A quantitative conceptual density functional theory description.

    PubMed

    Liu, Shubin; Schauer, Cynthia K; Pedersen, Lee G

    2009-10-28

    Accurate prediction of molecular acidity using ab initio and density functional approaches is still a daunting task. Using electronic and reactivity properties, one can quantitatively estimate pKa values of acids. In a recent paper [S. B. Liu and L. G. Pedersen, J. Phys. Chem. A 113, 3648 (2009)], we employed the molecular electrostatic potential (MEP) on the nucleus and the sum of valence natural atomic orbital (NAO) energies for this purpose. In this work, we reformulate these relationships on the basis of conceptual density functional theory and compare the results with those from the thermodynamic cycle method. We show that the MEP and NAO properties of the dissociating proton of an acid should satisfy the same relationships with experimental pKa data. We employ 27 main-group and first- to third-row transition metal-water complexes as illustrative examples to numerically verify the validity of these strong linear correlations. Results also show that the accuracy of our approach and that of the conventional method through the thermodynamic cycle are statistically similar.

  17. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations

    NASA Astrophysics Data System (ADS)

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-01

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
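    Such analyses typically fit photoluminescence spectra to the generalized Planck law, stated here for orientation; the paper's point is that the absorptivity a(E) varies and must not be treated as constant when extracting the carrier parameters:

```latex
I_{\mathrm{PL}}(E) \;\propto\; a(E)\, \frac{E^2}{\exp\!\big( (E - \Delta\mu) / k_B T \big) \,-\, 1},
```

    where T is the carrier temperature and Δμ the chemical potential (quasi-Fermi-level splitting) of the hot carrier population; both are obtained by fitting the high-energy tail of the measured spectrum.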

  18. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  19. Learning to write without writing: writing accurate descriptions of interactions after learning graph-printed description relations.

    PubMed

    Spear, Jack; Fields, Lanny

    2015-12-01

    Interpreting and describing complex information shown in graphs are essential skills to be mastered by students in many disciplines; both are skills that are difficult to learn. Thus, interventions that produce these outcomes are of great value. Previous research showed that conditional discrimination training that established stimulus control by some elements of graphs and their printed descriptions produced some improvement in the accuracy of students' written descriptions of graphs. In the present experiment, students wrote nearly perfect descriptions of the information conveyed in interaction-based graphs after the establishment of conditional relations between graphs and their printed descriptions. This outcome was achieved with the use of special conditional discrimination training procedures that required participants to attend to many of the key elements of the graphs and the phrases in the printed descriptions that corresponded to the elements in the graphs. Thus, students learned to write full descriptions of the information represented by complex graphs by an automated training procedure that did not involve the direct training of writing.

  20. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    NASA Astrophysics Data System (ADS)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. To date, the biological auto-fluorescence background upon UV-Vis excitation and the severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonance energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and the self-ratiometric reference signal, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Owing to efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change of pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
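    The ratiometric readout reduces to dividing the pH-sensitive band by the pH-insensitive reference band and inverting a calibration curve. The linear calibration, slope and reference point below are hypothetical placeholders, not the paper's calibration:

```python
def ph_from_ratio(i_475, i_645, slope=3.56, ratio_at_ph7=1.0):
    """Self-ratiometric readout sketch: the FITC-modulated 475 nm band is
    referenced to the pH-insensitive 645 nm band, and a (hypothetical)
    linear calibration maps the ratio to pHi."""
    ratio = i_475 / i_645
    return 7.0 + (ratio - ratio_at_ph7) / slope

# Because the reference band is insensitive to pH, fluctuations in excitation
# power or probe concentration cancel in the ratio.
ph_neutral = ph_from_ratio(1.0, 1.0)
```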

  21. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    PubMed Central

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-01-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. To date, the biological auto-fluorescence background upon UV-Vis excitation and the severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonance energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and the self-ratiometric reference signal, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Owing to efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change of pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems. PMID:27934889

  22. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  23. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded its use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping rather than a continuous ion beam, further refinement of the technique can be expected.

  24. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
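    A toy version of MAP estimation from a precomputed likelihood table: here a Gaussian stands in for the Monte-Carlo PDF, and a flat prior is assumed, whereas the actual estimator conditions on both the measured retardation and the measured SNR:

```python
import math

def map_estimate(measurement, candidates, likelihood):
    """Flat-prior MAP: return the candidate true retardation maximizing the
    (precomputed) likelihood of producing the observed measurement."""
    return max(candidates, key=lambda c: likelihood(measurement, c))

def gaussian_likelihood(measured, true_value, sigma=5.0):
    # stand-in for the Monte-Carlo PDF of measured retardation given the true value
    return math.exp(-((measured - true_value) ** 2) / (2 * sigma ** 2))

candidates = range(0, 91, 5)  # candidate true retardations, degrees
estimate = map_estimate(42.0, candidates, gaussian_likelihood)
```

    With a noise-bias-aware likelihood (the paper's MC-derived PDF), the arg-max shifts away from the biased mean, which is precisely how the estimator removes the offset that plain averaging introduces.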

  5. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy has gained recognition as an excellent measurement tool that does not require a reference standard identical to the analyte. Measurement parameters have been discussed in detail and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools, and many analysts may find them difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and sample amount were weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
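
    The internal-standard qNMR assay behind this study reduces to a single relation: the analyte purity follows from the ratio of integrated signal areas, proton counts, molar masses, and weighed masses against a standard of known purity. A minimal sketch (the numerical values are invented purely for illustration, not taken from the paper):

```python
def qnmr_purity(I_a, I_s, N_a, N_s, M_a, M_s, m_a, m_s, P_s):
    """Analyte purity from a single internal-standard qNMR measurement.

    I: integrated signal area, N: number of protons behind the signal,
    M: molar mass (g/mol), m: weighed mass (mg), P_s: purity of the standard.
    """
    return (I_a / I_s) * (N_s / N_a) * (M_a / M_s) * (m_s / m_a) * P_s

# Illustrative numbers only (assumed, not from the paper):
p = qnmr_purity(I_a=1.007, I_s=1.000, N_a=3, N_s=3,
                M_a=180.16, M_s=122.12, m_a=15.0, m_s=10.0, P_s=0.9999)
print(f"{p:.4f}")  # ≈ 0.990
```

    Note how the weighed masses m_a and m_s enter the relation directly: any weighing error propagates linearly into the purity result, which is why balance resolution relative to sample amount matters.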

  6. SILAC-Based Quantitative Strategies for Accurate Histone Posttranslational Modification Profiling Across Multiple Biological Samples.

    PubMed

    Cuomo, Alessandro; Soldi, Monica; Bonaldi, Tiziana

    2017-01-01

    Histone posttranslational modifications (hPTMs) play a key role in regulating chromatin dynamics and fine-tuning DNA-based processes. Mass spectrometry (MS) has emerged as a versatile technology for the analysis of histones, contributing to the dissection of hPTMs, with special strength in the identification of novel marks and in the assessment of modification cross talks. Stable isotope labeling by amino acid in cell culture (SILAC), when adapted to histones, permits the accurate quantification of PTM changes among distinct functional states; however, its application has been mainly confined to actively dividing cell lines. A spike-in strategy based on SILAC can be used to overcome this limitation and profile hPTMs across multiple samples. We describe here the adaptation of SILAC to the analysis of histones, in both standard and spike-in setups. We also illustrate its coupling to an implemented "shotgun" workflow, by which heavy arginine-labeled histone peptides, produced upon Arg-C digestion, are qualitatively and quantitatively analyzed in an LC-MS/MS system that combines ultrahigh-pressure liquid chromatography (UHPLC) with new-generation Orbitrap high-resolution instrument.

  7. Accurate detection and quantitation of heteroplasmic mitochondrial point mutations by pyrosequencing.

    PubMed

    White, Helen E; Durston, Victoria J; Seller, Anneke; Fratter, Carl; Harvey, John F; Cross, Nicholas C P

    2005-01-01

    Disease-causing mutations in mitochondrial DNA (mtDNA) are typically heteroplasmic and therefore interpretation of genetic tests for mitochondrial disorders can be problematic. Detection of low level heteroplasmy is technically demanding and it is often difficult to discriminate between the absence of a mutation or the failure of a technique to detect the mutation in a particular tissue. The reliable measurement of heteroplasmy in different tissues may help identify individuals who are at risk of developing specific complications and allow improved prognostic advice for patients and family members. We have evaluated Pyrosequencing technology for the detection and estimation of heteroplasmy for six mitochondrial point mutations associated with the following diseases: Leber's hereditary optical neuropathy (LHON), G3460A, G11778A, and T14484C; mitochondrial encephalopathy with lactic acidosis and stroke-like episodes (MELAS), A3243G; myoclonus epilepsy with ragged red fibers (MERRF), A8344G, and neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP)/Leighs: T8993G/C. Results obtained from the Pyrosequencing assays for 50 patients with presumptive mitochondrial disease were compared to those obtained using the commonly used diagnostic technique of polymerase chain reaction (PCR) and restriction enzyme digestion. The Pyrosequencing assays provided accurate genotyping and quantitative determination of mutational load with a sensitivity and specificity of 100%. The MELAS A3243G mutation was detected reliably at a level of 1% heteroplasmy. We conclude that Pyrosequencing is a rapid and robust method for detecting heteroplasmic mitochondrial point mutations.
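
    The quantitation step above amounts to expressing the mutational load as the mutant fraction of the total allele signal at the interrogated position. A minimal sketch (the peak intensities are hypothetical, and real pyrosequencing assays apply further normalization):

```python
def heteroplasmy_percent(mutant_peak, wildtype_peak):
    """Mutational load as the mutant fraction of the total allele signal."""
    total = mutant_peak + wildtype_peak
    if total == 0:
        raise ValueError("no signal at the interrogated position")
    return 100.0 * mutant_peak / total

# e.g. an A3243G-style assay: hypothetical intensities for the G (mutant)
# and A (wild-type) dispensations at the mutation site
load = heteroplasmy_percent(mutant_peak=12.0, wildtype_peak=388.0)
print(f"{load:.1f}%")  # 3.0%
```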

  8. Quantitative Description of a Protein Fitness Landscape Based on Molecular Features

    PubMed Central

    Meini, María-Rocío; Tomatis, Pablo E.; Weinreich, Daniel M.; Vila, Alejandro J.

    2015-01-01

    Understanding the driving forces behind protein evolution requires the ability to correlate the molecular impact of mutations with organismal fitness. To address this issue, we employ here metallo-β-lactamases as a model system, which are Zn(II) dependent enzymes that mediate antibiotic resistance. We present a study of all the possible evolutionary pathways leading to a metallo-β-lactamase variant optimized by directed evolution. By studying the activity, stability and Zn(II) binding capabilities of all mutants in the preferred evolutionary pathways, we show that this local fitness landscape is strongly conditioned by epistatic interactions arising from the pleiotropic effect of mutations in the different molecular features of the enzyme. Activity and stability assays in purified enzymes do not provide explanatory power. Instead, measurement of these molecular features in an environment resembling the native one provides an accurate description of the observed antibiotic resistance profile. We report that optimization of Zn(II) binding abilities of metallo-β-lactamases during evolution is more critical than stabilization of the protein to enhance fitness. A global analysis of these parameters allows us to connect genotype with fitness based on quantitative biochemical and biophysical parameters. PMID:25767204

  9. Quantitative Description of a Protein Fitness Landscape Based on Molecular Features.

    PubMed

    Meini, María-Rocío; Tomatis, Pablo E; Weinreich, Daniel M; Vila, Alejandro J

    2015-07-01

    Understanding the driving forces behind protein evolution requires the ability to correlate the molecular impact of mutations with organismal fitness. To address this issue, we employ here metallo-β-lactamases as a model system, which are Zn(II) dependent enzymes that mediate antibiotic resistance. We present a study of all the possible evolutionary pathways leading to a metallo-β-lactamase variant optimized by directed evolution. By studying the activity, stability and Zn(II) binding capabilities of all mutants in the preferred evolutionary pathways, we show that this local fitness landscape is strongly conditioned by epistatic interactions arising from the pleiotropic effect of mutations in the different molecular features of the enzyme. Activity and stability assays in purified enzymes do not provide explanatory power. Instead, measurement of these molecular features in an environment resembling the native one provides an accurate description of the observed antibiotic resistance profile. We report that optimization of Zn(II) binding abilities of metallo-β-lactamases during evolution is more critical than stabilization of the protein to enhance fitness. A global analysis of these parameters allows us to connect genotype with fitness based on quantitative biochemical and biophysical parameters.

  10. Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects

    PubMed Central

    Yuk, Simuck F.; Pitike, Krishna Chaitanya; Nakhmanson, Serge M.; Eisenbach, Markus; Li, Ying Wai; Cooper, Valentino R.

    2017-01-01

    Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes. PMID:28256544

  11. Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects.

    PubMed

    Yuk, Simuck F; Pitike, Krishna Chaitanya; Nakhmanson, Serge M; Eisenbach, Markus; Li, Ying Wai; Cooper, Valentino R

    2017-03-03

    Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes.

  12. Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects

    DOE PAGES

    Yuk, Simuck F.; Pitike, Krishna Chaitanya; Nakhmanson, Serge M.; ...

    2017-03-03

    Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes.

  13. Accurate description of the electronic structure of organic semiconductors by GW methods

    NASA Astrophysics Data System (ADS)

    Marom, Noa

    2017-03-01

    Electronic properties associated with charged excitations, such as the ionization potential (IP), the electron affinity (EA), and the energy level alignment at interfaces, are critical parameters for the performance of organic electronic devices. To computationally design organic semiconductors and functional interfaces with tailored properties for target applications it is necessary to accurately predict these properties from first principles. Many-body perturbation theory is often used for this purpose within the GW approximation, where G is the one particle Green’s function and W is the dynamically screened Coulomb interaction. Here, the formalism of GW methods at different levels of self-consistency is briefly introduced and some recent applications to organic semiconductors and interfaces are reviewed.

  14. A Novel Approach to Teach the Generation of Bioelectrical Potentials from a Descriptive and Quantitative Perspective

    ERIC Educational Resources Information Center

    Rodriguez-Falces, Javier

    2013-01-01

    In electrophysiology studies, it is becoming increasingly common to explain experimental observations using both descriptive methods and quantitative approaches. However, some electrophysiological phenomena, such as the generation of extracellular potentials that results from the propagation of the excitation source along the muscle fiber, are…

  15. Leadership Styles at Middle- and Early-College Programs: A Quantitative Descriptive Correlational Study

    ERIC Educational Resources Information Center

    Berksteiner, Earl J.

    2013-01-01

    The purpose of this quantitative descriptive correlational study was to determine if associations existed between middle- and early-college (MEC) principals' leadership styles, teacher motivation, and teacher satisfaction. MEC programs were programs designed to assist high school students who were not served well in a traditional setting (Middle…

  16. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  17. Towards an accurate model of redshift-space distortions: a bivariate Gaussian description for the galaxy pairwise velocity distributions

    NASA Astrophysics Data System (ADS)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2016-10-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ2. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
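
    The construction above, a Gaussian mixture whose mean and dispersion are themselves bivariate-Gaussian distributed, can be sketched by Monte-Carlo marginalization. The hyperparameter values below are invented for illustration, not fitted to any survey or simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hyperparameters of the bivariate Gaussian for (mu, sigma) -- illustrative
# values only.
mean = np.array([-1.0, 3.0])            # <mu>, <sigma>
cov = np.array([[0.5, 0.3],
                [0.3, 0.4]])

def pairwise_velocity_pdf(v, n_draws=50_000):
    """P(v) as a superposition of Gaussians whose mean and dispersion are
    drawn from a bivariate Gaussian (Monte-Carlo marginalization)."""
    mu, sigma = rng.multivariate_normal(mean, cov, n_draws).T
    sigma = np.abs(sigma)               # dispersion must be positive
    v = np.atleast_1d(v)[:, None]
    comp = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return comp.mean(axis=1)

v = np.linspace(-15.0, 15.0, 61)
p = pairwise_velocity_pdf(v)
# Correlation between mu and sigma generates skewness and heavier-than-
# Gaussian tails in the mixture; the result is still normalized.
print((p * (v[1] - v[0])).sum())
```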

  18. Accurate description of intermolecular interactions involving ions using symmetry-adapted perturbation theory.

    PubMed

    Lao, Ka Un; Schäffer, Rainer; Jansen, Georg; Herbert, John M

    2015-06-09

    Three new data sets for intermolecular interactions, AHB21 for anion-neutral dimers, CHB6 for cation-neutral dimers, and IL16 for ion pairs, are assembled here, with complete-basis CCSD(T) results for each. These benchmarks are then used to evaluate the accuracy of the single-exchange approximation that is used for exchange energies in symmetry-adapted perturbation theory (SAPT), and the accuracy of SAPT based on wave function and density-functional descriptions of the monomers is evaluated. High-level SAPT calculations afford poor results for these data sets, and this includes the recently proposed "gold", "silver", and "bronze standards" of SAPT, namely, SAPT2+(3)-δMP2/aug-cc-pVTZ, SAPT2+/aug-cc-pVDZ, and sSAPT0/jun-cc-pVDZ, respectively [Parker, T. M., et al., J. Chem. Phys. 2014, 140, 094106]. Especially poor results are obtained for symmetric shared-proton systems of the form X(-)···H(+)···X(-), for X = F, Cl, or OH. For the anionic data set, the SAPT2+(CCD)-δMP2/aug-cc-pVTZ method exhibits the best performance, with a mean absolute error (MAE) of 0.3 kcal/mol and a maximum error of 0.7 kcal/mol. For the cationic data set, the highest-level SAPT method, SAPT2+3-δMP2/aug-cc-pVQZ, outperforms the rest of the SAPT methods, with a MAE of 0.2 kcal/mol and a maximum error of 0.4 kcal/mol. For the ion-pair data set, the SAPT2+3-δMP2/aug-cc-pVTZ performs the best among all SAPT methods with a MAE of 0.3 kcal/mol and a maximum error of 0.9 kcal/mol. Overall, SAPT2+3-δMP2/aug-cc-pVTZ affords a small and balanced MAE (<0.5 kcal/mol) for all three data sets, with an overall MAE of 0.4 kcal/mol. Despite the breakdown of perturbation theory for ionic systems at short-range, SAPT can still be saved given two corrections: a "δHF" correction, which requires a supermolecular Hartree-Fock calculation to incorporate polarization effects beyond second order, and a "δMP2" correction, which requires a supermolecular MP2 calculation to account for higher

  19. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathology images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of colors, shapes, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  20. Does a more precise chemical description of protein-ligand complexes lead to more accurate prediction of binding affinity?

    PubMed

    Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L

    2014-03-24

    Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
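
    The machine-learning scoring functions discussed above replace a fixed functional form with a regression model trained on structural descriptors. The sketch below mimics that setup with a random-forest regressor on synthetic data; the feature layout (occurrence counts of protein-ligand atom-type pairs) follows the general style of such descriptors, but the data and the hidden affinity rule are entirely invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a protein-ligand benchmark: each "complex" is
# described by 36 occurrence counts of atom-type pairs; the affinities
# come from a hidden nonlinear rule, purely for illustration.
X = rng.poisson(5.0, size=(500, 36)).astype(float)
y = (0.05 * X[:, 0] * X[:, 1]
     + 0.3 * np.sqrt(X[:, 2:6].sum(axis=1))
     + rng.normal(0.0, 0.2, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test R^2 = {model.score(X_te, y_te):.2f}")
```

    The regression model circumvents the need to posit explicit energy terms, which is exactly the flexibility the paper probes when asking whether richer descriptions help.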

  1. Towards a more accurate microscopic description of the moving contact line problem - incorporating nonlocal effects through a statistical mechanics framework

    NASA Astrophysics Data System (ADS)

    Nold, Andreas; Goddard, Ben; Sibley, David; Kalliadasis, Serafim

    2014-03-01

    Multiscale effects play a predominant role in wetting phenomena such as the moving contact line. An accurate description is of paramount interest for a wide range of industrial applications, yet it is a matter of ongoing research, due to the difficulty of incorporating different physical effects in one model. Important small-scale phenomena are corrections to the attractive fluid-fluid and wall-fluid forces in inhomogeneous density distributions, which often previously have been accounted for by the disjoining pressure in an ad-hoc manner. We systematically derive a novel model for the description of a single-component liquid-vapor multiphase system which inherently incorporates these nonlocal effects. This derivation, which is inspired by statistical mechanics in the framework of colloidal density functional theory, is critically discussed with respect to its assumptions and restrictions. The model is then employed numerically to study a moving contact line of a liquid fluid displacing its vapor phase. We show how nonlocal physical effects are inherently incorporated by the model and describe how classical macroscopic results for the contact line motion are retrieved. We acknowledge financial support from ERC Advanced Grant No. 247031 and Imperial College through a DTG International Studentship.

  2. How accurate is the Kubelka-Munk theory of diffuse reflection? A quantitative answer

    NASA Astrophysics Data System (ADS)

    Joseph, Richard I.; Thomas, Michael E.

    2012-10-01

    The (heuristic) Kubelka-Munk theory of diffuse reflectance and transmittance of a film on a substrate, which is widely used because it gives simple analytic results, is compared to the rigorous radiative transfer model of Chandrasekhar. The rigorous model has to be numerically solved, thus is less intuitive. The Kubelka-Munk theory uses an absorption coefficient and scatter coefficient as inputs, similar to the rigorous model of Chandrasekhar. The relationship between these two sets of coefficients is addressed. It is shown that the Kubelka-Munk theory is remarkably accurate if one uses the proper albedo parameter.
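
    The analytic simplicity praised above is easy to demonstrate. The standard Kubelka-Munk result for a film of thickness X with absorption and scattering coefficients K and S on a substrate of reflectance Rg has a closed form, and in the optically thick limit the remission function recovers K/S. A sketch (parameter values are arbitrary):

```python
import math

def km_reflectance(K, S, X, Rg):
    """Kubelka-Munk diffuse reflectance of a film of thickness X (absorption
    coefficient K, scattering coefficient S, both per unit length) on a
    substrate of reflectance Rg."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    coth = 1.0 / math.tanh(b * S * X)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

# As X -> infinity the result approaches R_inf, and the remission function
# (1 - R)^2 / (2 R) recovers K/S:
R_inf = km_reflectance(K=0.1, S=1.0, X=1e3, Rg=0.0)
print(R_inf, (1.0 - R_inf) ** 2 / (2.0 * R_inf))  # second value ≈ K/S = 0.1
```

    The paper's question is then how these heuristic K and S relate to the absorption and scatter coefficients of the rigorous radiative transfer model, which has no such closed form.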

  3. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    PubMed

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng mL(-1) for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng mL(-1) and 21-1000 ng mL(-1), respectively.

  4. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling.

    PubMed

    Boers, Stefan A; Hays, John P; Jansen, Ruud

    2017-04-05

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison.
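
    The calibration logic described above reduces to two arithmetic steps: scale each OTU's read count by the copies-per-read factor implied by the spiked-in calibrator, then subtract the absolute abundance measured in the negative extraction control. A minimal sketch with hypothetical read counts (the spike-in level and OTU names are invented):

```python
def absolute_abundance(otu_reads, calibrator_reads, calibrator_copies):
    """Convert OTU read counts to 16S rRNA gene copies using a spiked-in
    internal calibrator of known copy number."""
    copies_per_read = calibrator_copies / calibrator_reads
    return {otu: reads * copies_per_read for otu, reads in otu_reads.items()}

# Hypothetical sample: the calibrator, spiked at 1e4 copies, yielded
# 500 reads in the sample and 400 in the negative extraction control.
sample = {"OTU_1": 2500, "OTU_2": 250, "OTU_3": 5}
blank = {"OTU_3": 5}    # contamination seen in the negative control

abs_sample = absolute_abundance(sample, calibrator_reads=500,
                                calibrator_copies=1e4)
abs_blank = absolute_abundance(blank, calibrator_reads=400,
                               calibrator_copies=1e4)

# Subtract the contamination measured in the negative control:
corrected = {otu: max(copies - abs_blank.get(otu, 0.0), 0.0)
             for otu, copies in abs_sample.items()}
print(corrected)
```

    Because each sample carries its own calibrator, the correction automatically absorbs per-sample differences in NGS efficiency, which is what makes the resulting profiles comparable across studies.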

  5. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling

    PubMed Central

    Boers, Stefan A.; Hays, John P.; Jansen, Ruud

    2017-01-01

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789

  6. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis

    PubMed Central

    Smith, William L.; Chadwick, Sean G.; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E.; Aguin, Tina J.; Sobel, Jack D.

    2016-01-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
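
    The diagnostic model above is a logistic regression over qPCR quantities of the informative organisms, evaluated by sensitivity and specificity. The sketch below reproduces that shape on synthetic data; the feature distributions, effect sizes, and sample size are invented and bear no relation to the study's actual measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for log10 qPCR loads of four informative targets
# (in the spirit of G. vaginalis, A. vaginae, Megasphaera types 1 and 2);
# BV-positive specimens carry higher loads on average. Illustrative only.
n = 400
bv = rng.integers(0, 2, n)                       # 0 = negative, 1 = BV
X = rng.normal(3.0, 1.0, (n, 4)) + 2.5 * bv[:, None]

model = LogisticRegression().fit(X, bv)
pred = model.predict(X)

tp = np.sum((pred == 1) & (bv == 1))
tn = np.sum((pred == 0) & (bv == 0))
sens = tp / np.sum(bv == 1)
spec = tn / np.sum(bv == 0)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

    Positive and negative predictive values follow from the same confusion-matrix counts, which is how the study reports its 94%/94% figures.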

  7. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, N.; Schaffenroth, V.; Nieva, M. F.; Butler, K.

    2016-10-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields that drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by the application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true: observed and model spectra can now be brought into close agreement over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for a wide range of applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab-initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be the focus in the era of the upcoming extremely large telescopes.

  8. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  9. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    PubMed

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible.

  10. Quantitation of Insulin-Like Growth Factor 1 in Serum by Liquid Chromatography High Resolution Accurate-Mass Mass Spectrometry.

    PubMed

    Ketha, Hemamalini; Singh, Ravinder J

    2016-01-01

    Insulin-like growth factor 1 (IGF-1) is a 70-amino-acid peptide hormone that acts as the principal mediator of the effects of growth hormone (GH). Because of the wide variability in circulating concentrations of GH, IGF-1 quantitation is the first step in the diagnosis of GH excess or deficiency. The majority (>95%) of IGF-1 circulates as a ternary complex with its principal binding protein, insulin-like growth factor binding protein 3 (IGFBP-3), and the acid-labile subunit. Any assay design for IGF-1 quantitation must therefore include a step to dissociate IGF-1 from this ternary complex; several commercial assays employ a buffer containing acidified ethanol to achieve this. Despite several modifications, commercially available immunoassays have been shown to suffer from interference by IGFBP-3, and inter-method comparisons between IGF-1 immunoassays have been shown to be suboptimal. Mass spectrometry has been utilized for quantitation of IGF-1. In this chapter, a liquid chromatography high-resolution accurate-mass mass spectrometry (LC-HRAMS) based method for IGF-1 quantitation is described.

  11. Quantitative methods for three-dimensional comparison and petrographic description of chondrites

    SciTech Connect

    Friedrich, J.M.

    2008-10-20

    X-ray computed tomography can be used to generate three-dimensional (3D) volumetric representations of chondritic meteorites. One of the challenges of using collected X-ray tomographic data is the extraction of useful data for 3D petrographic analysis or description. Here, I examine computer-aided quantitative 3D texture metrics that can be used for the classification of chondritic meteorites. These quantitative techniques are extremely useful for discriminating between chondritic materials, but yield little information on the 3D morphology of chondrite components. To investigate the morphology of chondrite minerals such as Fe(Ni) metal and related sulfides, the homology descriptors known as Betti numbers are examined. Both methodologies are illustrated with theoretical discussion and examples. Betti numbers may be valuable for examining the nature of metal-silicate structural changes within chondrites with increasing degrees of metamorphism.
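The homology descriptors mentioned above can be illustrated on a toy 2D binary image. This is a simplified sketch (one connectivity convention throughout; rigorous digital topology pairs complementary connectivities for foreground and background): β0 counts connected components and β1 counts enclosed holes.

```python
import numpy as np

def label_components(mask):
    """Count 4-connected components of a boolean 2D mask by flood fill."""
    visited = np.zeros_like(mask, bool)
    H, W = mask.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not visited[i, j]:
                n += 1
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return n

def betti_2d(img):
    """Return (b0, b1) for a binary 2D image: components and enclosed holes."""
    img = np.asarray(img, bool)
    b0 = label_components(img)
    # Holes are background components that are not the (padded) outer background.
    b1 = label_components(~np.pad(img, 1)) - 1
    return b0, b1

ring = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])
b0, b1 = betti_2d(ring)   # one component enclosing one hole
```

For metal grains in a chondrite slice, b0 would count discrete grains and b1 their enclosed voids; the 3D case adds β2 (enclosed cavities) and needs a true 3D homology computation.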

  12. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    SciTech Connect

    Pourmoghaddas, Amir Wells, R. Glenn

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to {sup 99m}Tc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 {sup 99m}Tc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical {sup 99m}Tc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE

  13. Linking descriptive geology and quantitative machine learning through an ontology of lithological concepts

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Huber, R.; Robertson, J.; Cox, S. J. D.; Woodcock, R.

    2014-12-01

    Despite the recent explosion of quantitative geological data, geology remains a fundamentally qualitative science. Numerical data constitute only part of data collection in the geosciences. In many cases, geological observations are compiled as text into reports and annotations on drill cores, thin sections, or drawings of outcrops. The observations are classified into concepts such as lithology, stratigraphy, geological structure, etc. These descriptions are semantically rich and are generally supported by more quantitative observations using geochemical analyses, XRD, hyperspectral scanning, etc., but the goal is geological semantics. In practice it has been difficult to bring the different observations together, due to differing perception or granularity of classification in human observation, or the partial observation of only some characteristics by quantitative sensors. In recent years many geological classification schemas have been transferred into ontologies and vocabularies, formalized using RDF and OWL, and published through SPARQL endpoints. Several lithological ontologies were compiled by stratigraphy.net and published through a SPARQL endpoint. This work is complemented by the development of a Python API to integrate this vocabulary into Python-based text-mining applications. The applications for the lithological vocabulary and Python API are automated semantic tagging of geochemical data and descriptions of drill cores, machine learning of geochemical compositions that are diagnostic for lithological classifications, and text mining for lithological concepts in reports and the geological literature. This combination of applications can be used to identify anomalies in databases where composition and lithological classification do not match. It can also be used to identify lithological concepts in the literature and infer quantitative values. The resulting semantic tagging opens new possibilities for linking these diverse sources of data.

  14. Quantitative description of collagen fibre network on trabecular bone surfaces based on AFM imaging.

    PubMed

    Hua, W-D; Chen, P-P; Xu, M-Q; Ao, Z; Liu, Y; Han, D; He, F

    2016-04-01

    The collagen fibre network is an important part of the extracellular matrix (ECM) on the trabecular bone surface. The geometric features of the network can provide insights into its physical and physiological properties. However, previous research has not focused on the geometry and quantitative description of the collagen fibre network on the trabecular bone surface. In this study, we developed a procedure to quantitatively describe the network and verified its validity. The experiment proceeds as follows. Atomic force microscopy (AFM) was used to acquire submicron-resolution images of the trabecular surface. An image-analysis procedure was then built to extract important parameters from the AFM images, including fibre orientation, fibre density, fibre width, the number of fibre crossings, the number of holes formed by fibres, and the area of those holes. To verify the validity of the parameters extracted by image analysis, we adopted two other methods, a statistical geometry model and computer simulation, to calculate the same parameters and checked the consistency of the three methods' results. Statistical tests indicate that there is no significant difference among the three groups. We conclude that (a) the ECM on the trabecular surface mainly consists of a random collagen fibre network with oriented fibres, and (b) our image-analysis method can effectively characterize the quantitative geometric features of the collagen fibre network. This method may provide a basis for quantitatively investigating the architecture and function of the collagen fibre network.

  15. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV.

  16. Tools for quantitative form description; an evaluation of different software packages for semi-landmark analysis.

    PubMed

    Botton-Divet, Léo; Houssaye, Alexandra; Herrel, Anthony; Fabre, Anne-Claire; Cornette, Raphael

    2015-01-01

    The challenging complexity of biological structures has led to the development of several methods for quantitative analyses of form. Bones are shaped by the interaction of historical (phylogenetic), structural, and functional constrains. Consequently, bone shape has been investigated intensively in an evolutionary context. Geometric morphometric approaches allow the description of the shape of an object in all of its biological complexity. However, when biological objects present only few anatomical landmarks, sliding semi-landmarks may provide good descriptors of shape. The sliding procedure, mandatory for sliding semi-landmarks, requires several steps that may be time-consuming. We here compare the time required by two different software packages ('Edgewarp' and 'Morpho') for the same sliding task, and investigate potential differences in the results and biological interpretation. 'Morpho' is much faster than 'Edgewarp,' notably as a result of the greater computational power of the 'Morpho' software routines and the complexity of the 'Edgewarp' workflow. Morphospaces obtained using both software packages are similar and provide a consistent description of the biological variability. The principal differences between the two software packages are observed in areas characterized by abrupt changes in the bone topography. In summary, both software packages perform equally well in terms of the description of biological structures, yet differ in the simplicity of the workflow and time needed to perform the analyses.

  17. Tools for quantitative form description; an evaluation of different software packages for semi-landmark analysis

    PubMed Central

    Houssaye, Alexandra; Herrel, Anthony; Fabre, Anne-Claire; Cornette, Raphael

    2015-01-01

    The challenging complexity of biological structures has led to the development of several methods for quantitative analyses of form. Bones are shaped by the interaction of historical (phylogenetic), structural, and functional constrains. Consequently, bone shape has been investigated intensively in an evolutionary context. Geometric morphometric approaches allow the description of the shape of an object in all of its biological complexity. However, when biological objects present only few anatomical landmarks, sliding semi-landmarks may provide good descriptors of shape. The sliding procedure, mandatory for sliding semi-landmarks, requires several steps that may be time-consuming. We here compare the time required by two different software packages (‘Edgewarp’ and ‘Morpho’) for the same sliding task, and investigate potential differences in the results and biological interpretation. ‘Morpho’ is much faster than ‘Edgewarp,’ notably as a result of the greater computational power of the ‘Morpho’ software routines and the complexity of the ‘Edgewarp’ workflow. Morphospaces obtained using both software packages are similar and provide a consistent description of the biological variability. The principal differences between the two software packages are observed in areas characterized by abrupt changes in the bone topography. In summary, both software packages perform equally well in terms of the description of biological structures, yet differ in the simplicity of the workflow and time needed to perform the analyses. PMID:26618086

  18. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. 
This distortion is shown to be nearly independent of the chemical species.
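The core of such a fit can be sketched in a minimal form: with the resonance frequency and damping factor held fixed, the complex amplitude (magnitude plus phase) of a single Lorentzian line has a closed-form complex least-squares solution. The full method of course also optimizes frequencies, dampings, and baseline parameters nonlinearly; the values here are synthetic.

```python
import numpy as np

# Simulate a noiseless FID: amplitude 2.0, phase 30 degrees, 50 Hz, damping 5 s^-1.
t = np.arange(0, 1.0, 1e-3)
A_true, phi_true, f, d = 2.0, np.deg2rad(30), 50.0, 5.0
fid = A_true * np.exp(1j * phi_true) * np.exp(-(d + 2j * np.pi * f) * t)

# Complex linear least squares for the complex amplitude c, with frequency and
# damping fixed: minimize ||fid - c * m||^2 over c, which has the closed form
# c = <m, fid> / <m, m> (np.vdot conjugates its first argument).
m = np.exp(-(d + 2j * np.pi * f) * t)
c = np.vdot(m, fid) / np.vdot(m, m)

A_fit, phi_fit = abs(c), np.angle(c)
```

By Parseval's theorem the same fit can be carried out on the Fourier-transformed data, which is what makes the time- and frequency-domain formulations nearly equivalent in the absence of truncation.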

  19. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.
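A minimal sketch of the component-spectra idea (synthetic Gaussian bands stand in for measured component spectra; the published algorithm adds error minimization and local adaptive mesh refinement): the mixture spectrum is fit as a linear combination of known component spectra by ordinary least squares.

```python
import numpy as np

# Hypothetical component spectra over a wavenumber grid (cm^-1).
wavenumbers = np.linspace(900, 1800, 200)

def band(center, width):
    """Synthetic Gaussian absorption band standing in for a real spectrum."""
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Columns: one spectrum per pure component (centers/widths are illustrative).
S = np.column_stack([band(1050, 30), band(1400, 40), band(1650, 50)])

# Component amounts spanning more than an order of magnitude, as in the paper.
true_conc = np.array([0.7, 0.05, 1.2])
mixture = S @ true_conc

# Ordinary least squares: find concentrations minimizing ||S c - mixture||^2.
conc_fit, *_ = np.linalg.lstsq(S, mixture, rcond=None)
```

With noisy spectra the fit is no longer exact, and refinements such as restricting the fit to informative spectral windows (the "local adaptive mesh" idea) become important.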

  20. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  1. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.

  2. Quantitative Description of Crystal Nucleation and Growth from in Situ Liquid Scanning Transmission Electron Microscopy.

    PubMed

    Ievlev, Anton V; Jesse, Stephen; Cochell, Thomas J; Unocic, Raymond R; Protopopescu, Vladimir A; Kalinin, Sergei V

    2015-12-22

    Recent advances in liquid-cell (scanning) transmission electron microscopy ((S)TEM) have enabled in situ nanoscale investigations of controlled nanocrystal growth mechanisms. Here, we experimentally and quantitatively investigated the nucleation and growth mechanisms of Pt nanostructures from an aqueous solution of K2PtCl6. Averaged statistical, network, and local approaches have been used for the data analysis and for the description of both collective particle dynamics and local growth features. In particular, interaction between neighboring particles has been revealed and attributed to reduction of the platinum concentration in the vicinity of the particle boundary. The local approach to solving the inverse problem showed that particle dynamics can be simulated by a stationary diffusional model. The obtained results are important for understanding nanocrystal formation and growth processes and for optimization of synthesis conditions.

  3. A quantitative index of soil development from field descriptions: Examples from a chronosequence in central California

    USGS Publications Warehouse

    Harden, J.W.

    1982-01-01

    A soil development index has been developed in order to quantitatively measure the degree of soil profile development. This index, which combines eight soil field properties with soil thickness, is designed from field descriptions of the Merced River chronosequence in central California. These eight properties are: clay films, texture plus wet consistence, rubification (color hue and chroma), structure, dry consistence, moist consistence, color value, and pH. Other properties described in the field can be added when more soils are studied. Most of the properties change systematically within the 3 m.y. age span of the Merced River chronosequence. The absence of properties on occasion does not significantly affect the index. Individual quantified field properties, as well as the integrated index, are examined and compared as functions of soil depth and age. © 1982.
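A hypothetical, heavily simplified sketch of how such an index can be assembled (property names, parent-material values, and scalings are illustrative, not Harden's published normalizations): each field property is normalized against the parent material, averaged within a horizon, weighted by horizon thickness, and summed over the profile.

```python
# Each property score quantifies departure from the parent material on [0, 1].
def normalize(value, parent_value, max_change):
    """Clip the property's change, relative to its maximum change, to [0, 1]."""
    return max(0.0, min(1.0, (value - parent_value) / max_change))

# Illustrative profile: (thickness in cm, {property: (value, parent, max_change)}).
horizons = [
    (10, {"rubification": (3, 0, 10), "clay_films": (2, 0, 8)}),
    (25, {"rubification": (6, 0, 10), "clay_films": (5, 0, 8)}),
]

# Thickness-weighted sum of per-horizon mean property scores.
profile_index = 0.0
for thickness, props in horizons:
    scores = [normalize(*p) for p in props.values()]
    profile_index += thickness * sum(scores) / len(scores)
```

Because each property is bounded and missing properties simply drop out of the per-horizon mean, the occasional absence of a property does not wreck the index, consistent with the abstract's observation.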

  4. Renal Cortical Lactate Dehydrogenase: A Useful, Accurate, Quantitative Marker of In Vivo Tubular Injury and Acute Renal Failure

    PubMed Central

    Zager, Richard A.; Johnson, Ali C. M.; Becker, Kirsten

    2013-01-01

    Studies of experimental acute kidney injury (AKI) are critically dependent on having precise methods for assessing the extent of tubular cell death. However, the most widely used techniques either provide indirect assessments (e.g., BUN, creatinine), suffer from the need for semi-quantitative grading (renal histology), or reflect the status of residual viable cells, not the number of lost renal tubular cells (e.g., NGAL content). Lactate dehydrogenase (LDH) release is a highly reliable test for assessing degrees of in vitro cell death. However, its utility as an in vivo AKI marker has not been defined. Towards this end, CD-1 mice were subjected to graded renal ischemia (0, 15, 22, 30, 40, or 60 min) or to nephrotoxic (glycerol; maleate) AKI. Sham-operated mice, or mice with AKI in the absence of acute tubular necrosis (ureteral obstruction; endotoxemia), served as negative controls. Renal cortical LDH or NGAL levels were assayed 2 or 24 hrs later. Ischemic, glycerol-, and maleate-induced AKI were each associated with striking, steep, inverse correlations (r ≈ −0.89) between renal injury severity and renal LDH content. With severe AKI, >65% LDH declines were observed, with corresponding prompt plasma and urinary LDH increases. These observations, coupled with the maintenance of normal cortical LDH mRNA levels, indicated that renal LDH efflux, not decreased LDH synthesis, caused the falling cortical LDH levels. Renal LDH content was well maintained with sham surgery, ureteral obstruction, or endotoxemic AKI. In contrast to LDH, renal cortical NGAL levels did not correlate with AKI severity. In sum, the above results indicate that renal cortical LDH assay is a highly accurate quantitative technique for gauging the extent of experimental acute ischemic and toxic renal injury. Because it avoids the limitations of more traditional AKI markers, it has great potential utility in experimental studies that require precise quantitation of tubule cell death. PMID:23825563

  5. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
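As a hedged illustration of the quantitative logic behind such an assay (the published assay's exact calculations may differ), a relative copy-number estimate by the standard 2^-ΔΔCt method distinguishes one target copy (heterozygous deletion) from two (normal): the deleted target amplifies about one cycle later, giving a ratio near 0.5.

```python
# Relative copy number of a target gene (e.g. PRKCZ or SKI) versus a reference
# gene, computed by the widely used 2^-ddCt method against a normal calibrator.
def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: >Ct means less template, so a lost copy raises ct_target by ~1."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Illustrative Ct values (hypothetical): the deleted sample's target crosses
# threshold one cycle later than in the calibrator.
ratio_deleted = relative_copy_number(26.0, 24.0, 25.0, 24.0)   # near 0.5
ratio_normal = relative_copy_number(25.0, 24.0, 25.0, 24.0)    # near 1.0
```

Running both target genes (PRKCZ and SKI) and requiring concordant reduced ratios is what lets the assay reach high sensitivity and specificity despite qPCR's cycle-to-cycle noise.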

  6. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341

  7. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes.
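The validation figures quoted above (recovery, repeatability as relative standard deviation) reduce to simple arithmetic, sketched here with illustrative replicate values rather than the paper's data:

```python
import statistics

def recovery_percent(measured_mean, spiked_true):
    """Recovery: measured mean as a percentage of the known spiked amount."""
    return 100.0 * measured_mean / spiked_true

def rsd_percent(replicates):
    """Repeatability as relative standard deviation (sample stdev / mean)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Five hypothetical replicate determinations of a 10.0 ug/L spike.
replicates = [9.8, 10.1, 10.3, 9.9, 10.0]
rec = recovery_percent(statistics.mean(replicates), 10.0)
rsd = rsd_percent(replicates)
```

A recovery not significantly different from 100% and an RSD below the method's stated repeatability limit (7-8% here) are the acceptance criteria implied by the abstract; acetaldehyde fails the recovery test because of its sulfite adducts, hence the glyoxal incubation.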

  8. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    SciTech Connect

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of

  9. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; ...

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  10. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-04

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  11. Initial Description of a Quantitative, Cross-Species (Chimpanzee-Human) Social Responsiveness Measure

    ERIC Educational Resources Information Center

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve E.; Constantino, John N.; Povinelli, Daniel J.; Pruett, John R., Jr.

    2011-01-01

    Objective: Comparative studies of social responsiveness, an ability that is impaired in autism spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species…

  12. Quantitative descriptions of generalized arousal, an elementary function of the vertebrate brain

    PubMed Central

    Quinkert, Amy Wells; Vimal, Vivek; Weil, Zachary M.; Reeke, George N.; Schiff, Nicholas D.; Banavar, Jayanth R.; Pfaff, Donald W.

    2011-01-01

    We review a concept of the most primitive, fundamental function of the vertebrate CNS, generalized arousal (GA). Three independent lines of evidence indicate the existence of GA: statistical, genetic, and mechanistic. Here we ask, is this concept amenable to quantitative analysis? Answering in the affirmative, four quantitative approaches have proven useful: (i) factor analysis, (ii) information theory, (iii) deterministic chaos, and (iv) application of a Gaussian equation. It strikes us that, to date, not just one but at least four different quantitative approaches seem necessary for describing different aspects of scientific work on GA. PMID:21555568

  13. Compact, accurate description of diagnostic neutral beam propagation and attenuation in a high temperature plasma for charge exchange recombination spectroscopy analysis.

    PubMed

    Bespamyatnov, Igor O; Rowan, William L; Granetz, Robert S

    2008-10-01

    Charge exchange recombination spectroscopy on Alcator C-Mod relies on the use of the diagnostic neutral beam injector as a source of neutral particles which penetrate deep into the plasma. It employs the emission resulting from the interaction of the beam atoms with fully ionized impurity ions. To interpret the emission from a given point in the plasma as the density of emitting impurity ions, the density of beam atoms must be known. Here, an analysis of beam propagation is described which yields the beam density profile throughout the beam trajectory from the neutral beam injector to the core of the plasma. The analysis includes the effects of beam formation, attenuation in the neutral gas surrounding the plasma, and attenuation in the plasma. In the course of this work, a numerical simulation and an analytical approximation for beam divergence are developed. The description is made sufficiently compact to yield accurate results in a time consistent with between-shot analysis.
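    The attenuation step described above reduces, in its simplest form, to the exponential law n(z) = n0 · exp(−∫ rate dz'). The sketch below is a generic stand-in, not the C-Mod between-shot analysis code; the constant attenuation rate in the example is assumed purely for illustration:

```python
import math

def beam_density(n0, rate, zs):
    """Attenuate a neutral beam of initial density n0 along positions
    zs (m), given a local attenuation rate(z) in 1/m.  Trapezoidal
    integration of n(z) = n0 * exp(-integral of rate dz')."""
    densities = [n0]
    integral = 0.0
    for i in range(1, len(zs)):
        dz = zs[i] - zs[i - 1]
        integral += 0.5 * (rate(zs[i - 1]) + rate(zs[i])) * dz
        densities.append(n0 * math.exp(-integral))
    return densities

# A constant rate of 2 /m over 1 m leaves a fraction exp(-2) of the beam.
profile = beam_density(1.0, lambda z: 2.0, [0.0, 0.25, 0.5, 0.75, 1.0])
```

    A real beam model would make rate(z) depend on local neutral-gas and plasma density along the trajectory; the integration scheme stays the same.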

  14. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    NASA Astrophysics Data System (ADS)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study, a complex Otto engine simulation provides data including, but not limited to, the effects of heat conduction, exhaust, and frictional losses. These data are used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively with an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
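    For the ideal Novikov engine without a heat leak, the efficiency at maximum power has the closed form η = 1 − √(Tc/Th) (the Novikov/Curzon–Ahlborn result). A minimal sketch comparing it against the Carnot bound; the temperatures are assumed for illustration and are not the paper's fitted parameters:

```python
import math

def novikov_efficiency(t_cold, t_hot):
    """Efficiency at maximum power of an ideal Novikov (endoreversible)
    engine: eta = 1 - sqrt(Tc / Th).  Temperatures in kelvin."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def carnot_efficiency(t_cold, t_hot):
    """Reversible upper bound, for comparison."""
    return 1.0 - t_cold / t_hot

# Assumed example temperatures: 300 K ambient, 1800 K peak gas temperature.
eta_max_power = novikov_efficiency(300.0, 1800.0)
eta_carnot = carnot_efficiency(300.0, 1800.0)
```

    A heat leak, as in the model tested in the study, adds a bypass heat flow from hot to cold reservoir and lowers the effective efficiency below this value.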

  15. Enzymological considerations for a theoretical description of the quantitative competitive polymerase chain reaction (QC-PCR).

    PubMed

    Schnell, S; Mendoza, C

    1997-02-21

    The enzymological principles of the polymerase chain reaction (PCR) and of the quantitative competitive PCR (QC-PCR) are developed, proposing a theoretical framework that will facilitate quantification in experimental methodologies. It is demonstrated that the specificity of the QC-PCR, i.e. the ratio of the target initial velocity to that of the competitor template, remains constant not only during a particular amplification but also for increasing initial competitor concentrations. Linear fitting procedures are thus recommended that will enable a quantitative estimate of the initial target concentration. Finally, expressions for the efficiency of the PCR and QC-PCR are derived that are in agreement with previous experimental inferences.
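    The constancy of the target-to-competitor ratio is the core of QC-PCR: if both templates amplify with the same efficiency E, the product ratio after any number of cycles equals the ratio of starting amounts. A toy sketch of that identity (the numbers are illustrative, not from the paper):

```python
def amplify(n0, efficiency, cycles):
    """Idealized PCR yield after a given number of cycles:
    N = N0 * (1 + E)**n, with per-cycle efficiency 0 <= E <= 1."""
    return n0 * (1.0 + efficiency) ** cycles

# Equal efficiencies mean the product ratio preserves the input ratio,
# so an unknown target input can be read off a linear fit against a
# series of known competitor inputs.
target0, competitor0 = 50.0, 200.0
ratio_final = amplify(target0, 0.9, 30) / amplify(competitor0, 0.9, 30)
```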

  16. A classification based framework for quantitative description of large-scale microarray data

    PubMed Central

    Sangurdekar, Dipen P; Srienc, Friedrich; Khodursky, Arkady B

    2006-01-01

    Genome-wide surveys of transcription depend on gene classifications for the purpose of data interpretation. We propose a new information-theoretical-based method to: assess significance of co-expression within any gene group; quantitatively describe condition-specific gene-class activity; and systematically evaluate conditions in terms of gene-class activity. We applied this technique to describe microarray data tracking Escherichia coli transcriptional responses to more than 30 chemical and physiological perturbations. We correlated the nature and breadth of the responses with the nature of perturbation, identified gene group proxies for the perturbation classes and quantitatively compared closely related physiological conditions. PMID:16626502

  17. A quantitative description of the Na-K-2Cl cotransporter and its conformity to experimental data.

    PubMed

    Benjamin, B A; Johnson, E A

    1997-09-01

    In epithelia, the Na-K-2Cl cotransporter cooperates with other transport mechanisms to produce transepithelial NaCl transport. The reaction cycle for the Na-K-2Cl cotransporter has been established experimentally, but whether it accounts, quantitatively, for experimental findings has yet to be established. The differential equations that describe the reaction cycle were formulated, and the steady-state solutions were obtained by digital computation. Conformity between this description and the experimental data obtained from the literature was explored by automatic searches for the sets of rate constants that yielded statistical best-fits to the experimental data. Fits were obtained from renal epithelial cell lines, HeLa cells, and duck erythrocytes. Results show that the reaction cycle for the Na-K-2Cl cotransporter conforms well, quantitatively, with the experimental data.

  18. Microscope-Quantitative Luminescence Imaging System (M-Qlis) Description and User's Manual

    SciTech Connect

    Stahl, K. A.

    1991-10-01

    A Microscope Quantitative Luminescence Imaging System (M-QLIS) has been designed and constructed. The M-QLIS is designed for use in studies of chemiluminescent phenomena associated with absorption of radio-frequency radiation. The system consists of a radio-frequency waveguide/sample holder, microscope, intensified video camera, radiometric calibration source and optics, and a computer-based image processor with radiometric analysis software. The system operation, hardware, software, and radiometric procedures are described.

  19. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    PubMed

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four locations known for their excellence in cham-cham production, and to identify the attributes that govern much of the variation in sensory scores of this product, using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.
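    The PCA step in such a study reduces, computationally, to an eigen-decomposition of the centered sample-by-attribute score matrix. A minimal sketch via SVD; the toy score matrix is randomly generated for illustration and is not the cham-cham data:

```python
import numpy as np

def pca_explained_variance(scores):
    """Explained-variance ratios of the principal components of a
    (samples x attributes) score matrix, via SVD of the centered data."""
    centered = scores - scores.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(0)
toy_scores = rng.normal(size=(8, 5))   # 8 samples, 5 sensory attributes
ratios = pca_explained_variance(toy_scores)
```

    Summing the leading ratios gives figures like the 72.4 % reported above; the corresponding right singular vectors are the attribute loadings used to interpret each component.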

  20. Quantitative polymerase chain reaction analysis of DNA from noninvasive samples for accurate microsatellite genotyping of wild chimpanzees (Pan troglodytes verus).

    PubMed

    Morin, P A; Chambers, K E; Boesch, C; Vigilant, L

    2001-07-01

    Noninvasive samples are useful for molecular genetic analyses of wild animal populations. However, the low DNA content of such samples makes DNA amplification difficult, and there is the potential for erroneous results when one of two alleles at heterozygous microsatellite loci fails to be amplified. In this study we describe an assay designed to measure the amount of amplifiable nuclear DNA in low DNA concentration extracts from noninvasive samples. We describe the range of DNA amounts obtained from chimpanzee faeces and shed hair samples and formulate a new efficient approach for accurate microsatellite genotyping. Prescreening of extracts for DNA quantity is recommended for sorting of samples for likely success and reliability. Repetition of results remains extensive for analysis of microsatellite amplifications beginning from low starting amounts of DNA, but is reduced for those with higher DNA content.
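    The risk that prescreening guards against can be illustrated with a simple sampling model: if only a handful of template molecules enter the reaction, one allele of a heterozygote may be absent entirely. A sketch of that idealized calculation (a toy model under stated assumptions, not the paper's assay):

```python
def dropout_probability(copies):
    """Probability that one allele of a heterozygote is completely absent
    from `copies` template molecules drawn into a PCR, assuming each
    molecule independently carries either allele with probability 1/2.
    Idealized sampling model only."""
    if copies <= 0:
        return 1.0
    # P(allele A absent) + P(allele B absent); with at least one molecule
    # present, both alleles cannot be absent simultaneously.
    return 2.0 * 0.5 ** copies

# With ~2 template copies, half of all heterozygote amplifications would
# show allelic dropout; with 10 copies the risk falls below 0.2%.
```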

  1. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  2. Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology

    SciTech Connect

    Bothe, Thorsten; Li Wansong; Schulte, Michael; von Kopylow, Christoph; Bergmann, Ralf B.; Jueptner, Werner P. O.

    2010-10-20

    Exact geometric calibration of optical devices like projectors or cameras is the basis for utilizing them in quantitative metrological applications. The common state-of-the-art photogrammetric pinhole-imaging-based models with supplemental polynomial corrections fail in the presence of nonsymmetric or high-spatial-frequency distortions and in describing caustics efficiently. These problems are solved by our vision ray calibration (VRC), which is proposed in this paper. The VRC takes an optical mapping system modeled as a black box and directly delivers corresponding vision rays for each mapped pixel. The underlying model, the calibration process, and examples are visualized and reviewed, demonstrating the potential of the VRC.

  3. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait

    PubMed Central

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y.; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A.; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability to detect QTLs. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, thermal processing produces a roasted, popcorn-like aroma through a Maillard-type reaction between proline and carbohydrate reduction products. Hence, for the first time, we included the amino acid proline, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation in rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (the major genetic determinant of fragrance) was mapped in the QTL on chromosome 8 in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous study had simultaneously assessed the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689
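    The benefit of covariate inclusion can be demonstrated numerically: regressing the phenotype on marker genotype with and without the covariate shrinks the residual variance whenever the covariate explains part of the trait. A toy sketch with simulated data; all effect sizes, and the use of a precursor level as the covariate, are invented for illustration:

```python
import numpy as np

def residual_variance(y, design):
    """Residual variance of y after ordinary least squares on `design`
    (columns are predictors, including an intercept)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return float(resid @ resid) / len(y)

rng = np.random.default_rng(1)
n = 200
genotype = rng.integers(0, 2, n).astype(float)   # marker at a putative QTL
covariate = rng.normal(size=n)                   # e.g. a precursor level
phenotype = 1.0 * genotype + 2.0 * covariate + rng.normal(size=n)

ones = np.ones(n)
var_without = residual_variance(phenotype, np.column_stack([ones, genotype]))
var_with = residual_variance(phenotype, np.column_stack([ones, genotype, covariate]))
# var_with < var_without: the covariate soaks up residual variation,
# sharpening the QTL signal.
```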

  4. Quantitative description of ion transport via plasma membrane of yeast and small cells

    PubMed Central

    Volkov, Vadim

    2015-01-01

    Modeling of ion transport via the plasma membrane needs identification and quantitative understanding of the involved processes. Brief characterization of the main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and determination of the exact number of molecules of each transporter per typical cell allow us to predict the corresponding ion flows. In this review a comparison of ion transport in the small yeast cell and several animal cell types is provided. The importance of the cell volume to surface ratio is emphasized. The role of the cell wall and lipid rafts is discussed with respect to the required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe the specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on these assumptions. PMID:26113853

  5. Quantitative description of ion transport via plasma membrane of yeast and small cells.

    PubMed

    Volkov, Vadim

    2015-01-01

    Modeling of ion transport via the plasma membrane needs identification and quantitative understanding of the involved processes. Brief characterization of the main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and determination of the exact number of molecules of each transporter per typical cell allow us to predict the corresponding ion flows. In this review a comparison of ion transport in the small yeast cell and several animal cell types is provided. The importance of the cell volume to surface ratio is emphasized. The role of the cell wall and lipid rafts is discussed with respect to the required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe the specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on these assumptions.

  6. Quantitative description of the interaction between folate and the folate-binding protein from cow's milk

    PubMed Central

    2004-01-01

    A detailed study has been carried out on the dependence of folate binding on the concentration of FBP (folate-binding protein) at pH 5.0, conditions selected to prevent complications arising from the pre-existing self-association of the acceptor. In contrast with the mandatory requirement that reversible interaction of ligand with a single acceptor site should exhibit a unique, rectangular hyperbolic binding curve, results obtained by ultrafiltration for the FBP–folate system required description in terms of (i) a sigmoidal relationship between concentrations of bound and free folate and (ii) an inverse dependence of affinity on FBP concentration. These findings have been attributed to the difficulties in determining the free ligand concentration in the FBP–folate mixtures for which reaction is essentially stoichiometric. This explanation also accounts for the similar published behaviour of the FBP–folate system at neutral pH, which had been attributed erroneously to acceptor self-association, a phenomenon incompatible with the experimental findings because of its prediction of a greater affinity for folate with increasing FBP concentration. PMID:15142039

  7. Towards a quantitative description of tunneling conductance of superconductors: Application to LiFeAs

    NASA Astrophysics Data System (ADS)

    Kreisel, A.; Nelson, R.; Berlijn, T.; Ku, W.; Aluru, Ramakrishna; Chi, Shun; Zhou, Haibiao; Singh, Udai Raj; Wahl, Peter; Liang, Ruixing; Hardy, Walter N.; Bonn, D. A.; Hirschfeld, P. J.; Andersen, Brian M.

    2016-12-01

    Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. Here we present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison between the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.

  8. Quantitative description for the growth rate of self-induced GaN nanowires

    NASA Astrophysics Data System (ADS)

    Consonni, V.; Dubrovskii, V. G.; Trampert, A.; Geelhaar, L.; Riechert, H.

    2012-04-01

    We determine with high precision the growth rate of self-induced GaN nanowires grown by molecular beam epitaxy under various conditions from scanning electron micrographs by taking into account in situ measurements of the initial incubation time, which is needed before the nanowire growth starts. In order to quantitatively describe the dependence of the growth rate on growth time, gallium flux, and growth temperature, we develop a detailed theoretical model of diffusion-induced nanowire growth specifically for the self-induced approach, i.e., without any droplet at the nanowire top. The theoretical fits are in excellent agreement with the experimental data and allow us to deduce important kinetic parameters of the self-induced GaN nanowire growth. The gallium adatom effective diffusion length on the nanowire sidewalls composed of m-plane facets is only 45 nm, which is consistent with our experimental finding that the growth rate initially decreases drastically as the contribution from the adatoms on the planar substrate surface rapidly vanishes. In contrast, the gallium adatom effective diffusion length on the amorphous silicon nitride substrate surface reaches about 100 nm. Furthermore, the nucleation energy on the nanowire sidewalls is found to be 5.44 eV and is larger than on their top facet accounting for the nanowire elongation.

  9. Quantitative description of fluid flows produced by left-right cilia in zebrafish.

    PubMed

    Fox, Craig; Manning, M Lisa; Amack, Jeffrey D

    2015-01-01

    Motile cilia generate directional flows that move mucus through airways, cerebrospinal fluid through brain ventricles, and oocytes through fallopian tubes. In addition, specialized monocilia beat in a rotational pattern to create asymmetric flows that are involved in establishing the left-right (LR) body axis during embryogenesis. These monocilia, which we refer to as "left-right cilia," produce a leftward flow of extraembryonic fluid in a transient "organ of asymmetry" that directs asymmetric signaling and development of LR asymmetries in the cardiovascular system and gastrointestinal tract. The asymmetric flows are thought to establish a chemical gradient and/or activate mechanosensitive cilia to initiate calcium ion signals and a conserved Nodal (TGFβ) pathway on the left side of the embryo, but the mechanisms underlying this process remain unclear. The zebrafish organ of asymmetry, called Kupffer's vesicle, provides a useful model system for investigating LR cilia and cilia-powered fluid flows. Here, we describe methods to visualize flows in Kupffer's vesicle using fluorescent microspheres and introduce a new and freely available MATLAB particle tracking code to quantitatively describe these flows. Analysis of normal and aberrant flows indicates this approach is useful for characterizing flow properties that impact LR asymmetry and may be more broadly applicable for quantifying other cilia flows.
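    The core of any such particle-tracking code is linking microsphere detections between consecutive frames. Below is a minimal greedy nearest-neighbour sketch in Python: a generic illustration of the linking step, not the cited MATLAB code, and a full tracker would add gap closing and velocity prediction:

```python
def link_frames(frame_a, frame_b, max_disp):
    """Greedily link particle positions (x, y) between two frames,
    matching each particle in frame_a to its nearest unused neighbour
    in frame_b within max_disp.  Returns (index_a, index_b) pairs."""
    pairs = []
    used = set()
    for i, (xa, ya) in enumerate(frame_a):
        best, best_d2 = None, max_disp ** 2
        for j, (xb, yb) in enumerate(frame_b):
            if j in used:
                continue
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2
            if d2 <= best_d2:
                best, best_d2 = j, d2
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

    Chaining the matched pairs across many frames yields trajectories, from which flow speed and directionality in the vesicle can be quantified.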

  10. Evaluation of the impact of peak description on the quantitative capabilities of comprehensive two-dimensional liquid chromatography.

    PubMed

    Place, Benjamin J; Morris, Mallory J; Phillips, Melissa M; Sander, Lane C; Rimmer, Catherine A

    2014-11-14

    Comprehensive, two-dimensional liquid chromatography (LC × LC) is a powerful technique for the separation of complex mixtures. Most studies using LC × LC are focused on qualitative efforts, such as increasing peak capacity. The present study examined the use of LC × LC-UV/vis for the separation and quantitation of polycyclic aromatic hydrocarbons (PAHs). More specifically, this study evaluated the impact of different peak integration approaches on the quantitative performance of the LC × LC method. For well-resolved three-dimensional peaks, parameters such as baseline definition, peak base shape, and peak width determination did not have a significant impact on accuracy and precision. For less-resolved peaks, a dropped baseline and the summation of all slices in the peak improved the accuracy and precision of the integration methods. The computational approaches to three-dimensional peak integration are provided, including fully descriptive, select slice, and summed heights integration methods, each with its own strengths and weaknesses. Overall, the integration methods presented quantify each of the PAHs within acceptable precision and accuracy ranges and have comparable performance to that of single dimension liquid chromatography.
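    The summed-slices integration evaluated above can be stated compactly: integrate each second-dimension slice, then sum the slice areas weighted by the first-dimension sampling interval. A generic sketch, not the study's code; uniform sampling intervals are assumed:

```python
def summed_slice_area(slices, dt1, dt2):
    """Volume of a 2-D chromatographic peak by summed slices: trapezoidal
    area of each second-dimension slice (sampled every dt2), summed over
    first-dimension slices spaced dt1 apart.  Baseline assumed at zero."""
    def trapezoid(ys, dx):
        return sum(0.5 * (ys[i] + ys[i + 1]) * dx for i in range(len(ys) - 1))
    return sum(trapezoid(s, dt2) for s in slices) * dt1
```

    With a dropped (nonzero) baseline, each slice would first have the baseline subtracted before integration, which is the approach the study found to improve accuracy for poorly resolved peaks.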

  11. A High Resolution/Accurate Mass (HRAM) Data-Dependent MS3 Neutral Loss Screening, Classification, and Relative Quantitation Methodology for Carbonyl Compounds in Saliva

    NASA Astrophysics Data System (ADS)

    Dator, Romel; Carrà, Andrea; Maertens, Laura; Guidolin, Valeria; Villalta, Peter W.; Balbo, Silvia

    2016-10-01

    Reactive carbonyl compounds (RCCs) are ubiquitous in the environment and are generated endogenously as a result of various physiological and pathological processes. These compounds can react with biological molecules, inducing deleterious processes believed to be at the basis of their toxic effects. Several of these compounds are implicated in neurotoxic processes, aging disorders, and cancer. Therefore, a method characterizing exposures to these chemicals will provide insights into how they may influence overall health and contribute to disease pathogenesis. Here, we have developed a high resolution accurate mass (HRAM) screening strategy allowing simultaneous identification and relative quantitation of DNPH-derivatized carbonyls in human biological fluids. The screening strategy involves the diagnostic neutral loss of a hydroxyl radical triggering MS3 fragmentation, which is observed only in positive ionization mode for DNPH-derivatized carbonyls. Unique fragmentation pathways were used to develop a classification scheme for characterizing known and unanticipated/unknown carbonyl compounds present in saliva. Furthermore, a relative quantitation strategy was implemented to assess variations in the levels of carbonyl compounds before and after exposure using deuterated d3-DNPH. This relative quantitation method was tested on human samples before and after exposure to specific amounts of alcohol. Nano-electrospray ionization (nano-ESI) in positive mode afforded excellent sensitivity, with on-column detection limits at high-attomole levels. To the best of our knowledge, this is the first report of a method using HRAM neutral loss screening of carbonyl compounds. In addition, the method allows simultaneous characterization and relative quantitation of DNPH-derivatized compounds using nano-ESI in positive mode.
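    The diagnostic filter described above amounts to checking whether a precursor-to-fragment mass difference matches loss of a hydroxyl radical. A sketch of that check; the monoisotopic masses come from standard tables, while the ppm tolerance and the singly-charged assumption are illustrative, not the published method's settings:

```python
# Monoisotopic masses in u (standard reference values).
MASS_H = 1.007825
MASS_O = 15.994915
OH_LOSS = MASS_H + MASS_O   # hydroxyl radical, ~17.00274 u

def matches_oh_loss(precursor_mz, fragment_mz, tol_ppm=10.0):
    """True if the precursor-to-fragment mass difference matches loss of
    a hydroxyl radical within tol_ppm, assuming singly charged ions.
    Sketch of the diagnostic neutral-loss filter, not the method's code."""
    delta = precursor_mz - fragment_mz
    return abs(delta - OH_LOSS) / precursor_mz * 1e6 <= tol_ppm
```

    In a data-dependent acquisition, a positive match on an MS2 fragment would trigger the MS3 scan that classifies the carbonyl.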

  12. Sensory descriptive quantitative analysis of unpasteurized and pasteurized juçara pulp (Euterpe edulis) during long-term storage

    PubMed Central

    da Silva, Paula Porrelli Moreira; Casemiro, Renata Cristina; Zillo, Rafaela Rebessi; de Camargo, Adriano Costa; Prospero, Evanilda Teresinha Perissinotto; Spoto, Marta Helena Fillet

    2014-01-01

    This study evaluated the effect of pasteurization followed by storage under different conditions on the sensory attributes of frozen juçara pulp using quantitative descriptive analysis (QDA). Pasteurization of the packed frozen pulp was performed by immersion in a stainless steel tank containing water (80°C) for 5 min, followed by storage under refrigerated or frozen conditions. A trained sensory panel evaluated the samples (6°C) on days 1, 15, 30, 45, 60, 75, and 90. Sensory attributes were grouped as follows: appearance (foamy, heterogeneous, purple, brown, oily, and creamy), aroma (sweet and fermented), taste (astringent, bitter, and sweet), and texture (oily and consistent), and compared to a reference material. In general, unpasteurized frozen pulp showed the highest score for foamy appearance, and pasteurized samples showed the highest scores for creamy appearance. Pasteurized samples remained stable with regard to brown color development, while their unpasteurized counterparts showed an increase; color is an important attribute related to product identity. All attributes related to taste and texture remained constant during storage for all samples. Pasteurization followed by storage under frozen conditions was shown to be the best conservation method, as samples submitted to this process received the best sensory evaluation, being described as foamy, slightly heterogeneous, slightly bitter, and slightly astringent. PMID:25473489

  13. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real-time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
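The Mt/N ratio discussed above is commonly computed from qPCR threshold cycles with the 2^ΔCt method. A minimal sketch follows; the Ct values are illustrative, and it assumes ~100% amplification efficiency for both amplicons, which is exactly the kind of assumption the authors caution is violated when pseudogene co-amplification or dilution bias is present:

```python
def mt_n_ratio(ct_nuclear, ct_mito):
    """Relative mtDNA copy number per nuclear genome by the 2^dCt method.

    Assumes ~100% amplification efficiency for both amplicons and primers
    that do not co-amplify nuclear pseudogenes. The mitochondrial target
    crosses threshold earlier (lower Ct) because it is present in many
    copies per cell.
    """
    return 2.0 ** (ct_nuclear - ct_mito)

# Illustrative Ct values (not from the paper): mitochondrial target at
# cycle 15, single-copy nuclear target at cycle 25.
ratio = mt_n_ratio(ct_nuclear=25.0, ct_mito=15.0)  # -> 1024.0 copies/genome
```

Any systematic bias in either Ct (e.g. from pseudogene co-amplification) propagates exponentially into the ratio, which is why the paper's unique-region primers and template pretreatment matter.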

  14. Accurate, quantitative assays for the hydrolysis of soluble type I, II, and III ³H-acetylated collagens by bacterial and tissue collagenases

    SciTech Connect

    Mallya, S.K.; Mookhtiar, K.A.; Van Wart, H.E.

    1986-11-01

    Accurate and quantitative assays for the hydrolysis of soluble ³H-acetylated rat tendon type I, bovine cartilage type II, and human amnion type III collagens by both bacterial and tissue collagenases have been developed. The assays are carried out at any temperature in the 1-30°C range in a single reaction tube, and the progress of the reaction is monitored by withdrawing aliquots as a function of time, quenching with 1,10-phenanthroline, and quantitation of the concentration of hydrolysis fragments. The latter is achieved by selective denaturation of these fragments by incubation under conditions described in the previous paper of this issue. The assays give percentages of hydrolysis of all three collagen types by neutrophil collagenase that agree well with the results of gel electrophoresis experiments. The initial rates of hydrolysis of all three collagens are proportional to the concentration of both neutrophil and Clostridial collagenases over a 10-fold range of enzyme concentrations. All three assays can be carried out at collagen concentrations that range from 0.06 to 2 mg/ml and give linear double-reciprocal plots for both tissue and bacterial collagenases that can be used to evaluate the kinetic parameters Km and kcat or Vmax. The assay developed for the hydrolysis of rat type I collagen by neutrophil collagenase is shown to be more sensitive by at least one order of magnitude than comparable assays that use rat type I collagen fibrils or gels as substrate.

  15. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been known for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be a highly sensitive and accurate method capable of detection and quantitation of trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/µL (equivalently, ng/mL), and the calibration curve shows good linearity (r² = 0.9974).
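A calibration-curve linearity statistic such as the r² = 0.9974 reported above is the coefficient of determination of an ordinary least-squares fit of instrument response against standard concentration. A self-contained sketch with invented standards (not the paper's data):

```python
def linear_fit_r2(x, y):
    """Least-squares slope, intercept and coefficient of determination r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical standards: concentration (ng/mL) vs. peak-area response.
conc = [1.0, 2.0, 5.0, 10.0, 20.0]
area = [10.2, 19.8, 50.5, 99.0, 201.0]
slope, intercept, r2 = linear_fit_r2(conc, area)
```

An r² close to 1 over the working range is what justifies quantitation by interpolation on the fitted line.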

  16. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    PubMed Central

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires testing for hot-spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild-type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele-specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases, and all had a mutated/wild-type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific, with greater accuracy and higher positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective, and can easily be adapted to detect hot-spot mutations in other oncogenes. PMID:22558339

  17. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    SciTech Connect

    Allan, M.E.; Wilson, M.L.; Wightman, J. )

    1996-01-01

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections, and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data, as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.

  18. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    SciTech Connect

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-12-31

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity & permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic & petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end-product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.
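Geostatistical descriptions of the kind summarized above typically start from the empirical semivariogram, γ(h) = (1 / 2N(h)) Σ (z_i − z_j)² over the N(h) sample pairs separated by lag h, estimated both vertically along wellbores and laterally between wells. A minimal sketch with invented porosity values (not Elk Hills data):

```python
def empirical_semivariogram(values, positions, lag, tol=0.5):
    """gamma(h) = (1 / (2 N(h))) * sum of (z_i - z_j)^2 over the N(h)
    sample pairs whose separation is within `tol` of `lag`."""
    sq_diffs = [
        (zi - zj) ** 2
        for i, (zi, pi) in enumerate(zip(values, positions))
        for zj, pj in zip(values[i + 1:], positions[i + 1:])
        if abs(abs(pi - pj) - lag) <= tol
    ]
    return sum(sq_diffs) / (2 * len(sq_diffs)) if sq_diffs else None

# Invented porosity log (fractions) sampled every metre down one wellbore.
porosity = [0.12, 0.14, 0.11, 0.18, 0.17, 0.10, 0.13, 0.16]
depth_m = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
gamma_1m = empirical_semivariogram(porosity, depth_m, lag=1.0)
```

Repeating this for several lags and fitting a model variogram is what feeds the probabilistic (stochastic simulation) step the abstract describes.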

  19. A mathematical recursive model for accurate description of the phase behavior in the near-critical region by Generalized van der Waals Equation

    NASA Astrophysics Data System (ADS)

    Kim, Jibeom; Jeon, Joonhyeon

    2015-01-01

    Recently, related studies on the Equation Of State (EOS) have reported that the generalized van der Waals (GvdW) EOS gives poor representations in the near-critical region for non-polar and non-spherical molecules. Hence, there still remains the problem of choosing GvdW parameters to minimize the loss of accuracy in describing saturated vapor densities, and vice versa. This paper describes a recursive model GvdW (rGvdW) for an accurate representation of pure fluid materials in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied to two groups of pure molecules: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. The calculating model of the EOS through this approach gives additional insight into the physical significance of accurate prediction of pressure in the near-critical region.
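As background to GvdW-type models, the classical van der Waals EOS fixes its a and b parameters from the critical constants and then reproduces the critical pressure exactly at (V = 3b, T = Tc), a useful sanity check for any implementation. A short sketch (the CO2 critical constants are approximate literature values, not data from the paper):

```python
R = 8.314462618  # universal gas constant, J / (mol K)

def vdw_pressure(v_molar, temp, t_crit, p_crit):
    """Classical van der Waals pressure P = RT/(V - b) - a/V^2, with the
    parameters fixed by the critical point:
    a = 27 R^2 Tc^2 / (64 Pc),  b = R Tc / (8 Pc)."""
    a = 27.0 * R ** 2 * t_crit ** 2 / (64.0 * p_crit)
    b = R * t_crit / (8.0 * p_crit)
    return R * temp / (v_molar - b) - a / v_molar ** 2

# Approximate CO2 critical constants: Tc = 304.13 K, Pc = 7.3773 MPa.
t_crit, p_crit = 304.13, 7.3773e6
b = R * t_crit / (8.0 * p_crit)
# At the critical point (V = 3b, T = Tc) the classical vdW EOS returns Pc.
p_at_critical = vdw_pressure(3.0 * b, t_crit, t_crit, p_crit)
```

It is precisely the behaviour of such cubic-EOS forms *near* this critical point that rGvdW aims to improve.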

  20. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background: Normalization to reference genes, or housekeeping genes, makes results from reverse transcription real-time quantitative polymerase chain reaction (qPCR) more accurate and reliable. Recent studies have shown that no single housekeeping gene is universal for all experiments; thus, identifying suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results: Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development, and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion: We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene expression in cotton.
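geNorm, used above to rank candidates, scores each gene with a stability measure M: the average, over all other candidate genes, of the standard deviation across samples of the pairwise log2 expression ratio (lower M = more stable). A minimal sketch with invented expression values, not the cotton data:

```python
from math import log2
from statistics import stdev

def genorm_m(expr, gene):
    """geNorm stability measure M for `gene`.

    M is the mean, over every other candidate gene, of the standard
    deviation across samples of the pairwise log2 expression ratio.
    Lower M means more stable expression. `expr` maps gene name -> list
    of relative expression values (same sample order for all genes).
    """
    others = [g for g in expr if g != gene]
    return sum(
        stdev(log2(a / b) for a, b in zip(expr[gene], expr[g]))
        for g in others
    ) / len(others)

# Invented relative-expression data over four samples. geneA and geneB
# co-vary perfectly (constant ratio), so their pairwise SD is zero and
# both score a lower (better) M than the non-covarying geneC.
expr = {
    "geneA": [1.0, 2.0, 4.0, 8.0],
    "geneB": [2.0, 4.0, 8.0, 16.0],
    "geneC": [1.0, 1.0, 1.0, 1.0],
}
m_values = {g: genorm_m(expr, g) for g in expr}
```

Note the deliberate logic of M: a reference gene is judged by how constant its ratio to other candidates stays, not by how flat its raw signal looks.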

  1. Toward an accurate description of solid-state properties of superheavy elements. A case study for the element Og (Z=118)

    NASA Astrophysics Data System (ADS)

    Schwerdtfeger, Peter

    2016-12-01

    In the last two decades, cold and hot fusion experiments have led to the production of new elements for the Periodic Table up to nuclear charge 118. Recent developments in relativistic quantum theory have made it possible to obtain accurate electronic properties for the trans-actinide elements with the aim of predicting their potential chemical and physical behaviour. Here we report on first results of solid-state calculations for Og (element 118) to support future atom-at-a-time gas-phase adsorption experiments on surfaces such as gold or quartz.

  2. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, whose optimization is usually uncertain, noisy, or even discontinuous, is calculated by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results almost identically over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.

  3. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s⁻¹ were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely the GA-SVR. A prominent characteristic of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across repeated runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of a mathematical regression model, an artificial neural network (ANN), and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models rank in ascending order as follows: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research areas where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
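The GA half of GA-SVR searches the SVR hyperparameter space by selection, crossover, and mutation. Below is a minimal, self-contained real-valued genetic-algorithm sketch in which a toy quadratic objective (minimum at (3, -2)) stands in for the SVR cross-validation error; the population size, rates, and objective are illustrative assumptions, not values from the paper:

```python
import random

def fitness(params):
    # Toy objective standing in for SVR cross-validation error;
    # its minimum (0.0) is at params = (3, -2).
    return (params[0] - 3.0) ** 2 + (params[1] + 2.0) ** 2

def evolve(pop_size=40, generations=60, bounds=(-10.0, 10.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)
        elite = ranked[: pop_size // 4]            # truncation selection
        children = list(elite)                     # elitism: keep the best
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # crossover
            if rng.random() < 0.3:                          # mutation
                j = rng.randrange(len(child))
                child[j] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = children
    return min(pop, key=fitness)

best = evolve()  # converges near (3, -2) for this toy objective
```

In GA-SVR proper, `fitness` would instead train an SVR with the candidate hyperparameters and return its cross-validation error; elitism is what gives the run-to-run stability the abstract emphasizes.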

  4. Quantitative description of the properties of extended defects in silicon by means of electron- and laser-beam-induced currents

    SciTech Connect

    Shabelnikova, Ya. L.; Yakimov, E. B.; Nikolaev, D. P.; Chukalina, M. V.

    2015-06-15

    A solar cell on a wafer of multicrystalline silicon containing grain boundaries was studied by the induced-current method. The sample was scanned by an electron beam and by a laser beam at two wavelengths (980 and 635 nm). The recorded induced-current maps were aligned by means of specially developed code, which made it possible to analyze the same part of the grain boundary across the three types of measurements. Minimizing the residual between simulated induced-current profiles and those obtained experimentally yielded quantitative estimates of the characteristics of the sample and its defects: the diffusion length of minority carriers and the recombination velocity at the grain boundary.

  5. Formulating the bonding contribution equation in heterogeneous catalysis: a quantitative description between the surface structure and adsorption energy.

    PubMed

    Wang, Ziyun; Hu, P

    2017-02-15

    The relation between the surface structure and adsorption energy of adsorbates is of great importance in heterogeneous catalysis. Based on density functional theory calculations, we propose an explicit equation with three chemically meaningful terms, namely the bonding contribution equation, to quantitatively account for the surface structures and the adsorption energies. Successful predictions of oxygen adsorption energies on complex alloy surfaces containing up to 4 components are demonstrated, and the generality of this equation is also tested using different surface sizes and other adsorbates. This work may not only offer a powerful tool to understand the structure-adsorption relation, but may also be used to inversely design novel catalysts.

  6. Quantitation of Compounds in Wine Using (1)H NMR Spectroscopy: Description of the Method and Collaborative Study.

    PubMed

    Godelmann, Rolf; Kost, Christian; Patz, Claus-Dieter; Ristow, Reinhard; Wachter, Helmut

    2016-09-01

    To examine whether NMR analysis is a suitable method for the quantitative determination of wine components, an international collaborative trial was organized to evaluate the method according to the international regulations and guidelines of the German Institute for Standardization/International Organization for Standardization, AOAC INTERNATIONAL, the International Union of Pure and Applied Chemistry, and the International Organization of Vine and Wine. Sugars such as glucose; acids such as malic, acetic, fumaric, and shikimic acids (the latter two as minor components); and sorbic acid, a preservative, were selected for the exemplary quantitative determination of substances in wine. Selection criteria for the sample material included different NMR spectral signal types (singlet and multiplet) as well as the suitability of the proposed substances for manual integration at different levels of difficulty (e.g., interference as a result of the necessary suppression of a water signal, or coverage of the concentration ranges typical in wine for a selection of major components, minor components, and additives). To show that the method can be universally applied, the NMR measurement and evaluation procedures were deliberately not prescribed in detail. Fifteen international laboratories participated in the collaborative trial and determined six parameters in 10 samples. The values, in particular the reproducibility SD (SR), were compared with the expected Horwitz SD (SH) by forming the quotient SR/SH (i.e., the HorRat value). The resulting HorRat values for most parameters were predominantly between 0.6 and 1.5 and thus within an acceptable range.
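The HorRat check above divides the observed reproducibility SD (or, equivalently, RSD) by the Horwitz prediction; in RSD terms the Horwitz curve is PRSD(%) = 2^(1 − 0.5·log10 C), with C the dimensionless mass fraction. A small sketch (the example concentration and RSD are invented, not trial values):

```python
from math import log10

def horwitz_prsd(mass_fraction):
    """Horwitz-predicted reproducibility RSD in percent:
    PRSD = 2^(1 - 0.5 * log10(C)), with C a dimensionless mass fraction."""
    return 2.0 ** (1.0 - 0.5 * log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HorRat = observed reproducibility RSD / Horwitz-predicted RSD;
    values of roughly 0.5-2 are conventionally taken as acceptable."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# Invented example: an analyte at ~0.1% mass fraction measured across
# laboratories with a 3% reproducibility RSD.
hr = horrat(3.0, 1e-3)
```

The curve predicts 2% RSD for a pure substance (C = 1) and 16% at 1 ppm, which is why HorRat rather than raw RSD is the between-laboratory acceptance criterion.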

  7. A quantitative description of equilibrium and homeostatic thickness regulation in the in vivo cornea. II. Variations from the normal state.

    PubMed

    Friedman, M H

    1972-06-01

    The description of corneal mechanics and transport developed in part I and used there to describe normal corneal behavior is here applied to corneas whose properties or boundary conditions are abnormal. The predicted effects of changing intraocular pressure, aqueous concentration, and tear tonicity are examined, and these compare favorably with available experimental data. The periodic variation in tear tonicity which accompanies the sleep-wake cycle prevents the cornea from achieving a true steady state, but a time-average steady state, about which corneal behavior oscillates, can be defined. The in vivo effects of endothelial dystrophy and epithelial removal are explained, and it is suggested that the epithelial sodium pump may act homeostatically to maintain corneal thickness in the face of ambient temperature variations. Part II concludes with a discussion, from the standpoint of the present theory, of the role of metabolically coupled water transport in the maintenance of the normal corneal thickness.

  8. Quantitative description of induced seismic activity before and after the 2011 Tohoku-Oki earthquake by nonstationary ETAS models

    NASA Astrophysics Data System (ADS)

    Kumazawa, Takao; Ogata, Yosihiko

    2013-12-01

    The epidemic-type aftershock sequence (ETAS) model is extended for application to nonstationary seismic activity, including transient swarm activity or seismicity anomalies, in a seismogenic region. The time-dependent rates of both background seismicity and aftershock productivity in the ETAS model are optimally estimated from hypocenter data. These rates can provide quantitative evidence for abrupt or gradual changes in shear stress and/or fault strength due to aseismic transient causes such as triggering by remote earthquakes, slow slips, or fluid intrusions within the region. This extended model is applied to data sets from several seismic events including swarms that were induced by the M9.0 Tohoku-Oki earthquake of 2011.
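The nonstationary extension described above builds on the standard stationary ETAS conditional intensity, λ(t) = μ + Σ_{t_i<t} K·exp(α(M_i − M_c))/(t − t_i + c)^p, in which the extension lets μ and K vary in time. A minimal sketch of that stationary baseline (all parameter values are illustrative, not estimates from the paper):

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m_c):
    """Stationary ETAS conditional intensity at time t (events per day).

    events: list of (t_i, M_i) pairs. Each past event contributes an
    Omori-Utsu aftershock term K * exp(alpha * (M_i - m_c)) / (t - t_i + c)**p
    on top of the constant background rate mu.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return rate

# Illustrative catalog: one magnitude-6.0 event at t = 0 days, cutoff m_c = 3.
events = [(0.0, 6.0)]
rate_day1 = etas_intensity(1.0, events, mu=0.1, K=0.05, alpha=1.5,
                           c=0.01, p=1.1, m_c=3.0)
# One day after the event the rate is still far above the background mu,
# decaying back toward it as (t - t_i)^(-p).
```

In the nonstationary model, deviations of the fitted time-dependent μ(t) and K(t) from such a baseline are what flag aseismic triggering such as slow slip or fluid intrusion.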

  9. A quantitative description of the extension and retraction of surface protrusions in spreading 3T3 mouse fibroblasts.

    PubMed

    Albrecht-Buehler, G; Lancaster, R M

    1976-11-01

    We suggest a method of quantitating the motile actions of surface protrusions in spreading animal cells in culture. Its basis is the determination of the percentage of freshly plated cells that produce particle-free areas around themselves on a gold particle-coated glass cover slip within 50 min. Studying 3T3 cells with this assay, we found that the presence of Na+, K+, Cl-, and Mg++ or Ca++ in a neutral or slightly alkaline phosphate- or bicarbonate-buffered solution is sufficient to support optimal particle removal by the cells for at least 50 min. Two metabolic inhibitors, 2,4-dinitrophenol and Na-azide, inhibit the particle removal. If D-glucose is added along with the inhibitors, particle removal can be restored, whereas the addition of three glucose analogues which are generally believed to be nonmetabolizable cannot restore the activity. Serum is not required for the mechanism(s) of the motile actions of surface protrusions in spreading 3T3 cells. However, it contains components which can neutralize the inhibitory actions of bovine serum albumin and several amino acids, particularly L-cystine or L-cysteine and L-methionine. Furthermore, serum codetermines which of the major surface extensions (filopodia, lamellipodia, or lobopodia) is predominantly active. We found three distinct classes of extracellular conditions under which the active surface projections are predominantly either lamellipodia (sheetlike projections), lobopodia (blebs), or filopodia (microspikes). The quantitated dependencies on temperature and pH, and the inhibition by cytochalasin B of the particle removal, are very similar in all three cases. Preventing the cells from anchoring themselves for 15-20 min before plating in serum-free medium seems to stimulate particle removal threefold.

  10. Fathers' feelings related to their partners' childbirth and views on their presence during labour and childbirth: A descriptive quantitative study.

    PubMed

    He, Hong-Gu; Vehviläinen-Julkunen, Katri; Qian, Xiao-Fang; Sapountzi-Krepia, Despina; Gong, Yuhua; Wang, Wenru

    2015-05-01

    This study examined Chinese fathers' feelings about their partners' delivery and their views on their presence during labour and birth. A questionnaire survey was conducted with 403 fathers whose partners gave birth in one provincial hospital in China. Data were analysed by descriptive statistics, the χ²-test, and content analysis. The results indicated that more than 80% of fathers experienced feelings of pride related to fatherhood and of love towards their partners and newborns. Significant differences in fathers' feelings were found between subgroups with regard to age, education, employment, presence in the delivery room, method of birth, and whether preparatory visits had been made to the hospital. The majority of those who answered an open-ended question on the meaning of fathers' presence in the delivery room held a positive attitude towards fathers' presence at labour and birth, as their presence could empower their partners and provide psychological support. This study indicates that fathers' presence at labour and birth is important and that younger fathers need more support. It also provides evidence for clinical practice and future interventions to improve fathers' psychological health and experiences.

  11. Quantitative description of the lie-to-sit-to-stand-to-walk transfer by a single body-fixed sensor.

    PubMed

    Bagalà, Fabio; Klenk, Jochen; Cappello, Angelo; Chiari, Lorenzo; Becker, Clemens; Lindemann, Ulrich

    2013-07-01

    Sufficient capacity and quality of performance of complex movement patterns during daily activity, such as standing up from a bed, is a prerequisite for independent living and may also be an indicator of fall risk. Until now, the lie-to-sit-to-stand-to-walk (LSSW) transfer has been investigated by functional testing, subjective rating, or activity classification of subtasks. The aim of this study was to use a single body-fixed inertial sensor to describe the complex movement of the LSSW transfer. Fifteen older patients of a geriatric rehabilitation clinic (median age 81 years) and ten young, healthy persons (median age 37 years) were instructed to stand up from a bed in a continuous movement and to start walking. Data acquisition was performed using an inertial measurement unit worn on the lower back. Parameters extracted from the sensor outputs were able to classify the subjects into the correct group with sensitivity and specificity between 90% and 100%. ICC(3,1) values of the descriptive parameters ranged between 0.85 and 0.95 in the cohort of older patients. The different strategies adopted to transfer from lying to standing up were estimated with an extended Kalman filter. The results obtained in this study support the usability of the instrumented LSSW test in clinical settings.
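The sensitivity and specificity quoted above follow directly from the classifier's confusion counts over the two groups; a minimal sketch with invented counts (not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented confusion counts for a two-group classification (e.g. patients
# vs. healthy controls): one patient misclassified, no controls misclassified.
sensitivity, specificity = sens_spec(tp=14, fn=1, tn=10, fp=0)
```

With cohorts this small, a single misclassification moves sensitivity by several percentage points, which is worth keeping in mind when reading the 90-100% range reported above.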

  12. Differential label-free quantitative proteomic analysis of Shewanella oneidensis cultured under aerobic and suboxic conditions by accurate mass and time tag approach.

    PubMed

    Fang, Ruihua; Elias, Dwayne A; Monroe, Matthew E; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D; Callister, Stephen J; Moore, Ronald J; Gorby, Yuri A; Adkins, Joshua N; Fredrickson, Jim K; Lipton, Mary S; Smith, Richard D

    2006-04-01

    We describe the application of LC-MS without the use of stable isotope labeling for differential quantitative proteomic analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and suboxic conditions. LC-MS/MS was used to initially identify peptide sequences, and LC-FTICR was used to confirm these identifications as well as measure relative peptide abundances. 2343 peptides covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly using statistical approaches such as statistical analysis of microarrays, whereas another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis was transitioned from aerobic to suboxic conditions.

  13. Differential Label-free Quantitative Proteomic Analysis of Shewanella oneidensis Cultured under Aerobic and Suboxic Conditions by Accurate Mass and Time Tag Approach

    SciTech Connect

    Fang, Ruihua; Elias, Dwayne A.; Monroe, Matthew E.; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D.; Callister, Stephen J.; Moore, Ronald J.; Gorby, Yuri A.; Adkins, Joshua N.; Fredrickson, Jim K.; Lipton, Mary S.; Smith, Richard D.

    2006-04-01

    We describe the application of liquid chromatography coupled to mass spectrometry (LC/MS) without the use of stable isotope labeling for differential quantitative proteomic analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and sub-oxic conditions. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used to initially identify peptide sequences, and LC coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR) was used to confirm these identifications, as well as measure relative peptide abundances. 2343 peptides covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly using statistical approaches such as SAM, while another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis was transitioned from aerobic to sub-oxic conditions.

  14. Quantitative description of RF power-based ratiometric chemical exchange saturation transfer (CEST) pH imaging

    PubMed Central

    Wu, Renhua; Longo, Dario Livio; Aime, Silvio; Sun, Phillip Zhe

    2015-01-01

    Chemical exchange saturation transfer (CEST) MRI holds great promise for imaging pH. However, routine CEST measurement varies not only with pH-dependent chemical exchange rate but also with CEST agent concentration, providing pH-weighted information. Conventional ratiometric CEST imaging normalizes the confounding concentration factor by analyzing the relative CEST effect from different exchangeable groups, requiring CEST agents with multiple chemically distinguishable labile proton sites. Recently, an RF power-based ratiometric CEST MRI approach has been developed for concentration-independent pH MRI using CEST agents with a single exchangeable group. To facilitate quantification and optimization of the new ratiometric analysis, we quantified the RF power-based ratiometric CEST ratio (rCESTR) and derived its signal-to-noise and contrast-to-noise ratios. Using creatine as a representative CEST agent containing a single exchangeable site, our study demonstrated that optimized RF power-based ratiometric analysis provides good pH sensitivity. We showed that rCESTR follows a base-catalyzed exchange relationship with pH, independent of creatine concentration. The pH accuracy of RF power-based ratiometric MRI was within 0.15–0.20 pH units. Furthermore, the absolute exchange rate can be obtained from the proposed ratiometric analysis. To summarize, RF power-based ratiometric CEST analysis provides concentration-independent pH-sensitive imaging and complements conventional ratiometric CEST analysis based on multiple labile proton groups. PMID:25807919
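
    The base-catalyzed relationship mentioned above implies that pH can be recovered from a concentration-independent estimate of the exchange rate. A sketch of that inversion follows; the rate constant and pKw are illustrative assumptions, not values from the paper:

```python
import math

PKW = 14.0   # approximate water ion product exponent (assumption)
KB = 1.0e9   # illustrative base-catalyzed rate constant, s^-1 (assumption)

def exchange_rate(ph):
    """Base-catalyzed exchange: k_sw = k_b * 10**(pH - pKw)."""
    return KB * 10 ** (ph - PKW)

def ph_from_rate(k_sw):
    """Invert the relationship to recover pH from a measured exchange rate."""
    return PKW + math.log10(k_sw / KB)

k = exchange_rate(6.8)
print(round(ph_from_rate(k), 2))  # 6.8
```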

  15. Colostrum protein uptake in neonatal lambs examined by descriptive and quantitative liquid chromatography-tandem mass spectrometry.

    PubMed

    Hernández-Castellano, Lorenzo E; Argüello, Anastasio; Almeida, André M; Castro, Noemí; Bendixen, Emøke

    2015-01-01

    Colostrum intake is a key factor for newborn ruminant survival because the placenta does not allow the transfer of immune components. Therefore, newborn ruminants depend entirely on passive immunity transfer from the mother to the neonate, through the suckling of colostrum. Understanding the importance of specific colostrum proteins has gained significant attention in recent years. However, proteomic studies of sheep colostrum proteins and their uptake in neonatal lambs have not yet been presented. The aim of this study was to describe the proteomes of sheep colostrum and lamb blood plasma, using sodium dodecyl sulfate-PAGE for protein separation and in-gel digestion, followed by liquid chromatography-tandem mass spectrometry of resulting tryptic peptides for protein identification. An isobaric tag for relative and absolute quantitation (iTRAQ)-based proteomics approach was subsequently used to provide relative quantification of how neonatal plasma protein concentrations change as an effect of colostrum intake. The results of this study describe the presence of 70 proteins in the ovine colostrum proteome. Furthermore, colostrum intake resulted in an increase of 8 proteins with important immune functions in the blood plasma of lambs. Further proteomic studies will be necessary, particularly using the selected reaction monitoring approach, to describe in detail the role of specific colostrum proteins in immune transfer to the neonate.

  16. Novel Structural Parameters of Ig–Ag Complexes Yield a Quantitative Description of Interaction Specificity and Binding Affinity

    PubMed Central

    Marillet, Simon; Lefranc, Marie-Paule; Boudinot, Pierre; Cazals, Frédéric

    2017-01-01

    Antibody–antigen complexes challenge our understanding, as analyses to date have failed to unveil the key determinants of binding affinity and interaction specificity. We partially fill this gap based on novel quantitative analyses using two standardized databases, the IMGT/3Dstructure-DB and the structure affinity benchmark. First, we introduce a statistical analysis of interfaces which enables the classification of ligand types (protein, peptide, and chemical; cross-validated classification error of 9.6%) and yields binding affinity predictions of unprecedented accuracy (median absolute error of 0.878 kcal/mol). Second, we exploit the contributions made by CDRs in terms of position at the interface and atomic packing properties to show that in general, VH CDR3 and VL CDR3 make dominant contributions to the binding affinity, a fact also shown to be consistent with the enthalpy–entropy compensation associated with preconfiguration of CDR3. Our work suggests that the affinity prediction problem could be partially solved from databases of high resolution crystal structures of complexes with known affinity. PMID:28232828

  17. Qualitative and quantitative descriptions of temperature: a study of the terminology used by local television weather forecasters to describe thermal sensation.

    PubMed

    Brunskill, Jeffrey C

    2010-03-01

    This paper presents a study of the relationship between quantitative and qualitative descriptions of temperature. Online weather forecast narratives produced by local television forecasters were collected from affiliates in 23 cities throughout the northeastern, central and southern portions of the United States from August 2007 to July 2008. The narratives were collected to study the terminology and reference frames that local forecasters use to describe predicted temperatures for the following day. The main objectives were to explore the adjectives used to describe thermal conditions and the impact that geographical and seasonal variations in thermal conditions have on these descriptions. The results of this empirical study offer some insights into the structure of weather narratives and suggest that spatiotemporal variations in the weather impact how forecasters describe the temperature to their local audiences. In a broader sense, this investigation builds upon research in biometeorology, urban planning and linguistics that has explored the physiological and psychological factors that influence subjective assessments of thermal sensation and comfort. The results of this study provide a basis to reason about how thermal comfort is conveyed in meteorological communications and how experiential knowledge derived from daily observations of the weather influences how we think about and discuss the weather.

  18. Qualitative and quantitative descriptions of temperature: a study of the terminology used by local television weather forecasters to describe thermal sensation

    NASA Astrophysics Data System (ADS)

    Brunskill, Jeffrey C.

    2010-03-01

    This paper presents a study of the relationship between quantitative and qualitative descriptions of temperature. Online weather forecast narratives produced by local television forecasters were collected from affiliates in 23 cities throughout the northeastern, central and southern portions of the United States from August 2007 to July 2008. The narratives were collected to study the terminology and reference frames that local forecasters use to describe predicted temperatures for the following day. The main objectives were to explore the adjectives used to describe thermal conditions and the impact that geographical and seasonal variations in thermal conditions have on these descriptions. The results of this empirical study offer some insights into the structure of weather narratives and suggest that spatiotemporal variations in the weather impact how forecasters describe the temperature to their local audiences. In a broader sense, this investigation builds upon research in biometeorology, urban planning and linguistics that has explored the physiological and psychological factors that influence subjective assessments of thermal sensation and comfort. The results of this study provide a basis to reason about how thermal comfort is conveyed in meteorological communications and how experiential knowledge derived from daily observations of the weather influences how we think about and discuss the weather.

  19. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance to enable genetic engineering approaches aimed at improving strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and osmotic stresses and provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts.
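
    The comparative delta-Ct method that RefFinder integrates ranks a candidate by the mean standard deviation of its per-sample ΔCt against every other candidate (lower = more stable). A sketch with hypothetical Ct values (the gene names echo the abstract, but the numbers are invented for illustration):

```python
import statistics as st
from itertools import combinations

def delta_ct_stability(ct):
    """ct: {gene: [Ct per sample]}. For each gene pair, compute the SD of
    their per-sample delta-Ct; a gene's stability score is its mean SD over
    all pairs, as in the comparative delta-Ct method."""
    sd = {g: [] for g in ct}
    for a, b in combinations(ct, 2):
        s = st.stdev(x - y for x, y in zip(ct[a], ct[b]))
        sd[a].append(s)
        sd[b].append(s)
    return {g: sum(v) / len(v) for g, v in sd.items()}

ct = {  # hypothetical Ct values across four samples
    "DBP":    [20.1, 20.3, 20.2, 20.4],
    "GAPDH":  [18.0, 19.5, 17.2, 21.0],
    "HISTH4": [22.0, 22.1, 22.3, 22.2],
}
rank = delta_ct_stability(ct)
print(min(rank, key=rank.get))  # DBP (the most stable candidate here)
```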

  20. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real-time quantitative PCR (RT-qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes were sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, suggesting the importance of validating the expression stability of reference genes in the samples of interest. Rankings of stability were basically similar between geNorm and NormFinder, suggesting the usefulness of these programs based on different algorithms. The ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin, though the overall results were in good correlation with geNorm or NormFinder. Expression of the two cold-inducible genes PpCBF2 and PpCBF4 was quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is a reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of gene expression stability by the ΔCt method, followed by evaluation of a limited number of superior candidates by geNorm or NormFinder, is a practical way of identifying suitable reference genes.
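
    Normalizing against the geometric mean of several reference genes, as geNorm recommends, can be sketched as follows (the Ct values are hypothetical, and the exponential 2^-Ct form assumes ideal amplification efficiency):

```python
from math import prod

def relative_expression(ct_target, ct_refs):
    """Normalize a target gene's relative quantity (2**-Ct) against the
    geometric mean of the relative quantities of several reference genes."""
    nf = prod(2 ** -c for c in ct_refs) ** (1 / len(ct_refs))
    return (2 ** -ct_target) / nf

# Hypothetical Ct values: one target gene, three reference genes.
x = relative_expression(24.0, [20.0, 21.0, 19.0])
print(x)  # 0.0625  (= 2**-24 / 2**-20 = 2**-4)
```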

  1. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; Zeuch, M. A.; Pollock, C. J.; Jacques, A. D.

    2015-01-01

    We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.

  2. A quantitative description of the ground-state wave function of Cu(A) by X-ray absorption spectroscopy: comparison to plastocyanin and relevance to electron transfer.

    PubMed

    DeBeer George, S; Metz, M; Szilagyi, R K; Wang, H; Cramer, S P; Lu, Y; Tolman, W B; Hedman, B; Hodgson, K O; Solomon, E I

    2001-06-20

    To evaluate the importance of the electronic structure of Cu(A) to its electron-transfer (ET) function, a quantitative description of the ground-state wave function of the mixed-valence (MV) binuclear Cu(A) center engineered into Pseudomonas aeruginosa azurin has been developed, using a combination of S K-edge and Cu L-edge X-ray absorption spectroscopies (XAS). Parallel descriptions have been developed for a binuclear thiolate-bridged MV reference model complex ([(L(i)(PrdacoS)Cu)(2)](+)) and a homovalent (II,II) analogue ([L(i)(Pr2tacnS)Cu)(2)](2+), where L(i)(PrdacoS) and L(i)(Pr2tacnS) are macrocyclic ligands with attached thiolates that bridge the Cu ions. Previous studies have qualitatively defined the ground-state wave function of Cu(A) in terms of ligand field effects on the orbital orientation and the presence of a metal--metal bond. The studies presented here provide further evidence for a direct Cu--Cu interaction and, importantly, experimentally quantify the covalency of the ground-state wave function. The experimental results are further supported by DFT calculations. The nature of the ground-state wave function of Cu(A) is compared to that of the well-defined blue copper site in plastocyanin, and the importance of this wave function to the lower reorganization energy and ET function of Cu(A) is discussed. This wave function incorporates anisotropic covalency into the intra- and intermolecular ET pathways in cytochrome c oxidase. Thus, the high covalency of the Cys--Cu bond allows a path through this ligand to become competitive with a shorter His path in the intramolecular ET from Cu(A) to heme a and is particularly important for activating the intermolecular ET path from heme c to Cu(A).

  3. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
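
    The principle behind calibration-free RID quantification is that the detector response is proportional to concentration times dn/dc, so a universal dn/dc lets concentration be computed directly. A minimal sketch; the instrument constant and numbers are illustrative assumptions, not values from the paper:

```python
def polysaccharide_conc(rid_area, dndc, k_instrument):
    """Concentration from RID peak area under the linear model
    RID response = k * (dn/dc) * c, i.e. c = area / (k * dn/dc).
    k_instrument is a hypothetical detector calibration constant."""
    return rid_area / (k_instrument * dndc)

# Illustrative numbers only:
c = polysaccharide_conc(rid_area=2.9e-4, dndc=0.145, k_instrument=2.0)
print(c)  # 0.001 (e.g. mg/mL in these made-up units)
```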

  4. TopCAT and PySESA: Open-source software tools for point cloud decimation, roughness analyses, and quantitative description of terrestrial surfaces

    NASA Astrophysics Data System (ADS)

    Hensleigh, J.; Buscombe, D.; Wheaton, J. M.; Brasington, J.; Welcker, C. W.; Anderson, K.

    2015-12-01

    The increasing use of high-resolution topography (HRT) constructed from point clouds obtained from technology such as LiDAR, SoNAR, SAR, SfM and a variety of range-imaging techniques has created a demand for custom analytical tools and software for point cloud decimation (data thinning and gridding) and spatially explicit statistical analysis of terrestrial surfaces. We present a number of analytical and computational tools designed to quantify surface roughness and texture directly from point clouds in a variety of ways (using spatial- and frequency-domain statistics). TopCAT (Topographic Point Cloud Analysis Toolkit; Brasington et al., 2012) and PySESA (Python program for Spatially Explicit Spectral Analysis) both work by applying a small moving window to (x,y,z) data to calculate a suite of (spatial and spectral domain) statistics, which are then spatially referenced on a regular (x,y) grid at a user-defined resolution. Collectively, these tools facilitate quantitative description of surfaces and may allow, for example, fully automated texture characterization and segmentation, roughness and grain size calculation, and feature detection and classification, on very large point clouds with great computational efficiency. Using tools such as these, it may be possible to detect geomorphic change in surfaces which have undergone minimal elevation difference, for example deflation surfaces which have coarsened but undergone no net elevation change, or surfaces which have eroded and accreted, leaving behind a different textural surface expression than before. The functionalities of the two toolboxes are illustrated with example high-resolution bathymetric point cloud data collected with multibeam echosounder, and topographic data collected with LiDAR.
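
    The windowed-statistics idea described above can be sketched as a toy detrended-roughness computation on synthetic points; this is an illustration of the general approach, not the TopCAT/PySESA code itself:

```python
import numpy as np

def window_roughness(xyz, win=1.0):
    """Bin (x, y, z) points into win-sized tiles and return, per tile, the
    standard deviation of z about a least-squares plane: a simple detrended
    roughness statistic, spatially referenced on a regular grid."""
    x, y, z = xyz.T
    ix = np.floor(x / win).astype(int)
    iy = np.floor(y / win).astype(int)
    out = {}
    for key in set(zip(ix.tolist(), iy.tolist())):
        m = (ix == key[0]) & (iy == key[1])
        if m.sum() < 4:          # too few points to fit a plane
            continue
        A = np.c_[x[m], y[m], np.ones(m.sum())]   # plane z ~ a*x + b*y + c
        coef, *_ = np.linalg.lstsq(A, z[m], rcond=None)
        out[key] = float(np.std(z[m] - A @ coef))
    return out

rng = np.random.default_rng(0)
pts = rng.uniform(0, 2, size=(500, 3))  # synthetic 2 x 2 point cloud
rough = window_roughness(pts, win=1.0)
print(len(rough))  # 4 tiles
```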

  5. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark®) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens

    PubMed Central

    Larson, Jeffrey S.; Goodman, Laurie J.; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C.; Cook, Jennifer W.; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D. B.; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J.; Whitcomb, Jeannette M.

    2010-01-01

    We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in formalin-fixed, paraffin-embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7–10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods, which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH). PMID:21151530

  6. A behavioral-level HDL description of SFQ logic circuits for quantitative performance analysis of large-scale SFQ digital systems

    NASA Astrophysics Data System (ADS)

    Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.

    2003-10-01

    Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are built up from basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper, we have investigated how to describe the functionality of large-scale SFQ digital circuits with a behavioral-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing their behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.

  7. Accurate quantitation for in vitro refolding of single domain antibody fragments expressed as inclusion bodies by referring the concomitant expression of a soluble form in the periplasms of Escherichia coli.

    PubMed

    Noguchi, Tomoaki; Nishida, Yuichi; Takizawa, Keiji; Cui, Yue; Tsutsumi, Koki; Hamada, Takashi; Nishi, Yoshisuke

    2017-03-01

    Single domain antibody fragments from two species, a camel VHH (PM1) and a shark VNAR (A6), were derived from inclusion bodies of E. coli and refolded in vitro following three refolding recipes in order to compare refolding efficiencies: three-step cold dialysis refolding (TCDR), one-step hot dialysis refolding (OHDR), and one-step cold dialysis refolding (OCDR). These fragments were also expressed in a soluble form, either in the cytoplasm or the periplasm, but the amounts were much smaller than those expressed as an insoluble form (inclusion bodies) in the cytoplasm and periplasm. In order to verify the refolding efficiencies from inclusion bodies correctly, proteins purified from periplasmic soluble fractions were used as reference samples. These samples showed far-UV spectra of a typical β-sheet-dominant structure in circular dichroism (CD) spectroscopy, as did the refolded samples. As the maximal magnitude of ellipticity in millidegrees (θmax) observed at a given wavelength was proportional to the concentrations of the respective reference samples, we could draw linear regression lines for the magnitudes vs. sample concentrations. Using these lines, we measured the concentrations of the refolded PM1 and A6 samples purified from solubilized cytoplasmic insoluble fractions. The refolding efficiency of PM1 was almost 50% following TCDR and 40% and 30% following OHDR and OCDR, respectively, whereas the value for A6 was around 30% following TCDR and out of the range of quantitation following the other two recipes. The ELISA curves derived from the refolded samples coincided better with those obtained from the reference samples after converting the values from protein concentrations at recovery to those of refolded proteins using recovery ratios, indicating that such a correction gives a more accurate measure of the ELISA curves than no correction. Our method requires constructing a dual expression system, expressed both in
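
    The regression-based quantitation described above amounts to fitting θmax against known reference concentrations and inverting the fit for a refolded sample. A sketch with hypothetical calibration numbers (not data from the paper):

```python
import numpy as np

# Hypothetical calibration: ellipticity theta_max (mdeg) vs concentration
# (mg/mL), measured on the soluble periplasmic reference protein.
conc_ref = np.array([0.1, 0.2, 0.4, 0.8])
theta_ref = np.array([-3.1, -6.0, -12.2, -24.1])

slope, intercept = np.polyfit(conc_ref, theta_ref, 1)  # linear regression

def conc_from_theta(theta):
    """Invert the regression line to estimate protein concentration."""
    return (theta - intercept) / slope

# Refolding efficiency = recovered folded concentration / input concentration.
eff = conc_from_theta(-15.0) / 1.0
print(round(eff, 2))  # ~0.5, i.e. roughly 50% refolding efficiency
```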

  8. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Clouds Cores: Towards a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2004-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process.

  9. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Clouds Core: Toward a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2005-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process. Progress toward these goals during the second year of this grant is discussed.

  10. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    SciTech Connect

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
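
    A common Poisson-based estimate underlying this kind of gel quantitation derives the lesion frequency from the number-average molecular lengths of treated versus control DNA. A sketch, assuming random (Poisson-distributed) lesions; the lengths below are hypothetical:

```python
def lesions_per_mb(ln_treated, ln_control):
    """Poisson estimate of lesion frequency from number-average molecular
    lengths (in bases) of treated vs control DNA, scaled to lesions per Mb:
    phi = 1/Ln(treated) - 1/Ln(control)."""
    return (1.0 / ln_treated - 1.0 / ln_control) * 1.0e6

# Hypothetical number-average lengths estimated from gel dispersion curves:
print(lesions_per_mb(2.0e5, 1.0e6))  # 4.0 lesions per Mb
```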

  11. Orientation-guided two-scale approach for the segmentation and quantitative description of woven bundles of fibers from three-dimensional tomographic images

    NASA Astrophysics Data System (ADS)

    Chapoullié, Cédric; Da Costa, Jean-Pierre; Cataldi, Michel; Vignoles, Gérard L.; Germain, Christian

    2015-11-01

    This paper proposes a two-scale approach for the description of fibrous materials from tomographic data: a coarse scale to describe weaving patterns and a fine scale to depict fiber layout within yarns. At both scales, the proposed approach starts with the segmentation of yarns and fibers. Then, the fibrous structure (fiber diameters, fiber and yarn orientations, fiber density within yarns) is described. The segmentation algorithms are applied to a chunk of a woven ceramic-matrix composite observed at yarn and fiber scales using tomographic data from the European Synchrotron Radiation Facility. The fiber and yarn segmentation results allow investigation of intrayarn fiber layout. The analysis of intrayarn fiber density and orientations shows the effects of the weaving process on fiber organization, in particular fiber compaction or yarn shearing. These results pave the way toward a deeper analysis of such materials. Indeed, the data collected with the proposed methods are a key starting point for realistic image synthesis. Such images may in turn be used to validate the fiber and yarn segmentation algorithms. Besides, and above all, they will allow material behavior simulation, aiming at the evaluation of the material's strengths and weaknesses inferred from its fibrous architecture.

  12. Rigour in quantitative research.

    PubMed

    Claydon, Leica Sarah

    2015-07-22

    This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy, and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  13. Quantitative description of the effect of stratification on dormancy release of grape seeds in response to various temperatures and water contents.

    PubMed

    Wang, W Q; Song, S Q; Li, S H; Gan, Y Y; Wu, J H; Cheng, H Y

    2009-01-01

    The effect of stratification on dormancy release of grape seeds crossing from the sub- to the supraoptimal range of temperatures and water contents was analysed by modified threshold models. Stratification affected dormancy release in three different ways: (i) dormancy was consistently released with prolonged stratification time when stratified at temperatures of <15 °C; (ii) at 15 °C and 20 °C, the stratification effect initially increased, and then decreased with extended time; and (iii) stratification at 25 °C only reduced the number of germinable seeds. These behaviours indicated that stratification could not only release primary dormancy but also induce secondary dormancy in grape seed. The rate of dormancy release changed linearly in two phases, while induction increased exponentially with increasing temperature. The thermal time approach effectively quantified dormancy release only at suboptimal temperatures, but a quantitative method that integrates the simultaneous occurrence of dormancy release and induction could describe it well at either sub- or supraoptimal temperatures. The regression of the percentage of germinable seeds versus stratification temperature or water content within both the sub- and supraoptimal range revealed how the optimal temperature (T(so)) and water content (W(so)) for stratification changed. The T(so) moved from 10.6 °C to 5.3 °C with prolonged time, while W(so) declined from >0.40 g H2O g DW(-1) at 5 °C to approximately 0.23 g H2O g DW(-1) at 30 °C. Dormancy release in grape seeds can occur across a very wide range of conditions, which has important implications for their ability to adapt to a changeable environment in the wild.
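Thermal-time models of the kind the abstract evaluates accumulate dormancy-release units only below a ceiling temperature. The sketch below is a generic chilling-sum illustration, not the authors' fitted model; the ceiling value and units are placeholder assumptions:

```python
def chilling_units(daily_temps_c, ceiling_c=15.0):
    """Generic thermal-time (chilling) accumulation: days below a ceiling
    temperature contribute units proportional to (ceiling - T); warmer
    days contribute nothing. The 15 C ceiling is a placeholder."""
    return sum(max(0.0, ceiling_c - t) for t in daily_temps_c)

# 30 days at 5 C accumulate far more dormancy-release units than 30 days at 12 C
cold = chilling_units([5.0] * 30)
mild = chilling_units([12.0] * 30)
```

A model of this shape captures behaviour (i) above but, as the abstract notes, not the supraoptimal regime, where secondary-dormancy induction must be modelled as a separate, exponentially temperature-dependent term.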

  14. Quantitative chromatin pattern description in Feulgen-stained nuclei as a diagnostic tool to characterize the oligodendroglial and astroglial components in mixed oligo-astrocytomas.

    PubMed

    Decaestecker, C; Lopes, B S; Gordower, L; Camby, I; Cras, P; Martin, J J; Kiss, R; VandenBerg, S R; Salmon, I

    1997-04-01

    The oligoastrocytoma, as a mixed glioma, represents a nosologic dilemma with respect to precisely defining the oligodendroglial and astroglial phenotypes that constitute the neoplastic cell lineages of these tumors. In this study, cell image analysis of Feulgen-stained nuclei was used to distinguish between oligodendroglial and astrocytic phenotypes in oligodendrogliomas and astrocytomas and then applied to mixed oligoastrocytomas. Quantitative features with respect to chromatin pattern (30 variables) and DNA ploidy (8 variables) were evaluated on Feulgen-stained nuclei in a series of 71 gliomas using computer-assisted microscopy. These included 32 oligodendrogliomas (OLG group: 24 grade II and 8 grade III tumors according to the WHO classification), 32 astrocytomas (AST group: 13 grade II and 19 grade III tumors), and 7 oligoastrocytomas (OLGAST group). Initially, image analysis with multivariate statistical analyses (Discriminant Analysis) could identify each glial tumor group. Highly significant statistical differences were obtained distinguishing the morphonuclear features of oligodendrogliomas from those of astrocytomas, regardless of their histological grade. Of the 7 mixed oligoastrocytomas under study, 5 exhibited DNA ploidy and chromatin pattern characteristics similar to grade II oligodendrogliomas, 1 to grade III oligodendrogliomas, and 1 to grade II astrocytomas. Using multifactorial statistical analyses (Discriminant Analysis combined with Principal Component Analysis), it was possible to quantify the proportion of "typical" glial cell phenotypes that compose grade II and III oligodendrogliomas and grade II and III astrocytomas in each mixed glioma. Cytometric image analysis may be an important adjunct to routine histopathology for the reproducible identification of neoplasms containing a mixture of oligodendroglial and astrocytic phenotypes.

  15. Comment on "The First Accurate Description of an Aurora"

    NASA Astrophysics Data System (ADS)

    Silverman, Sam

    2007-11-01

    Schröder [2006] discusses Das Buch der Natur (The Book of Nature), written by Konrad von Megenberg between 1348 and 1350. The Buch was the first encyclopedia of natural phenomena written in German. (For a contemporary German translation, see Schulz [1897]; for definitions of Megenberg's astronomical terminology, see Deschler [1977].) Megenberg translated the Liber de Natura Rerum, written by Thomas of Cantimpré between 1225 and 1240.

  16. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
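The key idea above (scale a pure single-component standard to its band in the mixed scan, then attribute the excess intensity to other components) can be sketched numerically. The skewed-Gaussian parameterization and all numbers below are illustrative assumptions, not the paper's fitted profiles:

```python
import math

def skew_gauss(x, amp, mu, sigma, alpha):
    """Skewed-Gaussian band profile (illustrative skew-normal form)."""
    t = (x - mu) / sigma
    phi = math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(alpha * t / math.sqrt(2.0)))
    return amp * 2.0 * phi * big_phi / sigma

def quantitate(scan, standard):
    """Scale the pure single-component standard scan to its band in the
    mixed scan; intensity exceeding the scaled standard is attributed to
    the other component(s). Returns the two relative areas."""
    peak = max(range(len(standard)), key=standard.__getitem__)
    scale = scan[peak] / standard[peak]
    residual = [s - scale * r for s, r in zip(scan, standard)]
    return scale * sum(standard), sum(max(r, 0.0) for r in residual)

# Synthetic lane: a primer band plus a slower-migrating insertion product.
xs = range(100)
standard = [skew_gauss(x, 1.0, 30, 3, 2) for x in xs]
lane = [skew_gauss(x, 0.7, 30, 3, 2) + skew_gauss(x, 0.5, 60, 3, 2) for x in xs]
primer_area, product_area = quantitate(lane, standard)
```

On this synthetic lane the recovered areas match the true 0.7 : 0.5 mixture, whereas a drawn-box integration would mis-assign the overlapping trailing edge.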

  17. Quantitative description and modeling of real networks

    NASA Astrophysics Data System (ADS)

    Capocci, Andrea; Caldarelli, Guido; de Los Rios, Paolo

    2003-10-01

    We present data analysis and modeling of two particular case studies in the field of growing networks. We analyze a World Wide Web data set and authorship collaboration networks in order to check for the presence of correlations in the data. The results are reproduced with good agreement through a suitable modification of the standard Albert-Barabási model of network growth. In particular, the intrinsic relevance of sites plays a role in determining the future degree of a vertex.
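A growth rule in which attachment probability is weighted by an intrinsic relevance (fitness) as well as by degree can be sketched as below. This is a generic fitness-weighted preferential-attachment simulation in the spirit of the modification described, not the authors' model; the uniform fitness distribution and parameter values are assumptions:

```python
import random

def grow_network(n, m=2, seed=0):
    """Barabasi-Albert-style growth modified by an intrinsic 'relevance'
    (fitness) eta: each new node attaches to m existing nodes, choosing
    node i with probability proportional to eta[i] * degree[i]."""
    rng = random.Random(seed)
    eta = [rng.random() for _ in range(n)]  # assumed uniform fitness
    degree = [0] * n
    # seed the growth with a small (m+1)-node clique
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n):
        weights = [eta[i] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:  # m distinct targets per new node
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            degree[new] += 1
            degree[t] += 1
    return degree, eta

degree, eta = grow_network(500)
```

In such models a late-arriving but high-relevance vertex can overtake older low-relevance vertices, which pure degree-based attachment forbids.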

  18. Descriptive statistics.

    PubMed

    Shi, Runhua; McLarty, Jerry W

    2009-10-01

    In this article, we introduced basic concepts of statistics, type of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events and exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications.
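The basic summary measures discussed above are available directly in Python's standard library; a minimal example with made-up data:

```python
import statistics as st

data = [2, 3, 3, 5, 7, 8, 12]
summary = {
    "mean": st.mean(data),      # central tendency
    "median": st.median(data),  # robust to the outlying value 12
    "stdev": st.stdev(data),    # sample standard deviation (spread)
}
```

Comparing the mean (about 5.71) with the median (5) already hints at the right-skew that a single outlying value introduces.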

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  20. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
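For reference, the standard monotone baseline the abstract improves upon (third-order generally, second-order at strict extrema) is the Fritsch-Carlson-style monotone cubic Hermite interpolant. The sketch below implements that baseline, not Huynh's higher-order algorithm:

```python
import bisect

def monotone_cubic(x, y):
    """Monotone piecewise-cubic Hermite interpolant with
    Fritsch-Carlson-style slope limiting."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    delta = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    d = [0.0] * n
    d[0], d[-1] = delta[0], delta[-1]
    for i in range(1, n - 1):
        if delta[i - 1] * delta[i] > 0:
            # weighted harmonic mean of the secants preserves monotonicity;
            # at a sign change (local extremum) the slope is clamped to 0
            w1 = 2 * h[i] + h[i - 1]
            w2 = h[i] + 2 * h[i - 1]
            d[i] = (w1 + w2) / (w1 / delta[i - 1] + w2 / delta[i])

    def f(t):
        i = min(max(bisect.bisect_right(x, t) - 1, 0), n - 2)
        s = (t - x[i]) / h[i]
        h00 = (1 + 2 * s) * (1 - s) ** 2   # cubic Hermite basis
        h10 = s * (1 - s) ** 2
        h01 = s * s * (3 - 2 * s)
        h11 = s * s * (s - 1)
        return (h00 * y[i] + h10 * h[i] * d[i]
                + h01 * y[i + 1] + h11 * h[i] * d[i + 1])

    return f

f = monotone_cubic([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 1.0, 2.0])
```

The clamping of slopes at extrema is exactly the constraint that costs an order of accuracy there, which Huynh's geometric relaxation of the constraint recovers.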

  1. Anaphoric Descriptions

    ERIC Educational Resources Information Center

    Beller, Charley

    2013-01-01

    The study of definite descriptions has been a central part of research in linguistics and philosophy of language since Russell's seminal work "On Denoting" (Russell 1905). In that work Russell quickly dispatches analyses of denoting expressions with forms like "no man," "some man," "a man," and "every…

  2. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].

  3. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  4. [Descriptive statistics].

    PubMed

    Rendón-Macías, Mario Enrique; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    Descriptive statistics is the branch of statistics that gives recommendations on how to summarize clearly and simply research data in tables, figures, charts, or graphs. Before performing a descriptive analysis it is paramount to summarize its goal or goals, and to identify the measurement scales of the different variables recorded in the study. Tables or charts aim to provide timely information on the results of an investigation. The graphs show trends and can be histograms, pie charts, "box and whiskers" plots, line graphs, or scatter plots. Images serve as examples to reinforce concepts or facts. The choice of a chart, graph, or image must be based on the study objectives. Usually it is not recommended to use more than seven in an article, also depending on its length.

  5. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).

  6. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is important in a very wide range of industrial applications, including paint, paper, printing, photography, textiles, and plastics. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and can thus distinguish millions of colors; this 0.5 unit difference should be the goal for precise color measurements. That limit poses no problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise: the values reported by two instruments can be astonishingly different. The accuracy of an instrument used in color measurement may depend on various errors, such as photometric non-linearity, wavelength error, integrating sphere dark-level error, and integrating sphere error in both specular-included and specular-excluded modes; correction formulas should therefore be used to obtain more accurate results. Another question is how many channels, i.e., wavelengths, are used to measure a spectrum. The sampling interval should obviously be short to obtain more precise results; even so, the result is always a compromise between measuring time, conditions, and cost. Sometimes a portable system must be used, or the shape and size of the samples make sensitive equipment impractical. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show what accuracy demands a good colorimeter should meet.
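The 0.5-unit perceptibility threshold mentioned above refers to the CIE76 color difference, the Euclidean distance between two CIELAB coordinates. A minimal illustration (the tile readings below are made-up numbers):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    CIELAB (L*, a*, b*) coordinate triples."""
    return math.dist(lab1, lab2)

# Two instruments reading the same tile (illustrative values):
d = delta_e_76((52.1, 10.3, -4.2), (52.4, 10.1, -4.0))
perceptible = d > 0.5  # the ~0.5-unit perceptibility goal cited above
```

Later CIE formulas (CIE94, CIEDE2000) weight the lightness and chroma axes differently, but the 0.5-unit rule of thumb is stated here in CIE76 terms.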

  7. Quantitative myocardial perfusion SPECT.

    PubMed

    Tsui, B M; Frey, E C; LaCroix, K J; Lalush, D S; McCartney, W H; King, M A; Gullberg, G T

    1998-01-01

    In recent years, there has been much interest in the clinical application of attenuation compensation to myocardial perfusion single photon emission computed tomography (SPECT) with the promise that accurate quantitative images can be obtained to improve clinical diagnoses. The different attenuation compensation methods that are available create confusion and some misconceptions. Also, attenuation-compensated images reveal other image-degrading effects including collimator-detector blurring and scatter that are not apparent in uncompensated images. This article presents basic concepts of the major factors that degrade the quality and quantitative accuracy of myocardial perfusion SPECT images, and includes a discussion of the various image reconstruction and compensation methods and misconceptions and pitfalls in implementation. The differences between the various compensation methods and their performance are demonstrated. Particular emphasis is directed to an approach that promises to provide quantitative myocardial perfusion SPECT images by accurately compensating for the 3-dimensional (3-D) attenuation, collimator-detector response, and scatter effects. With advances in the computer hardware and optimized implementation techniques, quantitatively accurate and high-quality myocardial perfusion SPECT images can be obtained in clinically acceptable processing time. Examples from simulation, phantom, and patient studies are used to demonstrate the various aspects of the investigation. We conclude that quantitative myocardial perfusion SPECT, which holds great promise to improve clinical diagnosis, is an achievable goal in the near future.

  8. Single-tube nested competitive PCR with homologous competitor for quantitation of DNA target sequences: theoretical description of heteroduplex formation, evaluation of sensitivity, precision and linear range of the method.

    PubMed

    Serth, J; Panitz, F; Herrmann, H; Alves, J

    1998-10-01

    Competitive PCR is a frequently used technique for quantitation of DNA and mRNA. However, the application of the most favourable homologous mutated competitors is impeded by the formation of heteroduplex molecules which complicates the data evaluation and may lead to quantitation errors. Moreover, in most cases a single quantitation of an unknown sample requires multiple competitive reactions for identification of the equivalence point. In the present study, a highly efficient and reliable method as well as the underlying theoretical model is described. The mathematical solutions of this model provide the basis for single-tube quantitation using a homologous mutated competitor. For quantitation of Human Papilloma Virus 16-DNA, it is shown that single tube quantitations using simple PAGE separation and video evaluation for signal analysis permit linear detection within more than two orders of magnitude. In addition, repeated single-tube competitive PCRs exhibited good precision (average standard deviation 5%), even if carried out as nested high cycle PCR for quantitation of low abundant sequences (intraassay sensitivity <2 x 10(2) copies). This evaluation method can be applied to any DNA separation and detection method which is capable of resolving the heteroduplex fraction from both homoduplex molecules.

  9. Descriptive thermodynamics

    NASA Astrophysics Data System (ADS)

    Ford, David; Huntsman, Steven

    2006-06-01

    Thermodynamics (in concert with its sister discipline, statistical physics) can be regarded as a data reduction scheme based on partitioning a total system into a subsystem and a bath that weakly interact with each other. Whereas conventionally, the systems investigated require this form of data reduction in order to facilitate prediction, a different problem also occurs, in the context of communication networks, markets, etc. Such “empirically accessible” systems typically overwhelm observers with the sort of information that in the case of (say) a gas is effectively unobtainable. What is required for such complex interacting systems is not prediction (this may be impossible when humans besides the observer are responsible for the interactions) but rather, description as a route to understanding. Still, the need for a thermodynamical data reduction scheme remains. In this paper, we show how an empirical temperature can be computed for finite, empirically accessible systems, and further outline how this construction allows the age-old science of thermodynamics to be fruitfully applied to them.

  10. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  11. An analytical fit to an accurate ab initio (1A1) potential surface of H2O

    NASA Astrophysics Data System (ADS)

    Redmon, Michael J.; Schatz, George C.

    1981-01-01

    The accurate ab initio MBPT quartic force field of Bartlett, Shavitt and Purvis has been fit to an analytical function using a method developed by Sorbie and Murrell (SM). An analysis of this surface indicates that it describes most properties of the H2O molecule very accurately, including an exact fit to the MBPT force field, and very close to the correct energy difference between linear and equilibrium H2O. The surface also reproduces the correct diatomic potentials in all dissociative regions, but some aspects of it in the "near asymptotic" O(1D) + H2 region are not quantitatively described. For example, the potential seems to be too attractive at long range for O + H2 encounters, although it does have the correct minimum energy path geometry and correctly exhibits no barrier to O atom insertion. Comparisons of this surface with one previously developed by SM indicate generally good agreement between the two, especially after some of the SM parameters were corrected, using a numerical differentiation algorithm to evaluate them. A surface developed by Schinke and Lester (SL) is more realistic than ours in the O(1D) + H2 regions, but less quantitative in its description of the H2O molecule. Overall, the present fit appears to be both realistic and quantitative for energy displacements up to 3-4 eV from H2O equilibrium, and should therefore be useful for spectroscopic and collision dynamics studies involving H2O.

  12. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
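The core trick, combining two mesh sizes to cancel the leading error term, is generic Richardson extrapolation. A self-contained sketch on a central-difference derivative (a stand-in for the finite-difference Schrödinger solver, which is not reproduced here):

```python
import math

def richardson(approx, h, k=2):
    """One Richardson extrapolation step for an approximation approx(h)
    whose leading error is O(h**k): combining approx(h) and approx(h/2)
    cancels that term, raising the effective order."""
    return (2 ** k * approx(h / 2) - approx(h)) / (2 ** k - 1)

def central_diff(h, x=0.5):
    # second-order central difference for d/dx sin(x) at x = 0.5
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

crude = central_diff(0.1)                 # error on the order of 1e-3
refined = richardson(central_diff, 0.1)   # error on the order of 1e-7
```

Repeating the step on successively halved meshes builds the usual extrapolation tableau, and the difference between successive columns supplies the error estimate mentioned in the abstract.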

  13. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  14. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  15. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.

  16. Biological Interpretation of Quantitative PET Brain Data

    NASA Astrophysics Data System (ADS)

    Sossi, Vesna

    2002-11-01

    The variety of available positron emission tomography (PET) radiotracers and the ability to provide quantitative estimates of radiotracer concentrations make PET an invaluable tool in the in-vivo investigation of biological processes. Mathematical descriptions of the processes under investigation are used to extract relevant kinetic parameters from the time course of radioactivity concentrations. Such kinetic parameters can provide a quantitative description of both the characteristics of a particular process and its changes due to various disease states.

  17. Soft Biometrics; Human Identification Using Comparative Descriptions.

    PubMed

    Reid, Daniel A; Nixon, Mark S; Stevenage, Sarah V

    2014-06-01

    Soft biometrics are a new form of biometric identification which uses physical or behavioral traits that can be naturally described by humans. Unlike other biometric approaches, this allows identification based solely on verbal descriptions, bridging the semantic gap between biometrics and human description. To permit soft biometric identification the description must be accurate, yet conventional human descriptions comprising absolute labels and estimations are often unreliable. A novel method of obtaining human descriptions will be introduced which utilizes comparative categorical labels to describe differences between subjects. This innovative approach has been shown to address many problems associated with absolute categorical labels; most critically, the descriptions contain more objective information and have increased discriminatory capabilities. Relative measurements of the subjects' traits can be inferred from comparative human descriptions using the Elo rating system. The resulting soft biometric signatures have been demonstrated to be robust and to allow accurate recognition of subjects. Relative measurements can also be obtained from other forms of human representation. This is demonstrated using a support vector machine to determine relative measurements from gait biometric signatures, allowing retrieval of subjects from video footage by using human comparisons and bridging the semantic gap.
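The Elo inference step mentioned above treats each pairwise comparison ("A is taller than B") as a game outcome. A minimal sketch of the standard Elo update applied to a trait rating (the K-factor and initial ratings are conventional chess defaults, not the paper's tuned values):

```python
def elo_update(r_a, r_b, outcome, k=32.0):
    """Standard Elo update applied to one comparative trait judgment:
    outcome is 1.0 if subject A is judged greater on the trait,
    0.0 if subject B is, and 0.5 for 'about the same'."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (outcome - expected_a)
    r_b_new = r_b + k * (expected_a - outcome)
    return r_a_new, r_b_new

# "Subject A is taller than subject B": A 'wins' this comparison.
ra, rb = elo_update(1500.0, 1500.0, 1.0)
```

Iterating the update over many annotators' comparisons converges each subject's rating toward a relative measurement of the trait, which is what makes the comparative labels usable as a biometric signature.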

  18. The innervation of the adrenal gland. IV. Innervation of the rat adrenal medulla from birth to old age. A descriptive and quantitative morphometric and biochemical study of the innervation of chromaffin cells and adrenal medullary neurons in Wistar rats.

    PubMed Central

    Tomlinson, A; Coupland, R E

    1990-01-01

    The innervation of the adrenal medulla has been investigated in normal Wistar rats from birth to old age, and ultrastructural findings were compared with biochemical markers of the cholinergic innervation of the adrenal gland and of catecholamine storage. Morphological evidence of the immaturity of the innervation during the first postnatal week is provided, and using quantitative morphometry the innervation of chromaffin cells is shown to reach a mean total of 5.4 synapses per chromaffin cell during the period 26 days to 12 weeks of age. The variation in contents of synaptic profiles is discussed in the light of recent work that demonstrates a major sensory as well as visceral efferent innervation of the gland. Adrenal medullary neurons usually occur in closely packed groups, intimately associated with Schwann cells. Axodendritic and axosomatic synapses on these neurons are described and the likely origin of axonal processes innervating the neurons discussed. In old age the density of innervation remains the same as in young adult animals even though the medulla shows evidence of hyperplasia and hypertrophy of individual chromaffin cells. PMID:2384334

  19. Quantitative description of habitat suitability for the juvenile common sole ( Solea solea, L.) in the Bay of Biscay (France) and the contribution of different habitats to the adult population

    NASA Astrophysics Data System (ADS)

    Le Pape, Olivier; Chauvet, Florence; Mahévas, Stéphanie; Lazure, Pascal; Guérault, Daniel; Désaunay, Yves

    2003-11-01

    This study describes the spatial distribution of young-of-the-year sole based on autumnal beam trawl surveys conducted in the Bay of Biscay (France) during a 15-y period. Previous studies showed that habitat suitability for juvenile sole varies according to physical factors such as bathymetry, sediment structure and river plume influence. These factors, which are known exhaustively for the entire Bay of Biscay from static maps (bathymetry and granulometry) or temporal maps based on a hydrodynamic model (the river plume), were used as descriptors in a generalised linear model of habitat suitability in order to characterise the distribution of juvenile 0-group sole according to delta distribution. This model was used to identify the habitats in which juvenile 0-group sole are concentrated. The respective areas of these habitats were determined from a Geographic Information System (GIS), and their respective contribution to the sole population in the Bay of Biscay was calculated in terms of the estimated number of young fish (GIS area×density derived from the model). Despite the great variability of survey data, this quantitative approach emphasises the highly important role of restricted shallow, muddy estuarine areas as nursery grounds of sole in the Bay of Biscay and demonstrates the relation between interannual variations of nursery habitat capacity (with respect to estuarine extent) and sole recruitment.
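
    A delta-model prediction of the kind described above can be sketched as the product of a presence probability and a conditional positive density. The covariates and coefficients below are invented for illustration and are not the paper's fitted values:

```python
import math

def expected_density(x, beta_presence, beta_density):
    """Delta-model prediction: a logistic presence sub-model multiplied
    by a log-linear sub-model for density where the species is present."""
    eta = sum(b * xi for b, xi in zip(beta_presence, x))
    p_presence = 1.0 / (1.0 + math.exp(-eta))
    density_if_present = math.exp(sum(b * xi for b, xi in zip(beta_density, x)))
    return p_presence * density_if_present

# Invented covariates: [intercept, shallow-muddy indicator, plume influence].
x_nursery = [1.0, 1.0, 1.0]
x_offshore = [1.0, 0.0, 0.0]
bp = [-1.0, 2.0, 1.0]  # illustrative presence coefficients
bd = [0.5, 1.0, 0.5]   # illustrative density coefficients

nursery_density = expected_density(x_nursery, bp, bd)
offshore_density = expected_density(x_offshore, bp, bd)
```

    A habitat's contribution to the population is then its GIS-derived area multiplied by the predicted density, mirroring the paper's GIS area x density calculation.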

  20. Quantitative research.

    PubMed

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  1. QUANTITATIVE MORPHOLOGY

    EPA Science Inventory

    Abstract: In toxicology, the role of quantitative assessment of brain morphology can be understood in the context of two types of treatment-related alterations. One type of alteration is specifically associated with treatment and is not observed in control animals. Measurement ...

  2. Accurate Automated Apnea Analysis in Preterm Infants

    PubMed Central

    Vergales, Brooke D.; Paget-Brown, Alix O.; Lee, Hoshik; Guin, Lauren E.; Smoot, Terri J.; Rusin, Craig G.; Clark, Matthew T.; Delos, John B.; Fairchild, Karen D.; Lake, Douglas E.; Moorman, Randall; Kattwinkel, John

    2017-01-01

    Objective In 2006 the apnea of prematurity (AOP) consensus group identified inaccurate counting of apnea episodes as a major barrier to progress in AOP research. We compare nursing records of AOP to events detected by a clinically validated computer algorithm that detects apnea from standard bedside monitors. Study Design Waveform, vital sign, and alarm data were collected continuously from all very low-birth-weight infants admitted over a 25-month period, analyzed for central apnea, bradycardia, and desaturation (ABD) events, and compared with nursing documentation collected from charts. Our algorithm defined apnea as a respiratory pause > 10 seconds if accompanied by bradycardia and desaturation. Results Of the 3,019 nurse-recorded events, only 68% had any algorithm-detected ABD event. Of the 5,275 algorithm-detected prolonged apnea events > 30 seconds, only 26% had nurse-recorded documentation within 1 hour. Monitor alarms sounded in only 74% of algorithm-detected prolonged apnea events > 10 seconds. There were 8,190,418 monitor alarms of any description throughout the neonatal intensive care unit during the 747 days analyzed, or one alarm every 2 to 3 minutes per nurse. Conclusion An automated computer algorithm for continuous ABD quantitation is a far more reliable tool than the medical record for addressing the important research questions identified by the 2006 AOP consensus group. PMID:23592319
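
    The detection rule summarized in the abstract (a pause > 10 s counted only if accompanied by bradycardia and desaturation) can be sketched as below. The thresholds and toy vital-sign traces are illustrative assumptions; the clinically validated algorithm operates on raw waveforms and is more elaborate:

```python
def detect_abd_events(pauses, hr, spo2, min_pause_s=10,
                      hr_thresh=100, spo2_thresh=80):
    """Flag respiratory pauses longer than min_pause_s that are
    accompanied by bradycardia (HR below hr_thresh) and desaturation
    (SpO2 below spo2_thresh) at some point during the pause.
    hr and spo2 are per-second samples; pauses are (start_s, end_s)."""
    events = []
    for start, end in pauses:
        if end - start <= min_pause_s:
            continue
        if min(hr[start:end]) < hr_thresh and min(spo2[start:end]) < spo2_thresh:
            events.append((start, end))
    return events

# Toy one-minute vitals: a dip in HR and SpO2 during the first pause.
hr = [150] * 60
spo2 = [95] * 60
for t in range(25, 30):
    hr[t] = 90
    spo2[t] = 75
pauses = [(20, 40), (45, 52)]
events = detect_abd_events(pauses, hr, spo2)
```

    Here only the 20-second pause qualifies as an ABD event; the 7-second pause is below the duration threshold.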

  3. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  4. Quantitative glycomics.

    PubMed

    Orlando, Ron

    2010-01-01

    The ability to quantitatively determine changes is an essential component of comparative glycomics. Multiple strategies are available by which this can be accomplished. These include label-free approaches and strategies where an isotopic label is incorporated into the glycans prior to analysis. The focus of this chapter is to describe each of these approaches while providing insight into their strengths and weaknesses, so that glycomic investigators can make an educated choice of the strategy that is best suited for their particular application.

  5. Quantitative Literacy: Geosciences and Beyond

    NASA Astrophysics Data System (ADS)

    Richardson, R. M.; McCallum, W. G.

    2002-12-01

    Quantitative literacy seems like such a natural for the geosciences, right? The field has gone from its origin as a largely descriptive discipline to one where it is hard to imagine failing to bring a full range of mathematical tools to the solution of geological problems. Although there are many definitions of quantitative literacy, we have proposed one that is analogous to the UNESCO definition of conventional literacy: "A quantitatively literate person is one who, with understanding, can both read and represent quantitative information arising in his or her everyday life." Central to this definition is the concept that a curriculum for quantitative literacy must go beyond the basic ability to "read and write" mathematics and develop conceptual understanding. It is also critical that a curriculum for quantitative literacy be engaged with a context, be it everyday life, humanities, geoscience or other sciences, business, engineering, or technology. Thus, our definition works both within and outside the sciences. What role do geoscience faculty have in helping students become quantitatively literate? Is it our role, or that of the mathematicians? How does quantitative literacy vary between different scientific and engineering fields? Or between science and nonscience fields? We will argue that successful quantitative literacy curricula must be an across-the-curriculum responsibility. We will share examples of how quantitative literacy can be developed within a geoscience curriculum, beginning with introductory classes for nonmajors (using the Mauna Loa CO2 data set) through graduate courses in inverse theory (using singular value decomposition). We will highlight six approaches to across-the-curriculum efforts from national models: collaboration between mathematics and other faculty; gateway testing; intensive instructional support; workshops for nonmathematics faculty; quantitative reasoning requirement; and individual initiative by nonmathematics faculty.

  6. Quantitation of signal transduction.

    PubMed

    Krauss, S; Brand, M D

    2000-12-01

    Conventional qualitative approaches to signal transduction provide powerful ways to explore the architecture and function of signaling pathways. However, at the level of the complete system, they do not fully depict the interactions between signaling and metabolic pathways and fail to give a manageable overview of the complexity that is often a feature of cellular signal transduction. Here, we introduce a quantitative experimental approach to signal transduction that helps to overcome these difficulties. We present a quantitative analysis of signal transduction during early mitogen stimulation of lymphocytes, with steady-state respiration rate as a convenient marker of metabolic stimulation. First, by inhibiting various key signaling pathways, we measure their relative importance in regulating respiration. About 80% of the input signal is conveyed via identifiable routes: 50% through pathways sensitive to inhibitors of protein kinase C and MAP kinase and 30% through pathways sensitive to an inhibitor of calcineurin. Second, we quantify how each of these pathways differentially stimulates functional units of reactions that produce and consume a key intermediate in respiration: the mitochondrial membrane potential. Both the PKC and calcineurin routes stimulate consumption more strongly than production, whereas the unidentified signaling routes stimulate production more than consumption, leading to no change in membrane potential despite increased respiration rate. The approach allows a quantitative description of the relative importance of signal transduction pathways and the routes by which they activate a specific cellular process. It should be widely applicable.
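
    The inhibitor-based partitioning of the input signal can be expressed as simple arithmetic: the share conveyed by a pathway is the fraction of the stimulated rate increase that its inhibitor removes. The respiration rates below are invented numbers chosen to reproduce the 50%/30% pattern described in the abstract, not the paper's data:

```python
def pathway_share(rate_stimulated, rate_with_inhibitor, rate_basal):
    """Fraction of the stimulation conveyed via the inhibited pathway:
    the part of the rate increase that the inhibitor abolishes."""
    total_increase = rate_stimulated - rate_basal
    blocked_increase = rate_stimulated - rate_with_inhibitor
    return blocked_increase / total_increase

# Illustrative respiration rates: basal 10, mitogen-stimulated 20;
# PKC/MAPK inhibitors bring the rate down to 15, a calcineurin
# inhibitor to 17 (hypothetical values).
share_pkc_mapk = pathway_share(20.0, 15.0, 10.0)
share_calcineurin = pathway_share(20.0, 17.0, 10.0)
```

    With these numbers the PKC/MAPK-sensitive routes carry 0.5 of the signal and the calcineurin-sensitive routes 0.3, leaving 0.2 unattributed, the same bookkeeping the abstract reports.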

  7. Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?

    PubMed Central

    Gizak, Agnieszka; Rakus, Dariusz

    2016-01-01

    Molecular and cellular biology methodology is traditionally based on the reasoning called “the mechanistic explanation”. In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge about all parameters of a studied system. However, in practice, due to the systems’ complexity, this requirement is rarely, if ever, accomplished. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites), and a qualitative or semi-quantitative description of changes in the expression or post-translational modifications of selected proteins. A quantitative proteomics approach offers the possibility of quantitatively characterizing the entire proteome of a biological system, in terms of protein titers as well as post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts in skeletal muscle physiology.

  8. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Kim, Michelle Yoonk Yung (Inventor); Li, Chung-Sheng (Inventor); Mohan, Rakesh (Inventor); Smith, John Richard (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed or aggregated multimedia objects.

  9. Self-consistent mean flow description of the nonlinear saturation of the vortex shedding in the cylinder wake.

    PubMed

    Mantič-Lugo, Vladislav; Arratia, Cristóbal; Gallaire, François

    2014-08-22

    The Bénard-von Kármán vortex shedding instability in the wake of a cylinder is perhaps the best known example of a supercritical Hopf bifurcation in fluid dynamics. However, a simplified physical description that accurately accounts for the saturation amplitude of the instability is still missing. Here, we present a simple self-consistent model that provides a clear description of the saturation mechanism and quantitatively predicts the saturated amplitude and flow fields. The model is formally constructed by a set of coupled equations governing the mean flow together with its most unstable eigenmode with finite size. The saturation amplitude is determined by requiring the mean flow to be neutrally stable. Without requiring any input from numerical or experimental data, the resolution of the model provides a good prediction of the amplitude and frequency of the vortex shedding as well as the spatial structure of the mean flow and the Reynolds stress.

  10. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves the description of codesign progress with Sandia’s production users/developers to other SAND and L2 milestone reports.

  11. Dual Enrollment in a Rural Environment: A Descriptive Quantitative Study

    ERIC Educational Resources Information Center

    Dodge, Mary Beth

    2012-01-01

    Dual enrollment is a federally funded program that offers high school students the opportunity to earn both high school and postsecondary credits for the same course. While the phenomenon of concurrent enrollment in postsecondary and college educational programs is not new, political support and public funding has drawn focus to the policies of…

  12. A survey of quantitative descriptions of molecular structure.

    PubMed

    Guha, Rajarshi; Willighagen, Egon

    2012-01-01

    Numerical characterization of molecular structure is a first step in many computational analyses of chemical structure data. These numerical representations, termed descriptors, come in many forms, ranging from simple atom counts and invariants of the molecular graph to distributions of properties, such as charge, across a molecular surface. In this article we first present a broad categorization of descriptors and then describe applications and toolkits that can be employed to evaluate them. We highlight a number of issues surrounding molecular descriptor calculations, such as versioning and reproducibility, and describe how some toolkits have attempted to address these problems.

  13. Quantitative representation and description of intravoxel fiber complexity in HARDI

    NASA Astrophysics Data System (ADS)

    Sun, Chang-yu; Chu, Chun-yu; Liu, Wan-yu; Hsu, Edward W.; Korenberg, Julie R.; Zhu, Yue-min

    2015-11-01

    Diffusion tensor imaging and high angular resolution diffusion imaging are often used to analyze the fiber complexity of tissues. In these imaging techniques, the most commonly calculated metric is anisotropy, such as fractional anisotropy (FA), generalized anisotropy (GA), and generalized fractional anisotropy (GFA). The basic idea underlying these metrics is to compute the deviation from free or spherical diffusion. However, in many cases, the question is not really to know whether it concerns spherical diffusion. Instead, the main concern is to describe and quantify fiber complexity such as fiber crossing in a voxel. In this context, it would be more direct and effective to compute the deviation from a single fiber bundle instead of a sphere. We propose a new metric, called PEAM (PEAnut Metric), which is based on computing the deviation of orientation diffusion functions (ODFs) from a single fiber bundle ODF represented by a peanut. As an example, the proposed PEAM metric is used to classify intravoxel fiber configurations. The results on simulated data, physical phantom data and real brain data consistently showed that the proposed PEAM provides greater accuracy than FA, GA and GFA and enables parallel and complex fibers to be better distinguished.

  14. Physics 3204. Course Description.

    ERIC Educational Resources Information Center

    Newfoundland and Labrador Dept. of Education.

    A description of the physics 3204 course in Newfoundland and Labrador is provided. The description includes: (1) statement of purpose, including general objectives of science education; (2) a list of six course objectives; (3) course content for units on sound, light, optical instruments, electrostatics, current electricity, Michael Faraday and…

  15. Descriptive Metadata: Emerging Standards.

    ERIC Educational Resources Information Center

    Ahronheim, Judith R.

    1998-01-01

    Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…

  16. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. ACCURATE SIMULATIONS OF BINARY BLACK HOLE MERGERS IN FORCE-FREE ELECTRODYNAMICS

    SciTech Connect

    Alic, Daniela; Moesta, Philipp; Rezzolla, Luciano; Jaramillo, Jose Luis; Zanotti, Olindo

    2012-07-20

    We provide additional information on our recent study of the electromagnetic emission produced during the inspiral and merger of supermassive black holes when these are immersed in a force-free plasma threaded by a uniform magnetic field. As anticipated in a recent letter, our results show that although a dual-jet structure is present, the associated luminosity is ≈100 times smaller than the total one, which is predominantly quadrupolar. Here we discuss the details of our implementation of the equations, in which the force-free condition is not imposed at a discrete level but rather obtained via a damping scheme which drives the solution to satisfy the correct condition. We show that this is important for a correct and accurate description of the current sheets that can develop in the course of the simulation. We also study in greater detail the three-dimensional charge distribution produced as a consequence of the inspiral and show that during the inspiral it possesses a complex but ordered structure which traces the motion of the two black holes. Finally, we provide quantitative estimates of the scaling of the electromagnetic emission with frequency: the diffused part has the same dependence as the gravitational-wave emission, scaling as L_EM^(non-coll) ≈ Ω^(10/3-8/3), while the collimated part scales as L_EM^(coll) ≈ Ω^(5/3-6/3), a steeper dependence than previously estimated. We discuss the impact of these results on the potential detectability of dual jets from supermassive black holes and the steps necessary for more accurate estimates.

  4. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have to a considerable extent restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on the non-uniform B-spline for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and given a digital description and expression, with coupling coalescence of the surfaces with multi-coupling point clusters under the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter by the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center after the end mill is disposed. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in essentially solving the problems of considerable modeling errors in computer graphics and

  5. Precocious quantitative cognition in monkeys.

    PubMed

    Ferrigno, Stephen; Hughes, Kelly D; Cantlon, Jessica F

    2016-02-01

    Basic quantitative abilities are thought to have an innate basis in humans partly because the ability to discriminate quantities emerges early in child development. If humans and nonhuman primates share this developmentally primitive foundation of quantitative reasoning, then this ability should be present early in development across species and should emerge earlier in monkeys than in humans because monkeys mature faster than humans. We report that monkeys spontaneously make accurate quantity choices by 1 year of age in a task that human children begin to perform only at 2.5 to 3 years of age. Additionally, we report that the quantitative sensitivity of infant monkeys is equal to that of the adult animals in their group and that rates of learning do not differ between infant and adult animals. This novel evidence of precocious quantitative reasoning in infant monkeys suggests that human quantitative reasoning shares its early developing foundation with other primates. The data further suggest that early developing components of primate quantitative reasoning are constrained by maturational factors related to genetic development as opposed to learning experience alone.

  6. Langevin description of nonequilibrium quantum fields

    NASA Astrophysics Data System (ADS)

    Gautier, F.; Serreau, J.

    2012-12-01

    We consider the nonequilibrium dynamics of a real quantum scalar field. We show the formal equivalence of the exact evolution equations for the statistical and spectral two-point functions with a fictitious Langevin process and examine the conditions under which a local Markovian dynamics is a valid approximation. In quantum field theory, the memory kernel and the noise correlator typically exhibit long time power laws and are thus highly nonlocal, thereby questioning the possibility of a local description. We show that despite this fact, there is a finite time range during which a local description is accurate. This requires the theory to be (effectively) weakly coupled. We illustrate the use of such a local description for studies of decoherence and entropy production in quantum field theory.
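The Langevin reduction described in the abstract can be written schematically. The symbols below (memory kernel $\Sigma$, noise correlator $\mathcal{N}$, damping $\gamma$, noise strength $\nu$) are generic placeholders for illustration, not notation taken from the paper:

```latex
% Fictitious Langevin process for a field \varphi: non-local memory plus noise
\dot{\varphi}(t) = -\int_{0}^{t} \mathrm{d}t'\, \Sigma(t-t')\,\varphi(t') + \xi(t),
\qquad \langle \xi(t)\,\xi(t') \rangle = \mathcal{N}(t-t')

% A local (Markovian) description is accurate when \Sigma and \mathcal{N} are
% sharply peaked on the observation time scale, so that approximately
\dot{\varphi}(t) \simeq -\gamma\,\varphi(t) + \xi(t),
\qquad \langle \xi(t)\,\xi(t') \rangle \simeq \nu\,\delta(t-t')
```

The long-time power-law tails of $\Sigma$ and $\mathcal{N}$ mentioned in the abstract are precisely what threatens the second, local form; the paper's point is that a finite time window nevertheless exists in which it holds.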

  7. [Examination procedure and description of skin lesions].

    PubMed

    Ochsendorf, Falk; Meister, Laura

    2017-02-09

The dermatologic examination follows a clear structure. After a short history is taken, the whole skin is inspected. The description, which is ideally provided in writing, forces one to look at the skin more closely. It should accurately record the location, distribution, form, and type of lesion. The article contains tables with internationally approved definitions for describing skin changes. Analysis of these findings allows one to deduce the pathophysiologic mechanisms occurring in the skin and to form hypotheses, i.e., suspected and differential diagnoses, which are then confirmed or excluded by further diagnostic measures. Experts arrive at a diagnosis very quickly through pattern recognition, whereas novices must still develop this kind of thinking. Experts can minimize cognitive bias through reflective analytical reasoning and reorganization of knowledge.

  8. Hardware description languages

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.

    1994-01-01

Hardware description languages are special-purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described at a wide range of abstraction levels, and they support top-down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application-specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  9. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rpm), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were subjected to three-point bending tests at various loads. After the bending tests, the samples were examined microscopically to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once a failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.

  10. Enabling Computational Technologies for the Accurate Prediction/Description of Molecular Interactions in Condensed Phases

    DTIC Science & Technology

    2014-10-08

Marenich, Christopher J. Cramer, Donald G. Truhlar, and Chang-Guo Zhan. Free Energies of Solvation with Surface, Volume, and Local Electrostatic...Effects and Atomic Surface Tensions to Represent the First Solvation Shell (Reprint), Journal of Chemical Theory and Computation, (01 2010): . doi...the Gibbs free energy of solvation and dissociation of HCl in water via Monte Carlo simulations and continuum solvation models, Physical Chemistry

  11. An accurate theoretical description for electronic transport properties of single molecular junctions

    NASA Astrophysics Data System (ADS)

    Luo, Yi

    2002-03-01

We have developed a new theoretical approach to characterize the electron transport process in molecular devices based on the elastic-scattering Green's function theory in connection with hybrid density functional theory, without using any fitting parameters. Two molecular devices, with benzene-1,4-dithiol and octanedithiol molecules embedded between two gold electrodes, have been studied. The calculated current-voltage characteristics are in very good agreement with existing experimental results reported by Reed et al. for benzene-1,4-dithiol [Science, 278 (1997) 252] and by Cui et al. for octanedithiol [Science, 294 (2001) 571]. Our approach is very straightforward and can be applied to quite large systems. Most importantly, it provides a reliable way to design and optimize molecular devices theoretically, thereby avoiding extremely difficult, time-consuming laboratory tests.

  12. Some "Facts" About CAI: A Quantitative Analysis of the 1976 Index to Computer Based Instruction

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    1976-01-01

    Descriptive quantitative data on various aspects of CAI are reported, including subject matter, author languages, instructional strategies, level of instruction, sources, and central processors. (Author)

  13. Quantitative imaging methods in osteoporosis

    PubMed Central

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M. Carola

    2016-01-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research. PMID:28090446

  14. Quantitative imaging methods in osteoporosis.

    PubMed

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  15. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of communicating the uses of climate model output to users and policy makers, as well as to other specialist adaptation scientists, are discussed. First, a brief scientific evaluation is provided of the length and time scales at which climate model output is likely to become uninformative, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation is given of the language often employed in communicating climate model output: language which accurately states that models are “better”, have “improved”, and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. Third, a general approach is outlined for evaluating the relevance of quantitative climate model output

  16. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  17. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  18. Accurate measurement of streamwise vortices using dual-plane PIV

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles, which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions from those measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight is a major challenge for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominantly out-of-plane flow, which requires thick laser sheets and short inter-frame times that increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we present a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurement of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  19. Teaching Descriptive Style.

    ERIC Educational Resources Information Center

    Brashers, H. C.

    1968-01-01

    As the inexperienced writer becomes aware of the issues involved in the composition of effective descriptive prose, he also develops a consistent control over his materials. The persona he chooses, if coherently thought out, can function as an index of many choices, helping him to manipulate the tone, intent, and mood of this style; to regulate…

  20. Andrew integrated reservoir description

    SciTech Connect

    Todd, S.P.

    1996-12-31

The Andrew field is an oil and gas accumulation in Palaeocene deep marine sands in the Central North Sea. It is currently being developed with mainly horizontal oil producers. Because of the field's relatively small reserves (mean 118 mmbbls), the performance of each of the 10 or so horizontal wells is highly important. Reservoir description work at sanction time concentrated on supporting the case that the field could be developed commercially with the minimum number of wells. The present Integrated Reservoir Description (IRD) is focussed on delivering the next level of detail that will impact the understanding of the local reservoir architecture and dynamic performance of each well. Highlights of Andrew IRD Include: (1) Use of a Reservoir Uncertainty Statement (RUS) developed at sanction time to focus the descriptive effort of both asset, support and contract petrotechnical staff, (2) High resolution biostratigraphic correlation to support confident zonation of the reservoir, (3) Detailed sedimentological analysis of the core including the use of dipmeter to interpret channel/sheet architecture to provide new insights into reservoir heterogeneity; (4) Integrated petrographical and petrophysical investigation of the controls on Sw-Height and relative permeability of water; (5) Fluids description using oil geochemistry and Residual Salt Analysis Sr isotope studies. Andrew IRD has highlighted several important risks to well performance, including the influence of more heterolithic intervals on gas breakthrough and the controls on water coning exerted by suppressed water relative permeability in the transition zone.

  1. Andrew integrated reservoir description

    SciTech Connect

    Todd, S.P.

    1996-01-01

    The Andrew field is an oil and gas accumulation in Palaeocene deep marine sands in the Central North Sea. It is currently being developed with mainly horizontal oil producers. Because of the field's relatively small reserves (mean 118 mmbbls), the performance of each of the 10 or so horizontal wells is highly important. Reservoir description work at sanction time concentrated on supporting the case that the field could be developed commercially with the minimum number of wells. The present Integrated Reservoir Description (IRD) is focussed on delivering the next level of detail that will impact the understanding of the local reservoir architecture and dynamic performance of each well. Highlights of Andrew IRD Include: (1) Use of a Reservoir Uncertainty Statement (RUS) developed at sanction time to focus the descriptive effort of both asset, support and contract petrotechnical staff, (2) High resolution biostratigraphic correlation to support confident zonation of the reservoir, (3) Detailed sedimentological analysis of the core including the use of dipmeter to interpret channel/sheet architecture to provide new insights into reservoir heterogeneity; (4) Integrated petrographical and petrophysical investigation of the controls on Sw-Height and relative permeability of water; (5) Fluids description using oil geochemistry and Residual Salt Analysis Sr isotope studies. Andrew IRD has highlighted several important risks to well performance, including the influence of more heterolithic intervals on gas breakthrough and the controls on water coning exerted by suppressed water relative permeability in the transition zone.

  2. Quantitative Literacy Provision in the First Year of Medical Studies

    ERIC Educational Resources Information Center

    Frith, V.

    2011-01-01

    This article presents a description of and motivation for the quantitative literacy (numeracy) intervention in the first year of medical studies at a South African university. This intervention is a response to the articulation gap between the quantitative literacy of many first-year medical students and the demands of their curriculum.…

  3. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  4. Recommended procedures and techniques for the petrographic description of bituminous coals

    USGS Publications Warehouse

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

Modern coal petrology requires rapid and precise description of great numbers of coal core or bench samples in order to acquire the information required to understand and predict vertical and lateral variation of coal quality for correlation with coal-bed thickness, depositional environment, suitability for technological uses, etc. Procedures for coal description vary in accordance with the objectives of the description. To achieve our aim of acquiring the maximum amount of quantitative information within the shortest period of time, we have adopted a combined megascopic-microscopic procedure. Megascopic analysis is used to identify the distinctive lithologies present, and microscopic analysis is required only to describe representative examples of the mixed lithologies observed. This procedure greatly decreases the number of microscopic analyses needed for adequate description of a sample. For quantitative megascopic description of coal microlithotypes, microlithotype assemblages, and lithotypes, we use (V) for vitrite or vitrain, (E) for liptite, (I) for inertite or fusain, (M) for mineral layers or lenses other than iron sulfide, (S) for iron sulfide, and (X1), (X2), etc. for mixed lithologies. Microscopic description is expressed in terms of V representing the vitrinite maceral group, E the exinite group, I the inertinite group, and M mineral components. Volume percentages are expressed as subscripts. Thus (V)20(V80E10I5M5)80 indicates a lithotype or assemblage of microlithotypes consisting of 20 vol. % vitrite and 80% of a mixed lithology having a modal maceral composition V80E10I5M5. This bulk composition can alternatively be recalculated and described as V84E8I4M4. To generate these quantitative data rapidly and accurately, we utilize an automated image analysis system (AIAS). Plots of VEIM data on easily constructed ternary diagrams provide readily comprehended illustrations of the range of modal composition of the lithologic units making up a given coal
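The bulk-composition recalculation in the excerpt (20% vitrite plus 80% of a V80E10I5M5 mixed lithology yielding V84E8I4M4) is a simple weighted average; a sketch, with a hypothetical helper name:

```python
def bulk_composition(components):
    """Weighted VEIM average.

    components: list of (fraction_percent, {maceral: percent}) pairs,
    one pair per megascopic lithology in the sample.
    """
    bulk = {}
    for frac, macerals in components:
        for m, pct in macerals.items():
            # Each lithology contributes its maceral percentages scaled
            # by its share of the whole sample.
            bulk[m] = bulk.get(m, 0.0) + frac / 100.0 * pct
    return bulk

# (V)20(V80E10I5M5)80 from the text:
sample = [
    (20, {"V": 100}),                          # pure vitrite layers
    (80, {"V": 80, "E": 10, "I": 5, "M": 5}),  # mixed lithology
]
result = bulk_composition(sample)  # → {'V': 84.0, 'E': 8.0, 'I': 4.0, 'M': 4.0}
```

This reproduces the V84E8I4M4 figure quoted in the abstract: 20 + 0.8 × 80 = 84 for vitrinite, and 0.8 × (10, 5, 5) = (8, 4, 4) for the remaining groups.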

  5. Reasoning about scene descriptions

    SciTech Connect

    DiManzo, M.; Adorni, G.; Giunchiglia, F.

    1986-07-01

    When a scene is described by means of natural language sentences, many details are usually omitted, because they are not in the focus of the conversation. Moreover, natural language is not the best tool to define precisely positions and spatial relationships. The process of interpreting ambiguous statements and inferring missing details involves many types of knowledge, from linguistics to physics. This paper is mainly concerned with the problem of modeling the process of understanding descriptions of static scenes. The specific topics covered by this work are the analysis of the meaning of spatial prepositions, the problem of the reference system and dimensionality, the activation of expectations about unmentioned objects, the role of default knowledge about object positions and its integration with contextual information sources, and the problem of space representation. The issue of understanding dynamic scenes descriptions is briefly approached in the last section.

  6. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density can be developed. Accurate measurements of heading distribution, using a rotating-polarization radar to enhance the wingbeat-frequency method of identification, are presented.

  7. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra.

  8. A new HPLC method for azithromycin quantitation.

    PubMed

    Zubata, Patricia; Ceresole, Rita; Rosasco, Maria Ana; Pizzorno, Maria Teresa

    2002-02-01

A simple liquid chromatographic method was developed for the estimation of azithromycin as raw material and in pharmaceutical forms. The sample was chromatographed on a reverse-phase C18 column and eluants were monitored at a wavelength of 215 nm. The method was accurate, precise, and sufficiently selective. It is applicable to quantitation, stability, and dissolution tests.

  9. Spacelab J experiment descriptions

    SciTech Connect

    Miller, T.Y.

    1993-08-01

    Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm. Separate abstracts have been prepared for articles from this report.

  10. Spacelab J experiment descriptions

    NASA Technical Reports Server (NTRS)

    Miller, Teresa Y. (Editor)

    1993-01-01

    Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm.

  11. Management control system description

    SciTech Connect

    Bence, P. J.

    1990-10-01

    This Management Control System (MCS) description describes the processes used to manage the cost and schedule of work performed by Westinghouse Hanford Company (Westinghouse Hanford) for the US Department of Energy, Richland Operations Office (DOE-RL), Richland, Washington. Westinghouse Hanford will maintain and use formal cost and schedule management control systems, as presented in this document, in performing work for the DOE-RL. This MCS description is a controlled document and will be modified or updated as required. This document must be approved by the DOE-RL; thereafter, any significant change will require DOE-RL concurrence. Westinghouse Hanford is the DOE-RL operations and engineering contractor at the Hanford Site. Activities associated with this contract (DE-AC06-87RL10930) include operating existing plant facilities, managing defined projects and programs, and planning future enhancements. This document is designed to comply with Section I-13 of the contract by providing a description of Westinghouse Hanford's cost and schedule control systems used in managing the above activities. 5 refs., 22 figs., 1 tab.

  12. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient handling of missing contour points in the problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with a recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.

  13. Description of induced nuclear fission with Skyrme energy functionals: Static potential energy surfaces and fission fragment properties

    NASA Astrophysics Data System (ADS)

    Schunck, N.; Duke, D.; Carr, H.; Knoll, A.

    2014-11-01

    Eighty years after its experimental discovery, a description of induced nuclear fission based solely on the interactions between neutrons and protons and quantum many-body methods still poses formidable challenges. The goal of this paper is to contribute to the development of a predictive microscopic framework for the accurate calculation of static properties of fission fragments for hot fission and thermal or slow neutrons. To this end, we focus on the 239Pu(n ,f ) reaction and employ nuclear density functional theory with Skyrme energy densities. Potential energy surfaces are computed at the Hartree-Fock-Bogoliubov approximation with up to five collective variables. We find that the triaxial degree of freedom plays an important role, both near the fission barrier and at scission. The impact of the parametrization of the Skyrme energy density and the role of pairing correlations on deformation properties from the ground state up to scission are also quantified. We introduce a general template for the quantitative description of fission fragment properties. It is based on the careful analysis of scission configurations, using both advanced topological methods and recently proposed quantum many-body techniques. We conclude that an accurate prediction of fission fragment properties at low incident neutron energies, although technologically demanding, should be within the reach of current nuclear density functional theory.

  14. Hierarchical structure description of spatiotemporal chaos.

    PubMed

    Liu, Jian; She, Zhen-Su; Guo, Hongyu; Li, Liang; Ouyang, Qi

    2004-09-01

We develop a hierarchical structure (HS) analysis for quantitative description of statistical states of spatially extended systems. Examples discussed here include an experimental reaction-diffusion system with Belousov-Zhabotinsky kinetics, the two-dimensional complex Ginzburg-Landau equation, and the modified FitzHugh-Nagumo equation, which all show complex dynamics of spirals and defects. We demonstrate that the spatiotemporal fluctuation fields in the above-mentioned systems all display the HS similarity property originally proposed for the study of fully developed turbulence [Phys. Rev. Lett. 72, 336 (1994)].

  15. Continuum description of avalanches in granular media.

    SciTech Connect

    Aranson, I. S.; Tsimring, L. S.

    2000-12-05

    A continuum theory of partially fluidized granular flows is proposed. The theory is based on a combination of the mass and momentum conservation equations with the order parameter equation which describes the transition between flowing and static components of the granular system. We apply this model to the dynamics of avalanches in chutes. The theory provides a quantitative description of recent observations of granular flows on rough inclined planes (Daerr and Douady 1999): layer bistability, and the transition from triangular avalanches propagating downhill at small inclination angles to balloon-shaped avalanches also propagating uphill for larger angles.

  16. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever before possible; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for their full exploitation. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  17. Quantitative rescattering theory for high-order harmonic generation from molecules

    NASA Astrophysics Data System (ADS)

    Le, Anh-Thu; Lucchese, R. R.; Tonzani, S.; Morishita, T.; Lin, C. D.

    2009-07-01

    The quantitative rescattering theory (QRS) for high-order harmonic generation (HHG) by intense laser pulses is presented. According to the QRS, HHG spectra can be expressed as a product of a returning electron wave packet and the photorecombination differential cross section of the laser-free continuum electron back to the initial bound state. We show that the shape of the returning electron wave packet is determined mostly by the laser. The returning electron wave packets can be obtained from the strong-field approximation or from the solution of the time-dependent Schrödinger equation (TDSE) for a reference atom. The validity of the QRS is carefully examined by checking against accurate results for both harmonic magnitude and phase from the solution of the TDSE for atomic targets within the single active electron approximation. Combining with accurate transition dipoles obtained from state-of-the-art molecular photoionization calculations, we further show that available experimental measurements for HHG from partially aligned molecules can be explained by the QRS. Our results show that quantitative description of the HHG from aligned molecules has become possible. Since infrared lasers of pulse durations of a few femtoseconds are easily available in the laboratory, they may be used for dynamic imaging of a transient molecule with femtosecond temporal resolutions.
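The central QRS factorization (the harmonic signal expressed as the product of a returning-electron wave packet and a field-free photorecombination cross section) can be sketched numerically. All numerical values below are illustrative placeholders, not data from the paper:

```python
import numpy as np

# Illustrative returning-electron energies (eV); values are made up.
energy = np.linspace(10.0, 100.0, 10)

# Hypothetical returning-electron wave packet |W(E)|^2, determined
# mostly by the laser according to the QRS.
wave_packet = np.exp(-((energy - 40.0) / 25.0) ** 2)

# Hypothetical photorecombination differential cross section sigma(E)
# of the laser-free continuum electron back to the bound state.
cross_section = 1.0 / (1.0 + ((energy - 60.0) / 15.0) ** 2)

# QRS factorization: harmonic signal S(E) = |W(E)|^2 * sigma(E).
spectrum = wave_packet * cross_section

print(spectrum.round(4))
```

In practice the wave packet would come from the strong-field approximation or a TDSE solution for a reference atom, and the cross section from a molecular photoionization calculation; here both are toy curves.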

  18. Measuring Joint Stimulus Control by Complex Graph/Description Correspondences

    ERIC Educational Resources Information Center

    Fields, Lanny; Spear, Jack

    2012-01-01

    Joint stimulus control occurs when responding is determined by the correspondence of elements of a complex sample and a complex comparison stimulus. In academic settings, joint stimulus control of behavior would be evidenced by the selection of an accurate description of a complex graph in which each element of a graph corresponded to particular…

  19. The Genre of Technical Description.

    ERIC Educational Resources Information Center

    Jordan, Michael P.

    1986-01-01

    Summarizes recent research into systems of lexical and grammatical cohesion in technical description. Discusses various methods by which technical writers "re-enter" the topic of description back into the text in successive sentences. (HTH)

  20. Accurate three-dimensional documentation of distinct sites

    NASA Astrophysics Data System (ADS)

    Singh, Mahesh K.; Dutta, Ashish; Subramanian, Venkatesh K.

    2017-01-01

    One of the most critical aspects of documenting distinct sites is acquiring detailed and accurate range information. Several three-dimensional (3-D) acquisition techniques are available, but each has its own limitations. This paper presents a range data fusion method with the aim to enhance the descriptive contents of the entire 3-D reconstructed model. A kernel function is introduced for supervised classification of the range data using a kernelized support vector machine. The classification method is based on the local saliency features of the acquired range data. The range data acquired from heterogeneous range sensors are transformed into a defined common reference frame. Based on the segmentation criterion, the fusion of range data is performed by integrating finer regions of range data acquired from a laser range scanner with the coarser region of Kinect's range data. After fusion, the Delaunay triangulation algorithm is applied to generate the highly accurate, realistic 3-D model of the scene. Finally, experimental results show the robustness of the proposed approach.

  1. A Descriptive Analysis of High School Student Motivators for Success

    ERIC Educational Resources Information Center

    Booker, Janet Maria

    2011-01-01

    The purpose of the quantitative descriptive study was to gain an understanding of the motivating factors leading high school students from rural and urban schools to receive a diploma. A revised version of the High School Motivation Scale (Close, 2001; Solberg et al., 2007) generated from SurveyMonkey.com was administered to high school graduates…

  2. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    ERIC Educational Resources Information Center

    Pobocik, Tamara J.

    2013-01-01

The use of technology and electronic medical records in healthcare has exponentially increased. This quantitative research project used a pretest/posttest design, and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case…

  3. MCO Monitoring activity description

    SciTech Connect

    SEXTON, R.A.

    1998-11-09

Spent Nuclear Fuel remaining from Hanford's N-Reactor operations in the 1970s has been stored under water in the K-Reactor Basins. This fuel will be repackaged, dried and stored in a new facility in the 200E Area. The safety basis for this process of retrieval, drying, and interim storage of the spent fuel has been established. The monitoring of MCOs in dry storage is a currently identified issue in the SNF Project. This plan outlines the key elements of the proposed monitoring activity. Other fuel stored in the K-Reactor Basins, including SPR fuel, will have other monitoring considerations and is not addressed by this activity description.

  4. Three Approaches to Descriptive Research.

    ERIC Educational Resources Information Center

    Svensson, Lennart

    This report compares three approaches to descriptive research, focusing on the kinds of descriptions developed and on the methods used to develop the descriptions. The main emphasis in all three approaches is on verbal data. In these approaches the importance of interpretation and its intuitive nature are emphasized. The three approaches, however,…

  5. An accurate metric for the spacetime around rotating neutron stars.

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-01-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  6. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  7. Trophic relationships in an estuarine environment: A quantitative fatty acid analysis signature approach

    NASA Astrophysics Data System (ADS)

    Magnone, Larisa; Bessonart, Martin; Gadea, Juan; Salhi, María

    2015-12-01

In order to better understand the functioning of aquatic environments, it is necessary to obtain accurate diet estimations in food webs. Their description should incorporate information about energy flow and the relative importance of trophic pathways. Fatty acids have been extensively used in qualitative studies on trophic relationships in food webs. Recently a new method to quantitatively estimate the diet of a single predator has been developed. In this study, a model of the aquatic food web was generated through quantitative fatty acid signature analysis (QFASA) to identify the trophic interactions among the species in Rocha Lagoon. The biological sampling over two consecutive annual periods was comprehensive enough to identify all functional groups in the aquatic food web (except birds and mammals). Heleobia australis seems to play a central role in this estuarine ecosystem. As both a grazer and a prey for several other species, H. australis probably transfers a great amount of energy to upper trophic levels. Most of the species at Rocha Lagoon have a wide range of prey items in their diet, reflecting a complex food web, which is characteristic of extremely dynamic environments such as estuarine ecosystems. QFASA proved useful for tracing and quantitatively estimating trophic pathways among species in an estuarine food web. The results obtained in the present work are a valuable contribution to the understanding of trophic relationships in Rocha Lagoon.

  8. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates

    PubMed Central

    Yogurtcu, Osman N.; Johnson, Margaret E.

    2015-01-01

    The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two-dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate-equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate-behavior predicted from Smoluchowski theory. Using a recently developed single particle reaction-diffusion algorithm we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
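The regime criterion quoted in the abstract lends itself to a one-line check. The helper name and the example numbers below are my own; only the ka/D < 0.05 bound comes from the source:

```python
def single_rate_constant_valid(ka: float, D: float, threshold: float = 0.05) -> bool:
    """Return True if a single rate constant can describe 2D association
    kinetics, using the ka/D < 0.05 bound quoted in the abstract.

    ka: intrinsic reaction rate, D: diffusion coefficient (same length units).
    """
    if D <= 0:
        raise ValueError("diffusion coefficient must be positive")
    return ka / D < threshold

# Illustrative values: ka = 0.01 um^2/s, D = 1.0 um^2/s.
print(single_rate_constant_valid(0.01, 1.0))  # True: rate-equation regime
print(single_rate_constant_valid(0.2, 1.0))   # False: needs multi-parametric description
```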

  9. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
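As a minimal sketch of the kind of radiometric correction described, the following applies a first-order scattering-area correction using the local incidence angle. The sin-ratio form is a common approximation for a tilted facet and is not claimed to be the paper's exact procedure:

```python
import math

def correct_sigma0_db(sigma0_db: float,
                      theta_nominal_deg: float,
                      theta_local_deg: float) -> float:
    """Rescale a backscatter coefficient (dB) by the ratio of local to
    nominal scattering area, approximated as sin(theta_local)/sin(theta_nominal).
    First-order sketch only; a full correction would also handle the
    antenna gain pattern per resolution cell."""
    ratio = math.sin(math.radians(theta_local_deg)) / math.sin(math.radians(theta_nominal_deg))
    return sigma0_db + 10.0 * math.log10(ratio)

# A slope facing the radar reduces the local incidence angle (45 -> 30 deg),
# so the area-corrected sigma0 is lower than the uncorrected value.
print(round(correct_sigma0_db(-10.0, 45.0, 30.0), 2))
```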

  10. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  11. Survival benefits in mimicry: a quantitative framework.

    PubMed

    Mikaberidze, Alexey; Haque, Masudul

    2009-08-07

    Mimicry is a resemblance between species that benefits at least one of the species. It is a ubiquitous evolutionary phenomenon particularly common among prey species, in which case the advantage involves better protection from predation. We formulate a mathematical description of predation, to investigate benefits and disadvantages of mimicry. The basic setup involves differential equations for quantities representing predator behavior, namely, the probabilities for attacking prey at the next encounter. Using this framework, we present new quantitative results, and also provide a unified description of a significant fraction of the quantitative mimicry literature. The new results include "temporary" mutualism between prey species, and an optimal density at which the survival benefit is greatest for the mimic. The formalism leads naturally to extensions in several directions, such as the interplay of mimicry with population dynamics, studies of spatiotemporal patterns, etc. We demonstrate this extensibility by presenting some explorations on spatiotemporal pattern dynamics.

  12. Description of Jet Breakup

    NASA Technical Reports Server (NTRS)

    Papageorgiou, Demetrios T.

    1996-01-01

    In this article we review recent results on the breakup of cylindrical jets of a Newtonian fluid. Capillary forces provide the main driving mechanism and our interest is in the description of the flow as the jet pinches to form drops. The approach is to describe such topological singularities by constructing local (in time and space) similarity solutions from the governing equations. This is described for breakup according to the Euler, Stokes or Navier-Stokes equations. It is found that slender jet theories can be applied when viscosity is present, but for inviscid jets the local shape of the jet at breakup is most likely of a non-slender geometry. Systems of one-dimensional models of the governing equations are solved numerically in order to illustrate these differences.

  13. Task Description Language

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Apfelbaum, David

    2005-01-01

    Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.

  14. Symmetrical gait descriptions

    NASA Astrophysics Data System (ADS)

    Dunajewski, Adam; Dusza, Jacek J.; Rosado Muñoz, Alfredo

    2014-11-01

The article presents a proposal for the description of human gait as a periodic and symmetric process. First, the data for the research were obtained in the Laboratory of Group SATI in the School of Engineering of the University of Valencia. Then the periodic model, the Mean Double Step (MDS), was constructed. Finally, on the basis of the MDS, the symmetrical models Left Mean Double Step and Right Mean Double Step (LMDS and RMDS) could be created, using the method of various functional extensions. Symmetrical gait models can be used to calculate coefficients of asymmetry at any time or phase of the gait. In this way it is possible to create an asymmetry function which better describes human gait dysfunction. The paper also describes an algorithm for calculating the symmetric models, and shows exemplary results based on the experimental data.
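Given LMDS and RMDS signals resampled to one normalized double step, an asymmetry coefficient can be computed over the gait cycle. The synthetic signals and the RMS-based coefficient below are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

# Synthetic left/right mean-double-step signals over one normalized
# gait cycle (phase 0..1); a real LMDS/RMDS would come from motion capture.
phase = np.linspace(0.0, 1.0, 101)
lmds = np.sin(2 * np.pi * phase)
rmds = 0.9 * np.sin(2 * np.pi * phase + 0.05)  # slightly weaker, shifted right side

# One simple (assumed) asymmetry coefficient: RMS of the left-right
# difference, normalized by the RMS of the left signal. 0 = perfect symmetry.
asymmetry = float(np.sqrt(np.mean((lmds - rmds) ** 2)) / np.sqrt(np.mean(lmds ** 2)))
print(round(asymmetry, 3))
```

A pointwise variant, `(lmds - rmds)` without the averaging, would give the asymmetry as a function of gait phase rather than a single number.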

  15. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well.

  16. YUCCA MOUNTAIN SITE DESCRIPTION

    SciTech Connect

    A.M. Simmons

    2004-04-16

    The ''Yucca Mountain Site Description'' summarizes, in a single document, the current state of knowledge and understanding of the natural system at Yucca Mountain. It describes the geology; geochemistry; past, present, and projected future climate; regional hydrologic system; and flow and transport within the unsaturated and saturated zones at the site. In addition, it discusses factors affecting radionuclide transport, the effect of thermal loading on the natural system, and tectonic hazards. The ''Yucca Mountain Site Description'' is broad in nature. It summarizes investigations carried out as part of the Yucca Mountain Project since 1988, but it also includes work done at the site in earlier years, as well as studies performed by others. The document has been prepared under the Office of Civilian Radioactive Waste Management quality assurance program for the Yucca Mountain Project. Yucca Mountain is located in Nye County in southern Nevada. The site lies in the north-central part of the Basin and Range physiographic province, within the northernmost subprovince commonly referred to as the Great Basin. The basin and range physiography reflects the extensional tectonic regime that has affected the region during the middle and late Cenozoic Era. Yucca Mountain was initially selected for characterization, in part, because of its thick unsaturated zone, its arid to semiarid climate, and the existence of a rock type that would support excavation of stable openings. In 1987, the United States Congress directed that Yucca Mountain be the only site characterized to evaluate its suitability for development of a geologic repository for high-level radioactive waste and spent nuclear fuel.

  17. Older Adults’ Pain Descriptions

    PubMed Central

    McDonald, Deborah Dillon

    2008-01-01

The purpose of this study was to describe the types of pain information described by older adults with chronic osteoarthritis pain. Pain descriptions were obtained from older adults who participated in a posttest-only double-blind study testing how the phrasing of healthcare practitioners' pain questions affected the amount of communicated pain information. The 207 community-dwelling older adults were randomized to respond to either the open-ended or closed-ended pain question. They viewed and orally responded to a computer-displayed videotape of a practitioner asking them the respective pain question. All then viewed and responded to the general follow-up question, “What else can you tell me?” and lastly, “What else can you tell me about your pain, aches, soreness or discomfort?” Audio-taped responses were transcribed and content analyzed by trained, independent raters using 16 a priori criteria from the American Pain Society (2002) Guidelines for the Management of Pain in Osteoarthritis, Rheumatoid Arthritis, and Juvenile Chronic Arthritis. Older adults described important but limited types of information, primarily about pain location, timing, and intensity. Pain treatment information was elicited after repeated questioning. Therefore, practitioners need to follow up older adults' initial pain descriptions with pain questions that promote a more complete pain management discussion. Routine use of a multidimensional pain assessment instrument that measures information such as functional interference, current pain treatments, treatment effects, and side effects would be one way of ensuring a more complete pain management discussion with older adults. PMID:19706351

  18. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  19. Quantitative analysis to guide orphan drug development.

    PubMed

    Lesko, L J

    2012-08-01

    The development of orphan drugs for rare diseases has made impressive strides in the past 10 years. There has been a surge in orphan drug designations, but new drug approvals have not kept up. This article presents a three-pronged hierarchical strategy for quantitative analysis of data at the descriptive, mechanistic, and systems levels of the biological system that could represent a standardized and rational approach to orphan drug development. Examples are provided to illustrate the concept.

  20. Quantitative results for square gradient models of fluids

    NASA Astrophysics Data System (ADS)

    Kong, Ling-Ti; Vriesinga, Dan; Denniston, Colin

    2011-03-01

    Square gradient models for fluids are extensively used because they are believed to provide a good qualitative understanding of the essential physics. However, unlike elasticity theory for solids, there are few quantitative results for specific (as opposed to generic) fluids. Indeed, the only numerical values of the square gradient coefficients for specific fluids have been inferred from attempts to match macroscopic properties such as surface tensions rather than from direct measurement. We employ all-atom molecular dynamics, using the TIP3P and OPLS force fields, to directly measure the coefficients of the density gradient expansion for several real fluids. For all liquids measured, including water, we find that the square gradient coefficient is negative, suggesting the need for some regularization of a model including only the square gradient, but only at wavelengths comparable to the separation between molecules. The implications for liquid-gas interfaces are also examined. Remarkably, the square gradient model is found to give a reasonably accurate description of density fluctuations in the liquid state down to wavelengths close to atomic size.

  1. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  2. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  3. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
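The first- or second-order temperature correction described above can be sketched in a few lines. The flux values, the quadratic model, and the `drive_scale` helper below are illustrative assumptions, not the authors' measured data:

```python
import numpy as np

# Illustrative sketch only: model one LED channel's relative luminous flux
# as a second-order polynomial of junction temperature, then scale the
# drive level to hold the channel's contribution constant as it heats up.
T_junction = np.array([25.0, 45.0, 65.0, 85.0])   # junction temperature, deg C
flux_rel = np.array([1.00, 0.88, 0.76, 0.63])     # relative flux (assumed values)

coef = np.polyfit(T_junction, flux_rel, 2)        # second-order fit

def drive_scale(temp_c):
    """Drive-level multiplier restoring the 25 deg C output at temp_c."""
    return np.polyval(coef, 25.0) / np.polyval(coef, temp_c)
```

A feedback controller would apply such a scale factor per channel (with a separate fit for InGaN and AlInGaP devices) whenever the sensed junction temperature changes.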

  4. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  5. An Accurate, Simplified Model of Intrabeam Scattering

    SciTech Connect

    Bane, Karl LF

    2002-05-23

    Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.

  6. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  7. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  8. Theoretical and computational studies of hydrophobic and hydrophilic hydration: Towards a molecular description of the hydration of proteins

    NASA Astrophysics Data System (ADS)

    Garde, Shekhar

    The unique balance of forces underlying biological processes-such as protein folding, aggregation, molecular recognition, and the formation of biological membranes-owes its origin in large part to the surrounding aqueous medium. A quantitative description of fundamental noncovalent interactions, in particular hydrophobic and electrostatic interactions at molecular- scale separations, requires an accurate description of water structure. Thus, the primary goals of our research are to understand the role of water in mediating interactions between molecules and to incorporate this understanding into molecular theories for calculating water-mediated interactions. We have developed a molecular model of hydrophobic interactions that uses methods of information theory to relate hydrophobic effects to the density fluctuations in liquid water. This model provides a quantitative description of small-molecule hydration thermodynamics, as well as insights into the entropies of unfolding globular proteins. For larger molecular solutes, we relate the inhomogeneous water structure in their vicinity to their hydration thermodynamics. We find that the water structure in the vicinity of nonpolar solutes is only locally sensitive to the molecular details of the solute. Water structures predicted using this observation are used to study the association of two neopentane molecules and the conformational equilibria of n-pentane molecule. We have also studied the hydration of a model molecular ionic solute, a tetramethylammonium ion, over a wide range of charge states of the solute. We find that, although the charge dependence of the ion hydration free energy is quadratic, negative ions are more favorably hydrated compared to positive ions. Moreover, this asymmetry of hydration can be reconciled by considering the differences in water organization surrounding positive and negative ions. 
We have also developed methods for predicting water structure surrounding molecular ions and relating

  9. Towards quantitative analysis of retinal features in optical coherence tomography.

    PubMed

    Baroni, Maurizio; Fortunato, Pina; La Torre, Agostino

    2007-05-01

    The purpose of this paper was to propose a new computer method for quantitative evaluation of representative features of the retina using optical coherence tomography (OCT). A multi-step approach was devised and positively tested for segmentation of the three main retinal layers: the vitreo-retinal interface and the inner and outer retina. Following a preprocessing step, three regions of interest were delimited. Significant peaks corresponding to high and low intensity strips were located along the OCT A-scan lines and accurate boundaries between different layers were obtained by maximizing an edge likelihood function. For a quantitative description, thickness measurement, densitometry, texture and curvature analyses were performed. As a first application, the effect of intravitreal injection of triamcinolone acetonide (IVTA) for the treatment of vitreo-retinal interface syndrome was evaluated. Almost all the parameters, measured on a set of 16 pathologic OCT images, were statistically different before and after IVTA injection (p<0.05). Shape analysis of the internal limiting membrane confirmed the reduction of the pathological traction state. Other significant parameters, such as reflectivity and texture contrast, exhibited relevant changes both at the vitreo-retinal interface and in the inner retinal layers. Texture parameters in the inner and outer retinal layers significantly correlated with the visual acuity restoration. According to these findings an IVTA injection might be considered a possible alternative to surgery for selected patients. In conclusion, the proposed approach appeared to be a promising tool for the investigation of tissue changes produced by pathology and/or therapy.

  10. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  11. Recent advances in quantitative neuroproteomics.

    PubMed

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2013-06-15

    The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders, including schizophrenia and drug addiction, and neurodegenerative diseases, including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). While the biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to shed

  12. Fast and Reliable Quantitative Peptidomics with labelpepmatch.

    PubMed

    Verdonck, Rik; De Haes, Wouter; Cardoen, Dries; Menschaert, Gerben; Huhn, Thomas; Landuyt, Bart; Baggerman, Geert; Boonen, Kurt; Wenseleers, Tom; Schoofs, Liliane

    2016-03-04

    The use of stable isotope tags in quantitative peptidomics offers many advantages, but the laborious identification of matching sets of labeled peptide peaks is still a major bottleneck. Here we present labelpepmatch, an R-package for fast and straightforward analysis of LC-MS spectra of labeled peptides. This open-source tool offers fast and accurate identification of peak pairs alongside an appropriate framework for statistical inference on quantitative peptidomics data, based on techniques from other -omics disciplines. A relevant case study on the desert locust Schistocerca gregaria proves our pipeline to be a reliable tool for quick but thorough explorative analyses.
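labelpepmatch itself is an R package; the following Python sketch only illustrates the core peak-pairing step it automates. The peak list, mass shift `delta`, and ppm tolerance are invented for illustration:

```python
import bisect

# Hedged sketch (not the labelpepmatch R code): pair "light" and "heavy"
# peptide peaks whose m/z values differ by the label mass shift `delta`
# within a relative (ppm) tolerance.
def match_label_pairs(mz_values, delta, ppm=10.0):
    """Return (light_index, heavy_index) pairs with heavy - light ~ delta."""
    order = sorted(range(len(mz_values)), key=lambda i: mz_values[i])
    values = [mz_values[i] for i in order]
    pairs = []
    for pos, idx_light in enumerate(order):
        target = values[pos] + delta
        tol = target * ppm * 1e-6            # ppm tolerance in absolute m/z
        lo = bisect.bisect_left(values, target - tol)
        hi = bisect.bisect_right(values, target + tol)
        pairs.extend((idx_light, order[j]) for j in range(lo, hi))
    return pairs

# Invented peak list: two light/heavy pairs separated by delta = 4.02 Da
peaks = [500.000, 504.020, 600.000, 604.021]
print(match_label_pairs(peaks, delta=4.02))  # -> [(0, 1), (2, 3)]
```

Sorting once and binary-searching per peak keeps the pairing near O(n log n), which matters for dense LC-MS peak lists.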

  13. Microgravity Environment Description Handbook

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard; McPherson, Kevin; Hrovat, Kenneth; Moskowitz, Milton; Rogers, Melissa J. B.; Reckart, Timothy

    1997-01-01

    The Microgravity Measurement and Analysis Project (MMAP) at the NASA Lewis Research Center (LeRC) manages the Space Acceleration Measurement System (SAMS) and the Orbital Acceleration Research Experiment (OARE) instruments to measure the microgravity environment on orbiting space laboratories. These laboratories include the Spacelab payloads on the shuttle, the SPACEHAB module on the shuttle, the middeck area of the shuttle, and Russia's Mir space station. Experiments are performed in these laboratories to investigate scientific principles in the near-absence of gravity. The microgravity environment desired for most experiments would have zero acceleration across all frequency bands or a true weightless condition. This is not possible due to the nature of spaceflight where there are numerous factors which introduce accelerations to the environment. This handbook presents an overview of the major microgravity environment disturbances of these laboratories. These disturbances are characterized by their source (where known), their magnitude, frequency and duration, and their effect on the microgravity environment. Each disturbance is characterized on a single page for ease in understanding the effect of a particular disturbance. The handbook also contains a brief description of each laboratory.

  14. Trusting Description: Authenticity, Accountability, and Archival Description Standards

    ERIC Educational Resources Information Center

    MacNeil, Heather

    2009-01-01

    It has been suggested that one of the purposes of archival description is to establish grounds for presuming the authenticity of the records being described. The article examines the implications of this statement by examining the relationship between and among authenticity, archival description, and archival accountability, assessing how this…

  15. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
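The median-function trick mentioned in the abstract can be made concrete. The sketch below (my illustration, not Huynh's code) rests on the identity minmod(a, b) = median(0, a, b) to build monotonicity-preserving slopes for a piecewise-linear reconstruction:

```python
import numpy as np

# Sketch: the median function gives a compact slope limiter, since
# minmod(a, b) == median(0, a, b).
def median3(a, b, c):
    """Elementwise middle value of three arrays."""
    return np.maximum(np.minimum(a, b), np.minimum(c, np.maximum(a, b)))

def limited_slopes(u):
    """Monotonicity-preserving slopes for the interior cells of u.

    median(0, backward diff, forward diff) is zero at local extrema and
    the smaller one-sided difference elsewhere, so the piecewise-linear
    reconstruction introduces no new extrema."""
    du_minus = np.diff(u)[:-1]    # u[i] - u[i-1] for interior i
    du_plus = np.diff(u)[1:]      # u[i+1] - u[i] for interior i
    return median3(np.zeros_like(du_minus), du_minus, du_plus)

u = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0])
print(limited_slopes(u))  # -> [0. 1. 0. 0.]: slopes vanish at the extrema
```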

  16. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature, and they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
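The first-order inertia correction mentioned in the abstract can be sketched as follows. The model is tau * dT_ind/dt + T_ind = T_fluid, so the fluid temperature is recovered as T_ind + tau * dT_ind/dt; the time constant and sampling below are assumed values, not those of the paper's thermometers:

```python
import numpy as np

# First-order inertia model of a thermometer: tau * dT_ind/dt + T_ind = T_fluid.
# Inverting it, T_fluid ~ T_ind + tau * dT_ind/dt. All numbers are assumed
# for illustration.
tau = 8.0                                 # time constant, s (assumed)
t = np.linspace(0.0, 40.0, 401)           # 0.1 s sampling
T_fluid = 100.0                           # boiling water, deg C
T_ind = T_fluid + (20.0 - T_fluid) * np.exp(-t / tau)   # indicated temperature

dTdt = np.gradient(T_ind, t)              # finite-difference derivative
T_corrected = T_ind + tau * dTdt          # recovered fluid temperature
```

At t = 20 s the raw reading is still several degrees low while the corrected value sits on 100 deg C to within the differentiation error; in practice the derivative of a noisy signal must be smoothed before this correction is applied.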

  17. Does the Taylor Spatial Frame Accurately Correct Tibial Deformities?

    PubMed Central

    Segal, Kira; Ilizarov, Svetlana; Fragomen, Austin T.; Ilizarov, Gabriel

    2009-01-01

    Background Optimal leg alignment is the goal of tibial osteotomy. The Taylor Spatial Frame (TSF) and the Ilizarov method enable gradual realignment of angulation and translation in the coronal, sagittal, and axial planes; hence the term six-axis correction. Questions/purposes We asked whether this approach would allow precise correction of tibial deformities. Methods We retrospectively reviewed 102 patients (122 tibiae) with tibial deformities treated with percutaneous osteotomy and gradual correction with the TSF. The proximal osteotomy group was subdivided into two subgroups to distinguish those with an intentional overcorrection of the mechanical axis deviation (MAD). The minimum followup after frame removal was 10 months (average, 48 months; range, 10–98 months). Results In the proximal osteotomy group, patients with varus and valgus deformities for whom the goal of alignment was neutral or overcorrection experienced accurate correction of MAD. In the proximal tibia, the medial proximal tibial angle improved from 80° to 89° in patients with a varus deformity and from 96° to 85° in patients with a valgus deformity. In the middle osteotomy group, all patients had less than 5° coronal plane deformity and 15 of 17 patients had less than 5° sagittal plane deformity. In the distal osteotomy group, the lateral distal tibial angle improved from 77° to 86° in patients with a valgus deformity and from 101° to 90° for patients with a varus deformity. Conclusions Gradual correction of all tibial deformities with the TSF was accurate and with few complications. Level of Evidence Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:19911244

  18. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields.
I finally present photometry for the Wolf-Rayet binary WR 20a

  19. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  20. Benchmark data base for accurate van der Waals interaction in inorganic fragments

    NASA Astrophysics Data System (ADS)

    Brndiar, Jan; Stich, Ivan

    2012-02-01

    A range of inorganic materials, such as Sb, As, P, S, and Se, are built from van der Waals (vdW) interacting units forming crystals that neither the standard DFT GGA description nor cheap quantum chemistry methods, such as MP2, describe correctly. We use this database, for which we have performed ultra-accurate CCSD(T) calculations in the complete basis set limit, to test alternative approximate theories, such as those of Grimme [1], Langreth-Lundqvist [2], and Tkatchenko-Scheffler [3]. While none of these theories gives an entirely correct description, Grimme consistently provides more accurate results than Langreth-Lundqvist, which tends to overestimate the distances and underestimate the interaction energies for this set of systems. In contrast, Tkatchenko-Scheffler appears to yield a surprisingly accurate, computationally cheap and convenient description, applicable also to systems with appreciable charge transfer. [1] S. Grimme, J. Comp. Chem. 27, 1787 (2006). [2] K. Lee et al., Phys. Rev. B 82, 081101(R) (2010). [3] A. Tkatchenko and M. Scheffler, Phys. Rev. Lett. 102, 073005 (2009).

  1. A microscopic description of black hole evaporation via holography

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Hanada, Masanori; Maltz, Jonathan

    2016-07-01

    We propose a description of how a large, cold black hole (black zero-brane) in type IIA superstring theory evaporates into freely propagating D0-branes, by solving the dual gauge theory quantitatively. The energy spectrum of emitted D0-branes is parametrically close to thermal when the black hole is large. The black hole, while initially cold, gradually becomes an extremely hot and stringy object as it evaporates. As it emits D0-branes, its emission rate speeds up and it evaporates completely without leaving any remnant. Hence this system provides us with a concrete holographic description of black hole evaporation without information loss.

  2. Accurate taxonomic assignment of short pyrosequencing reads.

    PubMed

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
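The precision/recall node selection described above can be illustrated with a small sketch (my simplification, without the authors' suffix-array machinery or their linear-time guarantee): score every taxonomy node by the F-measure of its subtree's leaf set against the read's matching sequences, and assign the read to the best-scoring node.

```python
# Toy implementation of the node-selection criterion. The tree below and
# all node names are invented for illustration.
def best_assignment(children, root, matching):
    """children: dict node -> list of children (leaves absent); matching: set of leaves."""
    best_node, best_f = None, -1.0

    def leaves(node):
        kids = children.get(node, [])
        if not kids:
            return {node}
        out = set()
        for k in kids:
            out |= leaves(k)
        return out

    def walk(node):
        nonlocal best_node, best_f
        lv = leaves(node)
        hit = len(lv & matching)
        if hit:
            precision = hit / len(lv)
            recall = hit / len(matching)
            f = 2 * precision * recall / (precision + recall)
            if f > best_f:
                best_node, best_f = node, f
        for k in children.get(node, []):
            walk(k)

    walk(root)
    return best_node

tree = {'root': ['A', 'B'], 'A': ['a1', 'a2'], 'B': ['b1', 'b2', 'b3']}
print(best_assignment(tree, 'root', {'a1', 'a2'}))  # -> A
```

A read matching both leaves under 'A' maps to 'A' itself (precision and recall both 1), whereas the plain lowest-common-ancestor rule would do the same here but would jump to 'root' as soon as a single stray match fell outside 'A'.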

  3. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  4. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the use of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent sidelobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce a sparse and accurate high resolution image that is conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications, where the problem sizes are quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. With the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high sidelobes of the FFT. With our approach, however, clear edges, boundaries, and textures of the vehicles are obtained.

  5. Logic synthesis from DDL description

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.

    1980-01-01

    The implementation of the DDLTRN and DDLSIM programs on the SEL-2 computer system is reported. These programs were tested with DDL descriptions of various complexity. An algorithm to synthesize combinational logic using the cells available in the standard IC cell library was formulated. The algorithm is implemented as a FORTRAN program, and a description of the program is given.

  6. Mission data system framework description

    NASA Technical Reports Server (NTRS)

    Meyer, K.; Rinker, G.; Dvorak, D.; Rosmussen, R.; Reinholttz, K.

    2002-01-01

    This document provides an overall description of the MDS Framework technology. Since the purpose is to provide a general reference for the frameworks, the descriptions are organized as a compendium. This document does not provide guidance for how the MDS technology should be used.

  7. Quantitative analysis of blood vessel geometry

    NASA Astrophysics Data System (ADS)

    Fuhrman, Michael G.; Abdul-Karim, Othman; Shah, Sujal; Gilbert, Steven G.; Van Bibber, Richard

    2001-07-01

    Re-narrowing or restenosis of a human coronary artery occurs within six months in one third of balloon angioplasty procedures. Accurate and repeatable quantitative analysis of vessel shape is important to characterize the progression and type of restenosis, and to evaluate the effects new therapies might have. A combination of complicated geometry and image variability, together with the need for high resolution and large image size, makes visual/manual analysis slow, difficult, and prone to error. The image processing and analysis described here was developed to automate feature extraction of the lumen, internal elastic lamina, neointima, external elastic lamina, and tunica adventitia, and to enable an objective, quantitative definition of blood vessel geometry. The quantitative geometrical analysis enables the measurement of several features including perimeter, area, and other metrics of vessel damage. Automation of feature extraction creates a high throughput capability that enables analysis of serial sections for more accurate measurement of restenosis dimensions. Measurement results are input into a relational database where they can be statistically analyzed and compared across studies. As part of the integrated process, results are also imprinted on the images themselves to facilitate auditing of the results. The analysis is fast, repeatable and accurate while allowing the pathologist to control the measurement process.

  8. An accurate equation of state for fluids and solids.

    PubMed

    Parsafar, G A; Spohr, H V; Patey, G N

    2009-09-03

    A simple functional form for a general equation of state based on an effective near-neighbor pair interaction of an extended Lennard-Jones (12,6,3) type is given and tested against experimental data for a wide variety of fluids and solids. Computer simulation results for ionic liquids are used for further evaluation. For fluids, there appears to be no upper density limitation on the equation of state. The lower density limit for isotherms near the critical temperature is the critical density. The equation of state gives a good description of all types of fluids, nonpolar (including long-chain hydrocarbons), polar, hydrogen-bonded, and metallic, at temperatures ranging from the triple point to the highest temperature for which there is experimental data. For solids, the equation of state is very accurate for all types considered, including covalent, molecular, metallic, and ionic systems. The experimental pVT data available for solids do not reveal any pressure or temperature limitations. An analysis of the importance and possible underlying physical significance of the terms in the equation of state is given.
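
    The abstract does not reproduce the functional form. An effective near-neighbor pair interaction of extended Lennard-Jones (12,6,3) type presumably combines inverse 12th, 6th, and 3rd power terms; a sketch with placeholder coefficients (the C's here are not the paper's fitted values):

```latex
% Extended LJ (12,6,3) effective near-neighbor pair interaction (sketch):
u(r) = \frac{C_{12}}{r^{12}} + \frac{C_{6}}{r^{6}} + \frac{C_{3}}{r^{3}}
% With the near-neighbor separation scaling as r \propto \rho^{-1/3},
% the three terms contribute \rho^{4}, \rho^{2}, and \rho dependences
% to the configurational energy per particle.
```

    The extra r^-3 term is what distinguishes this form from the plain (12,6) Lennard-Jones interaction underlying earlier regularity-type equations of state.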

  9. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

    Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation, and low Reynolds number correction were considered. The influence of these corrections on flow properties is discussed by comparing the results with those obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer, as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.

  10. Reference module selection criteria for accurate testing of photovoltaic (PV) panels

    SciTech Connect

    Roy, J.N.; Gariki, Govardhan Rao; Nagalakhsmi, V.

    2010-01-15

    It is shown that for accurate testing of PV panels the correct selection of reference modules is important. A detailed description of the test methodology is given. Three different types of reference modules, having different I_SC (short circuit current) and power (in Wp), have been used for this study. These reference modules have been calibrated at NREL. It has been found that for accurate testing, both the I_SC and the power of the reference module must be similar to or exceed those of the modules under test. If the corresponding values of the reference module fall below this limit, the measurements may not be accurate. The experimental results obtained have been modeled using a simple equivalent circuit model and the associated I-V equations. (author)

  11. A statistical mechanical description of biomolecular hydration

    SciTech Connect

    1996-02-01

    We present an efficient and accurate theoretical description of the structural hydration of biological macromolecules. The hydration of molecules of almost arbitrary size (tRNA, antibody-antigen complexes, photosynthetic reaction centre) can be studied in solution and in the crystal environment. The biomolecular structure obtained from x-ray crystallography, NMR, or modeling is required as input information. The structural arrangement of water molecules near a biomolecular surface is represented by the local water density analogous to the corresponding electron density in an x-ray diffraction experiment. The water-density distribution is approximated in terms of two- and three-particle correlation functions of solute atoms with water using a potentials-of-mean-force expansion.

  12. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  15. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  16. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.

  17. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and the equipment requirements for extending the effective range of the Fresnel diffraction systems are also described.

  18. Accurate radio positions with the Tidbinbilla interferometer

    NASA Technical Reports Server (NTRS)

    Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.

    1979-01-01

    The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.

  19. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P.

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  20. Mobile app-based quantitative scanometric analysis.

    PubMed

    Wong, Jessica X H; Liu, Frank S F; Yu, Hua-Zhong

    2014-12-16

    The feasibility of using smartphones and other mobile devices as the detection platform for quantitative scanometric assays is demonstrated. The different scanning modes (color, grayscale, black/white) and grayscale conversion protocols (average, weighted average/luminosity, and software specific) have been compared in determining the optical darkness ratio (ODR) values, a conventional quantitation measure for scanometric assays. A mobile app was developed to image and analyze scanometric assays, as demonstrated by paper-printed tests and a biotin-streptavidin assay on a plastic substrate. Primarily for ODR analysis, the app has been shown to perform as well as a traditional desktop scanner, demonstrating that smartphones (and other mobile devices) promise to be a practical platform for accurate, quantitative chemical analysis and medical diagnostics.
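
    The grayscale conversion protocols compared above are simple per-pixel formulas. A sketch of two of them, plus one plausible darkness-ratio definition (the paper's exact ODR formula is not given in the abstract, so the ratio below is an assumption for illustration):

```python
# Grayscale conversion protocols compared in scanometric quantitation.

def luminosity(rgb):
    """Weighted-average ('luminosity') grayscale, ITU-R BT.601 weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def average(rgb):
    """Unweighted-average grayscale."""
    return sum(rgb) / 3.0

def odr(spot_rgb, background_rgb, gray=luminosity):
    """Darkness of a spot relative to its local background (one plausible
    definition of an optical darkness ratio, not necessarily the paper's)."""
    bg, sp = gray(background_rgb), gray(spot_rgb)
    return (bg - sp) / bg

# A mid-gray spot on a near-white background:
value = odr((100, 100, 100), (250, 250, 250))
```

    Because the luminosity weights sum to 1, a neutral gray maps to the same value under both protocols; the protocols differ only for colored spots, which is why the choice matters for colorimetric assays.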

  1. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  2. Quantitative plant proteomics.

    PubMed

    Bindschedler, Laurence V; Cramer, Rainer

    2011-02-01

    Quantitation is an inherent requirement in comparative proteomics and there is no exception to this for plant proteomics. Quantitative proteomics has high demands on the experimental workflow, requiring a thorough design and often a complex multi-step structure. It has to include sufficient numbers of biological and technical replicates and methods that are able to facilitate a quantitative signal read-out. Quantitative plant proteomics in particular poses many additional challenges but because of the nature of plants it also offers some potential advantages. In general, analysis of plants has been less prominent in proteomics. Low protein concentration, difficulties in protein extraction, genome multiploidy, high Rubisco abundance in green tissue, and an absence of well-annotated and completed genome sequences are some of the main challenges in plant proteomics. However, the latter is now changing with several genomes emerging for model plants and crops such as potato, tomato, soybean, rice, maize and barley. This review discusses the current status in quantitative plant proteomics (MS-based and non-MS-based) and its challenges and potentials. Both relative and absolute quantitation methods in plant proteomics from DIGE to MS-based analysis after isotope labeling and label-free quantitation are described and illustrated by published studies. In particular, we describe plant-specific quantitative methods such as metabolic labeling methods that can take full advantage of plant metabolism and culture practices, and discuss other potential advantages and challenges that may arise from the unique properties of plants.

  3. Semiclassical description of autocorrelations in nuclear masses

    SciTech Connect

    Garcia-Garcia, Antonio M.; Hirsch, Jorge G.; Frank, Alejandro

    2006-08-15

    Nuclear mass autocorrelations are investigated as a function of the number of nucleons. The fluctuating part of these autocorrelations is modeled by a parameter free model in which the nucleons are confined in a rigid sphere. Explicit results are obtained by using periodic orbit theory. Despite the simplicity of the model we have found a remarkable quantitative agreement of the mass autocorrelations for all nuclei in the nuclear data chart. In order to achieve a similar degree of agreement for the nuclear masses themselves it is necessary to consider additional variables such as multipolar corrections to the spherical shape and an effective number of nucleons. Our findings suggest that higher order effects like nuclear deformations or residual interactions have little relevance in the description of the fluctuations of the nuclear autocorrelations.

  4. On an efficient and accurate method to integrate restricted three-body orbits

    NASA Technical Reports Server (NTRS)

    Murison, Marc A.

    1989-01-01

    This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
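
    The core of the Bulirsch-Stoer method is easy to sketch: advance one large step with the modified-midpoint rule at several substep counts, then Richardson-extrapolate to zero step size. The following minimal Python version (an illustration, not Murison's code; step-size control and the regularization he applies near close approaches are omitted) shows the idea:

```python
import numpy as np

# One Bulirsch-Stoer step: modified-midpoint integrations at several
# substep counts, combined by polynomial extrapolation in (H/n)^2.

def modified_midpoint(f, t, y, H, n):
    """Advance y over a step H using n modified-midpoint substeps."""
    h = H / n
    y0, y1 = y, y + h * f(t, y)
    for i in range(1, n):
        y0, y1 = y1, y0 + 2 * h * f(t + i * h, y1)
    return 0.5 * (y0 + y1 + h * f(t + H, y1))

def bs_step(f, t, y, H, seq=(2, 4, 6, 8, 10, 12)):
    """Extrapolate the substep results to zero step size (Aitken-Neville)."""
    T = []
    for k, n in enumerate(seq):
        T.append(modified_midpoint(f, t, y, H, n))
        for j in range(k - 1, -1, -1):  # update the tableau in-place
            r = (seq[k] / seq[j]) ** 2  # error series is in h^2
            T[j] = T[j + 1] + (T[j + 1] - T[j]) / (r - 1)
    return T[0]

# y' = y over [0, 1] in a single big step: extrapolation recovers e
# to near machine precision, illustrating the method's speed/accuracy.
y1 = bs_step(lambda t, y: y, 0.0, np.array([1.0]), 1.0)
```

    The large steps and high extrapolation order are what make the method fast for smooth small-N dynamical systems, which is the regime the paper quantifies.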

  5. Recommended procedures and methodology of coal description

    USGS Publications Warehouse

    Chao, E.C.; Minkin, J.A.; Thompson, C.L.

    1983-01-01

    This document is the result of a workshop on coal description held for the Branch of Coal Resources of the U.S. Geological Survey in March 1982. It has been prepared to aid and encourage the field-oriented coal scientist to participate directly in petrographic coal-description activities. The objectives and past and current practices of coal description vary widely. These are briefly reviewed and illustrated with examples. Sampling approaches and techniques for collecting columnar samples of fresh coal are also discussed. The recommended procedures and methodology emphasize the fact that obtaining a good megascopic description of a coal bed is much better done in the laboratory with a binocular microscope and under good lighting conditions after the samples have been cut and quickly prepared. For better observation and cross-checking using a petrographic microscope for identification purposes, an in-place polishing procedure (requiring less than 2 min) is routinely used. Methods for using both the petrographic microscope and an automated image analysis system are also included for geologists who have access to such instruments. To describe the material characteristics of a coal bed in terms of microlithotypes or lithotypes, a new nomenclature of (V), (E), (I), (M), (S), (X1), (X2), and so on is used. The microscopic description of the modal composition of a megascopically observed lithologic type is expressed in terms of (VEIM); subscripts are used to denote the volume percentage of each constituent present. To describe a coal-bed profile, semiquantitative data (without microscopic study) and quantitative data (with microscopic study) are presented in ready-to-understand form. The average total composition of any thickness interval, or of the entire coal bed, can be plotted on a triangular diagram having V, E, and I+M+S as the apices. The modal composition of any mixed lithologies such as (X1), (X2), and so on can also be plotted on such a ternary diagram.

  6. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. Travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
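
    The boundedly rational choice rule is simple to state in code. A toy sketch (parameter values and the travel-time interpretation are illustrative assumptions, not the paper's simulation model):

```python
import random

# Boundedly rational two-route choice: if the reported travel-time
# difference is below the threshold BR, the traveler is indifferent and
# picks either route with equal probability; otherwise the reported-
# faster route is taken.

def choose_route(reported_t1, reported_t2, BR, rng=random):
    if abs(reported_t1 - reported_t2) < BR:
        return rng.choice((1, 2))  # indifferent: random choice
    return 1 if reported_t1 < reported_t2 else 2

# With BR = 0 every traveler floods the reported-best route (amplifying
# oscillations under delayed feedback); a positive BR splits demand
# between near-equal routes.
rng = random.Random(0)
choices = [choose_route(10.0, 10.5, BR=1.0, rng=rng) for _ in range(1000)]
share_route1 = choices.count(1) / len(choices)
```

    The threshold damps the overreaction that delayed feedback otherwise causes, which is the mechanism behind the capacity and oscillation improvements reported above.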

  7. Spherical shell model description of deformation and superdeformation

    NASA Astrophysics Data System (ADS)

    Poves, A.; Caurier, E.; Nowacki, F.; Zuker, A.

    2003-04-01

    Large-scale shell model calculations give at present a very accurate and comprehensive description of light and medium-light nuclei, especially when 0ℏω spaces are adequate. The full pf-shell calculations have made it possible to describe many collective features in a spherical shell model context. Calculations including two major oscillator shells have proven able to describe superdeformed bands as well.

  8. Exploiting spatial descriptions in visual scene analysis.

    PubMed

    Ziegler, Leon; Johannsen, Katrin; Swadzba, Agnes; De Ruiter, Jan P; Wachsmuth, Sven

    2012-08-01

    The reliable automatic visual recognition of indoor scenes with complex object constellations using only sensor data is a nontrivial problem. In order to improve the construction of an accurate semantic 3D model of an indoor scene, we exploit human-produced verbal descriptions of the relative location of pairs of objects. This requires the ability to deal with different spatial reference frames (RF) that humans use interchangeably. In German, both the intrinsic and relative RF are used frequently, which often leads to ambiguities in referential communication. We assume that there are certain regularities that help in specific contexts. In a first experiment, we investigated how speakers of German describe spatial relationships between different pieces of furniture. This gave us important information about the distribution of the RFs used for furniture-predicate combinations, and by implication also about the preferred spatial predicate. The results of this experiment are compiled into a computational model that extracts partial orderings of spatial arrangements between furniture items from verbal descriptions. In the implemented system, the visual scene is initially scanned by a 3D camera system. From the 3D point cloud, we extract point clusters that suggest the presence of certain furniture objects. We then integrate the partial orderings extracted from the verbal utterances incrementally and cumulatively with the estimated probabilities about the identity and location of objects in the scene, and also estimate the probable orientation of the objects. This allows the system to significantly improve both the accuracy and richness of its visual scene representation.

  9. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial implicitization scheme, as well as the fourth-order scheme of Kreiss.

  10. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular dependent count rate distribution measured during drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution.
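
    The fitting step can be illustrated with a heavily simplified geometric model: a point source off the rotation axis modulates the count rate as the inverse square of the source-detector distance while the drum turns. This toy sketch ignores the collimator response and matrix attenuation that are central to the paper's detector model, and all dimensions are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy chi^2 fit of the angular count rate distribution from a rotating
# drum containing one point source at radius r0 and angle phi0.

D = 50.0  # detector distance from the rotation axis (cm, illustrative)

def count_rate(theta, amplitude, r0, phi0):
    """Inverse-square modulation from the source-detector distance."""
    d2 = D**2 + r0**2 - 2 * D * r0 * np.cos(theta - phi0)
    return amplitude / d2

# Synthetic measurement: one revolution sampled at 72 angles.
theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
true_params = (1e6, 20.0, 1.0)
rng = np.random.default_rng(1)
data = count_rate(theta, *true_params) + rng.normal(0, 0.01, theta.size)

# Least-squares (chi^2-style) fit recovers the source position, which is
# what allows a position-corrected activity reconstruction.
popt, _ = curve_fit(count_rate, theta, data, p0=(5e5, 10.0, 0.5))
```

    In the real method, the fitted source position feeds back into the activity reconstruction, replacing the homogeneous-drum assumption of conventional segmented gamma scanning.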

  11. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest for understanding biological flight mechanics, and for their application to Micro Air Vehicles (MAVs), which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and with the low wake velocities associated with lifting bodies that weigh only a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow, which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  12. Accurate multiple network alignment through context-sensitive random walk

    PubMed Central

    2015-01-01

    Background Comparative network analysis can provide an effective means of analyzing large-scale biological networks and gaining novel insights into their structure and organization. Global network alignment aims to predict the best overall mapping between a given set of biological networks, thereby identifying important similarities as well as differences among the networks. It has been shown that network alignment methods can be used to detect pathways or network modules that are conserved across different networks. Until now, a number of network alignment algorithms have been proposed based on different formulations and approaches, many of them focusing on pairwise alignment. Results In this work, we propose a novel multiple network alignment algorithm based on a context-sensitive random walk model. The random walker employed in the proposed algorithm switches between two different modes, namely, an individual walk on a single network and a simultaneous walk on two networks. The switching decision is made in a context-sensitive manner by examining the current neighborhood, which is effective for quantitatively estimating the degree of correspondence between nodes that belong to different networks, in a manner that sensibly integrates node similarity and topological similarity. The resulting node correspondence scores are then used to predict the maximum expected accuracy (MEA) alignment of the given networks. Conclusions Performance evaluation based on synthetic networks as well as real protein-protein interaction networks shows that the proposed algorithm can construct more accurate multiple network alignments compared to other leading methods. PMID:25707987

  13. Raman Spectroscopy as an Accurate Probe of Defects in Graphene

    NASA Astrophysics Data System (ADS)

    Rodriguez-Nieva, Joaquin; Barros, Eduardo; Saito, Riichiro; Dresselhaus, Mildred

    2014-03-01

    Raman Spectroscopy has proved to be an invaluable non-destructive technique that allows us to obtain intrinsic information about graphene. Furthermore, defect-induced Raman features, namely the D and D' bands, have previously been used to assess the purity of graphitic samples. However, the quantitative study of the signatures of different types of defects in the Raman spectra remains an open problem. Experimental results already suggest that the Raman intensity ratio ID /ID' may allow us to identify the nature of the defects. We study from a theoretical point of view the power and limitations of Raman spectroscopy in the study of defects in graphene. We derive an analytic model that describes the Double Resonance Raman process of disordered graphene samples, and which explicitly shows the role played by both the defect-dependent parameters and the experimentally controlled variables. We compare our model with previous Raman experiments, and use it to guide new ways in which defects in graphene can be accurately probed with Raman spectroscopy. We acknowledge support from NSF grant DMR1004147.

  14. Rapid Accurate Identification of Bacterial and Viral Pathogens

    SciTech Connect

    Dunn, John

    2007-03-09

    The goals of this program were to develop two assays for rapid, accurate identification of pathogenic organisms at the strain level. The first assay, "Quantitative Genome Profiling" (QGP), is a real-time PCR assay with a restriction enzyme-based component. Its underlying concept is that certain enzymes should cleave genomic DNA at many sites and that in some cases these cuts will interrupt the connection on the genomic DNA between flanking PCR primer pairs, thereby eliminating selected PCR amplifications. When this occurs, the appearance of the real-time PCR threshold (Ct) signal during DNA amplification is totally eliminated or, if cutting is incomplete, greatly delayed compared to an uncut control. This temporal difference in the appearance of the Ct signal relative to undigested control DNA provides a rapid, high-throughput approach for DNA-based identification of different but closely related pathogens, depending upon the nucleotide sequence of the target region. The second assay we developed uses the nucleotide sequences of pairs of short identifier tags (~21 bp) to identify DNA molecules. Subtle differences in linked tag pair combinations can also be used to distinguish between closely related isolates.

  15. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

    Accuracy, validity, and the lack of positional information relating dental root to jaw are key problems in tooth arrangement technology. This paper aims to describe a newly developed virtual, personalized and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined by the anatomical reference points. The matching mathematical model of the teeth pattern and the principle of rigid-body pose transformation are fully utilized. The positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relation between root and jaw is well maintained. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.

  16. A Fuzzy Description Logic with Automatic Object Membership Measurement

    NASA Astrophysics Data System (ADS)

    Cai, Yi; Leung, Ho-Fung

    In this paper, we propose a fuzzy description logic named f_om-DL by combining the classical view in cognitive psychology with fuzzy set theory. A formal mechanism for automatically determining the membership of objects in concepts is also proposed, a mechanism lacking in previous fuzzy description logics. In this mechanism, object membership is based on the defining properties of the concept definition and the properties in the object description. Moreover, while previous works cannot express the qualitative measurement of an object possessing a property, we introduce two kinds of properties, named N-property and L-property, which provide quantitative and qualitative measurements, respectively, of an object possessing a property. The subsumption and implication of concepts and properties are also explored in our work. We believe this is useful to the Semantic Web community for reasoning about the fuzzy membership of objects in concepts in fuzzy ontologies.
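A property-based membership computation of the kind described can be sketched as follows. The weighted-average combination rule, the concept `tall_officer`, and both property functions are invented for illustration; they are not the f_om-DL semantics.

```python
def membership(obj, concept):
    """Degree of membership of obj in concept as a weighted average of
    property-satisfaction degrees; missing properties contribute 0."""
    total = sum(weight for weight, _ in concept.values())
    score = 0.0
    for prop, (weight, degree_fn) in concept.items():
        if prop in obj:
            d = degree_fn(obj[prop])
            score += weight * min(1.0, max(0.0, d))  # clamp degree to [0, 1]
    return score / total

# illustrative concept: "height" is graded like an N-property,
# "wears_uniform" is crisp like an L-property (names are made up)
tall_officer = {
    "height":        (2.0, lambda h: (h - 160.0) / 30.0),  # 160 cm -> 0, 190 cm -> 1
    "wears_uniform": (1.0, lambda b: 1.0 if b else 0.0),
}
mu = membership({"height": 175.0, "wears_uniform": True}, tall_officer)
```

For the example object, the height property is satisfied to degree 0.5 and the uniform property to degree 1, giving a weighted-average membership of 2/3.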

  17. MINKOWSKI FUNCTIONALS FOR QUANTITATIVE ASSESSMENTS OF SHOCK-INDUCED MIXING FLOWS

    SciTech Connect

    STRELITZ, RICHARD A.; KAMM, JAMES R.

    2007-01-22

    We describe the morphological descriptors known as Minkowski Functionals (MFs) and apply them to a shock-induced mixing problem. MFs allow accurate and compact characterization of complex images: they characterize the connectivity, size, and shape of disordered structures. They possess several desirable properties, such as additivity, smoothness, and a direct relationship to certain physical properties. The scalar MFs that we describe can be extended to a moment-based tensor form that allows more thorough image descriptions. We apply MFs to experimental data for shock-induced mixing experiments conducted at the LANL shock tube facility. Those experiments, using low Mach number shock waves in air to induce the Richtmyer-Meshkov instability on air-SF{sub 6} interfaces, provide high-resolution, quantitative planar laser-induced fluorescence (PLIF) images. We describe MFs and use them to quantify experimental PLIF images of shock-induced mixing. This method can be used as a tool for validation, i.e., the quantitative comparison of simulation results against experimental data.
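The three scalar two-dimensional Minkowski functionals (area, perimeter, Euler characteristic) can be computed directly from a binary image by counting the faces, edges, and vertices of the pixel complex. A minimal sketch; the ring image is an illustrative stand-in for a thresholded PLIF image:

```python
def minkowski_2d(img):
    """Area, perimeter, and Euler characteristic of a binary image,
    treating foreground pixels as closed unit squares."""
    h, w = len(img), len(img[0])
    on = {(x, y) for y in range(h) for x in range(w) if img[y][x]}
    area = len(on)
    # perimeter: pixel sides facing background (or the image border)
    perim = sum((x + dx, y + dy) not in on
                for (x, y) in on
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    # Euler characteristic chi = V - E + F of the pixel complex
    verts = {(x + dx, y + dy) for (x, y) in on for dx in (0, 1) for dy in (0, 1)}
    edges = set()
    for (x, y) in on:
        edges.update([
            frozenset([(x, y), (x + 1, y)]),
            frozenset([(x, y + 1), (x + 1, y + 1)]),
            frozenset([(x, y), (x, y + 1)]),
            frozenset([(x + 1, y), (x + 1, y + 1)]),
        ])
    euler = len(verts) - len(edges) + area
    return area, perim, euler

# a square ring: one connected component enclosing one hole
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
```

A single pixel gives (1, 4, 1); the ring gives Euler characteristic 0, reflecting one component minus one hole, which is the kind of connectivity signature used to characterize mixing structures.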

  18. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Measurements of the variation of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences were noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.

  19. Accurate thermoplasmonic simulation of metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the SIE-based approach at comparable accuracy, especially when many incident excitations are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.

  20. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different, unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions, with errors below 0.0012 K, for light sources with CCTs ranging from 500 K to 10^6 K.
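The Newton iteration T_{k+1} = T_k - f'(T_k)/f''(T_k) at the heart of the method can be sketched generically. The toy objective below (minimum at 6500 K) merely stands in for the true chromaticity-distance objective, which requires tabulated CIE data; the starting value plays the role of Robertson's initial estimate.

```python
import math

def newton_min(f, x0, tol=1e-3, max_iter=50):
    """Minimize f(x) by Newton's method, using central-difference
    approximations of the first and second derivatives."""
    x = x0
    for _ in range(max_iter):
        h = 1e-4 * max(1.0, abs(x))            # derivative step scaled to x
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

# toy objective with its minimum at T = 6500 K
cct = newton_min(lambda T: (math.log(T) - math.log(6500.0)) ** 2,
                 x0=6000.0)  # x0 plays the role of Robertson's estimate
```

Starting 500 K away, the iteration converges to the minimizer in a handful of steps, illustrating why an explicit second derivative (or a good approximation of it) makes the CCT search fast once a reasonable initial estimate is available.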

  1. Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.

    PubMed

    Ganyecz, Ádám; Kállay, Mihály; Csontos, József

    2017-02-09

    An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including the radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy, perturbative quadruple excitations and scalar relativistic corrections were indispensable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental values and ours, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied, this work delivers the best available heat of formation and entropy data.

  2. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing, minimally degraded by spatial or temporal variations in sensing attitude, from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  3. Accurate methods for large molecular systems.

    PubMed

    Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A

    2009-07-23

    Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.

  4. Accurate equilibrium structures for piperidine and cyclohexane.

    PubMed

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
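Mixed estimation, fitting parameters concurrently to predicate values and to observed data, is a weighted least-squares problem. A minimal two-parameter sketch with invented numbers (not piperidine data): two predicate "observations" of the parameters themselves are combined with one much more precise observation of their sum, and the estimate is pulled toward the data while staying near the predicates.

```python
def wls_2param(rows):
    """Weighted least squares for two parameters via the normal equations.
    rows: list of (a1, a2, b, sigma) observation equations a1*x1 + a2*x2 = b,
    each weighted by 1/sigma^2."""
    n11 = n12 = n22 = r1 = r2 = 0.0
    for a1, a2, b, sigma in rows:
        w = 1.0 / (sigma * sigma)
        n11 += w * a1 * a1
        n12 += w * a1 * a2
        n22 += w * a2 * a2
        r1 += w * a1 * b
        r2 += w * a2 * b
    det = n11 * n22 - n12 * n12
    return (n22 * r1 - n12 * r2) / det, (n11 * r2 - n12 * r1) / det

# predicates (prior parameter values with uncertainties) plus one
# more precise "moment of inertia"-style data equation on their sum
rows = [
    (1.0, 0.0, 1.0, 0.01),   # predicate: x1 = 1.00 +/- 0.01
    (0.0, 1.0, 2.0, 0.01),   # predicate: x2 = 2.00 +/- 0.01
    (1.0, 1.0, 3.1, 0.001),  # data:      x1 + x2 = 3.10 +/- 0.001
]
x1, x2 = wls_2param(rows)
```

The fitted sum lands essentially on the precise data value while the individual parameters shift only modestly from their predicates, which is the intended behavior of the mixed estimation method.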

  5. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history of delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection, to understanding the atmospheres of stars and planets, to constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  6. Accurate upper body rehabilitation system using kinect.

    PubMed

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on both body segment length and orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
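The segment-length constraint can be illustrated by projecting a measured joint position onto the sphere of the known segment length centered at the parent joint. This is a simplified stand-in for the paper's optimization (which also exploits depth/RGB data and orientation constraints); the coordinates and the segment length are invented.

```python
import math

def enforce_segment_length(parent, child, length):
    """Project a measured child-joint position onto the sphere of the
    known segment length centered at the parent joint, preserving the
    measured direction."""
    v = [c - p for p, c in zip(parent, child)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return list(parent)  # degenerate case: direction undefined
    return [p + length * x / norm for p, x in zip(parent, v)]

# hypothetical noisy elbow estimate, pulled back to a calibrated
# upper-arm length of 0.30 m
shoulder = [0.0, 0.0, 0.0]
elbow_raw = [0.25, 0.12, 0.05]
elbow = enforce_segment_length(shoulder, elbow_raw, length=0.30)
```

After the projection the shoulder-elbow distance is exactly the calibrated length, removing the frame-to-frame variation in apparent segment length while keeping the measured direction.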

  7. Noninvasive hemoglobin monitoring: how accurate is enough?

    PubMed

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  8. Accurate, reproducible measurement of blood pressure.

    PubMed Central

    Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791

  9. Fast and accurate exhaled breath ammonia measurement.

    PubMed

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.

  10. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power of the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design: for studies of exotic nuclei far from stability, a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and for studying the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  11. Accurate simulation of optical properties in dyes.

    PubMed

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them.

  12. Tactical Planning Workstation Software Description

    DTIC Science & Technology

    1990-09-01

    Software description for the Tactical Planning Workstation. The document covers, among other topics, unit type codes, battle function codes, control measure types, and product description files.

  13. Descripcion y Medida del Bilinguismo a Nivel Colectivo. [Description and Measurement of Bilingualism at the Collective Level

    ERIC Educational Resources Information Center

    Siguan, Miguel

    1976-01-01

    A presentation of a rigorous method allowing an accurate description of collective bilingualism in any given population, including both the speaker's degree of language command and the patterns of linguistic behavior in each of the languages. [In Spanish] (NQ)

  14. RECENT ADVANCES IN QUANTITATIVE NEUROPROTEOMICS

    PubMed Central

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2014-01-01

    The field of proteomics is undergoing rapid development in a number of different areas, including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification analysis, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges for quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders, including schizophrenia and drug addiction, and neurodegenerative diseases, including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide gel electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). The biological implications of many of these studies remain to be clearly established, standardization of experimental design and data analysis is clearly needed, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies continue to improve.

  15. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)
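Classroom activities of this kind are typically mark-recapture exercises: "mark" some candies, mix them back into the box, draw a sample, and count how many marked ones reappear. The abstract does not name a specific estimator, so the classic Lincoln-Petersen formula used in this sketch is an assumption:

```python
def lincoln_petersen(marked_first, caught_second, marked_recaptured):
    """Estimate population size as N ~ (M * C) / R, where M candies were
    marked, C were drawn in a second sample, and R of those were marked."""
    if marked_recaptured == 0:
        raise ValueError("need at least one recaptured marked individual")
    return marked_first * caught_second / marked_recaptured

# Mark 50 candies, draw 40, find 10 marked -> estimated population 200
print(lincoln_petersen(50, 40, 10))  # → 200.0
```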

  16. Quantitative receptor autoradiography

    SciTech Connect

    Boast, C.A.; Snowhill, E.W.; Altar, C.A.

    1986-01-01

    Quantitative receptor autoradiography addresses the topic of technical and scientific advances in the sphere of quantitative autoradiography. The volume opens with an overview of the field from a historical and critical perspective. Following is a detailed discussion of in vitro data obtained from a variety of neurotransmitter systems. The next section explores applications of autoradiography, and the final two chapters consider experimental models. Methodological considerations are emphasized, including the use of computers for image analysis.

  17. Multivariate Quantitative Chemical Analysis

    NASA Technical Reports Server (NTRS)

    Kinchen, David G.; Capezza, Mary

    1995-01-01

    Technique of multivariate quantitative chemical analysis devised for use in determining relative proportions of two components mixed and sprayed together onto object to form thermally insulating foam. Potentially adaptable to other materials, especially in process-monitoring applications in which it is necessary to know and control critical properties of products via quantitative chemical analyses of products. In addition to chemical composition, also used to determine such physical properties as densities and strengths.
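At its core, determining the relative proportions of two components from a multichannel (e.g. spectral) measurement is a linear least-squares problem: the mixture signal is modeled as a weighted sum of the pure-component signatures. A minimal sketch under that assumption (the function name and data are illustrative, not from the NTRS record):

```python
def mixture_fractions(mix, comp_a, comp_b):
    """Least-squares estimate of fractions (fa, fb) such that
    mix ~ fa*comp_a + fb*comp_b, solved via the 2x2 normal equations."""
    aa = sum(a * a for a in comp_a)
    ab = sum(a * b for a, b in zip(comp_a, comp_b))
    bb = sum(b * b for b in comp_b)
    am = sum(a * m for a, m in zip(comp_a, mix))
    bm = sum(b * m for b, m in zip(comp_b, mix))
    det = aa * bb - ab * ab
    return (am * bb - bm * ab) / det, (bm * aa - am * ab) / det

# A 30/70 blend of two mock three-channel signatures:
fa, fb = mixture_fractions([0.3, 0.7, 1.0], [1, 0, 1], [0, 1, 1])
print(fa, fb)  # → 0.3 0.7 (up to rounding)
```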

  18. Quantitation and detection of vanadium in biologic and pollution materials

    NASA Technical Reports Server (NTRS)

    Gordon, W. A.

    1974-01-01

    A review is presented of special considerations and methodology for determining vanadium in biological and air pollution materials. In addition to descriptions of specific analysis procedures, general sections are included on quantitation of analysis procedures, sample preparation, blanks, and methods of detection of vanadium. Most of the information presented is applicable to the determination of other trace elements in addition to vanadium.

  19. Nonexperimental Quantitative Research and Its Role in Guiding Instruction

    ERIC Educational Resources Information Center

    Cook, Bryan G.; Cook, Lysandra

    2008-01-01

    Different research designs answer different questions. Educators cannot use nonexperimental quantitative research designs, such as descriptive surveys and correlational research, to determine definitively that an intervention causes improved student outcomes and is an evidence-based practice. However, such research can (a) inform educators about a…

  20. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size are also changed if necessary. We present: 1) the restart procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep method, the formulation, and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee numerical stability during the propagation; 3) a new definition of the region of influence in phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky effect, and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and a propagator consisting of a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  1. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
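Tip-of-the-red-giant-branch measurements of this kind yield a distance modulus μ = m − M, which converts to a physical distance via the standard relation d = 10^((μ − 25)/5) Mpc; an accuracy of 0.1 mag in modulus therefore corresponds to roughly 5% in distance. A minimal sketch of the conversion:

```python
def modulus_to_mpc(mu):
    """Convert a distance modulus mu = m - M (in magnitudes) to distance in Mpc,
    using d = 10**((mu - 25) / 5)."""
    return 10 ** ((mu - 25.0) / 5.0)

# A 0.1 mag shift in modulus changes the distance by a factor 10**0.02 ~ 1.047,
# i.e. about 4.7 percent:
frac = modulus_to_mpc(29.1) / modulus_to_mpc(29.0)
print(modulus_to_mpc(30.0), frac)  # → 10.0 and ~1.047
```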

  2. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. Accurate glucose detection in a small etalon

    NASA Astrophysics Data System (ADS)

    Martini, Joerg; Kuebler, Sebastian; Recht, Michael; Torres, Francisco; Roe, Jeffrey; Kiesel, Peter; Bruce, Richard

    2010-02-01

    We are developing a continuous glucose monitor for subcutaneous long-term implantation. This detector contains a double-chamber Fabry-Perot etalon that measures the differential refractive index (RI) between a reference and a measurement chamber at 850 nm. The etalon chambers have wavelength-dependent transmission maxima whose positions depend linearly on the RI of their contents. An RI difference of Δn = 1.5x10^-6 changes the spectral position of a transmission maximum by 1 pm in our measurement. By sweeping the wavelength of a single-mode Vertical-Cavity Surface-Emitting Laser (VCSEL) linearly in time and detecting the maximum transmission peaks of the etalon, we are able to measure the RI of a liquid. We have demonstrated an accuracy of Δn = +/-3.5x10^-6 over a Δn range of 0 to 1.75x10^-4, and an accuracy of 2% over a Δn range of 1.75x10^-4 to 9.8x10^-4. The accuracy is primarily limited by the reference measurement. The RI difference between the etalon chambers is made specific to glucose by the competitive, reversible release of Concanavalin A (ConA) from an immobilized dextran matrix. The matrix, and the ConA bound to it, is positioned outside the optical detection path. ConA is released from the matrix by reacting with glucose and diffuses into the optical path, changing the RI in the etalon. Factors such as temperature affect the RI in the measurement and detection chambers equally and therefore do not affect the differential measurement. A typical standard deviation in RI is +/-1.4x10^-6 over the range 32°C to 42°C. The detector enables an accurate glucose-specific concentration measurement.
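The quoted sensitivity (a 1 pm shift of the transmission peak per RI difference of 1.5x10^-6) implies a simple linear conversion from a measured peak shift to Δn. A minimal sketch of that conversion; the function name is illustrative, not from the paper:

```python
# Sensitivity quoted in the abstract: 1 pm peak shift per 1.5e-6 RI difference
DN_PER_PM = 1.5e-6

def delta_n_from_shift(shift_pm):
    """Convert a measured transmission-peak shift (in pm) into a
    differential refractive index, assuming the linear response above."""
    return shift_pm * DN_PER_PM

# A 10 pm measured shift corresponds to a RI difference of 1.5e-5:
print(delta_n_from_shift(10.0))  # → 1.5e-05
```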

  4. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.

  5. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length, and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green, and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, and thus determination of the LSE per color channel and per dose delivered to the film.

  6. Quantitative photoacoustic tomography based on the radiative transfer equation.

    PubMed

    Yao, Lei; Sun, Yao; Jiang, Huabei

    2009-06-15

    We describe a method for quantitative photoacoustic tomography (PAT) based on the radiative transfer equation (RTE) coupled with the Helmholtz photoacoustic wave equation. This RTE-based quantitative PAT allows for accurate recovery of absolute absorption coefficient images of heterogeneous media and provides significantly improved image reconstruction for the cases where the photon diffusion approximation may fail. The method and associated finite element reconstruction algorithm are validated using a series of tissuelike phantom experiments.

  7. Photon beam description in PEREGRINE for Monte Carlo dose calculations

    SciTech Connect

    Cox, L. J., LLNL

    1997-03-04

    The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations matching specific clinical sites.

  8. IRIS: Towards an Accurate and Fast Stage Weight Prediction Method

    NASA Astrophysics Data System (ADS)

    Taponier, V.; Balu, A.

    2002-01-01

    , validated on several technical and econometrical cases, has been used for this purpose. A database of several conventional stages, operated with either solid or liquid propellants, has been compiled, in conjunction with an evolutionary set of geometrical, physical, and functional parameters likely to contribute to the description of the mass fraction and presumably known at the early steps of the preliminary design. After several iterations aimed at selecting the most influential parameters, polynomial expressions of the mass fraction were constructed, each associated with a confidence level. The outcome highlights the real possibility of a parametric formulation of the mass fraction for conventional stages on the basis of a limited number of descriptive parameters and with a high degree of accuracy (error lower than 10%). The formulas were later tested, for validation purposes, on existing or preliminary stages not included in the initial database. Their mass fraction is assessed with comparable accuracy. The polynomial generation method in use also allows for assessing the influence of each parameter. The devised method, suitable for the preliminary design phase, represents a significant improvement in mass fraction prediction compared to the classical empirical approach. It enables rapid dissemination of more accurate and consistent weight estimates to support system studies. It also makes possible the upstream processing of preliminary design tasks through a global system approach. This method, currently in the experimental phase, is already in use as a complementary means at the technical underdirectorate of CNES-DLA. * IRIS: Instrument de Recherche des Indices Structuraux

  9. Automated management of life cycle for future network experiment based on description language

    NASA Astrophysics Data System (ADS)

    Niu, Hongxia; Liang, Junxue; Lin, Zhaowen; Ma, Yan

    2016-12-01

    The future network is a complex resource pool comprising multiple physical and virtual resources. Establishing an experiment on a future network is complicated and tedious, so achieving automated management of future network experiments is important. This paper presents a way of describing and managing the life cycle of an experiment based on a description language. The description language uses a framework with a shallow hierarchical structure and a complete description of the network experiment. In this way, an experiment description template can be generated from the description framework accurately and completely. In practice, we can also customize and reuse a network experiment by modifying the description template. The results show that this method can manage the life cycle of a network experiment effectively and automatically, which greatly saves time, reduces difficulty, and enables the reusability of services.

  10. Multiscale schemes for the predictive description and virtual engineering of materials.

    SciTech Connect

    von Lilienfeld-Toal, Otto Anatole

    2010-09-01

    This report documents research carried out by the author throughout his 3-year Truman fellowship. The overarching goal consisted of developing multiscale schemes which permit not only the predictive description but also the computational design of improved materials. Identifying new materials through changes in atomic composition and configuration requires the use of versatile first-principles methods, such as density functional theory (DFT). The predictive reliability of DFT has been investigated with respect to pseudopotential construction, band gaps, van der Waals forces, and nuclear quantum effects. Continuous variation of chemical composition and derivation of accurate energy gradients in compound space have been developed within a DFT framework for free energies of solvation, reaction energetics, and frontier orbital eigenvalues. Similar variations have been leveraged within classical molecular dynamics in order to address thermal properties of molten salt candidates for heat transfer fluids used in solar thermal power facilities. Finally, a combination of DFT and statistical methods has been used to devise quantitative structure-property relationships for the rapid prediction of charge mobilities in polyaromatic hydrocarbons.

  11. Robust Retinal Blood Vessel Segmentation Based on Reinforcement Local Descriptions

    PubMed Central

    Li, Meng; Ma, Zhenshen; Liu, Chao; Han, Zhe

    2017-01-01

    Retinal blood vessel segmentation plays an important role in retinal image analysis. In this paper, we propose a robust retinal blood vessel segmentation method based on reinforcement local descriptions. A novel line-set-based feature is first developed to capture the local shape information of vessels by employing the length prior of vessels, which is robust to intensity variation. After that, a local intensity feature is calculated for each pixel, and a morphological gradient feature is extracted to enhance the local edges of smaller vessels. Finally, the line-set-based feature, the local intensity feature, and the morphological gradient feature are combined to obtain the reinforcement local descriptions. Compared with existing local descriptions, the proposed reinforcement local description contains more local information about the shape, intensity, and edges of vessels, and is therefore more robust. After feature extraction, an SVM is trained for blood vessel segmentation. In addition, we develop a postprocessing method based on morphological reconstruction to connect discontinuous vessels and obtain a more accurate segmentation result. Experimental results on two public databases (DRIVE and STARE) demonstrate that the proposed reinforcement local descriptions outperform state-of-the-art methods. PMID:28194407

  12. Robust Retinal Blood Vessel Segmentation Based on Reinforcement Local Descriptions.

    PubMed

    Li, Meng; Ma, Zhenshen; Liu, Chao; Zhang, Guang; Han, Zhe

    2017-01-01

    Retinal blood vessel segmentation plays an important role in retinal image analysis. In this paper, we propose a robust retinal blood vessel segmentation method based on reinforcement local descriptions. A novel line-set-based feature is first developed to capture the local shape information of vessels by employing the length prior of vessels, which is robust to intensity variation. After that, a local intensity feature is calculated for each pixel, and a morphological gradient feature is extracted to enhance the local edges of smaller vessels. Finally, the line-set-based feature, the local intensity feature, and the morphological gradient feature are combined to obtain the reinforcement local descriptions. Compared with existing local descriptions, the proposed reinforcement local description contains more local information about the shape, intensity, and edges of vessels, and is therefore more robust. After feature extraction, an SVM is trained for blood vessel segmentation. In addition, we develop a postprocessing method based on morphological reconstruction to connect discontinuous vessels and obtain a more accurate segmentation result. Experimental results on two public databases (DRIVE and STARE) demonstrate that the proposed reinforcement local descriptions outperform state-of-the-art methods.

  13. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to reproduce the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results…
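A model of the kind described, with an eclipse class E of cells transitioning into virus production alongside productively infected cells I, can be written as dT/dt = -bTV, dE/dt = bTV - kE, dI/dt = kE - dI, dV/dt = pI - cV. The forward-Euler sketch below uses illustrative parameter values chosen so the basic reproductive number b*p*T0/(d*c) = 2, not the paper's fitted values:

```python
def simulate(T=1e6, E=0.0, I=0.0, V=1.0,
             beta=2e-7, k=1.0, delta=1.0, p=100.0, c=10.0,
             dt=0.001, days=30.0):
    """Forward-Euler integration of a target-cell-limited model with an
    eclipse phase.  T: target cells, E: eclipse-phase infected cells,
    I: productively infected cells, V: free virus.  Rates are per day."""
    for _ in range(int(days / dt)):
        dT = -beta * T * V          # infection of target cells
        dE = beta * T * V - k * E   # newly infected cells enter eclipse
        dI = k * E - delta * I      # eclipse cells become producers; producers die
        dV = p * I - c * V          # virion production and clearance
        T += dT * dt
        E += dE * dt
        I += dI * dt
        V += dV * dt
    return T, E, I, V
```

With these parameters the infection takes off (growth rate roughly 0.4/day after an initial dip while the first virions are cleared faster than they are replaced), so by day 30 the viral load is well above its starting value while target cells have begun to decline.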

  14. Quantitative analysis of endogenous compounds.

    PubMed

    Thakare, Rhishikesh; Chhonker, Yashpal S; Gautam, Nagsen; Alamoudi, Jawaher Abdullah; Alnouti, Yazen

    2016-09-05

    Accurate quantitative analysis of endogenous analytes is essential for several clinical and non-clinical applications. LC-MS/MS is the technique of choice for quantitative analyses. Absolute quantification by LC-MS/MS requires preparing standard curves in the same matrix as the study samples so that the matrix effect and the extraction efficiency for analytes are the same in both the standard and study samples. However, by definition, analyte-free biological matrices do not exist for endogenous compounds. To address the lack of blank matrices for the quantification of endogenous compounds by LC-MS/MS, four approaches are used: the standard addition, the background subtraction, the surrogate matrix, and the surrogate analyte methods. This review article presents an overview of these approaches, cites and summarizes their applications, and compares their advantages and disadvantages. In addition, we discuss in detail the validation requirements and compatibility with FDA guidelines that ensure method reliability in quantifying endogenous compounds. The standard addition, background subtraction, and surrogate analyte approaches allow the use of the same matrix for the calibration curve as the one to be analyzed in the test samples. In the surrogate matrix approach, by contrast, various matrices such as artificial, stripped, and neat matrices are used as surrogates for the actual matrix of the study samples. For the surrogate analyte approach, it is required to demonstrate similarity in matrix effect and recovery between the surrogate and authentic endogenous analytes. Similarly, for the surrogate matrix approach, it is required to demonstrate similar matrix effect and extraction recovery in both the surrogate and original matrices. All these methods represent indirect approaches to quantifying endogenous compounds, and regardless of which approach is followed, it has to be shown that none of the validation criteria have been compromised by the indirect analysis.
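Of the four approaches, standard addition has a particularly simple computational core: spike aliquots of the study sample with known amounts of analyte, fit the instrument response linearly against the spiked amount, and read the endogenous concentration off the magnitude of the x-intercept. A minimal sketch with mock, perfectly linear data:

```python
def standard_addition(spiked_amounts, responses):
    """Estimate an endogenous concentration by the method of standard additions:
    fit response = a + b*spike by least squares; the x-intercept magnitude a/b
    is the estimate, in the same units as the spiked amounts."""
    n = len(spiked_amounts)
    mx = sum(spiked_amounts) / n
    my = sum(responses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(spiked_amounts, responses))
    sxx = sum((x - mx) ** 2 for x in spiked_amounts)
    b = sxy / sxx            # slope (response per unit analyte)
    a = my - b * mx          # intercept (response of the unspiked sample)
    return a / b             # endogenous concentration

# Mock data: endogenous level 5 units, instrument slope 2
print(standard_addition([0, 10, 20], [10, 30, 50]))  # → 5.0
```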

  15. CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT

    SciTech Connect

    J.F. Beesley

    2005-04-21

    The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.

  16. Micropolar continuum in spatial description

    NASA Astrophysics Data System (ADS)

    Ivanova, Elena A.; Vilchevskaya, Elena N.

    2016-11-01

    Within the spatial description, it is customary to refer thermodynamic state quantities to an elementary volume fixed in space containing an ensemble of particles. During its evolution, the elementary volume is occupied by different particles, each having its own mass, tensor of inertia, angular and linear velocities. The aim of the present paper is to answer the question of how to determine the inertial and kinematic characteristics of the elementary volume. In order to model structural transformations due to the consolidation or defragmentation of particles or anisotropic changes, one should consider the fact that the tensor of inertia of the elementary volume may change. This means that an additional constitutive equation must be formulated. The paper suggests kinetic equations for the tensor of inertia of the elementary volume. It also discusses the specificity of the inelastic polar continuum description within the framework of the spatial description.

  17. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  18. Simple and surprisingly accurate approach to the chemical bond obtained from dimensional scaling.

    PubMed

    Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R

    2005-08-19

    We present a new dimensional scaling transformation of the Schrödinger equation for the two electron bond. This yields, for the first time, a good description of the bond via D scaling. There also emerges, in the large-D limit, an intuitively appealing semiclassical picture, akin to a molecular model proposed by Bohr in 1913. In this limit, the electrons are confined to specific orbits in the scaled space, yet the uncertainty principle is maintained. A first-order perturbation correction, proportional to 1/D, substantially improves the agreement with the exact ground state potential energy curve. The present treatment is very simple mathematically, yet provides a strikingly accurate description of the potential curves for the lowest singlet, triplet, and excited states of H2. We find the modified D-scaling method also gives good results for other molecules. It can be combined advantageously with Hartree-Fock and other conventional methods.

  19. Description du langage scientifique (Description of Scientific Language)

    ERIC Educational Resources Information Center

    Widdowson, H. G.

    1977-01-01

    A description of scientific language using three approaches: text, textualization, and discourse. Scientific discourse is analogous to universal deep structure; text, to surface variations in diverse languages; and textualization, to transformational processes. The relationship of the primary and secondary (scientific) cultures and their languages…

  20. Auteur Description: From the Director's Creative Vision to Audio Description

    ERIC Educational Resources Information Center

    Szarkowska, Agnieszka

    2013-01-01

    In this report, the author follows the suggestion that a film director's creative vision should be incorporated into Audio description (AD), a major technique for making films, theater performances, operas, and other events accessible to people who are blind or have low vision. The author presents a new type of AD for auteur and artistic films:…

  1. Towards Efficient and Accurate Description of Many-Electron Problems: Developments of Static and Time-Dependent Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi

    Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials for application in a variety of scientific contexts from fundamental research to energy conversion. This dissertation aims to provide insights into this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we will present new methods and applications of excited-state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalisms, and new developments of multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT-based developments and applications can be categorized into three broad and coherently integrated research areas: (1) modeling of the interaction between molecules and external electromagnetic perturbations. In this part we will first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then we will present a novel, efficient method for calculating molecular nonlinear optical properties, and last we will study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling of excited-state charge transfer in molecules. In this part, we will investigate the mechanisms of bridge-mediated electron transfer, and then we will introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) developments of first-principles spin-dependent many-electron dynamics. 
In this part, we will present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, and then we will generalize it to the two-component TDDFT framework and combine it with the Ehrenfest molecular dynamics approach for modeling the interaction between electron spins and nuclear motion. All these developments and applications will open up new computational and theoretical tools to be applied to the development and understanding of chemical reactions, nonlinear optics, electromagnetism, and spintronics. Lastly, we present a new algorithm for large-scale MCSCF calculations that can utilize massively parallel machines while still maintaining optimal performance on each single processor. This will greatly improve the efficiency of MCSCF calculations for studying chemical dissociation and high-accuracy quantum-mechanical simulations.

  2. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  3. Quantitative Hydrocarbon Surface Analysis

    NASA Technical Reports Server (NTRS)

    Douglas, Vonnie M.

    2000-01-01

    The elimination of ozone depleting substances, such as carbon tetrachloride, has resulted in the use of new analytical techniques for cleanliness verification and contamination sampling. The last remaining application at Rocketdyne which required a replacement technique was the quantitative analysis of hydrocarbons by infrared spectrometry. This application, which previously utilized carbon tetrachloride, was successfully modified using the SOC-400, a compact portable FTIR manufactured by Surface Optics Corporation. This instrument can quantitatively measure and identify hydrocarbons from solvent flush of hardware as well as directly analyze the surface of metallic components without the use of ozone depleting chemicals. Several sampling accessories are utilized to perform analysis for various applications.

  4. Quantitative photoacoustic tomography

    PubMed Central

    Yuan, Zhen; Jiang, Huabei

    2009-01-01

    In this paper, several algorithms that allow for quantitative photoacoustic reconstruction of tissue optical, acoustic and physiological properties are described in a finite-element method based framework. These quantitative reconstruction algorithms are compared, and the merits and limitations associated with these methods are discussed. In addition, a multispectral approach is presented for concurrent reconstructions of multiple parameters including deoxyhaemoglobin, oxyhaemoglobin and water concentrations as well as acoustic speed. Simulation and in vivo experiments are used to demonstrate the effectiveness of the reconstruction algorithms presented. PMID:19581254

  5. Quantitative imaging of a non-combusting diesel spray using structured laser illumination planar imaging

    NASA Astrophysics Data System (ADS)

    Berrocal, E.; Kristensson, E.; Hottenbach, P.; Aldén, M.; Grünefeld, G.

    2012-12-01

    Due to their transient nature, high degree of atomization, and rapid generation of fine evaporating droplets, diesel sprays have been, and still remain, among the most challenging sprays to fully analyze and understand by means of non-intrusive diagnostics. The main limitation of laser techniques for quantitative measurements of diesel sprays concerns the detection of multiple light scattering resulting from the high optical density of such a scattering medium. A second limitation is the extinction of the incident laser radiation as it crosses the spray, as well as the attenuation of the signal that is to be detected. These issues have strongly motivated, during the past decade, the use of X-rays instead of visible light for dense spray diagnostics. However, we demonstrate in this paper that, based on an affordable Nd:YAG laser system, structured laser illumination planar imaging (SLIPI) can provide an accurate quantitative description of a non-reacting diesel spray injected at 1,100 bar within a room-temperature vessel pressurized at 18.6 bar. The technique is used at a λ = 355 nm excitation wavelength with 1.0 mol% TMPD dye concentration for simultaneous LIF/Mie imaging. Furthermore, a novel dual-SLIPI configuration is tested with Mie scattering detection only. The results confirm that a mapping of both the droplet Sauter mean diameter and the extinction coefficient can be obtained by these complementary approaches. These new insights are provided in this article at late times after the start of injection. It is demonstrated that the application of SLIPI to diesel sprays provides valuable quantitative information that was not previously accessible.

  6. Langley Atmospheric Information Retrieval System (LAIRS): System description and user's guide

    NASA Technical Reports Server (NTRS)

    Boland, D. E., Jr.; Lee, T.

    1982-01-01

    This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS Program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.

  7. WARP: accurate retrieval of shapes using phase of fourier descriptors and time warping distance.

    PubMed

    Bartolini, Ilaria; Ciaccia, Paolo; Patella, Marco

    2005-01-01

    Effective and efficient retrieval of similar shapes from large image databases is still a challenging problem, in spite of the high relevance that shape information can have in describing image contents. In this paper, we propose a novel Fourier-based approach, called WARP, for matching and retrieving similar shapes. The unique characteristics of WARP are the exploitation of the phase of the Fourier coefficients and the use of the Dynamic Time Warping (DTW) distance to compare shape descriptors. While phase information provides a more accurate description of object boundaries than the amplitude of the Fourier coefficients alone, the DTW distance permits us to accurately match images even in the presence of (limited) phase shifts. In terms of classical precision/recall measures, we experimentally demonstrate that WARP can gain up to 35 percent in precision at a 20 percent recall level with respect to Fourier-based techniques that use neither phase nor the DTW distance.
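    The matching scheme the abstract describes, low-frequency Fourier descriptors that keep the coefficients' phase, compared under a DTW distance, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the truncation length, the normalization choices, and the per-element cost are assumptions:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Complex Fourier descriptors of a closed 2-D contour.

    contour: (N, 2) array of boundary points, treated as x + iy.
    Keeps both amplitude and phase of the first n_coeffs low-frequency
    coefficients; dropping the DC term gives translation invariance and
    dividing by |c_1| gives scale invariance.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    descr = coeffs[1:n_coeffs + 1]      # drop DC term
    return descr / np.abs(descr[0])     # normalize scale, keep phase

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two descriptor sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])   # modulus of complex difference
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    Under this sketch, a translated and rescaled copy of a shape sits at distance near zero, while the warping step tolerates limited misalignments between the two descriptor sequences.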

  8. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date for a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; hence, boundaries between these units correspond to dramatic faunal and/or floral turnovers and are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties, we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion) with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. 
As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  9. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    SciTech Connect

    Bevill, Aaron M; Bledsoe, Keith C

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.

  10. Natural Language Description of Emotion

    ERIC Educational Resources Information Center

    Kazemzadeh, Abe

    2013-01-01

    This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…

  11. Developmental Kindergarten: Definition and Description.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Education, Richmond.

    This paper sets forth a definition and operational description of a developmental program that should be of use as a guide, especially to Virginia's teachers and administrators. Also included in the paper are kindergarten curriculum objectives in the areas of language arts, mathematics, science, art, social studies, family life, health, mental…

  12. Quantitative SPECT of uptake of monoclonal antibodies

    SciTech Connect

    DeNardo, G.L.; Macey, D.J.; DeNardo, S.J.; Zhang, C.G.; Custer, T.R.

    1989-01-01

    Absolute quantitation of the distribution of radiolabeled antibodies is important to the efficient conduct of research with these agents and their ultimate use for imaging and treatment, but is formidable because of the unrestricted nature of their distribution within the patient. Planar imaging methods have been developed and provide an adequate approximation of the distribution of radionuclide for many purposes, particularly when there is considerable specificity of targeting. This is not currently the case for antibodies and is unlikely in the future. Single photon emission computed tomography (SPECT) provides potential for greater accuracy because it reduces problems caused by superimposition of tissues and non-target contributions to target counts. SPECT measurement of radionuclide content requires: (1) accurate determination of camera sensitivity; (2) accurate determination of the number of counts in a defined region of interest; (3) correction for attenuation; (4) correction for scatter and septal penetration; (5) accurate measurement of the administered dose; (6) adequate statistics; and (7) accurate definition of tissue mass or volume. The major impediment to each of these requirements is scatter of many types. The magnitude of this problem can be diminished by improvements in tomographic camera design, computer algorithms, and methodological approaches. 34 references.

  13. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.

  14. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  15. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments.

    PubMed

    Eter, Wael A; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-04-15

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, (111)In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of (111)In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers.

  16. NEW TARGET AND CONTROL ASSAYS FOR QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS OF ENTEROCOCCI IN WATER

    EPA Science Inventory

    Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...

  17. The Quantitative Imaging Network in Precision Medicine

    PubMed Central

    Nordstrom, Robert J.

    2017-01-01

    Precision medicine is a healthcare model that seeks to incorporate a wealth of patient information to identify and classify disease progression and to provide tailored therapeutic solutions for individual patients. Interventions are based on knowledge of molecular and mechanistic causes, pathogenesis and pathology of disease. Individual characteristics of the patients are then used to select appropriate healthcare options. Imaging is playing an increasingly important role in identifying relevant characteristics that help to stratify patients for different interventions. However, lack of standards, limitations in image-processing interoperability, and errors in data collection can limit the applicability of imaging in clinical decision support. Quantitative imaging is the attempt to extract reliable, numerical information from images to eliminate qualitative judgments and errors for providing accurate measures of tumor response to therapy or for predicting future response. This issue of Tomography reports quantitative imaging developments made by several members of the National Cancer Institute Quantitative Imaging Network, a program dedicated to the promotion of quantitative imaging methods for clinical decision support. PMID:28083563

  18. Quantitative Simulation Games

    NASA Astrophysics Data System (ADS)

    Černý, Pavol; Henzinger, Thomas A.; Radhakrishna, Arjun

    While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of "fit" or "desirability." In this article, we explore several ways in which the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the "robustness" of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the "seriousness" of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error-correcting codes.

  19. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  20. SiNG-PCRseq: Accurate inter-sequence quantification achieved by spiking-in a neighbor genome for competitive PCR amplicon sequencing.

    PubMed

    Oh, Soo A; Yang, Inchul; Hahn, Yoonsoo; Kang, Yong-Kook; Chung, Sun-Ku; Jeong, Sangkyun

    2015-07-06

    Despite the recent technological advances in DNA quantitation by sequencing, accurate delineation of the quantitative relationship among different DNA sequences is yet to be elaborated due to difficulties in correcting the sequence-specific quantitation biases. We here developed a novel DNA quantitation method via spiking-in a neighbor genome for competitive PCR amplicon sequencing (SiNG-PCRseq). This method utilizes genome-wide chemically equivalent but easily discriminable homologous sequences with a known copy arrangement in the neighbor genome. By comparing the amounts of selected human DNA sequences simultaneously to those of matched sequences in the orangutan genome, we could accurately draw the quantitative relationships for those sequences in the human genome (root-mean-square deviations <0.05). Technical replications of cDNA quantitation performed using different reagents at different time points also resulted in excellent correlations (R² > 0.95). The cDNA quantitation using SiNG-PCRseq was highly concordant with the RNA-seq-derived version in inter-sample comparisons (R² = 0.88), but relatively discordant in inter-sequence quantitation (R² < 0.44), indicating a considerable level of sequence-dependent quantitative biases in RNA-seq. Considering the measurement structure explicitly relating the amounts of different sequences within a sample, SiNG-PCRseq will facilitate sharing and comparing quantitation data generated under different spatio-temporal settings.
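    The underlying arithmetic is ratio-based: each target amplicon's read count is divided by the read count of its homolog from the spiked-in neighbor genome, then scaled by that homolog's known copy number, which places all targets on one common scale. A hypothetical sketch of that step (names and numbers are illustrative, not from the paper):

```python
def relative_quantities(target_reads, spike_reads, spike_copies):
    """Relative amounts of target sequences from competitive amplicon counts.

    For each target, the target/spike read ratio is proportional to the
    molar ratio of the target sequence to its spiked-in homolog; the known
    copy number of the homolog puts all targets on a common scale.
    """
    return {name: target_reads[name] / spike_reads[name] * spike_copies[name]
            for name in target_reads}

# e.g. target 'a' yields twice the reads of its 1-copy spike homolog,
# while 'b' yields half the reads of a 2-copy homolog
amounts = relative_quantities({'a': 200, 'b': 50},
                              {'a': 100, 'b': 100},
                              {'a': 1, 'b': 2})
```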

  1. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
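    Of the calibration approaches mentioned, external calibration against pure standards is the simplest: fit a straight line to the standards' signals and invert it for the sample. A minimal sketch of that idea (the concentrations and intensities below are made up):

```python
import numpy as np

def external_calibration(std_conc, std_signal, sample_signal):
    """Estimate a sample concentration from ICP-MS intensities.

    Fits signal = slope * concentration + intercept by ordinary least
    squares to the calibration standards, then inverts the line for
    the measured sample signal.
    """
    slope, intercept = np.polyfit(std_conc, std_signal, 1)
    return (sample_signal - intercept) / slope
```

    Matrix-matched standards and certified reference materials refine this same scheme by making the standards chemically resemble the sample, so the fitted slope reflects matrix effects.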

  2. Quantitative aspects of inductively coupled plasma mass spectrometry.

    PubMed

    Bulska, Ewa; Wagner, Barbara

    2016-10-28

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.

  3. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR
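    The TEW correction compared in this study estimates the scatter inside the photopeak from two narrow windows flanking it, as the area of a trapezoid spanned by the two flanking count densities. A minimal per-pixel sketch of the standard TEW formula (the window widths and counts below are illustrative, not the acquisition settings of the paper):

```python
def tew_primary(c_peak, c_low, c_high, w_peak, w_low, w_high):
    """Triple-energy-window scatter correction for one projection pixel.

    c_*: counts in the photopeak window and the two narrow flanking windows.
    w_*: widths (keV) of those windows.
    Returns (primary counts, estimated scatter counts); the trapezoid rule
    interpolates the scatter density across the photopeak.
    """
    scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
    return max(c_peak - scatter, 0.0), scatter
```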

  4. Predicting in vivo glioma growth with the reaction diffusion equation constrained by quantitative magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Hormuth, David A., II; Weis, Jared A.; Barnes, Stephanie L.; Miga, Michael I.; Rericha, Erin C.; Quaranta, Vito; Yankeelov, Thomas E.

    2015-07-01

    Reaction-diffusion models have been widely used to model glioma growth. However, it has not been shown how accurately this model can predict future tumor status using model parameters (i.e., tumor cell diffusion and proliferation) estimated from quantitative in vivo imaging data. To this end, we used in silico studies to develop the methods needed to accurately estimate tumor specific reaction-diffusion model parameters, and then tested the accuracy with which these parameters can predict future growth. The analogous study was then performed in a murine model of glioma growth. The parameter estimation approach was tested using an in silico tumor ‘grown’ for ten days as dictated by the reaction-diffusion equation. Parameters were estimated from early time points and used to predict subsequent growth. Prediction accuracy was assessed at global (total volume and Dice value) and local (concordance correlation coefficient, CCC) levels. Guided by the in silico study, rats (n = 9) with C6 gliomas, imaged with diffusion weighted magnetic resonance imaging, were used to evaluate the model’s accuracy for predicting in vivo tumor growth. The in silico study resulted in low global (tumor volume error <8.8%, Dice >0.92) and local (CCC values >0.80) level errors for predictions up to six days into the future. The in vivo study showed higher global (tumor volume error >11.7%, Dice <0.81) and higher local (CCC <0.33) level errors over the same time period. The in silico study shows that model parameters can be accurately estimated and used to accurately predict future tumor growth at both the global and local scale. However, the poor predictive accuracy in the experimental study suggests the reaction-diffusion equation is an incomplete description of in vivo C6 glioma biology and may require further modeling of intra-tumor interactions including segmentation of (for example) proliferative and necrotic regions.
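    The reaction-diffusion model referred to here couples diffusion of tumor cells with logistic proliferation, du/dt = D d²u/dx² + k u(1 − u) for a normalized cell density u. A 1-D explicit finite-difference sketch for intuition only (the study itself estimates spatially resolved parameters from 3-D imaging data):

```python
import numpy as np

def simulate_fisher_kpp(u0, D, k, dx, dt, n_steps):
    """Explicit solver for du/dt = D * d2u/dx2 + k * u * (1 - u).

    u0: initial normalized cell density on a 1-D grid with spacing dx.
    Zero-flux (Neumann) boundaries; stability needs dt <= dx**2 / (2 * D).
    """
    u = u0.astype(float).copy()
    for _ in range(n_steps):
        lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
        lap[0] = 2.0 * (u[1] - u[0])      # mirror ghost node at left edge
        lap[-1] = 2.0 * (u[-2] - u[-1])   # mirror ghost node at right edge
        u = u + dt * (D * lap / dx**2 + k * u * (1.0 - u))
        u = np.clip(u, 0.0, 1.0)          # keep density in [0, 1]
    return u
```

    Seeded with a small Gaussian lesion, the density spreads and saturates toward u = 1; D controls the invasion speed and k the proliferation rate, the two parameters the paper estimates from early imaging time points.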

  5. The Quantitative Preparation of Future Geoscience Graduate Students

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Hancock, G. S.

    2006-12-01

    Modern geoscience is a highly quantitative science. In February, a small group of faculty and graduate students from across the country met to discuss the quantitative preparation of geoscience majors for graduate school. The group included ten faculty supervising graduate students in quantitative areas spanning the earth, atmosphere, and ocean sciences; five current graduate students in these areas; and five faculty teaching undergraduate students in the spectrum of institutions preparing students for graduate work. Discussion focused on four key areas: Are incoming graduate students adequately prepared for the quantitative aspects of graduate geoscience programs? What are the essential quantitative skills required for success in graduate school? What are perceived as the important courses to prepare students for the quantitative aspects of graduate school? What programs/resources would be valuable in helping faculty/departments improve the quantitative preparation of students? The participants concluded that strengthening the quantitative preparation of undergraduate geoscience majors would increase their opportunities in graduate school. While specifics differed amongst disciplines, a special importance was placed on developing the ability to use quantitative skills to solve geoscience problems. This requires the ability to pose problems so they can be addressed quantitatively, understand the relationship between quantitative concepts and physical representations, visualize mathematics, test the reasonableness of quantitative results, creatively move forward from existing models/techniques/approaches, and move between quantitative and verbal descriptions. A list of important quantitative competencies desirable in incoming graduate students includes mechanical skills in basic mathematics, functions, multi-variate analysis, statistics and calculus, as well as skills in logical analysis and the ability to learn independently in quantitative ways.

  6. Sampling Soil for Characterization and Site Description

    NASA Technical Reports Server (NTRS)

    Levine, Elissa

    1999-01-01

    The sampling scheme for soil characterization within the GLOBE program differs from the sampling methods of the other protocols. The strategy is based on an understanding of the five soil-forming factors (parent material, climate, biota, topography, and time) at each study site, and how these interact to produce a soil profile with unique characteristics and a unique influence on the atmospheric, biological, and hydrological systems. Unlike soil moisture and temperature, vegetative growth, and atmospheric and hydrologic conditions, soil profile characteristics change very slowly, over timescales ranging from seasons to many thousands of years depending on the parameter being measured. Thus, soil information, including profile description and lab analysis, is collected only once for each profile at a site. These data serve two purposes: 1) to supplement existing spatial information about soil profile characteristics across the landscape at local, regional, and global scales, and 2) to provide specific information within a given area about the basic substrate to which elements of the other protocols are linked. Because of the intimate link between soil properties and these other environmental elements, the static soil properties at a given site are needed to accurately interpret and understand the continually changing dynamics of soil moisture and temperature, vegetation growth and phenology, atmospheric conditions, and the chemistry and turbidity of surface waters. Both the spatial and the specific soil information can be used for modeling purposes to assess and make predictions about global change.

  7. Energy & Climate: Getting Quantitative

    NASA Astrophysics Data System (ADS)

    Wolfson, Richard

    2011-11-01

    A noted environmentalist claims that buying an SUV instead of a regular car is energetically equivalent to leaving your refrigerator door open for seven years. A fossil-fuel apologist argues that solar energy is a pie-in-the-sky dream promulgated by naïve environmentalists, because there's nowhere near enough solar energy to meet humankind's energy demand. A group advocating shutdown of the Vermont Yankee nuclear plant claims that 70% of its electrical energy is lost in transmission lines. Around the world, thousands agitate for climate action, under the numerical banner "350." Neither the environmentalist, the fossil-fuel apologist, the antinuclear activists, nor most of those marching under the "350" banner can back up their assertions with quantitative arguments. Yet questions about energy and its environmental impacts almost always require quantitative answers. Physics can help! This poster gives some cogent examples, based on the newly published second edition of the author's textbook Energy, Environment, and Climate.

  8. Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.

    PubMed

    Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J

    2016-11-01

    Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features.

  9. Accurate protein crystallography at ultra-high resolution: Valence electron distribution in crambin

    PubMed Central

    Jelsch, Christian; Teeter, Martha M.; Lamzin, Victor; Pichon-Pesme, Virginie; Blessing, Robert H.; Lecomte, Claude

    2000-01-01

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 Å) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules. PMID:10737790

  10. Accurate protein crystallography at ultra-high resolution: valence electron distribution in crambin.

    PubMed

    Jelsch, C; Teeter, M M; Lamzin, V; Pichon-Pesme, V; Blessing, R H; Lecomte, C

    2000-03-28

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 Å) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules.

  11. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    PubMed Central

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-01-01

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. PMID:18502191

  12. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    PubMed Central

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-01-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements. PMID:26911709

  13. Primary enzyme quantitation

    DOEpatents

    Saunders, G.C.

    1982-03-04

    The disclosure relates to the quantitation of a primary enzyme concentration by utilizing a substrate for the primary enzyme labeled with a second enzyme which is an indicator enzyme. Enzyme catalysis of the substrate occurs and results in release of the indicator enzyme in an amount directly proportional to the amount of primary enzyme present. By quantifying the free indicator enzyme one determines the amount of primary enzyme present.

  14. Continuum description for jointed media

    SciTech Connect

    Thomas, R.K.

    1982-04-01

    A general three-dimensional continuum description is presented for a material containing regularly spaced and approximately parallel jointing planes within a representative elementary volume. Constitutive relationships are introduced for linear behavior of the base material and nonlinear normal and shear behavior across jointing planes. Furthermore, a fracture permeability tensor is calculated so that deformation induced alterations to the in-situ values can be measured. Examples for several strain-controlled loading paths are presented.

  15. GROUNDWATER PROTECTION MANAGEMENT PROGRAM DESCRIPTION.

    SciTech Connect

    PAQUETTE,D.E.; BENNETT,D.B.; DORSCH,W.R.; GOODE,G.A.; LEE,R.J.; KLAUS,K.; HOWE,R.F.; GEIGER,K.

    2002-05-31

    The Department of Energy Order 5400.1, General Environmental Protection Program, requires the development and implementation of a groundwater protection program. The BNL Groundwater Protection Management Program Description provides an overview of how the Laboratory ensures that plans for groundwater protection, monitoring, and restoration are fully defined, integrated, and managed in a cost-effective manner that is consistent with federal, state, and local regulations.

  16. Spacelab Mission 3 experiment descriptions

    NASA Technical Reports Server (NTRS)

    Hill, C. K. (Editor)

    1982-01-01

    The Spacelab 3 mission is the first operational flight of Spacelab aboard the shuttle transportation system. The primary objectives of this mission are to conduct application, science, and technology experimentation that requires the low gravity environment of Earth orbit and an extended duration, stable vehicle attitude with emphasis on materials processing. This document provides descriptions of the experiments to be performed during the Spacelab 3 mission.

  17. Absolute quantitation of protein posttranslational modification isoform.

    PubMed

    Yang, Zhu; Li, Ning

    2015-01-01

    Mass spectrometry has been widely applied to the characterization and quantification of proteins from complex biological samples. Because absolute protein amounts are needed to construct mathematical models of the molecular systems underlying various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute protein quantities by mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, a strategy that originated in analytical chemistry, has become a widely applied method in absolute quantitative proteomics research and provides a growing number of absolute protein quantitation results of high confidence. Because quantitative study of posttranslational modification (PTM), which modulates the biological activity of proteins, is crucial for biological science, and because each isoform may contribute a unique biological function, degradation pathway, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to its biological significance. To obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield must be taken into consideration, and these effects must be corrected for before differentially stable isotope-coded PTM peptide standards are spiked into the sample peptides. Assisted by stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM

  18. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations in the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of the wavefront's gradient estimation from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete mathematical-grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams.
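
    The Fourier-based integration of estimated slopes mentioned above can be sketched as follows. This is a generic Frankot-Chellappa-style least-squares gradient integrator, offered as an assumption about the class of algorithm involved rather than the authors' exact method; it assumes periodic boundaries and a uniformly sampled grid:

```python
import numpy as np

def fourier_integrate_gradients(gx, gy):
    """Least-squares integration of a gradient field (gx, gy) into a
    surface via the FFT, as commonly used to recover a mirror profile
    from estimated slopes. Returns the surface up to an additive constant
    (the DC term is set to zero)."""
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx) * 2 * np.pi   # angular frequency per sample
    fy = np.fft.fftfreq(ny) * 2 * np.pi
    wx, wy = np.meshgrid(fx, fy)          # wx varies along axis 1 (x)
    Gx = np.fft.fft2(gx)
    Gy = np.fft.fft2(gy)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                     # avoid division by zero at DC
    Z = (-1j * wx * Gx - 1j * wy * Gy) / denom
    Z[0, 0] = 0.0                         # surface defined up to a constant
    return np.real(np.fft.ifft2(Z))
```

For a pure Fourier-mode surface the reconstruction is exact to machine precision, which makes this a convenient self-check before applying the integrator to measured foucaultgram slopes.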

  19. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation as well as to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer.
We present detailed methane flux
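
    The far-field ratio computation described above reduces to a one-line formula: the methane emission rate equals the ratio of the measured concentration enhancements multiplied by the known tracer release rate. A minimal sketch (function and parameter names are illustrative, and the explicit background correction is an assumption about typical practice):

```python
def tracer_dilution_flux(c_methane, c_tracer, q_tracer,
                         bg_methane=0.0, bg_tracer=0.0):
    """Far-field tracer dilution estimate of a methane emission rate.
    c_methane, c_tracer: downwind concentrations (same units, e.g. ppb);
    bg_methane, bg_tracer: upwind/background concentrations;
    q_tracer: known tracer release rate. The result carries the units
    of q_tracer."""
    return ((c_methane - bg_methane) / (c_tracer - bg_tracer)) * q_tracer
```

For example, a 2 ppb methane enhancement over a 1 ppb tracer enhancement, with a tracer released at 2 kg/h, implies a methane emission rate of 4 kg/h.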

  20. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of the accumulation of DEP inside lung cells has not been reported, due to the lack of an accurate and quantitative technique for this purpose. I...

  1. Quantitative Analysis of Intra-chromosomal Contacts: The 3C-qPCR Method.

    PubMed

    Ea, Vuthy; Court, Franck; Forné, Thierry

    2017-01-01

    The chromosome conformation capture (3C) technique is fundamental to many population-based methods investigating chromatin dynamics and organization in eukaryotes. Here, we provide a modified quantitative 3C (3C-qPCR) protocol for improved quantitative analyses of intra-chromosomal contacts. We also describe an algorithm for data normalization which allows more accurate comparisons between contact profiles.

  2. Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network

    NASA Astrophysics Data System (ADS)

    Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang

    Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made understanding neurological disorders at a whole brain connectivity level possible. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.

  3. Collective Cell Motion in an Epithelial Sheet Can Be Quantitatively Described by a Stochastic Interacting Particle Model

    PubMed Central

    Cochet, Olivier; Grasland-Mongrain, Erwan; Silberzan, Pascal; Hakim, Vincent

    2013-01-01

    Modelling the displacement of thousands of cells that move in a collective way is required for the simulation and the theoretical analysis of various biological processes. Here, we tackle this question in the controlled setting where the motion of Madin-Darby Canine Kidney (MDCK) cells in a confluent epithelium is triggered by the unmasking of free surface. We develop a simple model in which cells are described as point particles with a dynamic based on the two premises that, first, cells move in a stochastic manner and, second, tend to adapt their motion to that of their neighbors. Detailed comparison to experimental data shows that the model provides a quantitatively accurate description of cell motion in the epithelium bulk at early times. In addition, inclusion of model “leader” cells with modified characteristics accounts for the digitated shape of the interface which develops over the subsequent hours, provided that leader cells invade free surface more easily than other cells and coordinate their motion with their followers. The previously described progression of the epithelium border is reproduced by the model and quantitatively explained. PMID:23505356
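
    The two premises of the model, stochastic motion plus adaptation to neighbors, can be sketched as a minimal interacting-particle update. Parameter names here are illustrative and the update rule is a generic neighbor-alignment-with-noise scheme, not the published model's exact dynamics:

```python
import numpy as np

def step(pos, vel, dt=1.0, radius=1.0, align=0.5, noise=0.1, rng=None):
    """One update of a minimal stochastic interacting-particle model:
    each cell's velocity relaxes toward the mean velocity of its
    neighbors within `radius` (including itself), plus Gaussian noise.
    pos, vel: (n, 2) arrays of positions and velocities."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = d < radius                       # neighborhood of cell i
        mean_v = vel[nbrs].mean(axis=0)
        new_vel[i] = ((1 - align) * vel[i] + align * mean_v
                      + noise * rng.standard_normal(2))
    return pos + dt * new_vel, new_vel
```

With full alignment and no noise, all cells within interaction range adopt the common mean velocity in a single step, which is the coordination mechanism the model exploits.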

  4. A comparison of risk assessment techniques from qualitative to quantitative

    SciTech Connect

    Altenbach, T.J.

    1995-02-13

    Risk assessment techniques vary from purely qualitative approaches, through a regime of semi-qualitative to the more traditional quantitative. Constraints such as time, money, manpower, skills, management perceptions, risk result communication to the public, and political pressures all affect the manner in which risk assessments are carried out. This paper surveys some risk matrix techniques, examining the uses and applicability for each. Limitations and problems for each technique are presented and compared to the others. Risk matrix approaches vary from purely qualitative axis descriptions of accident frequency vs consequences, to fully quantitative axis definitions using multi-attribute utility theory to equate different types of risk from the same operation.

  5. How directions of route descriptions influence orientation specificity: the contribution of spatial abilities.

    PubMed

    Meneghetti, Chiara; Muffato, Veronica; Varotto, Diego; De Beni, Rossana

    2017-03-01

    Previous studies found mental representations of route descriptions north-up oriented when egocentric experience (given by the protagonist's initial view) was congruent with the global reference system. This study examines: (a) the development and maintenance of representations derived from descriptions when the egocentric and global reference systems are congruent or incongruent; and (b) how spatial abilities modulate these representations. Sixty participants (in two groups of 30) heard route descriptions of a protagonist's moves starting from the bottom of a layout and headed mainly northwards (SN description) in one group, and headed south from the top (NS description, the egocentric view facing in the opposite direction to the canonical north) in the other. Description recall was tested with map drawing (after hearing the description a first and second time; i.e. Time 1 and 2) and South-North (SN) or North-South (NS) pointing tasks; and spatial objective tasks were administered. The results showed that: (a) the drawings were more rotated in NS than in SN descriptions, and performed better at Time 2 than at Time 1 for both types of description; SN pointing was more accurate than NS pointing for the SN description, while SN and NS pointing accuracy did not differ for the NS description; (b) spatial (rotation) abilities were related to recall accuracy for both types of description, but were more so for the NS ones. Overall, our results showed that the way in which spatial information is conveyed (with/without congruence between the egocentric and global reference systems) and spatial abilities influence the development and maintenance of mental representations.

  6. Effects of a Training Package to Improve the Accuracy of Descriptive Analysis Data Recording

    ERIC Educational Resources Information Center

    Mayer, Kimberly L.; DiGennaro Reed, Florence D.

    2013-01-01

    Functional behavior assessment is an important precursor to developing interventions to address a problem behavior. Descriptive analysis, a type of functional behavior assessment, is effective in informing intervention design only if the gathered data accurately capture relevant events and behaviors. We investigated a training procedure to improve…

  7. Application of the Rangeland Hydrology and Erosion Model to Ecological Site Descriptions and Management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The utility of Ecological Site Descriptions (ESDs) and State-and-Transition Models (STMs) concepts in guiding rangeland management hinges on their ability to accurately describe and predict community dynamics and the associated consequences. For many rangeland ecosystems, plant community dynamics ar...

  8. Quantitative rainbow schlieren deflectometry

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Klimek, Robert B.; Buchele, Donald R.

    1995-01-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment.

  9. Quantitative imaging with a mobile phone microscope.

    PubMed

    Skandarajah, Arunan; Reber, Clay D; Switz, Neil A; Fletcher, Daniel A

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.
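
    One standard correction for restoring linear, quantitative intensities from a camera image, in the spirit of the procedures described, is dark-frame subtraction followed by flat-field normalization. This is a generic sketch, not the authors' specific pipeline; array names are illustrative:

```python
import numpy as np

def flat_field_correct(img, dark, flat):
    """Basic flat-field correction: subtract the dark frame (sensor
    offset) from both the image and the flat (uniform-illumination)
    frame, then divide out the normalized illumination pattern so that
    pixel values become proportional to scene brightness."""
    num = img.astype(float) - dark
    den = flat.astype(float) - dark
    den = np.clip(den, 1e-9, None)        # guard against division by zero
    return num * (den.mean() / den)
```

Applied to a microscope with stable illumination, this removes vignetting and fixed-pattern offsets so that intensity comparisons across the field of view are meaningful.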

  10. Quantitative Imaging with a Mobile Phone Microscope

    PubMed Central

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  11. Accurate Quantification of Lipid Species by Electrospray Ionization Mass Spectrometry — Meets a Key Challenge in Lipidomics

    PubMed Central

    Yang, Kui; Han, Xianlin

    2011-01-01

    Electrospray ionization mass spectrometry (ESI-MS) has become one of the most popular and powerful technologies to identify and quantify individual lipid species in lipidomics. Meanwhile, accurate quantification of lipid species by ESI-MS remains a major obstacle to meeting the challenges of lipidomics. Herein, we discuss the principles, advantages, and possible limitations of different mass spectrometry-based methodologies for lipid quantification, as well as a few practical issues important for accurate quantification of individual lipid species. Accordingly, accurate quantification of individual lipid species, one of the key challenges in lipidomics, can be practically met. PMID:22905337

  12. Protein Quantitation of the Developing Cochlea Using Mass Spectrometry.

    PubMed

    Darville, Lancia N F; Sokolowski, Bernd H A

    2016-01-01

    Mass spectrometry-based proteomics allows for the measurement of hundreds to thousands of proteins in a biological system. Additionally, mass spectrometry can also be used to quantify proteins and peptides. However, observing quantitative differences between biological systems using mass spectrometry-based proteomics can be challenging because it is critical to have a method that is fast, reproducible, and accurate. Therefore, to study differential protein expression in biological samples, labeling or label-free quantitative methods can be used. Labeling methods have been widely used in quantitative proteomics; however, label-free methods have become equally popular and often preferred because they produce faster, cleaner, and simpler results. Here, we describe the methods by which proteins are isolated and identified from cochlear sensory epithelia tissues at different ages and quantitatively differentiated using label-free mass spectrometry.

  13. A predictable and accurate technique with elastomeric impression materials.

    PubMed

    Barghi, N; Ontiveros, J C

    1999-08-01

    A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.

  14. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  15. Orbiter active thermal control system description

    NASA Technical Reports Server (NTRS)

    Laubach, G. E.

    1975-01-01

    A brief description of the Orbiter Active Thermal Control System (ATCS), including (1) the major functional requirements of heat load, temperature control, and heat sink utilization; (2) the overall system arrangement; and (3) a detailed description of the elements of the ATCS.

  16. Standardizing the microsystems technology description

    NASA Astrophysics Data System (ADS)

    Liateni, Karim; Thomas, Gabriel; Hui Bon Hoa, Christophe; Bensaude, David

    2002-04-01

    The microsystems industry promises rapid and widespread growth in the coming years. The automotive, network, telecom, and electronics industries take advantage of this technology by including it in their products, thereby achieving better integration and high energy performance. Microsystems-related software and data exchange have inherited experience and standards from IC technology, which do not fit the advanced level of design currently needed by microsystems designers. A typical design flow to validate a microsystem device involves several software tools from disconnected areas such as layout editors, FEM simulators, and HDL modeling and simulation tools. However, a fabricated microsystem is obtained through execution of a layered process, and process characteristics are used at each level of design and analysis. Basically, the designer has to customize each of his tools for the process. The project introduced here intends to unify the process description language and to speed up the critical and tedious CAD customization task. We gather all the information related to the technology of a microsystem process in a single file. It is based on the XML standard so as to receive worldwide attention. This format is called XML-MTD, standing for XML Microsystems Technology Description. Built around XML, it is an ASCII format that provides a comprehensive database for technology data. The format is open and released under a general public license, but the aim is to manage it within an XML-MTD consortium of leading, well-established EDA companies and foundries, so that it benefits from their experience. For automated configuration of design and analysis tools with respect to process-dependent information, we ship the Technology Manager software. Technology Manager links foundries with a large panel of standard EDA and FEA packages used by design teams relying on the Microsystems Technology Description in XML-MTD format.

  17. SNF AGING SYSTEM DESCRIPTION DOCUMENT

    SciTech Connect

    L.L. Swanson

    2005-04-06

    The purpose of this system description document (SDD) is to establish requirements that drive the design of the spent nuclear fuel (SNF) aging system and associated bases, which will allow the design effort to proceed. This SDD will be revised at strategic points as the design matures. This SDD identifies the requirements and describes the system design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This SDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This SDD is part of an iterative design process. It leads the design process with regard to the flow down of upper tier requirements onto the system. Knowledge of these requirements is essential in performing the design process. The SDD follows the design with regard to the description of the system. The description provided in the SDD reflects the current results of the design process. Throughout this SDD, the term aging cask applies to vertical site-specific casks and to horizontal aging modules. The term overpack is a vertical site-specific cask that contains a dual-purpose canister (DPC) or a disposable canister. Functional and operational requirements applicable to this system were obtained from ''Project Functional and Operational Requirements'' (F&OR) (Curry 2004 [DIRS 170557]). Other requirements that support the design process were taken from documents such as ''Project Design Criteria Document'' (PDC) (BSC 2004 [DIRS 171599]), ''Site Fire Hazards Analyses'' (BSC 2005 [DIRS 172174]), and ''Nuclear Safety Design Bases for License Application'' (BSC 2005 [DIRS 171512]). These documents address requirements in the ''Project Requirements Document'' (PRD) (Canori and Leitner 2003 [DIRS 166275]). This SDD includes several appendices. Appendix A is a Glossary; Appendix B is a list of key system charts, diagrams, drawings, lists and additional supporting information; and Appendix C is a list of

  18. Application of non-self-adjoint operators for description of electronic excitations in metallic lithium

    SciTech Connect

    Popov, A. V.

    2016-01-15

    Metallic lithium is used to demonstrate the possibilities of applying non-self-adjoint operators for the quantitative description of orbital excitations of electrons in crystals. It is shown that the nonequilibrium distribution function can be calculated when solving the spectral problem; therefore, the kinetic properties of a material can also be described within the unified band theory.

  19. A Computer-Based Content Analysis of Interview Texts: Numeric Description and Multivariate Analysis.

    ERIC Educational Resources Information Center

    Bierschenk, B.

    1977-01-01

    A method is described by which cognitive structures in verbal data can be identified and categorized through numerical analysis and quantitative description. Transcriptions of interviews (in this case, the verbal statements of 40 researchers) are manually coded and subjected to analysis following the AaO (Agent action Object) paradigm. The texts…

  20. The Classroom Practice of Creative Arts Education in NSW Primary Schools: A Descriptive Account

    ERIC Educational Resources Information Center

    Power, Bianca; Klopper, Christopher

    2011-01-01

    This article documents the current classroom practice of creative arts education of respondent classroom teachers in the New South Wales Greater Western Region, Australia. The study provides a descriptive account of classroom practice in creative arts education through the employment of a quantitative methodology. A questionnaire was designed and…

  1. Hadl: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

    Specification of architectures is an important prerequisite for evaluation of architectures. With the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been on the increase. In this paper, we describe HADL, a HUMS Architectural Description Language that we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology for HUMS architectures.

  2. IUE/IRA system description

    NASA Technical Reports Server (NTRS)

    Jennings, J.

    1977-01-01

    The IUE/IRA rate sensor system designed to meet the requirements of the International Ultraviolet Explorer spacecraft mission is described. The system consists of the sensor unit containing six rate sensor modules and the electronic control unit containing the rate sensor support electronics and the command/control circuitry. The inertial reference assembly formed by the combined units will provide spacecraft rate information for use in the stabilization and control system. The system is described in terms of functional description; operation, redundancy, and performance; mechanical interface; and electrical interface. Test data obtained from the flight unit are summarized.

  3. Descriptive Model of Generic WAMS

    SciTech Connect

    Hauer, John F.; DeSteese, John G.

    2007-06-01

    The Department of Energy’s (DOE) Transmission Reliability Program is supporting the research, deployment, and demonstration of various wide area measurement system (WAMS) technologies to enhance the reliability of the Nation’s electrical power grid. Pacific Northwest National Laboratory (PNNL) was tasked by the DOE National SCADA Test Bed Program to conduct a study of WAMS security. This report represents achievement of the milestone to develop a generic WAMS model description that will provide a basis for the security analysis planned in the next phase of this study.

  4. The MUNU experiment, general description

    NASA Astrophysics Data System (ADS)

    Amsler, C.; Avenier, M.; Bagieu, G.; Barnoux, C.; Becker, H.-W.; Brissot, R.; Broggini, C.; Busto, J.; Cavaignac, J.-F.; Farine, J.; Filippi, D.; Gervasio, G.; Giarritta, P.; Grgić, G.; Guerre Chaley, B.; Joergens, V.; Koang, D. H.; Lebrun, D.; Luescher, R.; Mattioli, F.; Negrello, M.; Ould-Saada, F.; Paić, A.; Piovan, O.; Puglierin, G.; Schenker, D.; Stutz, A.; Tadsen, A.; Treichel, M.; Vuilleumier, J.-L.; Vuilleumier, J.-M.; MUNU Collaboration

    1997-02-01

    We are building a low background detector based on a gas time projection chamber surrounded by an active anti-Compton shielding. The detector will be installed near a nuclear reactor in Bugey for the experimental study of ν̄ₑe⁻ scattering. We give here a general description of the experiment, and an estimate of the expected counting rate and background. The construction of the time projection chamber is described in detail. Results of first test measurements concerning the attenuation length and the spatial as well as energy resolution in the CF4 fill gas are reported.

  5. Descriptive analyses of caregiver reprimands.

    PubMed

    Sloman, Kimberly N; Vollmer, Timothy R; Cotnoir, Nicole M; Borrero, Carrie S W; Borrero, John C; Samaha, Andrew L; St Peter, Claire C

    2005-01-01

    We conducted descriptive observations of 5 individuals with developmental disabilities and severe problem behavior while they interacted with their caregivers in either simulated environments (an inpatient hospital facility) or in their homes. The focus of the study was on caregiver reprimands and child problem behavior. Thus, we compared the frequency of problem behavior that immediately preceded a caregiver reprimand to that immediately following a caregiver reprimand, and the results showed that the frequency of problem behavior decreased following a reprimand. It is possible that caregiver reprimands are negatively reinforced by the momentary attenuation of problem behavior, and the implications for long- and short-term effects on caregiver behavior are discussed.

  6. Using GPS To Teach More Than Accurate Positions.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Guth, Peter L.

    2002-01-01

    Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope"…

  7. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes.
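    As a drastically simplified illustration of free-energy estimation (the one-sided Zwanzig exponential average, not the non-Boltzmann Bennett or nonequilibrium work methods the chapter covers), the sketch below estimates the free-energy difference between two toy 1-D harmonic states from equilibrium samples of one of them; the system and parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0

# Toy 1-D system: state A with U_A = 0.5*x^2, state B with U_B = x^2.
# Analytic result for harmonic states: dF = 0.5*kT*ln(k_B/k_A) = 0.5*ln(2).
x = rng.normal(0.0, np.sqrt(kT), 200_000)      # equilibrium samples from state A
dU = 1.0 * x**2 - 0.5 * x**2                   # U_B - U_A for each sample
dF = -kT * np.log(np.mean(np.exp(-dU / kT)))   # Zwanzig / free energy perturbation estimator
```

    The estimator converges to the analytic 0.5 ln 2 ≈ 0.347 here because the two states overlap well; methods like Bennett's acceptance ratio exist precisely because this one-sided average degrades badly when overlap is poor.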

  8. Staged description of the Finkelstein test.

    PubMed

    Dawson, Courtney; Mudgal, Chaitanya S

    2010-09-01

    We have revisited the original description of the Finkelstein test and review the reasons for its subsequent erroneous description. We have also outlined a staged description of this test, which we have found to be reliable and minimally painful for the diagnosis of de Quervain's tendonitis within our clinical practice.

  9. Pathways to Provenance: "DACS" and Creator Descriptions

    ERIC Educational Resources Information Center

    Weimer, Larry

    2007-01-01

    "Describing Archives: A Content Standard" breaks important ground for American archivists in its distinction between creator descriptions and archival material descriptions. Implementations of creator descriptions, many using Encoded Archival Context (EAC), are found internationally. "DACS"'s optional approach of describing…

  10. EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.

    PubMed

    Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna

    2009-03-01

    The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis.

  11. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
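    One simple statistic of the kind described, a reduced chi-square comparison of successive exposures to flag radiation damage, can be sketched as follows; the toy scattering profile and thresholds are illustrative assumptions, not the SAXStats implementation.

```python
import numpy as np

def reduced_chi_square(i_ref, i_frame, sigma):
    """Reduced chi-square between a reference exposure and a later frame.
    Values well above ~1 flag systematic change, e.g. radiation damage."""
    resid = (i_frame - i_ref) / sigma
    return float(np.sum(resid**2) / (len(i_ref) - 1))

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.3, 200)                      # momentum transfer (1/A)
i_ref = 100.0 * np.exp(-(q * 30) ** 2 / 3)           # Guinier-like toy profile
sigma = np.sqrt(i_ref) + 0.1                         # counting-statistics errors
undamaged = i_ref + rng.normal(0, 1, q.size) * sigma # same profile, new noise
damaged = 2.0 * i_ref + rng.normal(0, 1, q.size) * sigma  # intensity change from damage
```

    For the undamaged frame the statistic stays near 1, while the systematically altered frame is flagged with a much larger value.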

  12. Automated selected reaction monitoring software for accurate label-free protein quantification.

    PubMed

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-07-06

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.

  13. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of the set of measurements should become the prerequisite for the acceptance of these models as having reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  14. Quantitative structure-retention relationships of pesticides in reversed-phase high-performance liquid chromatography.

    PubMed

    Aschi, Massimiliano; D'Archivio, Angelo Antonio; Maggi, Maria Anna; Mazzeo, Pietro; Ruggieri, Fabrizio

    2007-01-23

    In this paper, a quantitative structure-retention relationships (QSRR) method is employed to predict the retention behaviour of pesticides in reversed-phase high-performance liquid chromatography (HPLC). A six-parameter nonlinear model is developed by means of a feed-forward artificial neural network (ANN) with back-propagation learning rule. Accurate description of the retention factors of 26 compounds including commonly used insecticides, herbicides and fungicides and some metabolites is successfully achieved. In addition to the acetonitrile content, included to describe composition of the water-acetonitrile mobile phase, the octanol-water partition coefficient (from literature) and four quantum chemical descriptors are considered to account for the effect of solute structure on the retention. These are: the total dipole moment, the mean polarizability, the anisotropy of polarizability and a descriptor of hydrogen bonding ability based on the atomic charges on hydrogen bond donor and acceptor chemical functionalities. The proposed nonlinear QSRR model exhibits a high degree of correlation between observed and computed retention factors and a good predictive performance in wide range of mobile phase composition (40-65%, v/v acetonitrile) that supports its application for the prediction of the chromatographic behaviour of unknown pesticides. A multilinear regression model based on the same six descriptors shows a significantly worse predictive capability.
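    A minimal numerical sketch of the kind of model described, a feed-forward network trained by backpropagation to regress a retention-like quantity on six descriptors, is given below. The synthetic data, network size, and training settings are illustrative assumptions, not the paper's actual architecture or protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in data: 50 "solutes" x 6 descriptors (e.g. dipole moment,
# polarizability, ...); the target is a nonlinear function of them.
X = rng.normal(0, 1, (50, 6))
y = np.tanh(X @ rng.normal(0, 1, 6)) + 0.1 * X[:, 0] ** 2

# One hidden layer of 4 tanh units, as a minimal analogue of the paper's ANN.
W1 = rng.normal(0, 0.5, (6, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, 4);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
_, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)          # MSE before training
for _ in range(500):
    h, pred = forward(X)
    err = pred - y                        # dLoss/dpred up to a constant factor
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2) # backpropagate through tanh
    gW1 = X.T @ dh / len(X);  gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
_, pred = forward(X)
loss1 = np.mean((pred - y) ** 2)          # MSE after training
```

    Training reduces the mean squared error on the toy data; a real QSRR study would of course validate on held-out compounds and mobile-phase compositions.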

  15. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would be beneficial to most of the calcium imaging research field. A background-subtracted fluorescence transients estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performances of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies.
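    A much-simplified stand-in for the model-based approach described, assuming a constant, homogeneous background estimated from a pre-stimulus window rather than a full nonlinear regression with a noise model, can illustrate the basic background-subtraction step; all values below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated single-trial fluorescence: constant background plus a
# calcium transient, with signal-dependent (photon-counting) noise.
t = np.arange(200) * 0.05                       # time (s), 20 Hz sampling
background = 40.0
transient = 25.0 * np.exp(-(t - 3.0) / 1.5) * (t >= 3.0)
f_true = background + transient
trace = rng.poisson(f_true * 10) / 10.0         # gain-scaled shot noise

baseline = t < 3.0                              # pre-stimulus window (assumed known)
b_hat = trace[baseline].mean()                  # background estimate
b_sem = trace[baseline].std(ddof=1) / np.sqrt(baseline.sum())
dff = (trace - b_hat) / b_hat                   # background-subtracted dF/F
```

    The standard error of the baseline gives a crude uncertainty on the background; the paper's method instead obtains proper confidence intervals by fitting a fluorescence model with an explicit noise description.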

  16. Addressing the current bottlenecks of metabolomics: Isotopic Ratio Outlier Analysis™, an isotopic-labeling technique for accurate biochemical profiling.

    PubMed

    de Jong, Felice A; Beecher, Chris

    2012-09-01

    Metabolomics or biochemical profiling is a fast emerging science; however, there are still many associated bottlenecks to overcome before measurements will be considered robust. Advances in MS resolution and sensitivity, ultra-high-pressure LC-MS, ESI, and isotopic approaches such as flux analysis and stable-isotope dilution have made it easier to quantitate biochemicals. The digitization of mass spectrometers has simplified informatic aspects. However, issues of analytical variability, ion suppression, and metabolite identification still plague metabolomics investigators. These hurdles need to be overcome for accurate metabolite quantitation, not only for in vitro systems but for complex matrices such as biofluids and tissues, before it is possible to routinely identify biomarkers that are associated with the early prediction and diagnosis of diseases. In this report, we describe a novel isotopic-labeling method that uses the creation of distinct biochemical signatures to eliminate current bottlenecks and enable accurate metabolic profiling.

  17. Microfabricated tools for quantitative plant biology.

    PubMed

    Elitaş, Meltem; Yüce, Meral; Budak, Hikmet

    2017-03-13

    The development of microfabricated devices that will provide high-throughput quantitative data and high resolution in a fast, repeatable and reproducible manner is essential for plant biology research. Plants have been intensely explored since the beginning of humanity, especially for medical needs. However, plant biology research is still laborious, lacking the latest technological advancements in the laboratory practices. Microfabricated tools can provide a significant contribution to plant biology research since they require small volumes of samples and reagents with minimal cost and labor. Besides, they minimize the wet lab requirements while providing a parallel measurement platform for high-throughput data. Here, we have reviewed the cutting-edge microfabricated technologies developed for plant biology research. The description of the microfabricated device components, their integration with plant science and their substitution with the conventional techniques are presented. Our discussion on the challenges and future opportunities for scientists working at the fascinating intersection between plant science and engineering concludes this study.

  18. Quantitative analysis of continuous intracranial pressure recordings in symptomatic patients with extracranial shunts

    PubMed Central

    Eide, P

    2003-01-01

    Objectives: To explore the outcome of management of possible shunt related symptoms using intracranial pressure (ICP) monitoring, and to identify potential methodological limitations with the current strategies of ICP assessment. Methods: The distribution of persistent symptoms related to extracranial shunt treatment was compared before and after management of shunt failure in 69 consecutive hydrocephalic cases. Management was heavily based on ICP monitoring (calculation of mean ICP and visual determination of plateau waves). After the end of patient management, all ICP curves were re-evaluated using a quantitative method and software (Sensometrics™ pressure analyser). The ICP curves were presented as a matrix of numbers of ICP elevations (20 to 35 mm Hg) or depressions (-10 to -5 mm Hg) of different durations (0.5, 1, or 5 minutes). The numbers of ICP elevations/depressions standardised to 10 hours recording time were calculated to allow comparisons of ICP between individuals. Results: After ICP monitoring and management of the putative shunt related symptoms, the symptoms remained unchanged in as many as 58% of the cases, with the highest percentages in those patients with ICP considered normal or too low at the time of ICP monitoring. The quantitative analysis revealed a high frequency of ICP elevations (20 to 35 mm Hg lasting 0.5 to 1 minute) and ICP depressions (-10 to -5 mm Hg lasting 0.5, 1, or 5 minutes), particularly in patients with ICP considered normal. Conclusions: The value of continuous ICP monitoring with ICP analysis using current criteria appears doubtful in the management of possible shunt related symptoms. This may reflect limitations in the strategies of ICP analysis. Calculation of the exact numbers of ICP elevations and depressions may provide a more accurate description of the ICP profile. PMID:12531957
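    The episode-counting scheme described (numbers of ICP elevations or depressions of given magnitude and duration, standardized to a 10-hour recording) can be sketched with a simple run-length count; the thresholds and this implementation are illustrative assumptions, not the commercial software's algorithm.

```python
import numpy as np

def count_episodes(icp, fs_hz, low, high, min_dur_s, total_hours):
    """Count episodes where ICP stays within [low, high) mm Hg for at
    least min_dur_s seconds, normalized to a 10-hour recording."""
    in_band = (icp >= low) & (icp < high)
    # Locate runs of consecutive in-band samples via edge detection.
    edges = np.diff(np.concatenate(([0], in_band.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    n = int(np.sum((ends - starts) / fs_hz >= min_dur_s))
    return n * 10.0 / total_hours

fs = 1.0
icp = np.full(3600, 10.0)      # 1 h of quiet ICP at 10 mm Hg, 1 Hz sampling
icp[100:200] = 25.0            # one 100 s elevation (counted: >= 30 s)
icp[300:310] = 25.0            # one 10 s elevation (ignored: < 30 s)
rate = count_episodes(icp, fs, 20.0, 35.0, 30.0, total_hours=1.0)  # -> 10.0 per 10 h
```

    The same function applies to depressions by passing a negative band, e.g. low=-10 and high=-5 mm Hg.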

  19. 5-Aminolevulinic Acid-Induced Protoporphyrin IX Fluorescence in Meningioma: Qualitative and Quantitative Measurements In Vivo

    PubMed Central

    Valdes, Pablo A.; Bekelis, Kimon; Harris, Brent T.; Wilson, Brian C.; Leblond, Frederic; Kim, Anthony; Simmons, Nathan E.; Erkmen, Kadir; Paulsen, Keith D.; Roberts, David W.

    2014-01-01

    BACKGROUND The use of 5-aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX) fluorescence has shown promise as a surgical adjunct for maximizing the extent of surgical resection in gliomas. To date, the clinical utility of 5-ALA in meningiomas is not fully understood, with most descriptive studies using qualitative approaches to 5-ALA-PpIX. OBJECTIVE To assess the diagnostic performance of 5-ALA-PpIX fluorescence during surgical resection of meningioma. METHODS ALA was administered to 15 patients with meningioma undergoing PpIX fluorescence-guided surgery at our institution. At various points during the procedure, the surgeon performed qualitative, visual assessments of fluorescence using the surgical microscope, followed by a quantitative fluorescence measurement using an intraoperative probe. Specimens were collected at each point for subsequent neuropathological analysis. Clustered data analysis of variance was used to ascertain differences between groups, and receiver operating characteristic analyses were performed to assess diagnostic capabilities. RESULTS Red-pink fluorescence was observed in 80% (12/15) of patients, with visible fluorescence generally demonstrating a strong, homogeneous character. Quantitative fluorescence measured diagnostically significant PpIX concentrations (CPpIX) in both visibly and nonvisibly fluorescent tissues, with significantly higher CPpIX in visibly fluorescent tissue (P < .001) and in tumor tissue (P = .002). Receiver operating characteristic analyses also showed diagnostic accuracies up to 90% for differentiating tumor from normal dura. CONCLUSION ALA-induced PpIX fluorescence guidance is a promising adjunct for the accurate detection of neoplastic tissue during meningioma resection. These results suggest a broader reach for PpIX as a biomarker for meningiomas than previously noted in the literature. PMID:23887194
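
    The diagnostic accuracies quoted above come from receiver operating characteristic (ROC) analysis. The sketch below computes the area under the ROC curve directly in its Mann-Whitney form; the CPpIX values are invented for illustration and are not the study's data.

```python
# Area under the ROC curve, Mann-Whitney form: the probability that a
# randomly chosen positive case scores above a randomly chosen negative,
# with ties counting one-half.

def roc_auc(positives, negatives):
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

tumor = [0.9, 0.7, 0.8, 0.4]   # hypothetical CPpIX readings, tumor tissue
dura  = [0.1, 0.3, 0.2, 0.5]   # hypothetical CPpIX readings, normal dura
print(roc_auc(tumor, dura))    # 0.9375
```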

  20. Cortical Up State Activity Is Enhanced After Seizures: A Quantitative Analysis

    PubMed Central

    Gerkin, Richard C.; Clem, Roger L.; Shruti, Sonal; Kass, Robert E.; Barth, Alison L.

    2011-01-01

    In the neocortex, neurons participate in epochs of elevated activity, or Up states, during periods of quiescent wakefulness, slow-wave sleep, and general anesthesia. The regulation of firing during and between Up states is of great interest because it can reflect the underlying connectivity and excitability of neurons within the network. Automated analysis of the onset and characteristics of Up state firing across different experiments and conditions requires a robust and accurate method for Up state detection. Using measurements of membrane potential mean and variance calculated from whole-cell recordings of neurons from control and postseizure tissue, the authors have developed such a method. This quantitative and automated method is independent of cell- or condition-dependent variability in underlying noise or tonic firing activity. Using this approach, the authors show that Up state frequency and firing rates are significantly increased in layer 2/3 neocortical neurons 24 hours after chemo-convulsant-induced seizure. Down states in postseizure tissue show greater membrane-potential variance characterized by increased synaptic activity. Previously, the authors have found that postseizure increase in excitability is linked to a gain-of-function in BK channels, and blocking BK channels in vitro and in vivo can decrease excitability and eliminate seizures. Thus, the authors also assessed the effect of BK-channel antagonists on Up state properties in control and postseizure neurons. These data establish a robust and broadly applicable algorithm for Up state detection and analysis, provide a quantitative description of how prior seizures increase spontaneous firing activity in cortical networks, and show how BK-channel antagonists reduce this abnormal activity. PMID:21127407
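
    A minimal sketch of an Up state detector in the spirit of the method described above: each sliding window of membrane potential is classified by its mean and variance against fixed thresholds, and contiguous flagged windows are merged into epochs. The window length and thresholds here are illustrative assumptions, not the authors' values.

```python
# Threshold-based Up state detection from membrane potential mean and
# variance, computed over a sliding window.

def detect_up_states(vm, win, mean_thresh, var_thresh):
    """Return (start, end) sample indices of epochs whose sliding windows
    exceed both the mean and the variance thresholds."""
    flags = []
    for i in range(len(vm) - win + 1):
        w = vm[i:i + win]
        m = sum(w) / win
        v = sum((x - m) ** 2 for x in w) / win
        flags.append(m > mean_thresh and v > var_thresh)
    epochs, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            epochs.append((start, i + win - 2))  # last sample of previous window
            start = None
    if start is not None:
        epochs.append((start, len(vm) - 1))
    return epochs

# Synthetic trace: quiet Down state at -70 mV with one depolarized, noisy
# epoch (samples 100-149) alternating between -55 and -50 mV.
vm = [-70.0] * 100 + [-55.0, -50.0] * 25 + [-70.0] * 100
print(detect_up_states(vm, win=10, mean_thresh=-60.0, var_thresh=1.0))
```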

  1. Problems in publishing accurate color in IEEE journals.

    PubMed

    Vrhel, Michael J; Trussell, H J

    2002-01-01

    To demonstrate the performance of color image processing algorithms, it is desirable to be able to accurately display color images in archival publications. In poster presentations, the authors have substantial control of the printing process, although little control of the illumination. For journal publication, the authors must rely on professional intermediaries (printers) to accurately reproduce their results. The authors' previous work describes requirements for accurately rendering images using one's own equipment. This paper discusses the problems of dealing with intermediaries and offers suggestions for improved communication and rendering.

  2. Fabricating an Accurate Implant Master Cast: A Technique Report.

    PubMed

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F

    2015-12-01

    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  3. Can MRI accurately detect pilon articular malreduction? A quantitative comparison between CT and 3T MRI bone models

    PubMed Central

    Radzi, Shairah; Dlaska, Constantin Edmond; Cowin, Gary; Robinson, Mark; Pratap, Jit; Schuetz, Michael Andreas; Mishra, Sanjay

    2016-01-01

    Background Pilon fracture reduction is a challenging surgery. Radiographs are commonly used to assess the quality of reduction, but are limited in revealing the remaining bone incongruities. The study aimed to develop a method for quantifying articular malreductions using 3D computed tomography (CT) and magnetic resonance imaging (MRI) models. Methods CT and MRI data were acquired using three pairs of human cadaveric ankle specimens. Common tibial pilon fractures were simulated by performing osteotomies on the ankle specimens. Five of the created fractures [three AO type-B (43-B1) and two AO type-C (43-C1) fractures] were then reduced and stabilised using titanium implants, then rescanned. All datasets were reconstructed into CT and MRI models, and were analysed with regard to intra-articular steps and gaps, surface deviations, malrotations and maltranslations of the bone fragments. Results Initial results reveal that type B fracture CT and MRI models differed by ~0.2 mm (step), ~0.18 mm (surface deviations), ~0.56° (rotation) and ~0.4 mm (translation). Type C fracture MRI models showed metal artefacts extending to the articular surface, and were thus unsuitable for analysis. Type C fracture CT models differed from their CT and MRI contralateral models by ~0.15 mm (surface deviation), ~1.63° (rotation) and ~0.4 mm (translation). Conclusions Type B fracture MRI models were comparable to CT and may potentially be used for the postoperative assessment of articular reduction on a case-by-case basis. PMID:28090442

  4. Toward a More Accurate Quantitation of the Activity of Recombinant Retroviruses: Alternatives to Titer and Multiplicity of Infection

    PubMed Central

    Andreadis, Stylianos; Lavery, Thomas; Davis, Howard E.; Le Doux, Joseph M.; Yarmush, Martin L.; Morgan, Jeffrey R.

    2000-01-01

    In this paper, we present a mathematical model with experimental support of how several key parameters govern the adsorption of active retrovirus particles onto the surface of adherent cells. These parameters, including time of adsorption, volume of virus, and the number, size, and type of target cells, as well as the intrinsic properties of the virus, diffusion coefficient, and half-life (t1/2), have been incorporated into a mathematical expression that describes the rate at which active virus particles adsorb to the cell surface. From this expression, we have obtained estimates of Cvo, the starting concentration of active retrovirus particles. In contrast to titer, Cvo is independent of the specific conditions of the assay. The relatively slow diffusion (D = 2 × 10⁻⁸ cm²/s) and rapid decay (t1/2 = 6 to 7 h) of retrovirus particles explain why Cvo values are significantly higher than titer values. Values of Cvo also indicate that the number of defective particles in a retrovirus stock is much lower than previously thought, which has implications especially for the use of retroviruses for in vivo gene therapy. With this expression, we have also computed AVC (active viruses/cell), the number of active retrovirus particles that would adsorb per cell during a given adsorption time. In contrast to multiplicity of infection, which is based on titer and is subject to the same inaccuracies, AVC is based on the physicochemical parameters of the transduction assay and so is a more reliable alternative. PMID:10627536
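
    To make the roles of the parameters concrete, the sketch below numerically integrates a simplified stand-in for such a model (not the authors' exact expression): active virus decays with half-life t1/2 while a diffusion-limited deposition velocity sqrt(D/(pi*t)) carries particles to the dish floor. D and t1/2 are taken from the abstract; the stock concentration, adsorption time, dish area and cell number are assumed for illustration.

```python
import math

# Back-of-the-envelope estimate of AVC (active viruses adsorbed per cell):
# integrate the diffusion-limited flux of a decaying virus stock onto the
# dish floor. Simplified model for illustration; ignores depletion of the
# supernatant and cell geometry.

def active_viruses_per_cell(c0, D, t_half_h, t_ads_h, area_cm2, n_cells,
                            dt_s=1.0):
    k = math.log(2) / (t_half_h * 3600.0)       # decay constant, 1/s
    total, t = 0.0, dt_s / 2.0                  # midpoint time stepping
    while t < t_ads_h * 3600.0:
        c = c0 * math.exp(-k * t)               # active particles per cm^3
        v_dep = math.sqrt(D / (math.pi * t))    # deposition velocity, cm/s
        total += c * v_dep * area_cm2 * dt_s    # particles deposited this step
        t += dt_s
    return total / n_cells

avc = active_viruses_per_cell(
    c0=1e6,          # active particles per mL (assumed stock concentration)
    D=2e-8,          # cm^2/s, from the abstract
    t_half_h=6.5,    # h, from the abstract
    t_ads_h=2.0,     # adsorption time (assumed)
    area_cm2=10.0,   # dish area (assumed)
    n_cells=1e5)     # number of target cells (assumed)
print(avc)           # on the order of one active particle per cell
```

Even with a million active particles per mL, slow diffusion and rapid decay deliver only on the order of one active particle per cell in two hours, which is the qualitative point of the abstract.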

  5. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    PubMed

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected as the reference for the seed sample set. Analysing all samples as a single set did not yield any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit.
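
    geNorm, one of the three algorithms used above, ranks candidates by a gene-stability measure M: for each gene, the average standard deviation of its log2 expression ratios against every other candidate across samples (lower M = more stable). A minimal sketch of that core statistic, with invented expression values rather than the study's data:

```python
import math

# geNorm-style gene-stability measure M. For gene j, M_j is the mean of
# the sample standard deviations of log2(expr_j / expr_k) over all other
# candidates k. Lower M = more stable reference.

def stability_m(expr):
    """expr: {gene: [expression value per sample]} -> {gene: M}"""
    genes = list(expr)
    n = len(expr[genes[0]])

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    M = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(expr[j][i] / expr[k][i]) for i in range(n)]
            sds.append(sd(ratios))
        M[j] = sum(sds) / len(sds)
    return M

expr = {"ACT": [1.0, 1.1, 0.9, 1.0],
        "18S": [2.0, 2.1, 1.9, 2.2],
        "UBQ": [1.0, 3.0, 0.3, 2.0]}   # deliberately unstable candidate
M = stability_m(expr)
print(max(M, key=M.get))  # UBQ is flagged as the least stable
```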

  6. Quantitative analysis in megageomorphology

    NASA Technical Reports Server (NTRS)

    Mayer, L.

    1985-01-01

    Megageomorphology is the study of regional topographic features and their relations to independent geomorphic variables that operate at the regional scale. These independent variables can be classified as either tectonic or climatic in nature. Quantitative megageomorphology stresses the causal relations between plate tectonic factors and landscape features or correlations between climatic factors and geomorphic processes. In addition, the cumulative effects of tectonics and climate on landscape evolution, operating simultaneously in a complex system of energy transfer, are of interest. Regional topographic differentiation, say between continents and ocean floors, is largely the result of the different densities and density contrasts within the oceanic and continental lithosphere and their isostatic consequences. Regional tectonic processes that alter these lithospheric characteristics include rifting, collision, subduction, transpression and transtension.

  7. Electronic imaging systems for quantitative electrophoresis of DNA

    SciTech Connect

    Sutherland, J.C.

    1989-01-01

    Gel electrophoresis is one of the most powerful and widely used methods for the separation of DNA. During the last decade, instruments have been developed that accurately quantitate in digital form the distribution of materials in a gel or on a blot prepared from a gel. In this paper, I review the various physical properties that can be used to quantitate the distribution of DNA on gels or blots and the instrumentation that has been developed to perform these tasks. The emphasis here is on DNA, but much of what is said also applies to RNA, proteins and other molecules. 36 refs.

  8. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

    Hyperspectral imaging is a non-destructive optical analysis technique that can for instance be used to obtain information from cultural heritage objects unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength tunable narrow-bandwidth light-source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms. PMID:27873831
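
    The basic quantitative output mentioned above, a mean spectral reflectance curve over a region of interest, reduces to averaging each band over the ROI pixels. A minimal sketch with a toy 3-band cube (the real instrument measures 70 bands from 365 to 1100 nm); the data layout and values are assumptions for illustration:

```python
# Mean spectral reflectance over a region of interest, with the
# hyperspectral cube represented as nested lists indexed [band][row][col].

def mean_roi_spectrum(cube, roi):
    """roi: iterable of (row, col) pixels -> mean reflectance per band."""
    pixels = list(roi)
    return [sum(band[r][c] for r, c in pixels) / len(pixels)
            for band in cube]

cube = [[[0.2, 0.4], [0.6, 0.8]],   # band 1 (e.g. 365 nm)
        [[0.1, 0.1], [0.3, 0.5]],   # band 2
        [[0.9, 0.7], [0.5, 0.3]]]   # band 3
print(mean_roi_spectrum(cube, [(0, 0), (1, 1)]))
```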

  9. Quantitative immunoglobulins in adulthood.

    PubMed

    Crisp, Howard C; Quinn, James M

    2009-01-01

    Although age-related changes in serum immunoglobulins are well described in childhood, alterations in immunoglobulins in the elderly are less well described. This study was designed to better define expected immunoglobulin ranges and differences in adults in different decades of life. Sera from 404 patients, aged 20-89 years, were analyzed for quantitative immunoglobulin G (IgG), immunoglobulin M (IgM), and immunoglobulin A (IgA). Patients with diagnoses or medications known to affect immunoglobulin levels were identified while blinded to their immunoglobulin levels. A two-factor ANOVA was performed using decade of life and gender on both the entire sample population and the subset without any disease or medication expected to alter immunoglobulin levels. A literature review was also performed on all English-language articles evaluating quantitative immunoglobulin levels in adults >60 years old. For the entire population, IgM was found to be higher in women than in men (p < 0.001) and lower in the oldest sample population compared with the youngest (p < 0.001). For the population without diseases known to affect immunoglobulin levels, the differences in IgM with gender and age were maintained (p ≤ 0.001), and IgA levels were generally higher in the older population than in the younger population (p = 0.009). Elderly patients without diseases known to affect immunoglobulin levels have higher serum IgA levels and lower serum IgM levels. Women have higher IgM levels than men throughout life. IgG levels are not significantly altered in an older population.

  10. Accurate determination of rates from non-uniformly sampled relaxation data.

    PubMed

    Stetz, Matthew A; Wand, A Joshua

    2016-08-01

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
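
    The rates in question come from fitting exponential decays I(t) = I0·exp(−R·t) to peak heights across the relaxation series. A minimal sketch using a log-linear least-squares fit on synthetic, noise-free heights (real data would typically be fit nonlinearly with error weighting):

```python
import math

# Fit a relaxation rate R from peak heights I(t) = I0 * exp(-R * t) by
# ordinary least squares on ln I versus the relaxation delay t.

def fit_rate(delays, heights):
    """Return (R, I0) from ln I = ln I0 - R * t."""
    n = len(delays)
    x, y = delays, [math.log(h) for h in heights]
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
             sum((xi - mx) ** 2 for xi in x))
    return -slope, math.exp(my - slope * mx)

delays = [0.01, 0.05, 0.1, 0.2, 0.4]                     # s
heights = [100.0 * math.exp(-12.0 * t) for t in delays]  # R = 12 s^-1
R, I0 = fit_rate(delays, heights)
print(round(R, 3), round(I0, 1))  # recovers R = 12.0 s^-1, I0 = 100.0
```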

  11. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
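
    For orientation, the beam-theory baseline that the plate-theory model refines: a rectangular cantilever of Young's modulus E, width w, thickness t and length L has normal spring constant k = E·w·t³/(4·L³). The sketch below implements only this baseline (the paper's three-dimensional and Poisson corrections are not reproduced); the example dimensions are typical silicon values, assumed here.

```python
# Beam-theory normal spring constant of a rectangular AFM cantilever.
# Plate theory adds Poisson and 3-D corrections on top of this value.

def beam_spring_constant(E, w, t, L):
    """k = E * w * t^3 / (4 * L^3), SI inputs, result in N/m."""
    return E * w * t ** 3 / (4.0 * L ** 3)

k = beam_spring_constant(E=169e9,   # Pa, silicon (assumed orientation)
                         w=30e-6,   # m, width
                         t=2e-6,    # m, thickness
                         L=200e-6)  # m, length
print(round(k, 2))  # 1.27 N/m
```

Note the strong t³/L³ dependence: a 10 % error in measured thickness alone shifts k by roughly 30 %, which is why calibration accuracy matters.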

  12. FLOWTRAN-TF code description

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  13. FLOWTRAN-TF code description

    SciTech Connect

    Flach, G.P.

    1991-09-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  14. Lagrangian description of warm plasmas

    NASA Technical Reports Server (NTRS)

    Kim, H.

    1970-01-01

    Efforts are described to extend the averaged Lagrangian method of describing small signal wave propagation and nonlinear wave interaction, developed by earlier workers for cold plasmas, to the more general conditions of warm collisionless plasmas, and to demonstrate particularly the effectiveness of the method in analyzing wave-wave interactions. The theory is developed for both the microscopic description and the hydrodynamic approximation to plasma behavior. First, a microscopic Lagrangian is formulated rigorously, and expanded in terms of perturbations about equilibrium. Two methods are then described for deriving a hydrodynamic Lagrangian. In the first of these, the Lagrangian is obtained by velocity integration of the exact microscopic Lagrangian. In the second, the expanded hydrodynamic Lagrangian is obtained directly from the expanded microscopic Lagrangian. As applications of the microscopic Lagrangian, the small-signal dispersion relations and the coupled mode equations are derived for all possible waves in a warm infinite, weakly inhomogeneous magnetoplasma, and their interactions are examined.

  15. Statistical description for survival data

    PubMed Central

    2016-01-01

    Statistical description is always the first step in data analysis. It gives the investigator a general impression of the data at hand. Traditionally, data are described in terms of central tendency and deviation. However, this framework does not fit survival data (also termed time-to-event data). This data type contains two components: one is the survival time and the other is the status. Researchers are usually interested in the probability of the event at a given survival time point. The hazard function, cumulative hazard function and survival function are commonly used to describe survival data. The survival function can be estimated using the Kaplan-Meier estimator, which is also the default method in most statistical packages. Alternatively, the Nelson-Aalen estimator is available to estimate the cumulative hazard function. Survival functions of subgroups can be compared using the log-rank test. Finally, the article introduces how to describe time-to-event data with parametric modeling. PMID:27867953
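
    The Kaplan-Meier estimator mentioned above multiplies, at each observed event time, the running survival probability by (1 − d/n), where d is the number of events at that time and n the number at risk just before it; censored observations leave the risk set without contributing an event. A minimal sketch with invented data:

```python
# Kaplan-Meier survival curve. times: follow-up times; events: 1 = event
# observed, 0 = censored. Returns the stepwise estimate S(t) at event times.

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # events at time t
        at_risk = sum(1 for tt, _ in data if tt >= t)  # risk set just before t
        if d > 0:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)       # skip past ties
    return curve

times  = [2, 3, 3, 5, 8, 9]   # invented follow-up times
events = [1, 1, 0, 1, 0, 1]   # 0 marks a censored observation
print(kaplan_meier(times, events))
```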

  16. Accurate polyatomic quantum dynamics studies of combustion reactions. Final progress report, July 1, 1994--June 30, 1998

    SciTech Connect

    Zhang, J.Z.H.

    1998-12-31

    This program is designed to develop accurate yet practical computational methods, primarily based on time-dependent quantum mechanics, for studying the dynamics of polyatomic reactions beyond the atom-diatom systems. Efficient computational methodologies are developed and the applications of these methods to practical chemical reactions relevant to combustion processes are carried out. The program emphasizes the practical aspects of accurate quantum dynamics calculations in order to understand, explain and predict the dynamical properties of important combustion reactions. The aim of this research is to help provide not only qualitative dynamics information but also quantitative prediction of reaction dynamics of combustion reactions at the microscopic level. Through accurate theoretical calculations, the authors wish to be able to quantitatively predict reaction cross sections and rate constants of relatively small gas-phase reactions from first principles that are of direct interest to combustion. The long-term goal of this research is to develop practical computational methods that are capable of quantitatively predicting dynamics of more complex polyatomic gas-phase reactions that are of interest to combustion.

  17. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    PubMed

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study thus supplies a simple yet accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid.
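
    Quantitative PCR assays of this kind read copy numbers off a standard curve built from the calibrator plasmid: Ct is linear in log10(copies), with a slope near −3.32 at 100 % amplification efficiency. A minimal sketch with invented Ct values (not the study's data):

```python
import math

# qPCR standard curve: fit Ct = slope * log10(copies) + intercept from a
# dilution series of the calibrator plasmid, then invert it to estimate
# the copy number of an unknown sample from its Ct.

def fit_standard_curve(copies, cts):
    x = [math.log10(c) for c in copies]
    mx, my = sum(x) / len(x), sum(cts) / len(cts)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, cts)) /
             sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Ten-fold plasmid dilution series, 10^6 down to 10^1 copies (invented Cts
# lying exactly on a -3.32 slope, i.e. ~100 % efficiency):
copies = [1e6, 1e5, 1e4, 1e3, 1e2, 1e1]
cts    = [15.0, 18.32, 21.64, 24.96, 28.28, 31.60]
slope, intercept = fit_standard_curve(copies, cts)
print(round(copies_from_ct(25.0, slope, intercept)))  # roughly 10^3 copies
```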

  18. A fast, accurate, and reliable reconstruction method of the lumbar spine vertebrae using positional MRI.

    PubMed

    Simons, Craig J; Cobb, Loren; Davidson, Bradley S

    2014-04-01

    In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.

  19. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    MedlinePlus

    Pongdee, MD, FAAAAI. Seasonal allergic rhinitis, known as hay fever, is caused by pollen carried in the air ...

  20. Digital system accurately controls velocity of electromechanical drive

    NASA Technical Reports Server (NTRS)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  1. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U LIII edge at 17.2 keV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  2. Accurate Ab Initio Calculation of the Isotopic Exchange Equilibrium 10B(OH)3 + 11B(OH)4- = 11B(OH)3 + 10B(OH)4- In Aqueous Solution

    NASA Astrophysics Data System (ADS)

    Tossell, J. A.

    2005-12-01

    effects of counterions or of ionic strength and (e) the proper microscopic descriptions of reactants and products. We will also discuss the relationship between bond strength and isotopomer free energy differences. Our goal is to move the calculation of isotopic exchange K values from its present ad hoc basis to a reliable general methodology and to quantitatively connect K values with other known properties.

  3. Analytical Description of Mutational Effects in Competing Asexual Populations

    PubMed Central

    Pinkel, Daniel

    2007-01-01

    The adaptation of a population to a new environment is a result of selection operating on a suite of stochastically occurring mutations. This article presents an analytical approach to understanding the population dynamics during adaptation, specifically addressing a system in which periods of growth are separated by selection in bottlenecks. The analysis derives simple expressions for the average properties of the evolving population, including a quantitative description of progressive narrowing of the range of selection coefficients of the predominant mutant cells and of the proportion of mutant cells as a function of time. A complete statistical description of the bottlenecks is also presented, leading to a description of the stochastic behavior of the population in terms of effective mutation times. The effective mutation times are related to the actual mutation times by calculable probability distributions, similar to the selection coefficients being highly restricted in their probable values. This analytical approach is used to model recently published experimental data from a bacterial coculture experiment, and the results are compared to those of a numerical model published in conjunction with the data. Finally, experimental designs that may improve measurements of fitness distributions are suggested. PMID:17947437
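
The compounding effect of a single selection coefficient across repeated growth periods can be sketched in a few lines of deterministic bookkeeping. This toy model is not taken from the article: the function name, the fixed growth time, and the assumption that bottlenecks preserve mutant fractions on average are all illustrative.

```python
import math

def mutant_fraction(s, cycles, growth_time, f0):
    """Deterministic fraction of a mutant with selection coefficient s
    after repeated growth periods separated by bottlenecks; bottlenecks
    are assumed to preserve fractions on average (their stochastic
    effects, central to the article, are ignored here)."""
    f = f0
    for _ in range(cycles):
        # During growth the mutant expands as e^{(1+s)t}, wild type as e^{t},
        # so each cycle multiplies the mutant/wild-type odds by e^{s t}.
        w_mut = f * math.exp((1.0 + s) * growth_time)
        w_wt = (1.0 - f) * math.exp(growth_time)
        f = w_mut / (w_mut + w_wt)
    return f

# A 5% advantage takes a 1-in-a-million mutant to a few percent
# of the population over 20 growth-bottleneck cycles.
print(mutant_fraction(s=0.05, cycles=20, growth_time=10.0, f0=1e-6))
```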

  4. Time-dependent density-functional description of nuclear dynamics

    NASA Astrophysics Data System (ADS)

    Nakatsukasa, Takashi; Matsuyanagi, Kenichi; Matsuo, Masayuki; Yabana, Kazuhiro

    2016-10-01

    The basic concepts and recent developments in the time-dependent density-functional theory (TDDFT) for describing nuclear dynamics at low energy are presented. The symmetry breaking is inherent in nuclear energy density functionals, which provides a practical description of important correlations at the ground state. Properties of elementary modes of excitation are strongly influenced by the symmetry breaking and can be studied with TDDFT. In particular, a number of recent developments in the linear response calculation have demonstrated their usefulness in the description of collective modes of excitation in nuclei. Unrestricted real-time calculations have also become available in recent years, with new developments for quantitative description of nuclear collision phenomena. There are, however, limitations in the real-time approach; for instance, it cannot describe the many-body quantum tunneling. Thus, the quantum fluctuations associated with slow collective motions are explicitly treated assuming that time evolution of densities is determined by a few collective coordinates and momenta. The concept of collective submanifold is introduced in the phase space associated with the TDDFT and used to quantize the collective dynamics. Selected applications are presented to demonstrate the usefulness and quality of the new approaches. Finally, conceptual differences between nuclear and electronic TDDFT are discussed, with some recent applications to studies of electron dynamics in the linear response and under a strong laser field.

  5. Accurate tracking of high dynamic vehicles with translated GPS

    NASA Astrophysics Data System (ADS)

    Blankshain, Kenneth M.

    The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with 1-sigma velocity measurement error less than 1 ft/sec.

  6. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  7. Binary and nonbinary description of hypointensity for search and retrieval of brain MR images

    NASA Astrophysics Data System (ADS)

    Unay, Devrim; Chen, Xiaojing; Ercil, Aytul; Cetin, Mujdat; Jasinschi, Radu; van Buchem, Marc A.; Ekin, Ahmet

    2009-01-01

    Diagnosis accuracy in the medical field is mainly affected by either a lack of sufficient understanding of some diseases or inter/intra-observer variability of diagnoses. We believe that mining of large medical databases can help improve the current status of disease understanding and decision making. In a previous study based on a binary description of hypointensity in the brain, it was shown that brain iron accumulation shape provides additional information to shape-insensitive features, such as the total brain iron load, that are commonly used in clinics. This paper proposes a novel, nonbinary description of hypointensity in the brain based on principal component analysis. We compare the complementary and redundant information provided by the two descriptions using Kendall's rank correlation coefficient in order to better understand the individual descriptions of iron accumulation in the brain and obtain a more robust and accurate search and retrieval system.
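
Kendall's rank correlation, used above to gauge redundancy between the binary and nonbinary descriptions, has a simple pair-counting definition. The sketch below uses invented scores and omits the tie correction that production implementations (e.g. scipy.stats.kendalltau) apply.

```python
def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) pair count,
    normalized by the total number of pairs (ties not corrected for)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical per-case scores from a binary and a nonbinary descriptor:
# a tau near 1 would indicate largely redundant information.
binary_scores = [0.10, 0.40, 0.35, 0.80, 0.95]
nonbinary_scores = [0.15, 0.30, 0.45, 0.70, 0.90]
print(kendall_tau(binary_scores, nonbinary_scores))  # prints 0.8
```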

  8. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
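
The fit-versus-complexity tradeoff can be illustrated with the Schwarz criterion (BIC), a deliberately cruder stand-in for the article's full Bayesian treatment: BIC penalizes only the parameter count, whereas the authors' criterion also weighs the a priori range of each parameter. The data and the two candidate models below are invented.

```python
import math

def rss_constant(xs, ys):
    """Residual sum of squares for a one-parameter constant model."""
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def rss_linear(xs, ys):
    """Residual sum of squares for a two-parameter least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bic(rss, n, k):
    """Schwarz criterion: lower is better; k is the parameter count."""
    return n * math.log(rss / n) + k * math.log(n)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]  # roughly y = 2x with noise
n = len(xs)
# The linear model wins despite its extra parameter.
print(bic(rss_constant(xs, ys), n, 1), bic(rss_linear(xs, ys), n, 2))
```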

  9. Receptor tyrosine kinase signaling: a view from quantitative proteomics.

    PubMed

    Dengjel, Joern; Kratchmarova, Irina; Blagoev, Blagoy

    2009-10-01

    Growth factor receptor signaling via receptor tyrosine kinases (RTKs) is one of the basic cellular communication principles found in all metazoans. Extracellular signals are transferred via membrane-spanning receptors into the cytoplasm, reversible tyrosine phosphorylation being the hallmark of all RTKs. In recent years, proteomic approaches have yielded detailed descriptions of cellular signaling events. Quantitative proteomics can characterize the exact position and strength of post-translational modifications (PTMs), providing essential information for understanding the molecular basis of signal transduction. Numerous new post-translational modification sites have been identified by quantitative mass-spectrometry-based proteomics. In addition, plentiful new players in signal transduction have been identified, underlining the complexity and the modular architecture of most signaling networks. In this review, we outline the principles of signal transduction via RTKs and highlight some of the new insights obtained from proteomic approaches such as protein microarrays and quantitative mass spectrometry.

  10. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability) and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  11. [Application of the group theory to description of biological objects pseudosymmetry].

    PubMed

    Gelashvili, D B; Chuprunov, E V; Marychev, M O; Somov, N V; Shirokov, A I; Nizhegorodtsev, A A

    2010-01-01

    The application of the group theory to the description of the pseudosymmetry of biological objects is introduced and substantiated by the example of rotatory symmetry of actinomorphic and zygomorphic flowers. Problems of biosymmetrics terminology are considered; point symmetry elements are characterized as applied to the description of flower symmetry; and the central constructs of the group theory are stated. Application of the Curie principle to biological objects is outlined. Algorithms for quantitative assessment of flower pseudosymmetry are given, and flower pseudosymmetry is described in terms of the group theory, including its evolutionary aspect. The conclusion is made that adaptation of the group theory to the description of the symmetry of biological objects (biosymmetrics) is important not only in a fundamental respect but also as a tool for interdisciplinary mutual understanding between biologists, physicists, crystallographers, and other specialists whose shared language is mathematics.

  12. Quantitative Techniques in Volumetric Analysis

    NASA Astrophysics Data System (ADS)

    Zimmerman, John; Jacobsen, Jerrold J.

    1996-12-01

    Quantitative Techniques in Volumetric Analysis is a visual library of techniques used in making volumetric measurements. This 40-minute VHS videotape is designed as a resource for introducing students to proper volumetric methods and procedures. The entire tape, or relevant segments of the tape, can also be used to review procedures used in subsequent experiments that rely on the traditional art of quantitative analysis laboratory practice. The techniques included are: quantitative transfer of a solid with a weighing spoon, with a finger-held weighing bottle, with a paper-strap-held bottle, and with a spatula; examples of common quantitative weighing errors; quantitative transfer of a solid from dish to beaker to volumetric flask and from dish to volumetric flask; use of a volumetric transfer pipet; a complete acid-base titration; and hand technique variations. The conventional view of contemporary quantitative chemical measurement tends to focus on instrumental systems, computers, and robotics. In this view, the analyst is relegated to placing standards and samples on a tray. A robotic arm delivers a sample to the analysis center, while a computer controls the analysis conditions and records the results. In spite of this, it is rare to find an analysis process that does not rely on some aspect of more traditional quantitative analysis techniques, such as careful dilution to the mark of a volumetric flask. Clearly, errors in a classical step will affect the quality of the final analysis. Because of this, it is still important for students to master the key elements of the traditional art of quantitative chemical analysis laboratory practice. Some aspects of chemical analysis, like careful rinsing to ensure quantitative transfer, are often an automated part of an instrumental process that must be understood by the

  13. Quantitative fluorescence tomography using a trimodality system: in vivo validation

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin

    2010-07-01

    A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT/DOT/XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diam fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm.

  14. Statistical genetics and evolution of quantitative traits

    NASA Astrophysics Data System (ADS)

    Neher, Richard A.; Shraiman, Boris I.

    2011-10-01

    The distribution and heritability of many traits depend on numerous loci in the genome. In general, the astronomical number of possible genotypes makes systems with large numbers of loci difficult to describe. Multilocus evolution, however, greatly simplifies in the limit of weak selection and frequent recombination. In this limit, populations rapidly reach quasilinkage equilibrium (QLE), in which the dynamics of the full genotype distribution, including correlations between alleles at different loci, can be parametrized by the allele frequencies. This review provides a simplified exposition of the concept and mathematics of QLE, which is central to the statistical description of genotypes in sexual populations. Key results of quantitative genetics, such as the generalized Fisher’s “fundamental theorem” along with Wright’s adaptive landscape, are shown to emerge within QLE from the dynamics of the genotype distribution. This is followed by a discussion of the circumstances under which QLE is applicable and of what the breakdown of QLE implies for the population structure and the dynamics of selection. Understanding the fundamental aspects of multilocus evolution obtained through simplified models may be helpful in providing conceptual and computational tools to address the challenges arising in the studies of complex quantitative phenotypes of practical interest.
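
Two of the textbook results referred to above can be written compactly; these are standard quantitative-genetics forms in their simplest versions (ignoring mutation and epistatic corrections), not equations reproduced from the review. With allele frequencies $p_i$ parametrizing the genotype distribution under QLE, selection moves each frequency uphill on Wright's adaptive landscape, and the mean fitness $\bar{F}$ grows at a rate set by the additive variance (Fisher's fundamental theorem):

```latex
\dot{p}_i \;=\; p_i\,(1 - p_i)\,\frac{\partial \bar{F}}{\partial p_i},
\qquad
\frac{d\bar{F}}{dt} \;=\; \operatorname{Var}_A(F).
```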

  15. ROM Plus®: accurate point-of-care detection of ruptured fetal membranes

    PubMed Central

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316

  16. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  17. ROM Plus(®): accurate point-of-care detection of ruptured fetal membranes.

    PubMed

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article.

  18. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
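
The CAMM idea can be made concrete with the standard multipole form of the electrostatic potential; this is the generic atomic multipole expansion (written here in atomic units, with $\mathbf{r}_a = \mathbf{r} - \mathbf{R}_a$), not a formula copied from the paper. Each atom $a$ contributes its charge $q_a$, dipole $\boldsymbol{\mu}_a$, quadrupole $\Theta_a$, and so on:

```latex
V(\mathbf{r}) \;\approx\; \sum_{a}\left[
  \frac{q_a}{r_a}
  + \frac{\boldsymbol{\mu}_a \cdot \mathbf{r}_a}{r_a^{3}}
  + \frac{1}{2}\,\frac{\mathbf{r}_a^{\mathsf{T}}\,\Theta_a\,\mathbf{r}_a}{r_a^{5}}
  + \cdots \right]
```

Truncating after the charge term reproduces a point-charge model; the higher terms supply the corrections discussed above.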

  19. Bibliometric analysis of Human Factors (1970-2000): a quantitative description of scientific impact.

    PubMed

    Dee, John D; Cassano-Pinché, Andrea; Vicente, Kim J

    2005-01-01

    Bibliometric analyses use the citation history of scientific articles as data to measure scientific impact. This paper describes a bibliometric analysis of the 1682 papers and 2413 authors published in Human Factors from 1970 to 2000. The results show that Human Factors has substantial relative scientific influence, as measured by impact, immediacy, and half-life, exceeding the influence of comparable journals. Like other scientific disciplines, human factors research is a highly stratified activity. Most authors have published only one paper, and many papers are cited infrequently, if ever. A small number of authors account for a disproportionately large number of the papers published and citations received. However, the degree of stratification is not as extreme as in many other disciplines, possibly reflecting the diversity of the human factors discipline. A consistent trend of more authors per paper parallels a similar trend in other fields and may reflect the increasingly interdisciplinary nature of human factors research and a trend toward addressing human-technology interaction in more complex systems. Ten of the most influential papers from each of the last 3 decades illustrate trends in human factors research. Actual or potential applications of this research include considerations for the publication and distribution policy of Human Factors.

  20. Quantitative description of thermodynamic and kinetic properties of the platelet factor 4/heparin bonds

    NASA Astrophysics Data System (ADS)

    Nguyen, Thi-Huong; Greinacher, Andreas; Delcea, Mihaela

    2015-05-01

    Heparin is the most important antithrombotic drug in hospitals. It binds to the endogenous tetrameric protein platelet factor 4 (PF4) forming PF4/heparin complexes which may cause a severe immune-mediated adverse drug reaction, so-called heparin-induced thrombocytopenia (HIT). Although new heparin drugs have been synthesized to reduce such a risk, detailed bond dynamics of the PF4/heparin complexes have not been clearly understood. In this study, single molecule force spectroscopy (SMFS) is utilized to characterize the interaction of PF4 with heparins of defined length (5-, 6-, 8-, 12-, and 16-mers). Analysis of the force-distance curves shows that PF4/heparin binding strength rises with increasing heparin length. In addition, two binding pathways in the PF4/short heparins (<=8-mers) and three binding pathways in the PF4/long heparins (>=8-mers) are identified. We provide a model for the PF4/heparin complexes in which short heparins bind to one PF4 tetramer, while long heparins bind to two PF4 tetramers. We propose that the interaction between long heparins and PF4s is not only due to charge differences as generally assumed, but also due to hydrophobic interaction between two PF4s which are brought close to each other by long heparin. This complicated interaction induces PF4/heparin complexes more stable than other ligand-receptor interactions. Our results also reveal that the boundary between antigenic and non-antigenic heparins is between 8- and 12-mers. These observations are particularly important to understand processes in which PF4-heparin interactions are involved and to develop new heparin-derived drugs.

  1. Description and Critique of Quantitative Methods for the Allocation of Exploratory Development Resources

    DTIC Science & Technology

    The paper analyzes ten methods for planning the allocation of resources among projects within the exploratory development category of the Defense...research, development, test and evaluation program. Each method is described in terms of a general framework of planning methods and of the factors that...influence the allocation of development resources. A comparative analysis is made of the relative strengths and weaknesses of these methods. The more

  2. CME Article: Perceptions of Acupuncture and Acupressure by Anesthesia Providers: A Quantitative Descriptive Study.

    PubMed

    Faircloth, Amanda C; Dubovoy, Arkadiy; Biddle, Chuck; Dodd-McCue, Diane; Butterworth, John F

    2016-04-01

    Background: Randomized controlled trials show that acupuncture and acupressure support anesthesia management by decreasing anxiety and opioid requirements and by treating post-operative nausea and vomiting. Acupuncture and acupressure have demonstrated clinical usefulness but have not yet diffused into mainstream anesthesia practice. To determine why, this study assessed U.S. anesthesia providers' perceptions of acupuncture and acupressure. Methods: After institutional review board approval, 96 anesthesiology departments stratified by geographic region (Northeast, South, West, and Midwest) and institution type (university medical centers, community hospitals, children's hospitals, and veterans affairs hospitals) were selected for participation in an anonymous, pretested, online survey. The target sample was 1728 providers, of whom 292 (54% anesthesiologists, 44% certified registered nurse anesthetists, 2% anesthesiologist assistants) responded, yielding an overall 17% response rate. Results: Spearman correlation coefficient revealed a statistically significant correlation between acupuncture and geographic region, with the West having the highest predisposition toward acupuncture use (rs = 0.159, p = 0.007). Women are more likely to use acupuncture than men (rs = -0.188; p = 0.002). A strong effect size exists between acupuncture and country of pre-anesthesia training (rs = 1.00; 95% CI = 1.08, 1.16). Some providers have used acupuncture (27%) and acupressure (18%) with positive outcomes; however, the majority have not used these modalities, but would consider using them (acupuncture: 54%, SD = 1.44; acupressure: 60%, SD = 1.32). Seventy-six percent of respondents would like acupuncture education and 74% would like acupressure education (SDs of 0.43 and 0.44, respectively). Conclusions: While most of the U.S. 
anesthesia providers in this survey have not used these modalities, they nevertheless report a favorable perception of acupuncture/acupressure's role as part of an anesthetic. This study adds to the body of acupuncture and acupressure research by providing insight into anesthesia providers' perceptions of these alternative medicine modalities.

  3. Scientometric analysis of physics (1979-2008): A quantitative description of scientific impact

    NASA Astrophysics Data System (ADS)

    Zheng, YanNing; Yuan, JunPeng; Pan, YunTao; Zhao, XiaoYuan

    2011-01-01

    Citations show how researchers build on existing research to advance it further. The citation count is an indication of the influence of specific articles. The importance of citations means that it is valuable to analyze the articles that are cited the most. This research investigates highly cited articles in physics (1979-2008) using citation data from the ISI Web of Science. In this study, 1,544,205 articles were examined. The objective of the analysis was to identify and list the highly productive countries, institutions, authors, and fields in physics. Based on the analysis, it was found that the USA is the world leader in physics, and Japan has maintained the highest growth rate in physics research since 1990. Furthermore, the research focus at Bell Labs and IBM has played an important role in physics. A striking fact is that the five most active authors are all Japanese, but the five most active institutions are all in the USA. In fact, only The University of Tokyo is ranked among the top 11 institutions, and only American authors have single-author articles ranked among the top 19 articles. The highest-impact articles are distributed across 25 subject categories. Physics, Multidisciplinary has 424 articles and is ranked No. 1 in total articles, followed by Physics, Condensed Matter. The study can provide science policy makers with a picture of innovation capability in this field and help them make better decisions. Hopefully, this study will stimulate useful discussion among scientists and research managers about future research directions.

  4. Automated annotation and quantitative description of ultrasound videos of the fetal heart.

    PubMed

    Bridge, Christopher P; Ioannou, Christos; Noble, J Alison

    2017-02-01

    Interpretation of ultrasound videos of the fetal heart is crucial for the antenatal diagnosis of congenital heart disease (CHD). We believe that automated image analysis techniques could make an important contribution towards improving CHD detection rates. However, to our knowledge, no previous work has been done in this area. With this goal in mind, this paper presents a framework for tracking the key variables that describe the content of each frame of freehand 2D ultrasound scanning videos of the healthy fetal heart. This represents an important first step towards developing tools that can assist with CHD detection in abnormal cases. We argue that it is natural to approach this as a sequential Bayesian filtering problem, due to the strong prior model we have of the underlying anatomy, and the ambiguity of the appearance of structures in ultrasound images. We train classification and regression forests to predict the visibility, location and orientation of the fetal heart in the image, and the viewing plane label from each frame. We also develop a novel adaptation of regression forests for circular variables to deal with the prediction of cardiac phase. Using a particle-filtering-based method to combine predictions from multiple video frames, we demonstrate how to filter this information to give a temporally consistent output at real-time speeds. We present results on a challenging dataset gathered in a real-world clinical setting and compare to expert annotations, achieving similar levels of accuracy to the levels of inter- and intra-observer variation.

  5. A Quantitative Description of Suicide Inhibition of Dichloroacetic Acid in Rats and Mice

    SciTech Connect

    Keys, Deborah A.; Schultz, Irv R.; Mahle, Deirdre A.; Fisher, Jeffrey W.

    2004-09-16

    Dichloroacetic acid (DCA), a minor metabolite of trichloroethylene (TCE) and a water disinfection byproduct, remains an important risk assessment issue because of its carcinogenic potency. DCA has been shown to inhibit its own metabolism by irreversibly inactivating glutathione transferase zeta (GSTzeta). To better predict internal dosimetry of DCA, a physiologically based pharmacokinetic (PBPK) model of DCA was developed. Suicide inhibition was described dynamically by varying the rate of maximal GSTzeta-mediated metabolism of DCA (Vmax) over time. Resynthesis (zero-order) and degradation (first-order) of metabolic activity were described. Published iv pharmacokinetic studies in naive rats were used to estimate an initial Vmax value, with Km set to an in vitro determined value. Degradation and resynthesis rates were set to values estimated from a published immunoreactive GSTzeta protein time course. The first-order inhibition rate, kd, was estimated by fitting this same time course. A secondary, linear, non-GSTzeta-mediated metabolic pathway is proposed to fit DCA time courses following treatment with DCA in drinking water. The PBPK model predictions were validated by comparing predicted DCA concentrations to measured concentrations in published studies of rats pretreated with DCA following iv exposure to 0.05 to 20 mg/kg DCA. The same model structure was parameterized to simulate DCA time courses following iv exposure in naive and pretreated mice. Blood and liver concentrations during and after exposure to DCA in drinking water were predicted. Comparisons of PBPK model predictions to measured values were favorable, lending support to the further development of this model for application to DCA or TCE human health risk assessment.
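    The suicide-inhibition dynamics described above (zero-order resynthesis, first-order degradation, and inactivation of GSTzeta activity by DCA) can be sketched as a single ODE integrated with Euler's method. The functional form and every rate constant below are illustrative assumptions, not the paper's fitted values.

```python
def simulate_gstzeta_activity(c_dca, k_syn, k_deg, k_inh, vmax0, dt, t_end):
    """Euler integration of GSTzeta metabolic capacity under suicide inhibition.

    Assumed form:  dVmax/dt = k_syn - k_deg*Vmax - k_inh*Vmax*C
    (zero-order resynthesis, first-order degradation, and inactivation
    proportional to both remaining activity and DCA concentration C).
    c_dca: callable giving the DCA concentration at time t.
    Returns a list of (t, Vmax) samples.
    """
    vmax, t, history = vmax0, 0.0, []
    while t < t_end:
        c = c_dca(t)
        dv = k_syn - k_deg * vmax - k_inh * vmax * c
        vmax += dv * dt
        t += dt
        history.append((t, vmax))
    return history
```

With no DCA present the activity relaxes to the steady state k_syn/k_deg; sustained exposure lowers the steady state to k_syn/(k_deg + k_inh*C), which is the qualitative behavior the model uses to track Vmax over time.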

  6. Quantitative description of the contribution of heterolytic processes to thermolysis of diacyl peroxides

    SciTech Connect

    Stankevich, A.I.

    1988-09-10

    The thermolysis of acyl acetyl peroxides containing alkyl substituents or a bromine atom at the α position of the acyl group takes place with the simultaneous participation of homolytic and heterolytic processes. The contribution from the heterolytic processes increases linearly with decrease in the ionization potential of the corresponding free radical. During the thermolysis of diacyl peroxides in nonpolar solvents the possibility of heterolytic processes appears if the difference between the ionization potential of the radical corresponding to the electron-donating fragment and the electron affinity of the radical corresponding to the electron-withdrawing fragment is less than 480 kJ/mole.

  7. A quantitative description of the sodium current in the rat sympathetic neurone.

    PubMed Central

    Belluzzi, O; Sacchi, O

    1986-01-01

    The somata of rat sympathetic neurones were voltage-clamped in vitro at 27 degrees C using separate intracellular voltage and current micro-electrodes. Na currents were isolated from other current contributions by using: Cd to block the Ca current (ICa) and the related Ca-dependent K current (IK(Ca)), and external tetraethylammonium to suppress the delayed rectifier current (IK(V)). The holding potential was maintained at -50 mV to inactivate the fast transient K current (IA) when the IA contamination was unacceptable. Step depolarizations beyond -30 mV activated a fast, transient inward current carried by Na ions; it was suppressed by tetrodotoxin and was absent in Na-free solution. Once activated, INa declined exponentially to zero with a voltage-dependent time constant. The underlying conductance, gNa, showed a sigmoidal activation between -30 and +10 mV, with half-activation at -21.1 mV and a maximal value (mean gNa) of 4.44 microS per neurone. The steady-state inactivation level, h infinity, varied with membrane potential, ranging from complete inactivation at -30 mV to minimal inactivation at about -90 mV with a midpoint at -56.2 mV. Double-pulse experiments showed that development and removal of inactivation followed a single-exponential time course; tau h was markedly voltage-dependent and ranged from 46 ms at -50 mV to 2.5 ms at -100 mV. Besides the fast inactivation, the Na conductance showed a slow component of inactivation. The steady-state value, s infinity, was maximal at -80 mV and minimal at -40 mV. The removal of slow inactivation is a two-time-constant process, the first with a time constant in the order of hundreds of milliseconds and the second with a time constant of seconds. Slow inactivation onset appeared to be a faster process than its removal. When slow inactivation was fully removed the peak INa increased by a factor of 1.8. INa was well described by assuming it to be proportional to m3h. The temperature dependence of peak INa, tau m and tau h was studied in the temperature range 17-27 degrees C and found similar to that reported for other preparations. The Q10 of these parameters allowed the reconstruction of the INa kinetic properties at 37 degrees C. PMID:2441037
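    The m3h description of INa can be sketched with Boltzmann steady-state curves. The midpoints and maximal conductance below come from the abstract; the slope factors and the Na reversal potential are assumed values for illustration only.

```python
import math

G_NA_MAX = 4.44   # microsiemens per neurone (from the abstract)
E_NA = 50.0       # mV; assumed Na reversal potential
V_HALF_M, K_M = -21.1, 6.0   # activation midpoint from the abstract; slope assumed
V_HALF_H, K_H = -56.2, -7.0  # inactivation midpoint from the abstract; slope assumed

def boltzmann(v, v_half, k):
    """Steady-state gating variable as a Boltzmann function of voltage (mV)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def i_na_peak(v):
    """Hodgkin-Huxley style steady-state Na current, I = g * m^3 * h * (V - E_Na).

    With g in microsiemens and V in mV, the result is in nA;
    negative values are inward currents.
    """
    m = boltzmann(v, V_HALF_M, K_M)
    h = boltzmann(v, V_HALF_H, K_H)
    return G_NA_MAX * m ** 3 * h * (v - E_NA)
```

The negative slope factor for h reproduces the reported behavior: inactivation is essentially complete at -30 mV and largely removed near -90 mV.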

  8. Factors affecting planned return to work after trauma: A prospective descriptive qualitative and quantitative study.

    PubMed

    Folkard, S S; Bloomfield, T D; Page, P R J; Wilson, D; Ricketts, D M; Rogers, B A

    2016-12-01

    The use of patient reported outcome measures (PROMs) in trauma is limited. The aim of this pilot study is to evaluate qualitative responses and factors affecting planned return to work following significant trauma, for which there is currently a poor evidence base. National ethical approval was obtained for routine prospective PROMs data collection, including EQ-5D, between September 2013 and March 2015 for trauma patients admitted to the Sussex Major Trauma Centre (n=92). 84 trauma patients disclosed their intended return to work at discharge. Additional open questions asked about 'things done well' and 'things to be improved'. EQ-5D responses were valued using the time trade-off method. Statistical analysis between multiple variables was completed by ANOVA, and between categorical variables by chi-squared analysis. Only 18 of 68 patients working at admission anticipated returning to work within 14 days post-discharge. The injury severity scores (ISS) of those predicting return to work within two weeks and those predicting return to work in longer than two weeks were 14.17 and 13.59, respectively. Increased physicality of work showed a trend towards poorer return-to-work outcomes, although this was not significant in a chi-squared test between groups predicting return in less than or greater than two weeks (4.621, p=0.2017, ns). No significant difference was demonstrated in the comparative incomes of patients with different estimated return-to-work outcomes (ANOVA r(2)=0.001, p=0.9590, ns). EQ-5D scores were higher in those predicting return to work within two weeks than in those predicting greater than two weeks. Qualitative thematic content analysis of open responses was possible for 66 of 92 respondents. Prominent positive themes were: care, staff, professionalism, and communication. Prominent negative themes were: food, ward response time, and communication. This pilot study highlights the importance of qualitative PROMs analysis in leading patient-driven improvements in trauma care. We provide standard deviations for ISS scores and EQ-5D scores in our general trauma cohort, for use in sample size calculations for further studies analysing factors affecting return to work after trauma.

  9. Approaches to a Quantitative Analytical Description of Low Frequency Sound Absorption in Sea Water,

    DTIC Science & Technology

    1980-09-01

    Liebermann (1948) found that the absorption coefficient, a, was frequency dependent in the range 100-1000 kHz and could be attributed to perturbations due..."Affecting the Attenuation of Low Frequency Sound in Sea Water", MRL Report No. R-782 (1979). 2. Liebermann, R.M., "Origin of Sound Absorption in Water

  10. Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems

    DTIC Science & Technology

    1986-07-25

    DOD-HDBK-178(ER): Table 3-13, Central American Interior Weather Summary Chart; Table 3-14, Mideast Desert Weather Summary Chart; Table 3-15, Eastern Scandinavia Weather Summary Chart (each giving mean values of observations reported over a 10-year period, including wind and cloud data).

  11. Quantitative description of ion transport in Donnan ion exchange membrane systems

    SciTech Connect

    Rush, W.E.; Baker, B.L.

    1980-05-01

    Presented are simplified mass transfer techniques describing the transfer of ions in continuous ion selective membrane systems in which the resistance to ion transport through the membrane is small in relation to the resistance to ion transport in the solution phase. Methods are developed through the application of the transfer unit concept to the Donnan equilibrium. This equilibrium describes the equilibrium ion concentration on either side of an ion selective membrane. Data from one cation-selective system are presented as evidence of the validity of these methods. Further techniques are shown that will allow the determination of ion transport given only equipment parameters and solution diffusivities. Supporting data are shown.
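    As a sketch of the Donnan equilibrium underlying these transfer-unit methods, the following solves the ideal equal-valence condition [A]1/[A]2 = [B]1/[B]2 for a 1:1 exchange of two monovalent cations across a cation-selective membrane by bisection. It is an illustrative toy (ideal solutions, unit activity coefficients), not the paper's procedure.

```python
def donnan_exchange(a1, a2, b1, b2, tol=1e-10):
    """Equilibrium 1:1 exchange of monovalent cations A and B across a
    cation-selective membrane (ideal-solution sketch).

    Donnan condition for equal-valence ions: [A]1/[A]2 == [B]1/[B]2.
    Returns x, the amount of A transferred from side 1 to side 2
    (an equal amount of B moves 2 -> 1, preserving electroneutrality
    since the co-ions cannot cross).
    """
    def f(x):
        # Cross-multiplied Donnan condition; root f(x) = 0 is equilibrium.
        return (a1 - x) * (b2 - x) - (a2 + x) * (b1 + x)
    lo, hi = -min(a2, b1), min(a1, b2)  # bracket keeping concentrations >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the symmetric case (all of A on side 1, all of B on side 2, equal totals), exactly half of each ion exchanges, as expected.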

  12. Quantitative and Descriptive Comparison of Four Acoustic Analysis Systems: Vowel Measurements

    ERIC Educational Resources Information Center

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2014-01-01

    Purpose: This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Method: Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9…

  13. Quantitative microbiological risk assessment.

    PubMed

    Hoornstra, E; Notermans, S

    2001-05-21

    The production of safe food is being increasingly based on the use of risk analysis, and this process is now in use to establish national and international food safety objectives. It is also being used more frequently to guarantee that safety objectives are met and that such guarantees are achieved in a cost-effective manner. One part of the overall risk analysis procedure-risk assessment-is the scientific process in which the hazards and risk factors are identified, and the risk estimate or risk profile is determined. Risk assessment is an especially important tool for governments when food safety objectives have to be developed in the case of 'new' contaminants in known products or known contaminants causing trouble in 'new' products. Risk assessment is also an important approach for food companies (i) during product development, (ii) during (hygienic) process optimalization, and (iii) as an extension (validation) of the more qualitative HACCP-plan. This paper discusses these two different types of risk assessment, and uses probability distribution functions to assess the risks posed by Escherichia coli O157:H7 in each case. Such approaches are essential elements of risk management, as they draw on all available information to derive accurate and realistic estimations of the risk posed. The paper also discusses the potential of scenario-analysis in simulating the impact of different or modified risk factors during the consideration of new or improved control measures.
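    The probability-distribution-based risk estimation described here can be sketched as a Monte Carlo simulation with an exponential dose-response model. The distributions, serving-size range, and the dose-response parameter r below are all hypothetical placeholders, not the paper's E. coli O157:H7 values.

```python
import math
import random

def qmra_monte_carlo(n_sims, r=0.00015, seed=42):
    """Illustrative quantitative microbiological risk assessment sketch.

    Dose per serving is drawn from a lognormal contamination distribution
    times a uniform serving size; illness probability follows the
    exponential dose-response model P(ill | dose) = 1 - exp(-r * dose).
    Returns the mean per-serving risk over n_sims simulated servings.
    """
    rng = random.Random(seed)
    risks = []
    for _ in range(n_sims):
        cfu_per_gram = rng.lognormvariate(mu=0.0, sigma=1.5)  # contamination level
        serving_g = rng.uniform(50, 200)                      # serving size, grams
        dose = cfu_per_gram * serving_g
        risks.append(1.0 - math.exp(-r * dose))
    return sum(risks) / n_sims
```

Scenario analysis, as discussed in the abstract, amounts to rerunning the simulation with modified input distributions (e.g. lower contamination after a new control measure) and comparing the resulting risk estimates.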

  14. [Quantitative determination of niphensamide by high performance liquid chromatography (HPLC)].

    PubMed

    Long, C; Chen, S; Shi, T

    1998-01-01

    An HPLC method for the quantitative determination of Niphensamide in pesticide powder was developed. Column:Micropak-CH 5 microns (300 mm x 4.0 mm i.d.), mobile phase: CH3OH-H2O(1:1), detector: UV 254 nm, flow rate: 0.7 mL/min, column temperature: 25 degrees C. Under the above conditions, Niphensamide and other components were separated from each other. The method is simple, rapid, sensitive and accurate.

  15. LC-MS systems for quantitative bioanalysis.

    PubMed

    van Dongen, William D; Niessen, Wilfried M A

    2012-10-01

    LC-MS has become the method-of-choice in small-molecule drug bioanalysis (molecular mass <800 Da) and is also increasingly being applied as an alternative to ligand-binding assays for the bioanalytical determination of biopharmaceuticals. Triple quadrupole MS is the established bioanalytical technique due to its unpreceded selectivity and sensitivity, but high-resolution accurate-mass MS is recently gaining ground due to its ability to provide simultaneous quantitative and qualitative analysis of drugs and their metabolites. This article discusses current trends in the field of bioanalytical LC-MS (until September 2012), and provides an overview of currently available commercial triple quadrupole MS and high-resolution LC-MS instruments as applied for the bioanalysis of small-molecule and biopharmaceutical drugs.

  16. Quantitative DNA Fiber Mapping

    SciTech Connect

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound with at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, their relative distance can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.

  17. Quantitative environmental risk analysis

    SciTech Connect

    Klovning, J.; Nilsen, E.F.

    1995-12-31

    According to regulations relating to the implementation and use of risk analysis in petroleum activities issued by the Norwegian Petroleum Directorate, it is mandatory for an operator on the Norwegian Continental Shelf to establish acceptance criteria for environmental risk in its activities and to carry out environmental risk analysis. This paper presents a "new" method for environmental risk analysis developed by the company. The objective has been to assist the company to meet rules and regulations and to assess and describe the environmental risk in a systematic manner. In the environmental risk analysis the most sensitive biological resource in the affected area is used to assess the environmental damage. The analytical method is based on the methodology for quantitative risk analysis related to loss of life. In addition it incorporates the effect of seasonal fluctuations in the environmental risk evaluations. The paper describes the function of the main analytical sequences, exemplified through an analysis of environmental risk related to exploration drilling in an environmentally sensitive area on the Norwegian Continental Shelf.

  18. Quantitative Electron Nanodiffraction.

    SciTech Connect

    Spence, John

    2015-01-30

    This final report summarizes progress under this award for the final reporting period 2002-2013 in our development of quantitative electron nanodiffraction for materials problems, especially atomistic processes in semiconductors and electronic oxides such as the new artificial oxide multilayers, where our microdiffraction is complemented with energy-loss spectroscopy (ELNES) and aberration-corrected STEM imaging (9). The method has also been used to map out the chemical bonds in the important GaN semiconductor (1) used for solid state lighting, and to understand the effects of stacking sequence variations and interfaces in digital oxide superlattices (8). Other projects include the development of a laser-beam Zernike phase plate for cryo-electron microscopy (5) (based on the Kapitza-Dirac effect), work on reconstruction of molecular images using the scattering from many identical molecules lying in random orientations (4), a review article on space-group determination for the International Tables for Crystallography (10), the observation of energy-loss spectra with millivolt energy resolution and sub-nanometer spatial resolution from individual point defects in an alkali halide, a review article for the centenary of X-ray diffraction (17), and the development of a new method of electron-beam lithography (12). We briefly summarize here the work on GaN, on oxide superlattice ELNES, and on lithography by STEM.

  19. An accurate density functional theory for the vapor-liquid interface of associating chain molecules based on the statistical associating fluid theory for potentials of variable range

    NASA Astrophysics Data System (ADS)

    Gloor, Guy J.; Jackson, George; Blas, Felipe J.; del Río, Elvira Martín; de Miguel, Enrique

    2004-12-01

    branched chain formation, and network structures) are examined separately. The surface tension of the associating fluid is found to be bounded between the nonassociating and fully associated limits (both of which correspond to equivalent nonassociating systems). The temperature dependence of the surface tension is found to depend strongly on the balance between the strength and range of the association, and on the particular association scheme. In the case of a system with a strong but very localized association interaction, the surface tension exhibits the characteristic "s shaped" behavior with temperature observed in fluids such as water and alkanols. The various types of curves observed in real substances can be reproduced by the theory. It is very gratifying that a DFT based on SAFT-VR free energy can provide an accurate quantitative description of the surface tension of both the model and experimental systems.

  20. Probabilistic description of traffic flow

    NASA Astrophysics Data System (ADS)

    Mahnke, R.; Kaupužs, J.; Lubashevsky, I.

    2005-03-01

    A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
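    The one-step master equation for growth and shrinkage of a car cluster can be simulated directly with the Gillespie algorithm. The constant attachment/detachment rates below are an illustrative simplification; the paper's physically motivated rates depend on cluster size and traffic density.

```python
import random

def simulate_cluster(w_plus, w_minus, t_end, n0=0, seed=1):
    """Gillespie simulation of a one-step master equation for the size n
    of a single car cluster (jam).

    Illustrative rates: constant attachment rate w_plus, constant
    detachment rate w_minus while the cluster is nonempty (a reflecting
    boundary at n = 0). Returns the sampled trajectory [(t, n), ...].
    """
    rng = random.Random(seed)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_end:
        total = w_plus + (w_minus if n > 0 else 0.0)
        t += rng.expovariate(total)        # exponential waiting time to next event
        if rng.random() < w_plus / total:
            n += 1                          # a car attaches to the jam
        elif n > 0:
            n -= 1                          # the leading car escapes the jam
        path.append((t, n))
    return path
```

With w_plus > w_minus the cluster grows on average (the congested, supersaturated regime); with w_plus < w_minus it fluctuates near zero (free flow), mirroring the nucleation analogy drawn in the abstract.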

  1. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
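    The translation step can be sketched as follows. The toy register-map schema and the `xml_to_c_header` helper are hypothetical stand-ins for the tool's actual XML schema and generators; the idea is the same: one XML description drives the constants emitted for every target language.

```python
import xml.etree.ElementTree as ET

def xml_to_c_header(xml_text):
    """Translate a toy register-map XML description into C #define lines.

    Assumed schema: <interface> containing <register name offset> elements,
    each with <field name lsb width> children. Emits an offset constant per
    register plus mask/shift constants per field.
    """
    root = ET.fromstring(xml_text)
    lines = []
    for reg in root.findall("register"):
        name = reg.get("name").upper()
        lines.append(f"#define {name}_OFFSET 0x{int(reg.get('offset'), 0):04X}")
        for field in reg.findall("field"):
            fname = field.get("name").upper()
            lsb, width = int(field.get("lsb")), int(field.get("width"))
            mask = ((1 << width) - 1) << lsb  # contiguous bit mask for the field
            lines.append(f"#define {name}_{fname}_MASK 0x{mask:08X}")
            lines.append(f"#define {name}_{fname}_SHIFT {lsb}")
    return "\n".join(lines)
```

Analogous emitters for Verilog and VHDL would walk the same parsed tree, which is what keeps both sides of the interface consistent.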

  2. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
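    Preston's equation, which the authors use together with contact mechanics to predict removal, gives a material removal rate proportional to pressure and relative velocity. The link from the measured friction coefficient to the Preston coefficient below is an assumed proportionality for illustration, not the paper's calibrated model.

```python
def preston_removal_rate(c_p, pressure, velocity):
    """Preston's equation: removal rate dh/dt = C_p * P * v,
    with C_p the Preston coefficient, P contact pressure, v relative speed."""
    return c_p * pressure * velocity

def preston_coefficient_from_friction(mu, k_wear):
    """Hypothetical link between the measured dynamic friction coefficient
    and the Preston coefficient, C_p = k_wear * mu (assumed proportionality,
    so that a worn belt with lower mu removes material more slowly)."""
    return k_wear * mu
```

Under this assumption, tracking mu over the belt's lifecycle directly rescales the predicted removal function, which is what allows fewer polishing iterations to be planned.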

  3. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060

  4. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs.
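    A minimal sketch of grid-ID-based cloaking: grow a square of grid cells around the querying user's cell until at least K reported users are covered. This illustrates only the core idea of cloaking on reported grid IDs rather than exact coordinates; the paper's two algorithms are more elaborate.

```python
from collections import Counter

def cloak(user_cell, reported_cells, k, grid_size):
    """Return a square region of grid cells around user_cell that covers
    at least k reported users (K-anonymity over grid IDs, not coordinates).

    user_cell: (x, y) grid cell of the querying user.
    reported_cells: list of (x, y) grid cells reported by all users.
    """
    counts = Counter(reported_cells)
    x, y = user_cell
    r = 0
    while True:
        # Candidate anonymous spatial region: a (2r+1)-cell square, clipped
        # to the grid boundaries.
        cells = [(i, j)
                 for i in range(max(0, x - r), min(grid_size, x + r + 1))
                 for j in range(max(0, y - r), min(grid_size, y + r + 1))]
        if sum(counts[c] for c in cells) >= k:
            return cells
        if r > grid_size:
            return cells  # k cannot be met; return the whole reachable region
        r += 1
```

Because only grid-cell IDs enter the computation, no party ever learns a user's exact coordinate, which is the nonexposure property the abstract emphasizes.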

  5. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    PubMed

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered backprojection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.

  6. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures

    PubMed Central

    Mastanduno, Michael A.; Gambhir, Sanjiv S.

    2016-01-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered backprojection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies. PMID:27867695

  7. Integrating Quantitative Knowledge into a Qualitative Gene Regulatory Network

    PubMed Central

    Bourdon, Jérémie; Eveillard, Damien; Siegel, Anne

    2011-01-01

    Despite recent improvements in molecular techniques, biological knowledge remains incomplete. Any theorizing about living systems is therefore necessarily based on the use of heterogeneous and partial information. Much current research has focused successfully on the qualitative behaviors of macromolecular networks. Nonetheless, it is not capable of taking into account available quantitative information such as time-series protein concentration variations. The present work proposes a probabilistic modeling framework that integrates both kinds of information. Average case analysis methods are used in combination with Markov chains to link qualitative information about transcriptional regulations to quantitative information about protein concentrations. The approach is illustrated by modeling the carbon starvation response in Escherichia coli. It accurately predicts the quantitative time-series evolution of several protein concentrations using only knowledge of discrete gene interactions and a small number of quantitative observations on a single protein concentration. From this, the modeling technique also derives a ranking of interactions with respect to their importance during the experiment considered. Such a classification is confirmed by the literature. Therefore, our method is principally novel in that it allows (i) a hybrid model that integrates both qualitative discrete model and quantities to be built, even using a small amount of quantitative information, (ii) new quantitative predictions to be derived, (iii) the robustness and relevance of interactions with respect to phenotypic criteria to be precisely quantified, and (iv) the key features of the model to be extracted that can be used as a guidance to design future experiments. PMID:21935350

  8. Increasing Literacy in Quantitative Methods: The Key to the Future of Canadian Psychology.

    PubMed

    Counsell, Alyssa; Cribbie, Robert A; Harlow, Lisa L

    2016-08-01

    Quantitative methods (QM) dominate empirical research in psychology. Unfortunately most researchers in psychology receive inadequate training in QM. This creates a challenge for researchers who require advanced statistical methods to appropriately analyze their data. Many of the recent concerns about research quality, replicability, and reporting practices are directly tied to the problematic use of QM. As such, improving quantitative literacy in psychology is an important step towards eliminating these concerns. The current paper will include two main sections that discuss quantitative challenges and opportunities. The first section discusses training and resources for students and presents descriptive results on the number of quantitative courses required and available to graduate students in Canadian psychology departments. In the second section, we discuss ways of improving quantitative literacy for faculty, researchers, and clinicians. This includes a strong focus on the importance of collaboration. The paper concludes with practical recommendations for improving quantitative skills and literacy for students and researchers in Canada.

  9. 7 CFR 621.20 - Description.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...

  10. 7 CFR 621.20 - Description.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...

  11. 7 CFR 621.20 - Description.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...

  12. 7 CFR 621.20 - Description.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...

  13. 7 CFR 621.20 - Description.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...

  14. Quantitative Spectroscopy of Deneb

    NASA Astrophysics Data System (ADS)

    Schiller, Florian; Przybilla, N.

    We use the visually brightest A-type supergiant Deneb (A2 Ia) as a benchmark for testing a spectroscopic analysis technique developed for quantitative studies of BA-type supergiants. Our NLTE spectrum synthesis technique allows us to derive stellar parameters and elemental abundances with unprecedented accuracy. The study is based on a high-resolution and high-S/N spectrum obtained with the Echelle spectrograph FOCES on the Calar Alto 2.2 m telescope. Practically all inconsistencies reported in earlier studies are resolved. A self-consistent view of Deneb is thus obtained, allowing us to discuss its evolutionary state in detail by comparison with the most recent generation of evolution models for massive stars. The basic atmospheric parameters Teff = 8525 ± 75 K and log g = 1.10 ± 0.05 dex (cgs) and the distance imply the following fundamental parameters for Deneb: M_spec = 17 ± 3 M⊙, L = 1.77 ± 0.29 · 10^5 L⊙ and R = 192 ± 16 R⊙. The derived He and CNO abundances indicate mixing with nuclear-processed matter. The high N/C ratio of 4.64 ± 1.39 and N/O ratio of 0.88 ± 0.07 (mass fractions) could in principle be explained by evolutionary models with initially very rapid rotation. A mass of ˜22 M⊙ is implied for the progenitor on the zero-age main sequence, i.e. it was a late O-type star. Significant mass loss has occurred, probably enhanced by pronounced centrifugal forces. The observational constraints favour a scenario for the evolution of Deneb in which the effects of rotational mixing may be amplified by an interaction with a magnetic field. Analogous analyses of such highly luminous BA-type supergiants will allow for precision studies of different galaxies in the Local Group and beyond.
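    The quoted fundamental parameters can be cross-checked directly: by the Stefan-Boltzmann law the quoted radius and effective temperature should reproduce the quoted luminosity (the solar effective temperature here is taken as the IAU nominal value, 5772 K, which is an assumption of this check):

    ```python
    # Consistency check of the Deneb parameters quoted above:
    # L / L_sun = (R / R_sun)**2 * (Teff / Teff_sun)**4
    T_eff, T_sun = 8525.0, 5772.0   # K
    R = 192.0                        # in solar radii
    L = R**2 * (T_eff / T_sun)**4
    print(f"L = {L:.3g} L_sun")      # ~1.75e5, inside the quoted 1.77 +/- 0.29e5
    ```

    The recovered value agrees with the quoted luminosity well within its error bar, confirming the internal consistency of the published parameter set.
    
    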

  15. Quantitative Luminescence Imaging System

    SciTech Connect

    Batishko, C.R.; Stahl, K.A.; Fecht, B.A.

    1992-12-31

    The goal of the MEASUREMENT OF CHEMILUMINESCENCE project is to develop and deliver a suite of imaging radiometric instruments for measuring spatial distributions of chemiluminescence. Envisioned deliverables include instruments working at the microscopic, macroscopic, and life-sized scales. Both laboratory and field portable instruments are envisioned. The project also includes development of phantoms as enclosures for the diazoluminomelanin (DALM) chemiluminescent chemistry. A suite of either phantoms in a variety of typical poses, or phantoms that could be adjusted to a variety of poses, is envisioned. These are to include small mammals (rats), mid-sized mammals (monkeys), and human body parts. A complete human phantom that can be posed is a long-term goal of the development. Taken together, the chemistry and instrumentation provide a means for imaging rf dosimetry based on chemiluminescence induced by the heat resulting from rf energy absorption. The first delivered instrument, the Quantitative Luminescence Imaging System (QLIS), resulted in a patent, and an R&D Magazine 1991 R&D 100 award, recognizing it as one of the 100 most significant technological developments of 1991. The current status of the project is that three systems have been delivered, several related studies have been conducted, two preliminary human hand phantoms have been delivered, system upgrades have been implemented, and calibrations have been maintained. Current development includes sensitivity improvements to the microscope-based system; extension of the large-scale (potentially life-sized targets) system to field portable applications; extension of the 2-D large-scale system to 3-D measurement; imminent delivery of a more refined human hand phantom and a rat phantom; rf, thermal and imaging subsystem integration; and continued calibration and upgrade support.

  16. Quantitative luminescence imaging system

    NASA Astrophysics Data System (ADS)

    Batishko, C. R.; Stahl, K. A.; Fecht, B. A.

    The goal of the Measurement of Chemiluminescence project is to develop and deliver a suite of imaging radiometric instruments for measuring spatial distributions of chemiluminescence. Envisioned deliverables include instruments working at the microscopic, macroscopic, and life-sized scales. Both laboratory and field portable instruments are envisioned. The project also includes development of phantoms as enclosures for the diazoluminomelanin (DALM) chemiluminescent chemistry. A suite of either phantoms in a variety of typical poses, or phantoms that could be adjusted to a variety of poses, is envisioned. These are to include small mammals (rats), mid-sized mammals (monkeys), and human body parts. A complete human phantom that can be posed is a long-term goal of the development. Taken together, the chemistry and instrumentation provide a means for imaging rf dosimetry based on chemiluminescence induced by the heat resulting from rf energy absorption. The first delivered instrument, the Quantitative Luminescence Imaging System (QLIS), resulted in a patent, and an R&D Magazine 1991 R&D 100 award, recognizing it as one of the 100 most significant technological developments of 1991. The current status of the project is that three systems have been delivered, several related studies have been conducted, two preliminary human hand phantoms have been delivered, system upgrades have been implemented, and calibrations have been maintained. Current development includes sensitivity improvements to the microscope-based system; extension of the large-scale (potentially life-sized targets) system to field portable applications; extension of the 2-D large-scale system to 3-D measurement; imminent delivery of a more refined human hand phantom and a rat phantom; rf, thermal and imaging subsystem integration; and continued calibration and upgrade support.

  17. Quantitative determination of ribosome nascent chain stability

    PubMed Central

    Samelson, Avi J.; Jensen, Madeleine K.; Soto, Randy A.; Cate, Jamie H. D.; Marqusee, Susan

    2016-01-01

    Accurate protein folding is essential for proper cellular and organismal function. In the cell, protein folding is carefully regulated; changes in folding homeostasis (proteostasis) can disrupt many cellular processes and have been implicated in various neurodegenerative diseases and other pathologies. For many proteins, the initial folding process begins during translation while the protein is still tethered to the ribosome; however, most biophysical studies of a protein’s energy landscape are carried out in isolation under idealized, dilute conditions and may not accurately report on the energy landscape in vivo. Thus, the energy landscape of ribosome nascent chains and the effect of the tethered ribosome on nascent chain folding remain unclear. Here we have developed a general assay for quantitatively measuring the folding stability of ribosome nascent chains, and find that the ribosome exerts a destabilizing effect on the polypeptide chain. This destabilization decreases as a function of the distance away from the peptidyl transferase center. Thus, the ribosome may add an additional layer of robustness to the protein-folding process by avoiding the formation of stable partially folded states before the protein has completely emerged from the ribosome. PMID:27821780

  18. Quantitative assessment of growth plate activity

    SciTech Connect

    Harcke, H.T.; Macy, N.J.; Mandell, G.A.; MacEwen, G.D.

    1984-01-01

    In the immature skeleton the physis, or growth plate, is the area of bone least able to withstand external forces and is therefore prone to trauma. Such trauma often leads to premature closure of the plate and results in limb shortening and/or angular deformity (varus or valgus). Active localization of bone-seeking tracers in the physis makes bone scintigraphy an excellent method for assessing growth plate physiology. To be most effective, however, physeal activity should be quantified so that serial evaluations are accurate and comparable. The authors have developed a quantitative method for assessing physeal activity and have applied it to the hip and knee. Using computer-acquired pinhole images of the abnormal and contralateral normal joints, ten regions of interest are placed at key locations around each joint and comparative ratios are generated to form a growth plate profile. The ratios compare segmental physeal activity to total growth plate activity on both ipsilateral and contralateral sides and to adjacent bone. In 25 patients, ages 2 to 15 years, with angular deformities of the legs secondary to trauma, Blount's disease, and Perthes disease, this technique is able to differentiate abnormal segmental physeal activity. This is important since plate closure does not usually occur uniformly across the physis. The technique may permit the use of scintigraphy in the prediction of early closure through the quantitative analysis of serial studies.
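    The ratio-based growth plate profile can be sketched as follows; the three-segment split and the ROI counts are hypothetical, for illustration only (the actual protocol uses ten ROIs per joint):

    ```python
    # Sketch of the profile computation: segmental physeal counts are
    # expressed (a) as fractions of total plate activity on the same side
    # and (b) as ratios against the contralateral normal side.
    def plate_profile(segments_abn, segments_norm):
        total_abn = sum(segments_abn)
        seg_to_total = [s / total_abn for s in segments_abn]
        side_to_side = [a / n for a, n in zip(segments_abn, segments_norm)]
        return seg_to_total, side_to_side

    # Hypothetical medial/central/lateral counts around a knee physis;
    # the suppressed medial segment stands out in both ratio sets.
    seg_ratio, lr_ratio = plate_profile([120, 300, 310], [250, 290, 300])
    ```

    A segment whose side-to-side ratio falls well below 1 flags focally reduced physeal activity even when total plate uptake looks normal, which is the point of profiling rather than using a single whole-plate count.
    
    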

  19. Quantitative tomographic measurements of opaque multiphase flows

    SciTech Connect

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN; O'HERN,TIMOTHY J.; CECCIO,STEVEN L.

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.
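    The conversion from conductivity data to phase volume fractions can be illustrated with a classical Maxwell-type mixture relation; this form (which assumes a nonconducting dispersed phase, e.g. gas bubbles in liquid) is a standard choice and not necessarily the modified formula the authors adopted:

    ```python
    def maxwell_volume_fraction(sigma_mix, sigma_liq):
        """Dispersed-phase volume fraction from mixture conductivity,
        via Maxwell's relation for a nonconducting dispersed phase:
        phi = 2 * (1 - r) / (2 + r), with r = sigma_mix / sigma_liq."""
        r = sigma_mix / sigma_liq
        return 2.0 * (1.0 - r) / (2.0 + r)

    # A 10% drop in measured mixture conductivity corresponds to roughly
    # 7% gas by volume under this model.
    phi = maxwell_volume_fraction(0.9, 1.0)
    ```

    EIT reconstructs a conductivity field over the column cross-section; applying a relation like this pointwise then yields the radial volume fraction profiles that were validated against GDT.
    
    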

  20. Fully automated quantitative cephalometry using convolutional neural networks.

    PubMed

    Arık, Sercan Ö; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for test set). We demonstrate that CNNs, which merely input raw image patches, are promising for accurate quantitative cephalometry.
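    The core of the heatmap step described above — turning a per-landmark probability map into a coordinate estimate — can be sketched as follows; the map size, the synthetic Gaussian blob, and the expectation-based readout (standing in for the paper's shape-model combination, which is omitted) are illustrative assumptions:

    ```python
    import numpy as np

    def landmark_from_heatmap(heatmap):
        """Expected (row, col) position under a normalized probability map,
        i.e. the probability-weighted mean of all pixel coordinates."""
        p = heatmap / heatmap.sum()
        rows, cols = np.indices(p.shape)
        return (rows * p).sum(), (cols * p).sum()

    # Synthetic CNN output: a Gaussian blob peaked at (30, 40) on a 64x64 map.
    r, c = np.indices((64, 64))
    hm = np.exp(-((r - 30)**2 + (c - 40)**2) / 20.0)
    y, x = landmark_from_heatmap(hm)
    ```

    Reading out the expectation rather than the raw argmax gives sub-pixel landmark coordinates, which matters when detection accuracy is judged against a 2-mm clinical tolerance.
    
    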