Science.gov

Sample records for accurate quantitative description

  1. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  2. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and by biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
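
    One abundance statistic with a clear population-level interpretation, of the kind this abstract calls for, is a length-normalized relative abundance. As a minimal sketch (notation ours, not the authors'), with c_i reads mapped to genome i of length L_i:

        \[
        \hat{p}_i \;=\; \frac{c_i / L_i}{\sum_j c_j / L_j}
        \]

    Dividing counts by genome length before normalizing removes the bias toward long genomes; the authors' point is that many published statistics skip such corrections and therefore estimate no meaningful parameter of the community.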

  3. Accurate description of calcium solvation in concentrated aqueous solutions.

    PubMed

    Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel

    2014-07-17

    Calcium is one of the most biologically important ions; however, its accurate description by classical molecular dynamics simulations is complicated by the strong electrostatic and polarization interactions with its surroundings that arise from its divalent nature. Here, we explore the recently suggested approach of effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.
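
    The charge-rescaling approach referred to here is known in the literature as the electronic continuum correction (ECC): embedding nonpolarizable ions in a continuum with the electronic permittivity of water is, as far as interatomic forces are concerned, equivalent to scaling the ionic charges. Schematically, taking the electronic permittivity of water as \varepsilon_{el} \approx 1.78:

        \[
        q_{\mathrm{eff}} \;=\; \frac{q}{\sqrt{\varepsilon_{el}}} \;\approx\; \frac{q}{\sqrt{1.78}} \;\approx\; 0.75\,q,
        \]

    so a Ca2+ dication carries an effective charge of roughly +1.5 e in such simulations.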

  4. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  5. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using ground-truth samples comprising known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground truth for these samples.
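
    The per-spot quantity that this preprocessing ultimately feeds is a background-corrected red/green intensity ratio. A minimal sketch of that arithmetic (variable names ours; this is not the authors' software):

        # Background-corrected red/green expression ratio for one microarray spot.
        def spot_ratio(red_fg, red_bg, green_fg, green_bg,
                       t_red=1.0, t_green=1.0):
            """Subtract local background, normalize by integration time, and ratio."""
            red = max(red_fg - red_bg, 0.0) / t_red
            green = max(green_fg - green_bg, 0.0) / t_green
            return red / green if green > 0 else float("nan")

    Alignment and color-crosstalk compensation would precede this step; dark-field subtraction is folded into the background terms here.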

  6. Challenges in accurate quantitation of lysophosphatidic acids in human biofluids

    PubMed Central

    Onorato, Joelle M.; Shipkova, Petia; Minnich, Anne; Aubry, Anne-Françoise; Easter, John; Tymiak, Adrienne

    2014-01-01

    Lysophosphatidic acids (LPAs) are biologically active signaling molecules involved in the regulation of many cellular processes and have been implicated as potential mediators of fibroblast recruitment to the pulmonary airspace, pointing to possible involvement of LPA in the pathology of pulmonary fibrosis. LPAs have been measured in various biological matrices and many challenges involved with their analyses have been documented. However, little published information is available describing LPA levels in human bronchoalveolar lavage fluid (BALF). We therefore conducted detailed investigations into the effects of extensive sample handling and sample preparation conditions on LPA levels in human BALF. Further, targeted lipid profiling of human BALF and plasma identified the most abundant lysophospholipids likely to interfere with LPA measurements. We present the findings from these investigations, highlighting the importance of well-controlled sample handling for the accurate quantitation of LPA. Further, we show that chromatographic separation of individual LPA species from their corresponding lysophospholipid species is critical to avoid reporting artificially elevated levels. The optimized sample preparation and LC/MS/MS method was qualified using a stable isotope-labeled LPA as a surrogate calibrant and used to determine LPA levels in human BALF and plasma from a Phase 0 clinical study comparing idiopathic pulmonary fibrosis patients to healthy controls. PMID:24872406

  7. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. A standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci in practice, at least if permutation testing is used to determine significance. Some more elaborate global optimization approaches, for example DIRECT, have previously been applied to QTL search problems. Dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process. Heuristic termination criteria can thus be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results showing that for simultaneous mapping of three QTLs using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTLs. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
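
    The pruning rule at the heart of such schemes is easy to state: if the objective is Lipschitz with a known constant, a search box whose best achievable value still falls short of the current optimum can never contain the answer. A toy illustration of this rule for maximization (ours, not the PruneDIRECT implementation):

        # A box [lo, hi] with center value f_center cannot contain a point
        # better than f_center + L * (hi - lo) / 2 when f is L-Lipschitz.
        def can_prune(f_center, lo, hi, lipschitz_const, best_so_far):
            radius = (hi - lo) / 2.0
            upper_bound = f_center + lipschitz_const * radius
            return upper_bound < best_so_far  # box can never beat the incumbent

    Because the bound is computable rather than heuristic, a pruned box is provably excluded, which is what makes the scan equivalent to an exhaustive search.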

  8. A Quantitative Description of FBI Public Relations.

    ERIC Educational Resources Information Center

    Gibson, Dirk C.

    1997-01-01

    States that the Federal Bureau of Investigation (FBI) had the most successful media relations program of all government agencies from the 1930s to the 1980s. Uses quantitative analysis to show why those media efforts were successful. Identifies themes that typified the verbal component of FBI publicity and the broad spectrum of mass communication…

  9. Models in biology: 'accurate descriptions of our pathetic thinking'.

    PubMed

    Gunawardena, Jeremy

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as 'predictive', in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  10. Active contour approach for accurate quantitative airway analysis

    NASA Astrophysics Data System (ADS)

    Odry, Benjamin L.; Kiraly, Atilla P.; Slabaugh, Greg G.; Novak, Carol L.; Naidich, David P.; Lerallut, Jean-Francois

    2008-03-01

    Chronic airway disease causes structural changes in the lungs, including peribronchial thickening and airway dilatation. Multi-detector computed tomography (CT) yields detailed near-isotropic images of the lungs, and thus the potential to obtain quantitative measurements of lumen diameter and airway wall thickness. Such measurements would allow standardized assessment and would help physicians diagnose and locate airway abnormalities, adapt treatment, and monitor progress over time. However, due to the sheer number of airways per patient, systematic analysis is infeasible in routine clinical practice without automation. We have developed an automated and real-time method based on active contours to estimate both airway lumen and wall dimensions; the method does not require manual contour initialization but only a starting point on the targeted airway. While the lumen contour segmentation is purely region-based, the estimation of the outer diameter considers the inner wall segmentation as well as local intensity variation, in order to anticipate the presence of nearby arteries and exclude them. These properties make the method more robust than the Full-Width Half Maximum (FWHM) approach. Results are demonstrated on a phantom dataset with known dimensions and on a human dataset where the automated measurements are compared against two human operators. The average error on the phantom measurements was 0.10 mm and 0.14 mm for inner and outer diameters, showing sub-voxel accuracy. Similarly, the mean variation from the average manual measurement was 0.14 mm and 0.18 mm for inner and outer diameters, respectively.
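
    For context, the FWHM baseline mentioned above measures the wall from a 1-D intensity profile cast from the lumen center outward, taking the wall borders where intensity crosses half of its peak. A simplified sketch (ours, not the authors' code), which also shows why a bright adjacent artery biases the outer border:

        import numpy as np

        def fwhm_wall_width(profile, spacing_mm):
            """Wall width from one ray's intensity profile (lumen -> parenchyma)."""
            profile = np.asarray(profile, dtype=float)
            peak = profile.argmax()                       # brightest point = wall center
            baseline = min(profile[0], profile[-1])
            half = baseline + (profile[peak] - baseline) / 2.0
            above = np.nonzero(profile >= half)[0]        # samples above half maximum
            if above.size < 2:
                return float("nan")
            return (above[-1] - above[0]) * spacing_mm    # a fused artery widens this span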

  11. From information theory to quantitative description of steric effects.

    PubMed

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply information theory descriptors to the electronic structure theory of various systems. In the present study, information theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking the experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and theoretical values of steric energies calculated from information theory functionals. Comparing the results obtained from the information theoretic quantities in the two representations, electron density and shape function, the Shannon entropy performs best for this purpose. The usefulness of considering the contributions of functional-group steric energies and geometries, on the one hand, and of dissecting the effects of both global and local information measures simultaneously, on the other, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and experimental scales. Overall, these findings show that information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory toward helping theoreticians and experimentalists interpret different problems in real systems.
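
    Two of the quantities named above have compact standard definitions in terms of the electron density \rho(\mathbf{r}) (the shape function is \sigma = \rho/N for an N-electron system):

        \[
        S \;=\; -\int \rho(\mathbf{r})\,\ln \rho(\mathbf{r})\,d\mathbf{r},
        \qquad
        I \;=\; \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\,d\mathbf{r},
        \]

    where S is the Shannon entropy (a global, delocalization-sensitive measure) and I the Fisher information (a local, gradient-sensitive one); the contrast between the two is what the global-versus-local analysis in the abstract exploits.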

  12. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads

    PubMed Central

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-01-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm that uses the whole-read information for mapping; high sensitivity and low ambiguity are achieved by using short, non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets. PMID:22379138
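
    The sensitivity argument for short, non-overlapping seeds is the pigeonhole principle: a read cut into k disjoint seeds must contain at least one exact-matching seed whenever the true alignment has fewer than k mismatches. A minimal sketch of candidate generation built on that principle (illustrative only, not the FANSe code):

        def seed_candidates(read, index, n_seeds):
            """Candidate reference offsets from non-overlapping exact seeds.

            index maps seed strings to lists of reference positions; any
            alignment with fewer than n_seeds mismatches leaves at least
            one seed exact, so no true hit is missed at this stage.
            """
            size = len(read) // n_seeds
            candidates = set()
            for i in range(n_seeds):
                seed = read[i * size:(i + 1) * size]
                for pos in index.get(seed, ()):
                    candidates.add(pos - i * size)  # shift back to read start
            return candidates

    Each candidate offset would then be verified against the full read, e.g., by the reduced-matrix Smith-Waterman refinement the abstract describes.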

  13. Accurate description of argon and water adsorption on surfaces of graphene-based carbon allotropes.

    PubMed

    Kysilka, Jiří; Rubeš, Miroslav; Grajciar, Lukáš; Nachtigall, Petr; Bludský, Ota

    2011-10-20

    Accurate interaction energies of nonpolar (argon) and polar (water) adsorbates with graphene-based carbon allotropes were calculated by means of a combined density functional theory (DFT)-ab initio computational scheme. The calculated interaction energy of argon with graphite (-9.7 kJ mol⁻¹) is in excellent agreement with the available experimental data. The calculated interaction energies of water with graphene and graphite are -12.8 and -14.6 kJ mol⁻¹, respectively. The accuracy of combined DFT-ab initio methods is discussed in detail based on a comparison with the highly precise interaction energies of argon and water with coronene obtained at the coupled-cluster CCSD(T) level extrapolated to the complete basis set (CBS) limit. A new strategy for a reliable estimate of the CBS limit is proposed for systems where numerical instabilities occur owing to basis-set near-linear dependence. The most accurate estimate of the argon and water interaction with coronene (-8.1 and -14.0 kJ mol⁻¹, respectively) is compared with the results of other methods used for the accurate description of weak intermolecular interactions.

  14. Descriptive Quantitative Analysis of Rearfoot Alignment Radiographic Parameters.

    PubMed

    Meyr, Andrew J; Wagoner, Matthew R

    2015-01-01

    Although the radiographic parameters of the transverse talocalcaneal angle (tTCA), calcaneocuboid angle (CCA), talar head uncovering (THU), calcaneal inclination angle (CIA), talar declination angle (TDA), lateral talar-first metatarsal angle (lTFA), and lateral talocalcaneal angle (lTCA) form the basis of the preoperative evaluation and procedure selection for pes planovalgus deformity, the so-called normal values of these measurements are not well-established. The objectives of the present study were, first, to retrospectively evaluate the descriptive statistics of these radiographic parameters (tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA) in a large population and, second, to determine an objective basis for defining "normal" versus "abnormal" measurements. As a secondary outcome, the relationship of these variables to the body mass index was assessed. Anteroposterior and lateral foot radiographs from 250 consecutive patients without a history of previous foot and ankle surgery and/or trauma were evaluated. The results revealed mean measurements of 24.12°, 13.20°, 74.32%, 16.41°, 26.64°, 8.37°, and 43.41° for the tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA, respectively. These were generally in line with the reported historical normal values. Descriptive statistical analysis demonstrated that the tTCA, THU, and TDA met the standards to be considered normally distributed, but the CCA, CIA, lTFA, and lTCA demonstrated data characteristics of both parametric and nonparametric distributions. Furthermore, only the CIA (R = -0.2428) and lTCA (R = -0.2449) demonstrated substantial correlation with the body mass index. No differentiations in deformity progression were observed when the radiographic parameters were plotted against each other, so no quantitative basis for defining "normal" versus "abnormal" measurements emerged. PMID:26002682

  15. Accurate electronic-structure description of Mn complexes: a GGA+U approach

    NASA Astrophysics Data System (ADS)

    Li, Elise Y.; Kulik, Heather; Marzari, Nicola

    2008-03-01

    Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics in transition-metal complexes. We will focus here on Mn complexes, where many aspects of the molecular structure and the reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and the manganese catalase (MC). We apply a self-consistent GGA+U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometry, the electronic and the magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n = 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1] H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett., 2006, 97, 103001

  16. Accurate scoring of non-uniform sampling schemes for quantitative NMR

    PubMed Central

    Aoto, Phillip C.; Fenwick, R. Bryn; Kroon, Gerard J. A.; Wright, Peter E.

    2014-01-01

    Non-uniform sampling (NUS) in NMR spectroscopy is a recognized and powerful tool to minimize acquisition time. Recent advances in reconstruction methodologies are paving the way for the use of NUS in quantitative applications, where accurate measurement of peak intensities is crucial. The presence or absence of NUS artifacts in reconstructed spectra ultimately determines the success of NUS in quantitative NMR. The quality of spectra reconstructed from NUS-acquired data depends upon the quality of the sampling scheme. Here we demonstrate that the best-performing sampling schemes make up a very small percentage of the total randomly generated schemes. A scoring method is found to accurately predict the quantitative similarity between reconstructed NUS spectra and those of fully sampled spectra. We present an easy-to-use protocol to batch generate and rank optimal Poisson-gap NUS schedules for use with 2D NMR with minimized noise and accurate signal reproduction, without the need for the creation of synthetic spectra. PMID:25063954
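
    The Poisson-gap scheme of Hyberts and Wagner, referenced here, draws the gaps between sampled increments from a Poisson distribution whose mean is modulated by a sine ramp, so that sampling is densest early in the decay. A compact sketch under those assumptions (parameter handling ours; the published generator differs in details):

        import math, random

        def poisson_gap_schedule(total_points, target_points, seed=0):
            """One Poisson-gap NUS schedule for a 1-D indirect dimension."""
            rng = random.Random(seed)
            lam = total_points / target_points - 1.0    # initial mean gap
            while True:
                schedule, i = [], 0
                while i < total_points:
                    schedule.append(i)
                    w = math.sin((i + 0.5) / total_points * math.pi / 2)
                    i += 1 + _poisson(rng, lam * w)     # larger gaps later in the decay
                if len(schedule) == target_points:
                    return schedule                     # hit the target count
                lam *= len(schedule) / target_points    # rescale the mean gap and retry

        def _poisson(rng, mean):
            """Poisson-distributed integer via Knuth's multiplication method."""
            limit, k, p = math.exp(-mean), 0, 1.0
            while True:
                p *= rng.random()
                if p <= limit:
                    return k
                k += 1

    The batch-and-rank protocol in the abstract would generate many such schedules (different seeds) and keep only the top scorers.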

  17. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both, while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also highlighted the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants.

  18. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  19. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded its use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping rather than on a continuous ion beam, further refinement of the technique can be expected.

  1. Quantitative real-time PCR for rapid and accurate titration of recombinant baculovirus particles.

    PubMed

    Hitchman, Richard B; Siaterli, Evangelia A; Nixon, Clare P; King, Linda A

    2007-03-01

    We describe the use of quantitative PCR (QPCR) to titer recombinant baculoviruses. Custom primers and a probe were designed against gp64 and used to calculate a standard curve of QPCR-derived titers from dilutions of a previously titrated baculovirus stock. Each dilution was titrated by both plaque assay and QPCR, producing a consistent and reproducible inverse relationship between CT and plaque-forming units per milliliter. No significant difference was observed between titers produced by QPCR and plaque assay for 12 recombinant viruses, confirming the validity of this technique as a rapid and accurate method of baculovirus titration.
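
    The inverse CT-titer relationship described here is the usual qPCR calibration line: CT is fit against the logarithm of the titer T over the dilution series, and unknowns are read off the fit. Schematically (generic symbols, not the authors' notation):

        \[
        C_T \;=\; m \log_{10} T + b,
        \qquad
        T \;=\; 10^{(C_T - b)/m},
        \qquad
        E \;=\; 10^{-1/m} - 1,
        \]

    where T is in plaque-forming units per milliliter, m is the slope (about -3.32 for a perfectly efficient reaction), and E is the amplification efficiency.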

  2. Learning to write without writing: writing accurate descriptions of interactions after learning graph-printed description relations.

    PubMed

    Spear, Jack; Fields, Lanny

    2015-12-01

    Interpreting and describing complex information shown in graphs are essential skills to be mastered by students in many disciplines; both are skills that are difficult to learn. Thus, interventions that produce these outcomes are of great value. Previous research showed that conditional discrimination training that established stimulus control by some elements of graphs and their printed descriptions produced some improvement in the accuracy of students' written descriptions of graphs. In the present experiment, students wrote nearly perfect descriptions of the information conveyed in interaction-based graphs after the establishment of conditional relations between graphs and their printed descriptions. This outcome was achieved with the use of special conditional discrimination training procedures that required participants to attend to many of the key elements of the graphs and the phrases in the printed descriptions that corresponded to the elements in the graphs. Thus, students learned to write full descriptions of the information represented by complex graphs by an automated training procedure that did not involve the direct training of writing.

  3. A quantitatively accurate theory of stable crack growth in single phase ductile metal alloys under the influence of cyclic loading

    NASA Astrophysics Data System (ADS)

    Huffman, Peter Joel

    Although fatigue has been a well-studied phenomenon over the past century and a half, a quantitative link between fatigue crack growth rates and materials properties has yet to be found. This work serves to establish that link in the case of well-behaved, single-phase, ductile metals. The primary mechanisms of fatigue crack growth are identified in general terms, followed by a description of the dependence of the stress intensity factor range on those mechanisms. A method is presented for calculating the crack growth rate for an ideal, linear elastic, non-brittle material, which is assumed to be similar to the crack growth rate for a real material at very small crack growth rate values. The threshold stress intensity factor is discussed as a consequence of "crack tip healing". Residual stresses are accounted for in the form of an approximated residual stress intensity factor. The results of these calculations are compared to data available in the literature. It is concluded that this work presents a new way to consider crack growth with respect to cyclic loading which is quantitatively accurate, and introduces a new way to consider fracture mechanics with respect to the relatively small cyclic loads normally associated with fatigue.
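
    For orientation, the empirical baseline any such theory must reproduce is the Paris-regime power law relating the crack growth per cycle to the stress intensity factor range \Delta K (standard fracture-mechanics background, not a result of this thesis):

        \[
        \frac{da}{dN} \;=\; C\,(\Delta K)^{m},
        \]

    with C and m fitted material constants; the thesis aims to derive such rate behavior, including the threshold \Delta K_{th} below which growth stops, from materials properties rather than from curve fitting.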

  4. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries.
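
    The two thresholds named above define the grouping rule: localizations are merged into one molecule when they reappear within dThresh spatially and within tThresh frames temporally. A simplified single-pass sketch of such grouping (ours, not the authors' algorithm):

        def group_localizations(locs, d_thresh, t_thresh):
            """Merge photoblinking repeats; locs = [(frame, x, y), ...] sorted by frame."""
            molecules = []                        # each entry: localizations of one molecule
            for frame, x, y in locs:
                for mol in molecules:
                    f0, x0, y0 = mol[-1]          # most recent localization of this molecule
                    close = (x - x0) ** 2 + (y - y0) ** 2 <= d_thresh ** 2
                    recent = frame - f0 <= t_thresh   # dark gap short enough to be a blink
                    if close and recent:
                        mol.append((frame, x, y))
                        break
                else:
                    molecules.append([(frame, x, y)])  # start a new molecule
            return molecules

    Setting the thresholds too small leaves overcounting in place, while setting them too large fuses neighboring molecules, which is why the threshold-selection step matters.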

  5. Accurate quantitation of MHC-bound peptides by application of isotopically labeled peptide MHC complexes.

    PubMed

    Hassan, Chopie; Kester, Michel G D; Oudgenoeg, Gideon; de Ru, Arnoud H; Janssen, George M C; Drijfhout, Jan W; Spaapen, Robbert M; Jiménez, Connie R; Heemskerk, Mirjam H M; Falkenburg, J H Frederik; van Veelen, Peter A

    2014-09-23

    Knowledge of the accurate copy number of HLA class I presented ligands is important in fundamental and clinical immunology. Currently, the best copy number determinations are based on mass spectrometry, employing single reaction monitoring (SRM) in combination with a known amount of isotopically labeled peptide. The major drawback of this approach is that the losses during sample pretreatment, i.e. immunopurification and filtration steps, are not well defined and must, therefore, be estimated. In addition, such losses can vary for individual peptides. Therefore, we developed a new approach in which isotopically labeled peptide-MHC monomers (hpMHC) are prepared and added directly after cell lysis, i.e. before the usual sample processing. Using this approach, all losses during sample processing can be accounted for, which allows accurate determination of specific MHC class I-presented ligands. Our study pinpoints the immunopurification step as the origin of the rather extreme losses during sample pretreatment and offers a solution to account for these losses. Obviously, this has important implications for accurate HLA-ligand quantitation. The strategy presented here can be used to obtain a reliable view of epitope copy number and thus allows improvement of vaccine design and strategies for immunotherapy.

  6. Quantitation and accurate mass analysis of pesticides in vegetables by LC/TOF-MS.

    PubMed

    Ferrer, Imma; Thurman, E Michael; Fernández-Alba, Amadeo R

    2005-05-01

    A quantitative method consisting of solvent extraction followed by liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) analysis was developed for the identification and quantitation of three chloronicotinyl pesticides (imidacloprid, acetamiprid, thiacloprid) commonly used on salad vegetables. Accurate mass measurements within 3 ppm error were obtained for all the pesticides studied in various vegetable matrices (cucumber, tomato, lettuce, pepper), which allowed an unequivocal identification of the target pesticides. Calibration curves covering 2 orders of magnitude were linear over the concentration range studied, thus showing the quantitative ability of TOF-MS as a monitoring tool for pesticides in vegetables. Matrix effects were also evaluated using matrix-matched standards, showing no significant interferences between matrices and clean extracts. Intraday reproducibility was 2-3% relative standard deviation (RSD) and interday values were 5% RSD. The precision (standard deviation) of the mass measurements was evaluated and was less than 0.23 mDa between days. Detection limits of the chloronicotinyl insecticides in salad vegetables ranged from 0.002 to 0.01 mg/kg. These concentrations are at or below the levels set by EU directives for controlled pesticides in vegetables, showing that LC/TOF-MS analysis is a powerful tool for identification of pesticides in vegetables. Robustness and applicability of the method were validated by the analysis of market vegetable samples. Concentrations found in these samples were in the range of 0.02-0.17 mg/kg of vegetable. PMID:15859598
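
    The two accuracy figures quoted (3 ppm mass error, 0.23 mDa day-to-day precision) are tied together by the definition of relative mass error; for ions near m/z 250, roughly where these three pesticides fall, the numbers are mutually consistent:

        \[
        \mathrm{error\,(ppm)} \;=\; \frac{m_{\mathrm{meas}} - m_{\mathrm{calc}}}{m_{\mathrm{calc}}} \times 10^{6},
        \qquad
        250 \times 3 \times 10^{-6} \;=\; 0.75\ \mathrm{mDa},
        \]

    so a 0.23 mDa spread corresponds to roughly 1 ppm at this mass, comfortably inside the 3 ppm identification window.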

  7. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented, performed both by numerical simulation and with in vivo measurements.

  8. Toward a quantitative description of the neurodynamic organizations of teams.

    PubMed

    Stevens, Ronald H; Galloway, Trysha L

    2014-01-01

    The goal was to develop quantitative models of the neurodynamic organizations of teams that could be used for comparing performance within and across teams and sessions. A symbolic modeling system was developed, where raw electroencephalography (EEG) signals from dyads were first transformed into second-by-second estimates of the cognitive Workload or Engagement of each person and transformed again into symbols representing the aggregated levels of the team. The resulting neurodynamic symbol streams had a persistent structure and contained segments of differential symbol expression. The quantitative Shannon entropy changes during these periods were related to speech, performance, and team responses to task changes. The dyads in an unscripted map navigation task (Human Communication Research Centre (HCRC) Map Task (MT)) developed fluctuating dynamics for Workload and Engagement, as they established their teamwork rhythms, and these were disrupted by external changes to the task. The entropy fluctuations during these disruptions differed in frequency, magnitude, and duration, and were associated with qualitative and quantitative changes in team organization and performance. These results indicate that neurodynamic models may be reliable, sensitive, and valid indicators of the changing neurodynamics of teams around which standardized quantitative models can begin to be developed.
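
    The moving Shannon entropy of the team's symbol stream is the quantity whose fluctuations carry the signal here. A minimal sketch of that computation (window length and symbol alphabet are ours; the paper's aggregation of EEG-derived Workload/Engagement symbols happens upstream of this step):

        import math
        from collections import Counter

        def windowed_entropy(symbols, window=60):
            """Shannon entropy (bits) of team-state symbols in a sliding window."""
            result = []
            for start in range(len(symbols) - window + 1):
                counts = Counter(symbols[start:start + window])
                h = -sum((n / window) * math.log2(n / window)
                         for n in counts.values())
                result.append(h)   # dips mark more organized (less variable) team states
            return result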

  9. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    ERIC Educational Resources Information Center

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  10. Quantitative Luminescence Imaging System description and user's manual

    SciTech Connect

    Stahl, K.A.; Batishko, C.R.

    1988-06-01

    A Quantitative Luminescence Imaging System (QLIS) was designed and constructed. The system was developed for use in imaging and quantitative analysis of very low light level chemiluminescent phenomena. The luminescent reactions are imaged via a microchannel plate image intensifier coupled to a Newvicon video camera. The video record of the reaction can be stored on video tape or digitally captured by an image processing system which is integral to a host computer controller. Since the particular experimental conditions for which the QLIS was designed necessitate that the chemiluminescent reaction take place in an rf flux within a waveguide, the system includes a coherent fiber optic image transfer system which allows the video hardware to be mounted external to the rf waveguide.

  11. Quantitative prediction and molar description of the environment

    PubMed Central

    Baum, William M.

    1989-01-01

    Molecular explanations of behavior, based on momentary events and variables that can be measured each time an event occurs, can be contrasted with molar explanations, based on aggregates of events and variables that can be measured only over substantial periods of time. Molecular analyses cannot suffice for quantitative accounts of behavior, because the historical variables that determine behavior are inevitably molar. When molecular explanations are attempted, they always depend on hypothetical constructs that stand as surrogates for molar environmental variables. These constructs allow no quantitative predictions when they are vague, and when they are made precise, they become superfluous, because they can be replaced with molar measures. In contrast to molecular accounts of phenomena like higher responding on ratio schedules than interval schedules and free-operant avoidance, molar accounts tend to be simple and straightforward. Molar theory incorporates the notion that behavior produces consequences that in turn affect the behavior, the notion that behavior and environment together constitute a feedback system. A feedback function specifies the dependence of consequences on behavior, thereby describing properties of the environment. Feedback functions can be derived for simple schedules, complex schedules, and natural resources. A complete theory of behavior requires describing the environment's feedback functions and the organism's functional relations. Molar thinking, both in the laboratory and in the field, can allow quantitative prediction, the mark of a mature science. PMID:22478030

  12. Multimodal Quantitative Phase Imaging with Digital Holographic Microscopy Accurately Assesses Intestinal Inflammation and Epithelial Wound Healing.

    PubMed

    Lenz, Philipp; Brückner, Markus; Ketelhut, Steffi; Heidemann, Jan; Kemper, Björn; Bettenworth, Dominik

    2016-01-01

    The incidence of inflammatory bowel disease (IBD), i.e., Crohn's disease and ulcerative colitis, has significantly increased over the last decade. The etiology of IBD remains unknown and current therapeutic strategies are based on the unspecific suppression of the immune system. The development of treatments that specifically target intestinal inflammation and epithelial wound healing could significantly improve management of IBD; however, this requires accurate detection of inflammatory changes. Currently, potential drug candidates are usually evaluated using animal models in vivo or with cell culture based techniques in vitro. Histological examination usually requires the cells or tissues of interest to be stained, which may alter the sample characteristics; furthermore, the interpretation of findings can vary with investigator expertise. Digital holographic microscopy (DHM), based on the detection of optical path length delay, allows stain-free quantitative phase contrast imaging, so that results can be directly correlated with absolute biophysical parameters. We demonstrate how measurement of changes in tissue density with DHM, based on refractive index measurement, can quantify inflammatory alterations, without staining, in different layers of colonic tissue specimens from mice and humans with colitis. Additionally, we demonstrate continuous multimodal label-free monitoring of epithelial wound healing in vitro, which DHM makes possible through simple automated determination of the wounded area and simultaneous determination of morphological parameters such as dry mass and layer thickness of migrating cells. In conclusion, DHM represents a valuable, novel and quantitative tool for the assessment of intestinal inflammation that yields absolute values, simplifies quantification of epithelial wound healing in vitro, and therefore has high potential for translational diagnostic use. PMID:27685659
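
    The biophysical parameters named here follow from the standard quantitative-phase relations. With \Delta\varphi the measured phase delay at wavelength \lambda, and \alpha \approx 0.19 µm³/pg a typical specific refraction increment of cellular dry matter (a literature value, not one taken from this paper):

        \[
        \sigma_{\mathrm{dry}} \;=\; \frac{\lambda\,\Delta\varphi}{2\pi\,\alpha},
        \qquad
        d \;=\; \frac{\lambda\,\Delta\varphi}{2\pi\,\bigl(n_{\mathrm{cell}} - n_{\mathrm{medium}}\bigr)},
        \]

    giving dry mass per unit area \sigma_dry and cell layer thickness d; integrating \sigma_dry over the imaged area yields the dry mass values monitored during wound closure.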

  13. Quantitative Description of a Protein Fitness Landscape Based on Molecular Features.

    PubMed

    Meini, María-Rocío; Tomatis, Pablo E; Weinreich, Daniel M; Vila, Alejandro J

    2015-07-01

    Understanding the driving forces behind protein evolution requires the ability to correlate the molecular impact of mutations with organismal fitness. To address this issue, we employ here metallo-β-lactamases as a model system, which are Zn(II) dependent enzymes that mediate antibiotic resistance. We present a study of all the possible evolutionary pathways leading to a metallo-β-lactamase variant optimized by directed evolution. By studying the activity, stability and Zn(II) binding capabilities of all mutants in the preferred evolutionary pathways, we show that this local fitness landscape is strongly conditioned by epistatic interactions arising from the pleiotropic effect of mutations in the different molecular features of the enzyme. Activity and stability assays in purified enzymes do not provide explanatory power. Instead, measurement of these molecular features in an environment resembling the native one provides an accurate description of the observed antibiotic resistance profile. We report that optimization of Zn(II) binding abilities of metallo-β-lactamases during evolution is more critical than stabilization of the protein to enhance fitness. A global analysis of these parameters allows us to connect genotype with fitness based on quantitative biochemical and biophysical parameters.

  14. Doubly hybrid density functional for accurate descriptions of nonbond interactions, thermochemistry, and thermochemical kinetics

    PubMed Central

    Zhang, Ying; Xu, Xin; Goddard, William A.

    2009-01-01

    We develop and validate a density functional, XYG3, based on the adiabatic connection formalism and the Görling–Levy coupling-constant perturbation expansion to the second order (PT2). XYG3 is a doubly hybrid functional, containing 3 mixing parameters. It has a nonlocal orbital-dependent component in the exchange term (exact exchange) plus information about the unoccupied Kohn–Sham orbitals in the correlation part (PT2 double excitation). XYG3 is remarkably accurate for thermochemistry, reaction barrier heights, and nonbond interactions of main group molecules. In addition, the accuracy remains nearly constant with system size. PMID:19276116
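
    Schematically, a doubly hybrid functional of this family mixes exact exchange into the exchange term and PT2 double excitations into the correlation term (a generic form with mixing parameters; the actual fitted XYG3 coefficients are not reproduced here):

        \[
        E_{xc}^{\mathrm{DH}} \;=\; a_x\,E_x^{\mathrm{exact}} + (1 - a_x)\,E_x^{\mathrm{DFA}}
        \;+\; (1 - a_c)\,E_c^{\mathrm{DFA}} \;+\; a_c\,E_c^{\mathrm{PT2}},
        \]

    where the PT2 term is built from the unoccupied Kohn-Sham orbitals via Görling-Levy perturbation theory, which is what makes the functional "doubly" hybrid.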

  15. Accurate description of aqueous carbonate ions: an effective polarization model verified by neutron scattering.

    PubMed

    Mason, Philip E; Wernersson, Erik; Jungwirth, Pavel

    2012-07-19

    The carbonate ion plays a central role in the biochemical formation of the shells of aquatic life, which is an important path for carbon dioxide sequestration. Given the vital role of carbonate in this and other contexts, it is imperative to develop accurate models for such a high charge density ion. As a divalent ion, carbonate has a strong polarizing effect on surrounding water molecules. This raises the question whether it is possible to describe such systems accurately without including polarization. It has recently been suggested that the lack of electronic polarization in nonpolarizable water models can be effectively compensated by introducing an electronic dielectric continuum, which is, with respect to the forces between atoms, equivalent to rescaling the ionic charges. Given how widely nonpolarizable models are used to model electrolyte solutions, establishing the experimental validity of this suggestion is imperative. Here, we examine a stringent test for such models: a comparison of the difference between the neutron scattering structure factors of K2CO3 and KNO3 solutions with that predicted by molecular dynamics simulations for various models of the same systems. We compare standard nonpolarizable simulations in SPC/E water to analogous simulations with effective ion charges, as well as simulations in explicitly polarizable POL3 water (which, however, has only about half the experimental polarizability). It is found that the simulation with rescaled charges is in very good agreement with the experimental data, significantly better than the nonpolarizable simulation and even better than the explicitly polarizable POL3 model.

  16. Accurate description of the optical response of a multilayered spherical system in the long wavelength approximation

    NASA Astrophysics Data System (ADS)

    Chung, H. Y.; Guo, G. Y.; Chiang, H.-P.; Tsai, D. P.; Leung, P. T.

    2010-10-01

    The optical response of a multilayered spherical system of an unlimited number of layers (a "matryushka") in the long wavelength limit can be accounted for from the knowledge of the static multipole polarizability of the system to first-order accuracy. However, for systems of ultrasmall dimensions, or systems with sizes not too small compared to the wavelength, this ordinary quasistatic long wavelength approximation (LWA) becomes inaccurate. Here we introduce two significant modifications of the LWA for such a nanomatryushka in each of the two limits: the nonlocal optical response for ultrasmall systems (<10 nm), and the "finite-wavelength corrections" for systems ~100 nm. This is accomplished by employing the previous work for a single-layer shell, in combination with a certain effective-medium approach formulated recently in the literature. Numerical calculations of the extinction cross sections for such a system of different dimensions are provided as illustrations of these effects. This formulation thus provides significant improvements on the ordinary LWA, yielding enough accuracy for the description of the optical response of these nanoshell systems over an appreciable range of sizes, without resorting to more involved quantum mechanical or fully electrodynamic calculations.

  17. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    SciTech Connect

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ_i of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ_i. A summarizing discussion highlights the achievements of the new theory and of its approximate solution.
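
    For reference, the Born solution cited above gives the solvation free energy of a single ion of charge q and radius a transferred into a continuum of relative permittivity ε (standard electrostatics background, stated here in SI form):

        \[
        \Delta G_{\mathrm{Born}} \;=\; -\,\frac{q^{2}}{8\pi \varepsilon_{0} a}\left(1 - \frac{1}{\varepsilon}\right),
        \]

    the single-ion limiting case whose concepts the paper's Gaussian shielding-charge construction generalizes to many overlapping atoms.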

  20. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    PubMed

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by

  1. A Novel Approach to Teach the Generation of Bioelectrical Potentials from a Descriptive and Quantitative Perspective

    ERIC Educational Resources Information Center

    Rodriguez-Falces, Javier

    2013-01-01

    In electrophysiology studies, it is becoming increasingly common to explain experimental observations using both descriptive methods and quantitative approaches. However, some electrophysiological phenomena, such as the generation of extracellular potentials that results from the propagation of the excitation source along the muscle fiber, are…

  2. Leadership Styles at Middle- and Early-College Programs: A Quantitative Descriptive Correlational Study

    ERIC Educational Resources Information Center

    Berksteiner, Earl J.

    2013-01-01

    The purpose of this quantitative descriptive correlational study was to determine if associations existed between middle- and early-college (MEC) principals' leadership styles, teacher motivation, and teacher satisfaction. MEC programs were programs designed to assist high school students who were not served well in a traditional setting (Middle…

  3. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  4. Accurate description of van der Waals complexes by density functional theory including empirical corrections.

    PubMed

    Grimme, Stefan

    2004-09-01

    An empirical method to account for van der Waals interactions in practical calculations with the density functional theory (termed DFT-D) is tested for a wide variety of molecular complexes. As in previous schemes, the dispersive energy is described by damped interatomic potentials of the form C6·R(-6). The use of pure, gradient-corrected density functionals (BLYP and PBE), together with the resolution-of-the-identity (RI) approximation for the Coulomb operator, allows very efficient computations for large systems. Opposed to previous work, extended AO basis sets of polarized TZV or QZV quality are employed, which reduces the basis set superposition error to a negligible extent. By using a global scaling factor for the atomic C6 coefficients, the functional dependence of the results could be strongly reduced. The "double counting" of correlation effects for strongly bound complexes is found to be insignificant if steep damping functions are employed. The method is applied to a total of 29 complexes of atoms and small molecules (Ne, CH4, NH3, H2O, CH3F, N2, F2, formic acid, ethene, and ethine) with each other and with benzene, to benzene, naphthalene, pyrene, and coronene dimers, the naphthalene trimer, coronene·H2O, and four H-bonded and stacked DNA base pairs (AT and GC). In almost all cases, very good agreement with reliable theoretical or experimental results for binding energies and intermolecular distances is obtained. For stacked aromatic systems and the important base pairs, the DFT-D-BLYP model seems to be even superior to standard MP2 treatments that systematically overbind. The good results obtained suggest the approach as a practical tool to describe the properties of many important van der Waals systems in chemistry. Furthermore, the DFT-D data may either be used to calibrate much simpler (e.g., force-field) potentials or the optimized structures can be used as input for more accurate ab initio calculations of the interaction energies.
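
    Written out, the dispersion correction described in this abstract takes the form (as in the DFT-D scheme):

        E_{\mathrm{DFT\text{-}D}} = E_{\mathrm{DFT}} + E_{\mathrm{disp}},\qquad E_{\mathrm{disp}} = -s_{6}\sum_{i<j}\frac{C_{6}^{ij}}{R_{ij}^{6}}\,f_{\mathrm{dmp}}(R_{ij}),

    where s6 is the global scaling factor mentioned above and f_dmp is a steep damping function, e.g. f_{\mathrm{dmp}}(R)=\left[1+e^{-d\,(R/R_{0}-1)}\right]^{-1}, which switches the correction off at short interatomic distances and thereby suppresses double counting of correlation.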

  5. Towards an accurate model of redshift-space distortions: a bivariate Gaussian description for the galaxy pairwise velocity distributions

    NASA Astrophysics Data System (ADS)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2016-10-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ2. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
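
    Schematically (our notation, condensing the description above), the streaming model maps real-space to redshift-space correlations through the pairwise velocity distribution, and the paper's ansatz writes that distribution as a Gaussian mixture whose parameters are themselves bivariate-Gaussian distributed:

        1+\xi_{s}(s_{\perp},s_{\parallel})=\int\mathrm{d}r_{\parallel}\,\left[1+\xi(r)\right]\,\mathcal{P}\!\left(v_{\parallel}=s_{\parallel}-r_{\parallel}\mid\mathbf{r}\right),\qquad \mathcal{P}(v\mid\mathbf{r})=\int\mathrm{d}\mu\,\mathrm{d}\sigma\,\mathcal{N}\!\left(v;\mu,\sigma^{2}\right)\,\mathcal{N}_{2}(\mu,\sigma;\mathbf{r}),

    which recovers the single-Gaussian model when \mathcal{N}_{2} collapses to a Dirac delta.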

  6. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study, we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of colors, shapes, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
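
    A minimal sketch of this kind of brightness-plus-roundness screening, on synthetic data (illustrative only; the authors' actual features also use color and nuclear arrangement):

        # Illustrative sketch, not the authors' pipeline: flag bright, round
        # regions (candidate fat droplets) and report their area fraction.
        import numpy as np
        from skimage import draw, measure

        img = np.full((256, 256), 0.5)                 # gray tissue background
        for r, c in [(60, 60), (150, 180), (200, 80)]:
            rr, cc = draw.disk((r, c), 12)
            img[rr, cc] = 0.95                         # droplets appear ~white

        binary = img > 0.8
        labels = measure.label(binary)
        droplets = [p for p in measure.regionprops(labels)
                    if 4 * np.pi * p.area / p.perimeter ** 2 > 0.8]  # roundness
        fat_ratio = sum(p.area for p in droplets) / binary.size
        print(f"fat droplet ratio: {fat_ratio:.3%}")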

  7. How Iron-Containing Proteins Control Dioxygen Chemistry: A Detailed Atomic Level Description Via Accurate Quantum Chemical and Mixed Quantum Mechanics/Molecular Mechanics Calculations.

    SciTech Connect

    Friesner, Richard A.; Baik, Mu-Hyun; Gherman, Benjamin F.; Guallar, Victor; Wirstam, Maria E.; Murphy, Robert B.; Lippard, Stephen J.

    2003-03-01

    Over the past several years, rapid advances in computational hardware, quantum chemical methods, and mixed quantum mechanics/molecular mechanics (QM/MM) techniques have made it possible to model accurately the interaction of ligands with metal-containing proteins at an atomic level of detail. In this paper, we describe the application of our computational methodology, based on density functional (DFT) quantum chemical methods, to two diiron-containing proteins that interact with dioxygen: methane monooxygenase (MMO) and hemerythrin (Hr). Although the active sites are structurally related, the biological function differs substantially. MMO is an enzyme found in methanotrophic bacteria and hydroxylates aliphatic C-H bonds, whereas Hr is a carrier protein for dioxygen used by a number of marine invertebrates. Quantitative descriptions of the structures and energetics of key intermediates and transition states involved in the reaction with dioxygen are provided, allowing their mechanisms to be compared and contrasted in detail. An in-depth understanding of how the chemical identity of the first ligand coordination shell, structural features, electrostatic and van der Waals interactions of more distant shells control ligand binding and reactive chemistry is provided, affording a systematic analysis of how iron-containing proteins process dioxygen. Extensive contact with experiment is made in both systems, and a remarkable degree of accuracy and robustness of the calculations is obtained from both a qualitative and quantitative perspective.

  8. Towards a more accurate microscopic description of the moving contact line problem - incorporating nonlocal effects through a statistical mechanics framework

    NASA Astrophysics Data System (ADS)

    Nold, Andreas; Goddard, Ben; Sibley, David; Kalliadasis, Serafim

    2014-03-01

    Multiscale effects play a predominant role in wetting phenomena such as the moving contact line. An accurate description is of paramount interest for a wide range of industrial applications, yet it is a matter of ongoing research, due to the difficulty of incorporating different physical effects in one model. Important small-scale phenomena are corrections to the attractive fluid-fluid and wall-fluid forces in inhomogeneous density distributions, which often previously have been accounted for by the disjoining pressure in an ad-hoc manner. We systematically derive a novel model for the description of a single-component liquid-vapor multiphase system which inherently incorporates these nonlocal effects. This derivation, which is inspired by statistical mechanics in the framework of colloidal density functional theory, is critically discussed with respect to its assumptions and restrictions. The model is then employed numerically to study a moving contact line of a liquid fluid displacing its vapor phase. We show how nonlocal physical effects are inherently incorporated by the model and describe how classical macroscopic results for the contact line motion are retrieved. We acknowledge financial support from ERC Advanced Grant No. 247031 and Imperial College through a DTG International Studentship.

  9. Qualitative and quantitative description of microstructure of alloys from the Fe-Al system

    NASA Astrophysics Data System (ADS)

    Jabłońska, M.; Mikuśkiewicz, M.; Tomaszewska, A.

    2012-05-01

    The paper presents the results of qualitative and quantitative analyses of the structure of alloys from the Fe-Al system after casting and heat treatment. The analyses were carried out for three alloys with different aluminium contents (36, 38, and 48 at.%), which were produced by melting and gravity casting. A quantitative evaluation of the structure was made with the "MET-ILO" application on the basis of images acquired from a light microscope. Moreover, the influence of the chemical composition and of the quantitative description of the microstructure on the hardness of alloys from the aluminium-iron system was analysed. The results obtained will be used for the development of mathematical models determining the influence of the primary structure on the opportunities for plastic deformation of these alloys. Structural examination was carried out using scanning electron microscopy (SEM) and scanning transmission electron microscopy (STEM), and X-ray diffraction measurements were performed on the alloys.

  10. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, Norbert; Schaffenroth, Veronika; Nieva, Maria-Fernanda

    2015-08-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields, which drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by the application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true: observed and model spectra can be brought into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for wide applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab-initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be in focus in the era of the upcoming extremely large telescopes.

  11. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  12. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    PubMed

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score
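
    The Ct-based scoring logic lends itself to a few lines of code. The sketch below is an illustrative reconstruction; the 3-cycle decision threshold and the example Ct values are assumptions, not parameters from the paper.

        # Hypothetical genotype caller for a two-reaction allele-specific qPCR
        # (one wild-type-specific and one mutant-specific primer set per sample).
        def call_genotype(ct_wt: float, ct_mut: float, delta: float = 3.0) -> str:
            """Call a genotype from the threshold cycles of the two reactions.

            A reaction whose Ct lags its counterpart by more than `delta`
            cycles is treated as a non-amplifying (mismatched) allele.
            """
            if ct_mut - ct_wt > delta:
                return "homozygous wild-type"
            if ct_wt - ct_mut > delta:
                return "homozygous mutant"
            return "heterozygous"

        print(call_genotype(ct_wt=22.1, ct_mut=33.8))  # homozygous wild-type
        print(call_genotype(ct_wt=23.0, ct_mut=23.4))  # heterozygous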

  13. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    PubMed

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  14. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping

    PubMed Central

    Lee, Han B.; Schwab, Tanya L.; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L.; Cervera, Roberto Lopez; McNulty, Melissa S.; Bostwick, Hannah S.; Clark, Karl J.

    2016-01-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98–100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  15. Quantitative methods for three-dimensional comparison and petrographic description of chondrites

    SciTech Connect

    Friedrich, J.M.

    2008-10-20

    X-ray computed tomography can be used to generate three-dimensional (3D) volumetric representations of chondritic meteorites. One of the challenges of using collected X-ray tomographic data is the extraction of useful data for 3D petrographic analysis or description. Here, I examine computer-aided quantitative 3D texture metrics that can be used for the classification of chondritic meteorites. These quantitative techniques are extremely useful for discriminating between chondritic materials, but yield little information on the 3D morphology of chondrite components. To investigate the morphology of chondrite minerals such as Fe(Ni) metal and related sulfides, the homology descriptors known as Betti numbers are examined. Both methodologies are illustrated with theoretical discussion and examples. Betti numbers may be valuable for examining the nature of metal-silicate structural changes within chondrites with increasing degrees of metamorphism.
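
    As a flavor of the homology descriptors mentioned, the zeroth Betti number β0 of a segmented metal phase is simply its number of connected components, a one-liner on a binary tomographic volume (illustrative sketch on random data, not the author's pipeline):

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 64)) > 0.8    # stand-in for segmented Fe(Ni) metal

        # beta_0 = number of connected components (26-connectivity in 3D)
        labels, beta0 = ndimage.label(volume, structure=np.ones((3, 3, 3)))
        print("beta_0 (separate metal grains):", beta0)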

  16. Linking descriptive geology and quantitative machine learning through an ontology of lithological concepts

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Huber, R.; Robertson, J.; Cox, S. J. D.; Woodcock, R.

    2014-12-01

    Despite the recent explosion of quantitative geological data, geology remains a fundamentally qualitative science. Numerical data constitute only a part of data collection in the geosciences. In many cases, geological observations are compiled as text into reports and annotations on drill cores, thin sections or drawings of outcrops. The observations are classified into concepts such as lithology, stratigraphy, geological structure, etc. These descriptions are semantically rich and are generally supported by more quantitative observations using geochemical analyses, XRD, hyperspectral scanning, etc., but the goal is geological semantics. In practice it has been difficult to bring the different observations together due to differing perception or granularity of classification in human observation, or the partial observation of only some characteristics using quantitative sensors. In recent years many geological classification schemas have been transferred into ontologies and vocabularies, formalized using RDF and OWL, and published through SPARQL endpoints. Several lithological ontologies were compiled by stratigraphy.net and published through a SPARQL endpoint. This work is complemented by the development of a Python API to integrate this vocabulary into Python-based text mining applications. The applications for the lithological vocabulary and Python API are automated semantic tagging of geochemical data and descriptions of drill cores, machine learning of geochemical compositions that are diagnostic for lithological classifications, and text mining for lithological concepts in reports and geological literature. This combination of applications can be used to identify anomalies in databases, where composition and lithological classification do not match. It can also be used to identify lithological concepts in the literature and infer quantitative values. The resulting semantic tagging opens new possibilities for linking these diverse sources of data.
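
    A hedged sketch of what querying such a lithological vocabulary from Python might look like; the endpoint URL and the use of SKOS labels are assumptions for illustration, not the actual stratigraphy.net service:

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Hypothetical endpoint and vocabulary terms, for illustration only.
        sparql = SPARQLWrapper("http://example.org/lithology/sparql")
        sparql.setQuery("""
            PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
            SELECT ?concept ?label WHERE {
                ?concept skos:prefLabel ?label .
                FILTER regex(str(?label), "basalt", "i")
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["concept"]["value"], "-", row["label"]["value"])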

  17. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
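
    A toy version of the central idea, complex least-squares fitting in the frequency domain, for a single Lorentzian line (all parameter values are invented; the paper's integrated strategy additionally fits baseline, delay, and many lines at once):

        import numpy as np
        from scipy.optimize import least_squares

        def lorentzian(f, f0, lam, amp, phase):
            # complex Lorentzian: position f0 (Hz), damping lam, amplitude, phase
            return amp * np.exp(1j * phase) / (lam + 2j * np.pi * (f - f0))

        f = np.linspace(-50.0, 50.0, 2048)             # frequency axis (Hz)
        rng = np.random.default_rng(1)
        noise = 0.002 * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
        data = lorentzian(f, 3.2, 4.0, 1.0, 0.3) + noise

        def residuals(p):
            r = data - lorentzian(f, *p)
            return np.concatenate([r.real, r.imag])    # stack complex residuals

        x0 = [f[np.argmax(np.abs(data))], 1.0, 0.5, 0.0]   # seed f0 at the peak
        fit = least_squares(residuals, x0)
        print("f0, damping, amplitude, phase:", fit.x)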

  18. Tools for quantitative form description; an evaluation of different software packages for semi-landmark analysis

    PubMed Central

    Houssaye, Alexandra; Herrel, Anthony; Fabre, Anne-Claire; Cornette, Raphael

    2015-01-01

    The challenging complexity of biological structures has led to the development of several methods for quantitative analyses of form. Bones are shaped by the interaction of historical (phylogenetic), structural, and functional constraints. Consequently, bone shape has been investigated intensively in an evolutionary context. Geometric morphometric approaches allow the description of the shape of an object in all of its biological complexity. However, when biological objects present only few anatomical landmarks, sliding semi-landmarks may provide good descriptors of shape. The sliding procedure, mandatory for sliding semi-landmarks, requires several steps that may be time-consuming. We here compare the time required by two different software packages (‘Edgewarp’ and ‘Morpho’) for the same sliding task, and investigate potential differences in the results and biological interpretation. ‘Morpho’ is much faster than ‘Edgewarp,’ notably as a result of the greater computational power of the ‘Morpho’ software routines and the complexity of the ‘Edgewarp’ workflow. Morphospaces obtained using both software packages are similar and provide a consistent description of the biological variability. The principal differences between the two software packages are observed in areas characterized by abrupt changes in the bone topography. In summary, both software packages perform equally well in terms of the description of biological structures, yet differ in the simplicity of the workflow and time needed to perform the analyses. PMID:26618086

  19. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.
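
    The core arithmetic, expressing a measured spectrum as a non-negative combination of stored component spectra by error minimization, can be sketched as follows (synthetic Gaussian bands stand in for real FTIR data; the paper's algorithm additionally uses local adaptive mesh refinement):

        import numpy as np
        from scipy.optimize import nnls

        wn = np.linspace(1000, 1800, 400)              # wavenumber axis (1/cm)
        band = lambda c, w=30.0: np.exp(-0.5 * ((wn - c) / w) ** 2)

        components = np.column_stack([band(1100), band(1400), band(1650)])
        rng = np.random.default_rng(2)
        true_fractions = np.array([0.6, 0.3, 0.1])
        mixture = components @ true_fractions + 0.01 * rng.standard_normal(wn.size)

        # non-negative least squares recovers the component fractions
        fractions, _ = nnls(components, mixture)
        print("estimated fractions:", fractions / fractions.sum())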

  20. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples

    PubMed Central

    Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users’ purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411

  1. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  2. Quantitative Description of Crystal Nucleation and Growth from in Situ Liquid Scanning Transmission Electron Microscopy.

    PubMed

    Ievlev, Anton V; Jesse, Stephen; Cochell, Thomas J; Unocic, Raymond R; Protopopescu, Vladimir A; Kalinin, Sergei V

    2015-12-22

    Recent advances in liquid cell (scanning) transmission electron microscopy ((S)TEM) have enabled in situ nanoscale investigations of controlled nanocrystal growth mechanisms. Here, we experimentally and quantitatively investigated the nucleation and growth mechanisms of Pt nanostructures from an aqueous solution of K2PtCl6. Averaged statistical, network, and local approaches have been used for the data analysis and for the description of both collective particle dynamics and local growth features. In particular, interaction between neighboring particles has been revealed and attributed to the reduction of the platinum concentration in the vicinity of the particle boundary. The local approach for solving the inverse problem showed that particle dynamics can be simulated by a stationary diffusional model. The obtained results are important for understanding nanocrystal formation and growth processes and for the optimization of synthesis conditions.
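
    For orientation, a stationary diffusion-limited growth model of the kind invoked predicts parabolic radius growth (textbook form, stated generically; Ω is the molar volume of deposited Pt, and C_b and C_s are the bulk and interface solubility concentrations):

        \frac{\mathrm{d}r}{\mathrm{d}t}=\frac{D\,\Omega\,(C_{b}-C_{s})}{r}\quad\Longrightarrow\quad r(t)=\sqrt{r_{0}^{2}+2D\,\Omega\,(C_{b}-C_{s})\,t}.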

  3. A quantitative index of soil development from field descriptions: Examples from a chronosequence in central California

    USGS Publications Warehouse

    Harden, J.W.

    1982-01-01

    A soil development index has been developed in order to quantitatively measure the degree of soil profile development. This index, which combines eight soil field properties with soil thickness, is designed from field descriptions of the Merced River chronosequence in central California. These eight properties are: clay films, texture plus wet consistence, rubification (color hue and chroma), structure, dry consistence, moist consistence, color value, and pH. Other properties described in the field can be added when more soils are studied. Most of the properties change systematically within the 3 m.y. age span of the Merced River chronosequence. The absence of properties on occasion does not significantly affect the index. Individual quantified field properties, as well as the integrated index, are examined and compared as functions of soil depth and age. © 1982.
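
    In outline, the index normalizes each field property, averages the normalized scores within a horizon, and weights by horizon thickness. The sketch below is an assumed simplification for illustration; Harden's actual point assignments are tabulated per property.

        def profile_development_index(horizons):
            """horizons: list of (thickness_cm, {property: score in [0, 1]})."""
            return sum(thickness * sum(scores.values()) / len(scores)
                       for thickness, scores in horizons)

        profile = [
            (15, {"clay_films": 0.2, "rubification": 0.4, "structure": 0.5}),
            (30, {"clay_films": 0.6, "rubification": 0.7, "structure": 0.6}),
        ]
        print("profile development index:", profile_development_index(profile))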

  4. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
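
    The copy-number arithmetic behind such a deletion assay is the standard relative-quantification (2^-ΔΔCt) calculation; the sketch below is illustrative, with invented Ct values and an assumed diploid reference gene:

        def copies_per_genome(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
            """2^-ddCt estimate, scaled so a normal (diploid) calibrator reports 2."""
            ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
            return 2.0 * 2.0 ** (-ddct)

        # A hemizygous 1p36 deletion should give ~1 copy of PRKCZ or SKI:
        print(copies_per_genome(26.1, 25.0, 25.0, 25.0))   # ~0.9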

  5. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope-labeling, and liquid chromatography (LC) with an electrospray ionization (ESI)-tandem mass spectrometry assay to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R(2)=0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
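
    Per sample, quantitation with a stable-isotope-labeled internal standard (IS) reduces to mapping the analyte/IS peak-area ratio through a linear calibration; a schematic sketch with invented numbers:

        import numpy as np

        cal_conc = np.array([0.05, 0.5, 5.0, 50.0])        # MDA standards (uM)
        cal_ratio = np.array([0.011, 0.098, 1.02, 10.1])   # area(MDA)/area(IS)

        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)
        sample_ratio = 0.42                                 # measured in a plasma sample
        print("plasma MDA (uM):", (sample_ratio - intercept) / slope)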

  6. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341

  7. Accurate measurement of circulating mitochondrial DNA content from human blood samples using real-time quantitative PCR.

    PubMed

    Ajaz, Saima; Czajka, Anna; Malik, Afshan

    2015-01-01

    We describe a protocol to accurately measure the amount of human mitochondrial DNA (MtDNA) in peripheral blood samples which can be modified to quantify MtDNA from other body fluids, human cells, and tissues. This protocol is based on the use of real-time quantitative PCR (qPCR) to quantify the amount of MtDNA relative to nuclear DNA (designated the Mt/N ratio). In the last decade, there have been increasing numbers of studies describing altered MtDNA or Mt/N in circulation in common nongenetic diseases where mitochondrial dysfunction may play a role (for review see Malik and Czajka, Mitochondrion 13:481-492, 2013). These studies are distinct from those looking at genetic mitochondrial disease and are attempting to identify acquired changes in circulating MtDNA content as an indicator of mitochondrial function. However, the methodology being used is not always specific and reproducible. As more than 95 % of the human mitochondrial genome is duplicated in the human nuclear genome, it is important to avoid co-amplification of nuclear pseudogenes. Furthermore, template preparation protocols can also affect the results because of the size and structural differences between the mitochondrial and nuclear genomes. Here we describe how to (1) prepare DNA from blood samples; (2) pretreat the DNA to prevent dilution bias; (3) prepare dilution standards for absolute quantification using the unique primers human mitochondrial genome forward primer (hMitoF3) and human mitochondrial genome reverse primer(hMitoR3) for the mitochondrial genome, and human nuclear genome forward primer (hB2MF1) and human nuclear genome reverse primer (hB2MR1) primers for the human nuclear genome; (4) carry out qPCR for either relative or absolute quantification from test samples; (5) analyze qPCR data; and (6) calculate the sample size to adequately power studies. The protocol presented here is suitable for high-throughput use.
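
    Under the common equal-efficiency assumption, the relative Mt/N estimate from the two qPCR reactions is a one-liner (illustrative sketch with invented Ct values; the protocol above also covers absolute quantification against dilution standards):

        def mt_n_ratio(ct_mito: float, ct_nuclear: float) -> float:
            """Mitochondrial genomes per diploid nuclear genome: 2 * 2^(Ct_nuc - Ct_mt)."""
            return 2.0 * 2.0 ** (ct_nuclear - ct_mito)

        print(mt_n_ratio(ct_mito=18.2, ct_nuclear=26.5))   # ~630 copies per cell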

  8. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted
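
    The three benchmarked observables connect to computed free-energy differences through standard thermodynamic relations (stated here for reference; n is the number of electrons transferred and F the Faraday constant):

        E^{\circ}=-\frac{\Delta G_{\mathrm{red}}}{nF},\qquad \mathrm{p}K_{a}=\frac{\Delta G_{\mathrm{deprot}}}{RT\ln 10},\qquad \log K_{\mathrm{bind}}=-\frac{\Delta G_{\mathrm{bind}}}{RT\ln 10}.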

  9. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  10. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  11. Accurate determination of human serum transferrin isoforms: Exploring metal-specific isotope dilution analysis as a quantitative proteomic tool.

    PubMed

    Busto, M Estela del Castillo; Montes-Bayón, Maria; Sanz-Medel, Alfredo

    2006-12-15

    Carbohydrate-deficient transferrin (CDT) measurements are considered a reliable marker for chronic alcohol consumption, and their use is becoming extensive in forensic medicine. However, CDT is not a single molecular entity but refers to a group of sialic acid-deficient transferrin isoforms from mono- to trisialotransferrin. Thus, the development of methods to analyze accurately and precisely individual transferrin isoforms in biological fluids such as serum is of increasing importance. The present work illustrates the use of ICPMS isotope dilution analysis for the quantification of transferrin isoforms once saturated with iron and separated by anion exchange chromatography (Mono Q 5/50) using a mobile phase consisting of a gradient of ammonium acetate (0-250 mM) in 25 mM Tris-acetic acid (pH 6.5). Species-specific and species-unspecific spikes have been explored. In the first part of the study, the use of postcolumn addition of a solution of 200 ng mL(-1) isotopically enriched iron (57Fe, 95%) in 25 mM sodium citrate/citric acid (pH 4) permitted the quantification of individual sialoforms of transferrin (from S2 to S5) in human serum samples of healthy individuals as well as alcoholic patients. Second, the species-specific spike method was performed by synthesizing an isotopically enriched standard of saturated transferrin (saturated with 57Fe). The characterization of the spike was performed by postcolumn reverse isotope dilution analysis (that is, by postcolumn addition of a solution of 200 ng mL(-1) natural iron in sodium citrate/citric acid of pH 4). The stability of the transferrin spike was also tested over one week, with negligible species transformation. Finally, the enriched transferrin was used to quantify the individual isoforms in the same serum samples, obtaining results comparable to those of postcolumn isotope dilution and to those previously published in the literature, demonstrating the suitability of both strategies for quantitative transferrin
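
    The isotope dilution arithmetic underlying both strategies is worth stating (generic IDMS mass balance in our notation, not the paper's symbols): with R_m the measured 56Fe/57Fe ratio in the blend, A and B the 56Fe and 57Fe isotopic abundances, and subscripts s and sp denoting sample and spike, the amount of sample iron in an isoform peak follows from

        n_{s}=n_{sp}\,\frac{A_{sp}-R_{m}B_{sp}}{R_{m}B_{s}-A_{s}}.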

  12. Analytical method for the accurate determination of trichothecenes in grains using LC-MS/MS: a comparison between MRM transition and MS3 quantitation.

    PubMed

    Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn

    2012-07-01

    The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were made for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three levels of spiking concentration of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying an MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are

  13. Quantitative description of photoexcited scanning tunneling spectroscopy and its application to the GaAs(110) surface

    NASA Astrophysics Data System (ADS)

    Schnedler, M.; Portz, V.; Weidlich, P. H.; Dunin-Borkowski, R. E.; Ebert, Ph.

    2015-06-01

    A quantitative description of photoexcited scanning tunneling spectra is developed and applied to photoexcited spectra measured on p-doped nonpolar GaAs(110) surfaces. Under illumination, the experimental spectra exhibit an increase of the tunnel current at negative sample voltages only. In order to analyze the experimental data quantitatively, the potential and charge-carrier distributions of the photoexcited tip-vacuum-semiconductor system are calculated by solving the Poisson as well as the hole and electron continuity equations by a finite-difference algorithm. On this basis, the different contributions to the tunnel current are calculated using an extension of the model of Feenstra and Stroscio to include the light-excited carrier concentrations. The best fit of the calculated tunnel currents to the experimental data is obtained for a tip-induced band bending, which is limited by the partial occupation of the C3 surface state by light-excited electrons. The tunnel current at negative voltages is then composed of a valence band contribution and a photoinduced tunnel current of excited electrons in the conduction band. The quantitative description of the tunnel current developed here is generally applicable and provides a solid foundation for the quantitative interpretation of photoexcited scanning tunneling spectroscopy.
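
    Schematically, the finite-difference calculation couples the Poisson equation to steady-state carrier continuity equations with a photogeneration term (one common convention, not necessarily the paper's exact notation):

        \nabla\cdot\left(\varepsilon\nabla\phi\right)=-q\left(p-n+N_{D}^{+}-N_{A}^{-}\right),\qquad \frac{1}{q}\nabla\cdot\mathbf{J}_{n}=R-G,\qquad -\frac{1}{q}\nabla\cdot\mathbf{J}_{p}=R-G,

    where G includes the optical excitation; the tunnel current is then evaluated from the resulting potential and carrier profiles.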

  14. Initial Description of a Quantitative, Cross-Species (Chimpanzee-Human) Social Responsiveness Measure

    ERIC Educational Resources Information Center

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve E.; Constantino, John N.; Povinelli, Daniel J.; Pruett, John R., Jr.

    2011-01-01

    Objective: Comparative studies of social responsiveness, an ability that is impaired in autism spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species…

  15. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    PubMed

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose. PMID:27410113
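
    A rough sketch of the underlying idea, assuming the PyWavelets package: keep only the coarsest wavelet bands of the measured spectrum to estimate the slowly varying modulation error, then subtract it. The wavelet choice and number of retained levels are illustrative, not the paper's exact decomposition.

    ```python
    import numpy as np
    import pywt

    def slow_background(spectrum, wavelet="sym8", keep_details=2):
        """Estimate the slowly varying modulation of a spectrum by keeping the
        wavelet approximation plus the few coarsest detail bands."""
        coeffs = pywt.wavedec(spectrum, wavelet, mode="smooth")
        for i in range(1 + keep_details, len(coeffs)):
            coeffs[i] = np.zeros_like(coeffs[i])   # drop fine-scale bands
        return pywt.waverec(coeffs, wavelet, mode="smooth")[: len(spectrum)]

    # Synthetic normalized CARS-like spectrum: a narrow line on a slow drift
    x = np.linspace(0.0, 1.0, 1024)
    drift = 1.0 + 0.3 * np.sin(2 * np.pi * x)        # modulation error
    line = 0.5 / (1.0 + ((x - 0.5) / 0.01) ** 2)     # Lorentzian-like peak
    measured = drift + line
    corrected = measured - slow_background(measured) + 1.0
    ```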

  17. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  18. Microscope-Quantitative Luminescence Imaging System (M-Qlis) Description and User's Manual

    SciTech Connect

    Stahl, K. A.

    1991-10-01

    A Microscope Quantitative Luminescence Imaging System (M-QLIS) has been designed and constructed. The M-QLIS is designed for use in studies of chemiluminescent phenomena associated with absorption of radio-frequency radiation. The system consists of a radio-frequency waveguide/sample holder, microscope, intensified video camera, radiometric calibration source and optics, and computer-based image processor with radiometric analysis software. The system operation, hardware, software, and radiometric procedures are described.

  19. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR

    PubMed Central

    Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  20. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    PubMed

    Li, XueYan; Cheng, JinYun; Zhang, Jing; Teixeira da Silva, Jaime A; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  1. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    PubMed

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming years with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in sensory scores of this product using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4% of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring attributes of cham-cham that contribute most to its sensory acceptability.
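
    A minimal sketch of the PCA step, assuming scikit-learn; the panel-score matrix here is a random stand-in (samples x attributes), not the paper's QDA table.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Rows: market samples of cham-cham; columns: mean panel scores for
    # sensory attributes. Random placeholder data for illustration.
    rng = np.random.default_rng(0)
    scores = rng.uniform(1.0, 9.0, size=(24, 12))

    pca = PCA(n_components=4)              # the paper retains 4 components
    factors = pca.fit_transform(scores)    # factor scores per sample
    print(pca.explained_variance_ratio_.sum())  # ~0.724 in the paper
    ```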

  2. Quantitative description of the spatial arrangement of organelles in a polarised secretory epithelial cell: the salivary gland acinar cell

    PubMed Central

    MAYHEW, TERRY M.

    1999-01-01

    Previous quantitative descriptions of cellular ultrastructure have focused on spatial content (volume, surface area and number of organelles and membrane domains). It is possible to complement such descriptions by also quantifying spatial arrangements. Hitherto, applications of stereological methods for achieving this (notably, estimation of covariance and pair correlation functions) have been confined to organ and tissue levels. This study explores 3-dimensional subcellular arrangements of key organelles within acinar cells of rabbit parotid salivary glands, highly polarised epithelial cells specialised for exocrine secretion of α-amylase. It focuses on spatial arrangements of secretion product stores (zymogen granules), rough endoplasmic reticulum (RER) and mitochondria. Systematic random samples of electron microscopical fields of view from 3 rabbits were analysed using test grids bearing linear dipole probes of different sizes. Unbiased estimates of organelle volume densities were obtained by point counting and estimates of covariance and pair correlation functions by dipole counting. Plots of pair correlation functions against dipole length identified spatial arrangement differences between organelle types. Volumes within RER and mitochondrial compartments were positively correlated with themselves at distances below 4 μm and 2 μm respectively but were essentially randomly arranged at longer distances. In sharp contrast, zymogen granules were not randomly arranged. They were clustered at distances below 6–7 μm and more widely scattered at greater distances. These findings provide quantitative confirmation of the polarised arrangement of zymogen granules within acinar cells and further support for the relative invariance of biological organisation between subjects. PMID:10337960

  3. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait.

    PubMed

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, thermal processing drives a Maillard-type reaction between proline and reducing carbohydrates that produces a roasted, popcorn-like aroma. Hence, for the first time, we included the amino acid proline, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation in rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (the major genetic determinant of fragrance) was mapped in the QTL on chromosome 8 in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous findings have been reported on the simultaneous assessment of the relationship among 2AP, proline, and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  4. Evaluation of Faecalibacterium 16S rDNA genetic markers for accurate identification of swine faecal waste by quantitative PCR.

    PubMed

    Duan, Chuanren; Cui, Yamin; Zhao, Yi; Zhai, Jun; Zhang, Baoyun; Zhang, Kun; Sun, Da; Chen, Hang

    2016-10-01

    A genetic marker within the 16S rRNA gene of Faecalibacterium was identified for use in a quantitative PCR (qPCR) assay to detect swine faecal contamination in water. A total of 146,038 bacterial sequences were obtained using 454 pyrosequencing. By comparative bioinformatics analysis of Faecalibacterium sequences with those of numerous swine and other animal species, swine-specific Faecalibacterium 16S rRNA gene sequences were identified and polymerase chain reaction (PCR) primer sets designed and tested against faecal DNA samples from swine and non-swine sources. Two PCR primer sets, PFB-1 and PFB-2, showed the highest specificity to swine faecal waste and had no cross-reaction with other animal samples. PFB-1 and PFB-2 amplified 16S rRNA gene sequences from 50 samples of swine with positive ratios of 86 and 90%, respectively. We compared swine-specific Faecalibacterium qPCR assays for the purpose of quantifying the newly identified markers. The quantification limits (LOQs) of the PFB-1 and PFB-2 markers in environmental water were 6.5 and 2.9 copies per 100 ml, respectively. Of the swine-associated assays tested, PFB-2 was more sensitive in detecting the swine faecal waste and quantifying the microbial load. Furthermore, the microbial abundance and diversity of the microbiomes of swine and other animal faeces were estimated using operational taxonomic units (OTUs). Species specificity was demonstrated for the microbial populations present in various animal faeces. PMID:27353369

  5. Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology

    SciTech Connect

    Bothe, Thorsten; Li Wansong; Schulte, Michael; von Kopylow, Christoph; Bergmann, Ralf B.; Jueptner, Werner P. O.

    2010-10-20

    Exact geometric calibration of optical devices like projectors or cameras is the basis for utilizing them in quantitative metrological applications. The common state-of-the-art photogrammetric pinhole-imaging-based models with supplemental polynomial corrections fail in the presence of nonsymmetric or high-spatial-frequency distortions and in describing caustics efficiently. These problems are solved by our vision ray calibration (VRC), which is proposed in this paper. The VRC takes an optical mapping system modeled as a black box and directly delivers corresponding vision rays for each mapped pixel. The underlying model, the calibration process, and examples are visualized and reviewed, demonstrating the potential of the VRC.
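
    The core geometric step of a vision-ray-style calibration can be sketched as fitting, for each pixel, a 3-D line through the world points observed at several calibration-target positions. The least-squares fit below is a simplified illustration under that assumption, not the authors' full VRC pipeline.

    ```python
    import numpy as np

    def fit_vision_ray(points):
        """Fit a 3-D line (origin + unit direction) through the world points
        observed for one pixel at several target positions, using the
        principal direction of the centred point cloud."""
        points = np.asarray(points, dtype=float)
        origin = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - origin)
        direction = vt[0] / np.linalg.norm(vt[0])
        return origin, direction

    # One pixel seen intersecting three parallel target planes (toy numbers)
    o, d = fit_vision_ray([[0.0, 0.0, 0.0], [0.1, 0.05, 1.0], [0.2, 0.1, 2.0]])
    ```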

  6. Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements

    PubMed Central

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2013-01-01

    Purpose This study examines accuracy and comparability of four trademarked acoustic analysis software packages (AASP): Praat, Wavesurfer, TF32 and CSL using synthesized and natural vowels. Features of AASP are also described. Methods Synthesized and natural vowels were analyzed using each of AASP’s default settings to secure nine acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results Results indicate that Praat, Wavesurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for adult females and children, with some serious errors. Bandwidth measurements by AASPs were highly inaccurate as compared to manual measurements and published data on formant bandwidths. Conclusions Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, Wavesurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers. PMID:24687465

  7. A quantitative description in three dimensions of oxygen uptake by human red blood cells.

    PubMed Central

    Vandegriff, K D; Olson, J S

    1984-01-01

    Oxygen uptake by human erythrocytes has been examined both experimentally and theoretically in terms of the influence of unstirred solvent layers that are adjacent to the cell surface. A one-dimensional plane sheet model has been compared with more complex spherical and cylindrical coordinate schemes. Although simpler and faster, the plane sheet algorithm is an inadequate representation when unstirred solvent layers are considered. The cylindrical disk model most closely represents the physical geometry of human red cells and is required for a quantitative analysis. In our stopped-flow rapid mixing experiments, the thickness of the unstirred solvent layer expands with time as the residual turbulence decays. This phenomenon has been quantified using a formulation based on previously developed hydrodynamic theories. An initial 10(-4) cm unstirred layer is postulated to occur during mixing and to expand rapidly with time as a t(0.5) (square-root-of-time) function when flow stops. This formula, in combination with the three-dimensional cylinder scheme, has been used to quantitatively describe uptake time courses at various oxygen concentrations, two different external solvent viscosities, and two different internal heme concentrations. PMID:6722268
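
    The postulated layer growth is simple enough to write down directly; the sketch below encodes the stated 10(-4) cm initial thickness and square-root-of-time expansion, with an illustrative rate constant that is not taken from the paper.

    ```python
    import numpy as np

    def unstirred_layer_thickness(t, delta0=1e-4, k=2e-3):
        """Unstirred solvent layer thickness (cm) after flow stops: an
        initial delta0 that grows as sqrt(t); k is an illustrative value."""
        return delta0 + k * np.sqrt(t)

    t = np.linspace(0.0, 0.5, 6)   # seconds after stopped flow
    print(unstirred_layer_thickness(t))
    ```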

  8. Initial description of a quantitative, cross-species (chimpanzee-human) social responsiveness measure

    PubMed Central

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve; Constantino, John; Povinelli, Daniel; Pruett, John R.

    2011-01-01

    Objective Comparative studies of social responsiveness, an ability that is impaired in autistic spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species (human-chimpanzee) social responsiveness measure. Method We translated the Social Responsiveness Scale (SRS), an instrument that quantifies human social responsiveness, into an analogous instrument for chimpanzees. We then retranslated this "Chimp SRS" into a human "Cross-Species SRS" (XSRS). We evaluated three groups of chimpanzees (n=29) with the Chimp SRS and typical and autistic spectrum disorder (ASD) human children (n=20) with the XSRS. Results The Chimp SRS demonstrated strong inter-rater reliability at the three sites (ranges for individual ICCs: .534–.866 and mean ICCs: .851–.970). As has been observed in humans, exploratory principal components analysis of Chimp SRS scores supports a single factor underlying chimpanzee social responsiveness. Human subjects' XSRS scores were fully concordant with their SRS scores (r=.976, p=.001) and distinguished appropriately between typical and ASD subjects. One chimpanzee known for inappropriate social behavior displayed a significantly higher score than all other chimpanzees at its site, demonstrating the scale's ability to detect impaired social responsiveness in chimpanzees. Conclusion Our initial cross-species social responsiveness scale proved reliable and discriminated differences in social responsiveness across (in a relative sense) and within (in a more objectively quantifiable manner) humans and chimpanzees. PMID:21515200

  9. Quantitative description of ion transport via plasma membrane of yeast and small cells

    PubMed Central

    Volkov, Vadim

    2015-01-01

    Modeling of ion transport via the plasma membrane needs identification and quantitative understanding of the involved processes. Brief characterization of the main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and determination of the exact number of molecules of each transporter per typical cell allow us to predict the corresponding ion flows. In this review, a comparison of ion transport in the small yeast cell and several animal cell types is provided. The importance of the cell volume-to-surface ratio is emphasized. The role of the cell wall and lipid rafts is discussed with respect to the required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe the specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on these assumptions. PMID:26113853
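
    Predicting an ion flow from a transporter copy number and a per-molecule turnover rate, as the review describes, reduces to simple arithmetic; the numbers below are illustrative orders of magnitude, not values from the paper.

    ```python
    # Order-of-magnitude flow estimate from transporter counts.
    AVOGADRO = 6.022e23

    copies_per_cell = 1.0e5    # hypothetical pump molecules per yeast cell
    turnover_per_s = 100.0     # ions moved per transporter per second

    ions_per_s = copies_per_cell * turnover_per_s
    flux_mol_per_s = ions_per_s / AVOGADRO
    print(f"{ions_per_s:.2e} ions/s = {flux_mol_per_s:.2e} mol/s per cell")
    ```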

  10. Quantitative Description of Glycan-Receptor Binding of Influenza A Virus H7 Hemagglutinin

    PubMed Central

    Srinivasan, Karunya; Raman, Rahul; Jayaraman, Akila; Viswanathan, Karthik; Sasisekharan, Ram

    2013-01-01

    In the context of recently emerged novel influenza strains through reassortment, avian influenza subtypes such as H5N1, H7N7, H7N2, H7N3 and H9N2 pose a constant threat in terms of their adaptation to the human host. Among these subtypes, it was recently demonstrated that mutations in H5 and H9 hemagglutinin (HA) in the context of lab-generated reassorted viruses conferred aerosol transmissibility in ferrets (a property shared by human adapted viruses). We previously demonstrated that the quantitative binding affinity of HA to α2→6 sialylated glycans (human receptors) is one of the important factors governing human adaptation of HA. Although the H7 subtype has infected humans causing varied clinical outcomes from mild conjunctivitis to severe respiratory illnesses, it is not clear where the HA of these subtypes stand in regard to human adaptation since its binding affinity to glycan receptors has not yet been quantified. In this study, we have quantitatively characterized the glycan receptor-binding specificity of HAs from representative strains of Eurasian (H7N7) and North American (H7N2) lineages that have caused human infection. Furthermore, we have demonstrated for the first time that two specific mutations; Gln226→Leu and Gly228→Ser in glycan receptor-binding site of H7 HA substantially increase its binding affinity to human receptor. Our findings contribute to a framework for monitoring the evolution of H7 HA to be able to adapt to human host. PMID:23437033

  11. A quantitative description of Ndc80 complex linkage to human kinetochores

    PubMed Central

    Suzuki, Aussie; Badger, Benjamin L.; Salmon, Edward D.

    2015-01-01

    The Ndc80 complex, which mediates end-on attachment of spindle microtubules, is linked to centromeric chromatin in human cells by two inner kinetochore proteins, CENP-T and CENP-C. Here to quantify their relative contributions to Ndc80 recruitment, we combine measurements of kinetochore protein copy number with selective protein depletion assays. This approach reveals about 244 Ndc80 complexes per human kinetochore (∼14 per kinetochore microtubule), 215 CENP-C, 72 CENP-T and only 151 Ndc80s as part of the KMN protein network (1:1:1 Knl1, Mis12 and Ndc80 complexes). Each CENP-T molecule recruits ∼2 Ndc80 complexes; one as part of a KMN network. In contrast, ∼40% of CENP-C recruits only a KMN network. Replacing the CENP-C domain that binds KMN with the CENP-T domain that recruits both an Ndc80 complex and KMN network yielded functional kinetochores. These results provide a quantitative picture of the linkages between centromeric chromatin and the microtubule-binding Ndc80 complex at the human kinetochore. PMID:26345214

  12. Quantitative description of fluid flows produced by left-right cilia in zebrafish.

    PubMed

    Fox, Craig; Manning, M Lisa; Amack, Jeffrey D

    2015-01-01

    Motile cilia generate directional flows that move mucus through airways, cerebrospinal fluid through brain ventricles, and oocytes through fallopian tubes. In addition, specialized monocilia beat in a rotational pattern to create asymmetric flows that are involved in establishing the left-right (LR) body axis during embryogenesis. These monocilia, which we refer to as "left-right cilia," produce a leftward flow of extraembryonic fluid in a transient "organ of asymmetry" that directs asymmetric signaling and development of LR asymmetries in the cardiovascular system and gastrointestinal tract. The asymmetric flows are thought to establish a chemical gradient and/or activate mechanosensitive cilia to initiate calcium ion signals and a conserved Nodal (TGFβ) pathway on the left side of the embryo, but the mechanisms underlying this process remain unclear. The zebrafish organ of asymmetry, called Kupffer's vesicle, provides a useful model system for investigating LR cilia and cilia-powered fluid flows. Here, we describe methods to visualize flows in Kupffer's vesicle using fluorescent microspheres and introduce a new and freely available MATLAB particle tracking code to quantitatively describe these flows. Analysis of normal and aberrant flows indicates this approach is useful for characterizing flow properties that impact LR asymmetry and may be more broadly applicable for quantifying other cilia flows. PMID:25837391
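
    The paper distributes a MATLAB particle tracking code; as a language-neutral toy version of the linking step such trackers perform, the Python sketch below pairs microsphere centroids between consecutive frames by greedy nearest-neighbour matching with a displacement cutoff.

    ```python
    import numpy as np

    def link_frames(prev_pts, next_pts, max_disp):
        """Greedily link particle centroids between two frames; returns
        (index in frame t, index in frame t+1) pairs."""
        links, taken = [], set()
        for i, p in enumerate(prev_pts):
            d = np.linalg.norm(next_pts - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_disp and j not in taken:
                links.append((i, j))
                taken.add(j)
        return links

    a = np.array([[0.0, 0.0], [5.0, 5.0]])
    b = np.array([[0.4, 0.1], [5.2, 4.9]])
    print(link_frames(a, b, max_disp=1.0))   # [(0, 0), (1, 1)]
    ```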

  13. Metabolite profiling of soy sauce using gas chromatography with time-of-flight mass spectrometry and analysis of correlation with quantitative descriptive analysis.

    PubMed

    Yamamoto, Shinya; Bamba, Takeshi; Sano, Atsushi; Kodama, Yukako; Imamura, Miho; Obata, Akio; Fukusaki, Eiichiro

    2012-08-01

    Soy sauces, produced from different ingredients and brewing processes, have variations in components and quality. Therefore, it is extremely important to comprehend the relationship between components and the sensory attributes of soy sauces. The current study sought to perform metabolite profiling in order to devise a method of assessing the attributes of soy sauces. Quantitative descriptive analysis (QDA) data for 24 soy sauce samples were obtained from well-selected sensory panelists. Metabolite profiles primarily concerning low-molecular-weight hydrophilic components were based on gas chromatography with time-of-flight mass spectrometry (GC/TOFMS). QDA data for soy sauces were accurately predicted by projection to latent structures (PLS), with metabolite profiles serving as explanatory variables and the QDA data set serving as a response variable. Moreover, analysis of correlation between the matrices of metabolite profiles and QDA data indicated contributing compounds that were highly correlated with the QDA data; in particular, sugars were indicated to be important components of the tastes of soy sauces. This new approach, which combines metabolite profiling with QDA, is applicable to analysis of the sensory attributes of food arising from the complex interaction between its components, and is effective for finding important compounds that contribute to those attributes.
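
    A minimal sketch of the PLS step, assuming scikit-learn; the metabolite-profile and QDA matrices are random stand-ins, and the number of latent components is illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # X: GC/TOFMS metabolite profiles (samples x metabolites);
    # Y: QDA scores (samples x sensory attributes). Placeholder data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(24, 150))
    Y = rng.normal(size=(24, 10))

    pls = PLSRegression(n_components=3)
    pls.fit(X, Y)
    Y_hat = pls.predict(X)   # predicted sensory attributes per sample
    ```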

  14. Use of quantitative shape-activity relationships to model the photoinduced toxicity of polycyclic aromatic hydrocarbons: Electron density shape features accurately predict toxicity

    SciTech Connect

    Mezey, P.G.; Zimpel, Z.; Warburton, P.; Walker, P.D.; Irvine, D.G.; Huang, X.D.; Dixon, D.G.; Greenberg, B.M.

    1998-07-01

    The quantitative shape-activity relationship (QShAR) methodology, based on accurate three-dimensional electron densities and detailed shape analysis methods, has been applied to a Lemna gibba photoinduced toxicity data set of 16 polycyclic aromatic hydrocarbon (PAH) molecules. In the first phase of the studies, a shape fragment QShAR database of PAHs was developed. The results provide a very good match to toxicity based on a combination of the local shape features of single rings in comparison to the central ring of anthracene and a more global shape feature involving larger molecular fragments. The local shape feature appears as a descriptor of the susceptibility of PAHs to photomodification and the global shape feature is probably related to photosensitization activity.

  15. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    PubMed

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or show only moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, not to mention easy-to-use detection that eliminates labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and a lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene with specific multi-methylated sites, a well-studied tumor suppressor gene, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, an LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined by the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, which is satisfactory for DNA methylation assessment. Moreover, the developed cascade is simple, sensitive, and low-cost. Therefore, as a universal platform for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics.

  16. Simultaneous measurement in mass and mass/mass mode for accurate qualitative and quantitative screening analysis of pharmaceuticals in river water.

    PubMed

    Martínez Bueno, M J; Ulaszewska, Maria M; Gomez, M J; Hernando, M D; Fernández-Alba, A R

    2012-09-21

    A new approach for the analysis of pharmaceuticals (target and non-target) in water by LC-QTOF-MS is described in this work. The study has been designed to assess the performance of the simultaneous quantitative screening of target compounds, and the qualitative analysis of non-target analytes, in just one run. The features of accurate mass full scan mass spectrometry together with high MS/MS spectral acquisition rates - by means of information dependent acquisition (IDA) - have demonstrated their potential application in this work. Applying this analytical strategy, an identification procedure is presented based on library searching for compounds which were not included a priori in the analytical method as target compounds, thus allowing their characterization by data processing of accurate mass measurements in MS and MS/MS mode. The non-target compounds identified in river water samples were ketorolac, trazodone, fluconazole, metformin and venlafaxine. Simultaneously, this strategy allowed for the identification of other compounds which were not included in the library by screening the highest intensity peaks detected in the samples and by analysis of the full scan TOF-MS, isotope pattern and MS/MS spectra - the example of loratadine (histaminergic) is described. The group of drugs of abuse selected as target compounds for evaluation included analgesics, opioids and psychostimulants. Satisfactory results regarding sensitivity and linearity of the developed method were obtained. Limits of detection for the selected target compounds were from 0.003 to 0.01 μg/L and 0.01 to 0.5 μg/L, in MS and MS/MS mode, respectively - by direct sample injection of 100 μL.
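
    The non-target identification step ultimately rests on matching measured accurate masses against library values within a ppm tolerance; the sketch below shows that core comparison with a small, illustrative library.

    ```python
    def match_accurate_mass(mz_measured, library, tol_ppm=5.0):
        """Return (compound, ppm error) pairs for library entries whose
        exact mass matches the measured m/z within tol_ppm."""
        hits = []
        for name, mz_ref in library.items():
            ppm = abs(mz_measured - mz_ref) / mz_ref * 1e6
            if ppm <= tol_ppm:
                hits.append((name, ppm))
        return sorted(hits, key=lambda h: h[1])

    lib = {"metformin [M+H]+": 130.1087, "venlafaxine [M+H]+": 278.2115}
    print(match_accurate_mass(278.2121, lib))   # venlafaxine at ~2 ppm
    ```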

  17. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
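
    Once unique, single-copy amplicons are used, Mt/N follows from the difference in qPCR threshold cycles; the sketch below assumes equal, ideal amplification efficiencies and is a generic illustration rather than the authors' published protocol.

    ```python
    def mt_over_n(ct_mito, ct_nuclear, eff=2.0):
        """Mitochondrial-to-nuclear genome ratio from threshold cycles,
        assuming both amplicons amplify with efficiency `eff`."""
        return eff ** (ct_nuclear - ct_mito)

    # A mitochondrial amplicon crossing ~8 cycles earlier than the nuclear
    # one implies roughly 2**8 = 256 mtDNA copies per nuclear genome copy.
    print(mt_over_n(ct_mito=18.0, ct_nuclear=26.0))
    ```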

  18. Accurate, quantitative assays for the hydrolysis of soluble type I, II, and III ³H-acetylated collagens by bacterial and tissue collagenases

    SciTech Connect

    Mallya, S.K.; Mookhtiar, K.A.; Van Wart, H.E.

    1986-11-01

    Accurate and quantitative assays for the hydrolysis of soluble ³H-acetylated rat tendon type I, bovine cartilage type II, and human amnion type III collagens by both bacterial and tissue collagenases have been developed. The assays are carried out at any temperature in the 1-30 °C range in a single reaction tube, and the progress of the reaction is monitored by withdrawing aliquots as a function of time, quenching with 1,10-phenanthroline, and quantitation of the concentration of hydrolysis fragments. The latter is achieved by selective denaturation of these fragments by incubation under conditions described in the previous paper of this issue. The assays give percentages of hydrolysis of all three collagen types by neutrophil collagenase that agree well with the results of gel electrophoresis experiments. The initial rates of hydrolysis of all three collagens are proportional to the concentration of both neutrophil and Clostridial collagenases over a 10-fold range of enzyme concentrations. All three assays can be carried out at collagen concentrations that range from 0.06 to 2 mg/ml and give linear double-reciprocal plots for both tissue and bacterial collagenases that can be used to evaluate the kinetic parameters Km and kcat or Vmax. The assay developed for the hydrolysis of rat type I collagen by neutrophil collagenase is shown to be more sensitive by at least one order of magnitude than comparable assays that use rat type I collagen fibrils or gels as substrate.
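
    The double-reciprocal analysis mentioned above is a straight-line fit of 1/v against 1/[S]; the sketch below recovers Km and Vmax from illustrative initial rates (not data from the paper).

    ```python
    import numpy as np

    # Lineweaver-Burk: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax
    S = np.array([0.06, 0.12, 0.25, 0.5, 1.0, 2.0])   # collagen, mg/ml
    v = np.array([0.9, 1.6, 2.6, 3.6, 4.4, 5.0])      # initial rate (a.u.)

    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    print(f"Km = {Km:.2f} mg/ml, Vmax = {Vmax:.2f} a.u.")
    ```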

  19. Sensory descriptive quantitative analysis of unpasteurized and pasteurized juçara pulp (Euterpe edulis) during long-term storage

    PubMed Central

    da Silva, Paula Porrelli Moreira; Casemiro, Renata Cristina; Zillo, Rafaela Rebessi; de Camargo, Adriano Costa; Prospero, Evanilda Teresinha Perissinotto; Spoto, Marta Helena Fillet

    2014-01-01

    This study evaluated the effect of pasteurization followed by storage under different conditions on the sensory attributes of frozen juçara pulp using quantitative descriptive analysis (QDA). Pasteurization of the packed frozen pulp was performed by its immersion in a stainless steel tank containing water (80°C) for 5 min, followed by storage under refrigerated and frozen conditions. A trained sensory panel evaluated the samples (6°C) on days 1, 15, 30, 45, 60, 75, and 90. Sensory attributes were grouped as follows: appearance (foamy, heterogeneous, purple, brown, oily, and creamy), aroma (sweet and fermented), taste (astringent, bitter, and sweet), and texture (oily and consistent), and compared to a reference material. In general, unpasteurized frozen pulp showed the highest score for foamy appearance, and pasteurized samples showed the highest scores for creamy appearance. Pasteurized samples remained stable regarding brown color development, while unpasteurized counterparts presented an increase; color is an important attribute related to product identity. All attributes related to taste and texture remained constant during storage for all samples. Pasteurization followed by storage under frozen conditions was shown to be the best conservation method, as samples submitted to such processing received the best sensory evaluation, described as foamy, slightly heterogeneous, slightly bitter, and slightly astringent. PMID:25473489

  20. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    PubMed Central

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires the testing for hot spot mutations of the molecular effectors downstream the membrane-bound tyrosine kinases since their wild type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele specific primers and LNA-modified beacon probes to increase sensitivity, specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases and all had a mutated/wild type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific with greater accuracy, positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective and can easily be adapted to detect hot spot mutations in other oncogenes. PMID:22558339

  1. Mitochondrial DNA as a non-invasive biomarker: accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias.

    PubMed

    Malik, Afshan N; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a "dilution bias" when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.

  2. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been known for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be the most sensitive and accurate method, capable of detecting and quantifying trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/L or ng/mL, and the calibration curve shows good linearity (r(2) = 0.9974). PMID:23106487

  3. Allele specific locked nucleic acid quantitative PCR (ASLNAqPCR): an accurate and cost-effective assay to diagnose and quantify KRAS and BRAF mutation.

    PubMed

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires the testing for hot spot mutations of the molecular effectors downstream the membrane-bound tyrosine kinases since their wild type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele specific primers and LNA-modified beacon probes to increase sensitivity, specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases and all had a mutated/wild type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific with greater accuracy, positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective and can easily be adapted to detect hot spot mutations in other oncogenes.

  4. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    SciTech Connect

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-01-01

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate for tying to production data, as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.

  6. A mathematical recursive model for accurate description of the phase behavior in the near-critical region by Generalized van der Waals Equation

    NASA Astrophysics Data System (ADS)

    Kim, Jibeom; Jeon, Joonhyeon

    2015-01-01

    Recently, related studies on the Equation Of State (EOS) have reported that the generalized van der Waals (GvdW) equation gives poor representations in the near-critical region for non-polar and non-spherical molecules. Hence, there still remains the problem of choosing GvdW parameters that minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive GvdW model (rGvdW) for an accurate representation of pure fluids in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied to two groups of pure molecules: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. This approach to constructing the EOS gives additional insight into the physical significance of accurately predicting pressure in the near-critical region.
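
    For orientation, the classical van der Waals EOS that GvdW-type models build on is straightforward to evaluate; the sketch below uses approximate literature constants for methane and is not the paper's rGvdW model.

    ```python
    def vdw_pressure(T, Vm, a, b, R=8.314):
        """Pressure (Pa) from the classical van der Waals equation of state:
        P = R*T/(Vm - b) - a/Vm**2, with Vm the molar volume (m^3/mol)."""
        return R * T / (Vm - b) - a / Vm**2

    # Approximate van der Waals constants for methane
    a, b = 0.2283, 4.278e-5   # Pa m^6 mol^-2, m^3 mol^-1
    print(vdw_pressure(T=190.0, Vm=1.0e-4, a=a, b=b))   # near-critical state
    ```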

  7. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
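
    A minimal sketch of derivative-free minimization with Nelder-Mead, assuming SciPy; the smooth toy objective stands in for the variational expression that fixes the core parameter U and the two other modal-field parameters.

    ```python
    from scipy.optimize import minimize

    def toy_objective(params):
        """Placeholder for the variational expression optimized in the paper;
        any smooth (or noisy) scalar function of the three parameters works."""
        u, p, q = params
        return (u - 2.0) ** 2 + (p - 0.5) ** 2 + (q + 1.0) ** 2

    # Nelder-Mead needs no derivatives, which suits an uncertain, noisy, or
    # discontinuous objective such as the optimized core parameter U.
    res = minimize(toy_objective, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
    print(res.x)   # -> approximately [2.0, 0.5, -1.0]
    ```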

  8. A name for the 'blueberry tetra', an aquarium trade popular species of Hyphessobrycon Durbin (Characiformes, Characidae), with comments on fish species descriptions lacking accurate type locality.

    PubMed

    Marinho, M M F; Dagosta, F C P; Camelier, P; Oyakawa, O T

    2016-07-01

    A new species of Hyphessobrycon is described from a tributary of the upper Rio Tapajós, Amazon basin, Mato Grosso, Brazil. Its exuberant colour in life, with blue to purple body and red fins, is appreciated in the aquarium trade. Characters to diagnose the new species from all congeners are the presence of a single humeral blotch, absence of a distinct caudal-peduncle blotch, absence of a well-defined dark mid-lateral stripe on body, the presence of 16-18 branched anal-fin rays, nine branched dorsal-fin rays and six branched pelvic-fin rays. A brief comment on fish species descriptions solely based on aquarium material and its consequence for conservation policies is provided.

  10. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy in a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely, the GA-SVR. A notable property of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across repeated runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models were ranked as follows in ascending order: the mathematical regression model < ANN < GA-SVR. The stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
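
    A compact sketch of the GA-SVR idea, assuming scikit-learn: a small genetic algorithm searches SVR hyperparameters by cross-validated R². The dataset, GA settings, and search ranges are all illustrative, not those of the paper.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(42)

    # Toy flow-stress-like data: inputs (T/K, log10 strain rate, strain).
    X = rng.uniform([1023, -2, 0.05], [1323, 1, 0.9], size=(200, 3))
    y = (500 * np.exp(-0.002 * (X[:, 0] - 1023)) * (1 + 0.1 * X[:, 1])
         * X[:, 2] ** 0.2 + rng.normal(0, 5, 200))

    def fitness(genome):
        C, gamma = 10.0 ** genome   # genome holds log10(C), log10(gamma)
        return cross_val_score(SVR(C=C, gamma=gamma), X, y,
                               cv=3, scoring="r2").mean()

    # Minimal GA: truncation selection, blend crossover, Gaussian mutation.
    pop = rng.uniform([-1, -3], [3, 1], size=(20, 2))
    for generation in range(15):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-10:]]       # keep the best half
        children = []
        for _ in range(10):
            pa, pb = parents[rng.integers(10, size=2)]
            children.append(0.5 * (pa + pb) + rng.normal(0, 0.2, 2))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(g) for g in pop])]
    print("log10(C), log10(gamma) =", best)
    ```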

  11. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-07-01

    Hot compression tests of Ti-6Al-4V alloy in a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted by a servo-hydraulic and computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR) which is a machine learning method was combined with genetic algorithm (GA) for characterizing the flow behaviors, namely, the GA-SVR. The prominent character of GA-SVR is that it with identical training parameters will keep training accuracy and prediction accuracy at a stable level in different attempts for a certain dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were detailedly compared. Comparison results show that the learning ability of the GA-SVR is stronger than the mathematical regression model. The generalization abilities and modeling efficiencies of these models were shown as follows in ascending order: the mathematical regression model < ANN < GA-SVR. The stress-strain data outside experimental conditions were predicted by the well-trained GA-SVR, which improved simulation accuracy of the load-stroke curve and can further improve the related research fields where stress-strain data play important roles, such as speculating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
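
    As a concrete illustration of the GA-SVR scheme, the sketch below tunes the hyperparameters (C, gamma, epsilon) of a scikit-learn SVR with a toy evolutionary loop (selection plus mutation only) scored by cross-validated R². This is a minimal sketch, assuming scikit-learn; the flow-stress data, parameter bounds and GA settings are illustrative stand-ins, not those of the paper.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical flow-stress data: columns = (temperature K, log10 strain
    # rate, strain); target = flow stress (MPa). Replace with real Gleeble data.
    X = rng.uniform([1023.0, -2.0, 0.05], [1323.0, 1.0, 0.9], size=(200, 3))
    y = 500 - 0.3 * X[:, 0] + 40 * X[:, 1] + 80 * X[:, 2] + rng.normal(0, 5, 200)

    BOUNDS = np.array([[1e0, 1e4], [1e-3, 1e1], [1e-3, 1e0]])  # C, gamma, epsilon

    def fitness(ind):
        """Cross-validated R^2 of an SVR with the individual's parameters."""
        return cross_val_score(SVR(C=ind[0], gamma=ind[1], epsilon=ind[2]),
                               X, y, cv=5, scoring="r2").mean()

    def ga_svr(pop_size=20, generations=10):
        pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
            kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
            kids = kids * rng.normal(1.0, 0.1, kids.shape)            # mutation
            pop = np.clip(np.vstack([parents, kids]), BOUNDS[:, 0], BOUNDS[:, 1])
        return pop[np.argmax([fitness(ind) for ind in pop])]

    print("best (C, gamma, epsilon):", ga_svr())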

  12. A settling curve modeling method for quantitative description of the dispersion stability of carbon nanotubes in aquatic environments.

    PubMed

    Zhou, Lixia; Zhu, Dunxue; Zhang, Shujuan; Pan, Bingcai

    2015-03-01

    Understanding the aggregation and deposition behavior of carbon nanotubes (CNTs) is of great significance in terms of their fate and transport in the environment. Attachment efficiency is a widely used index for well-dispersed CNT solutions. However, in natural waters, CNTs are usually heterogeneous in particle size, and the attachment efficiency method is not applicable to such systems. Describing the dispersion stability of CNTs in natural aquatic systems is still a challenge. In this work, a settling curve modeling (SCM) method was developed for the description of the aggregation and deposition behavior of CNTs in aqueous solutions. The effects of water chemistry (natural organic matter, pH, and ionic strength) on the aggregation and deposition behavior of pristine and surface-functionalized multi-walled carbon nanotubes (MWCNTs) were systematically studied to evaluate the reliability of the SCM method. The results showed that, as compared to particle size and optical density, the centrifugal sedimentation rate constant (ks) from the settling curve profile is a practical, useful and reliable index for the description of heterogeneous CNT suspensions. The SCM method was successfully applied to MWCNTs in three natural waters. The constituents in water, especially organic matter, determine the dispersion stability of MWCNTs in natural water bodies.
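
    A minimal sketch of how a sedimentation rate constant ks might be extracted from a settling curve, assuming first-order decay of optical density, OD(t) = OD0·exp(-ks·t), and SciPy's curve_fit. The functional form and readings are illustrative, since the abstract does not give the SCM model's exact form.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)     # settling time, min
    od = np.array([1.00, 0.82, 0.68, 0.47, 0.33, 0.21, 0.14])   # optical density

    def settling(t, od0, ks):
        # first-order settling: OD(t) = OD0 * exp(-ks * t)
        return od0 * np.exp(-ks * t)

    (od0, ks), _ = curve_fit(settling, t, od, p0=(1.0, 0.01))
    print(f"ks = {ks:.4f} 1/min")   # larger ks -> faster settling, less stable dispersion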

  13. Quantitative description of the properties of extended defects in silicon by means of electron- and laser-beam-induced currents

    SciTech Connect

    Shabelnikova, Ya. L. Yakimov, E. B.; Nikolaev, D. P.; Chukalina, M. V.

    2015-06-15

    A solar cell on a wafer of multicrystalline silicon containing grain boundaries was studied by the induced-current method. The sample was scanned by an electron beam and by a laser beam at two wavelengths (980 and 635 nm). The recorded induced-current maps were aligned by means of specially developed code, which enabled analysis of the same part of the grain boundary across the three types of measurements. Optimization of the residual between simulated induced-current profiles and those obtained experimentally yielded quantitative estimates of the characteristics of the sample and its defects: the diffusion length of minority carriers and the recombination velocity at the grain boundary.
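
    As a hedged illustration of the profile-fitting step, the sketch below fits a simple exponential grain-boundary contrast profile to a synthetic induced-current line scan to recover a decay length L. The single-exponential form is a common approximation, not the authors' full simulation; all numbers are placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    x = np.linspace(-30, 30, 61)                    # distance from boundary, um
    i_meas = 1.0 - 0.4 * np.exp(-np.abs(x) / 8.0) + rng.normal(0, 0.01, x.size)

    def profile(x, i_bulk, depth, L):
        # exponential grain-boundary contrast dip with decay length L
        return i_bulk - depth * np.exp(-np.abs(x) / L)

    (i_bulk, depth, L), _ = curve_fit(profile, x, i_meas, p0=(1.0, 0.5, 5.0))
    print(f"estimated diffusion length L = {L:.1f} um")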

  14. A quantitative description of the epididymis and its microvasculature: an age-related study in the rat.

    PubMed Central

    Markey, C M; Meyer, G T

    1992-01-01

    The morphology of the epididymal duct and, in particular, the epididymal microvasculature was examined at the light microscope level in young sexually mature rats (3-5 months) and aged rats (18 months) to investigate the structural changes that may occur within the organ as a result of ageing, and which may predispose the organ to pathological changes. Quantitative data on the microvascular network of the epididymis (percentage of capillaries in the interstitial region, average area and surface density of the capillary lumen) were collected in 4 regions of the epididymis: the initial segment, caput, corpus and cauda. Epithelial cell height, epididymal lumen diameter, number of smooth muscle cells and percentage of smooth muscle surrounding the duct were also assessed within the same 4 regions. The data for both young and aged groups revealed a trend of decreasing capillary size (by 23%) from the initial segment of the epididymis to the cauda. Further, the percentage of capillaries within the interstitial region of the epididymis decreases dramatically (52%) in the same direction. The possible contribution of lymphatic capillaries to the data is discussed. The data revealed that none of the parameters assessed changed significantly up to 18 months of age. The quantitative data on the microvascular morphology of the epididymis presented in this study provide the basis for subsequent studies directed at the blood flow dynamics of the organ. PMID:1506280

  15. Evaluation of texture parameters for the quantitative description of multimodal nonlinear optical images from atherosclerotic rabbit arteries

    NASA Astrophysics Data System (ADS)

    Mostaço-Guidolin, Leila B.; C-T Ko, Alex; Popescu, Dan P.; Smith, Michael S. D.; Kohlenberg, Elicia K.; Shiomi, Masashi; Major, Arkady; Sowa, Michael G.

    2011-08-01

    The composition and structure of atherosclerotic lesions can be directly related to the risk they pose to the patient. Multimodal nonlinear optical (NLO) microscopy provides a powerful means to visualize the major extracellular components of the plaque that critically determine its structure. Textural features extracted from NLO images were investigated for their utility in providing quantitative descriptors of structural and compositional changes associated with plaque development. Ten texture parameters derived from the image histogram and gray level co-occurrence matrix were examined that highlight specific structural and compositional motifs that distinguish early and late stage plaques. Tonal-texture parameters could be linked to key histological features that characterize vulnerable plaque: the thickness and density of the fibrous cap, size of the atheroma, and the level of inflammation indicated through lipid deposition. Tonal and texture parameters from NLO images provide objective metrics that correspond to structural and biochemical changes that occur within the vessel wall in early and late stage atherosclerosis.
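
    For readers who want to reproduce gray level co-occurrence matrix (GLCM) features of the kind mentioned above, the sketch below computes a few standard GLCM statistics with scikit-image (function names per scikit-image ≥ 0.19); the random image is a placeholder for an NLO micrograph.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    img = (rng.random((128, 128)) * 255).astype(np.uint8)   # placeholder image

    # co-occurrence at a 1-pixel offset, horizontal and vertical directions
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, float(graycoprops(glcm, prop).mean()))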

  16. Quantitation of Compounds in Wine Using (1)H NMR Spectroscopy: Description of the Method and Collaborative Study.

    PubMed

    Godelmann, Rolf; Kost, Christian; Patz, Claus-Dieter; Ristow, Reinhard; Wachter, Helmut

    2016-09-01

    To examine whether NMR analysis is a suitable method for the quantitative determination of wine components, an international collaborative trial was organized to evaluate the method according to the international regulations and guidelines of the German Institute for Standardization/International Organization for Standardization, AOAC INTERNATIONAL, the International Union of Pure and Applied Chemistry, and the International Organization of Vine and Wine. Sugars such as glucose; acids such as malic, acetic, fumaric, and shikimic acids (the latter two as minor components); and sorbic acid, a preservative, were selected for the exemplary quantitative determination of substances in wine. Selection criteria for the examination of sample material included different NMR spectral signal types (singlet and multiplet), as well as the suitability of the proposed substances for manual integration at different levels of challenge (e.g., interference as a result of the necessary suppression of a water signal or the coverage of different typical wine concentration ranges for a selection of major components, minor components, and additives). To show that this method can be universally applied, the NMR measurement and evaluation procedures were deliberately not strictly prescribed. Fifteen international laboratories participated in the collaborative trial and determined six parameters in 10 samples. The values, in particular the reproducibility SD (SR), were compared with the expected Horwitz SD (SH) by forming the quotient SR/SH (i.e., the HorRat value). The resulting HorRat values of most parameters were predominantly between 0.6 and 1.5, and thus within an acceptable range. PMID:27436715
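
    The HorRat check reduces to a one-line computation: the Horwitz equation predicts a reproducibility RSD of PRSD_R = 2·C^(-0.1505) for a mass fraction C, and HorRat is the observed RSD divided by this prediction. A worked sketch, with values that are illustrative rather than from the trial:

    def horrat(observed_rsd_percent: float, mass_fraction: float) -> float:
        """Observed reproducibility RSD divided by the Horwitz prediction."""
        prsd_r = 2.0 * mass_fraction ** (-0.1505)   # Horwitz equation, in %
        return observed_rsd_percent / prsd_r

    # e.g., an analyte at 2 g/L in wine (mass fraction ~0.002), observed RSD_R 7%
    print(f"HorRat = {horrat(7.0, 0.002):.2f}")     # 0.6-1.5 is the accepted band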

  17. Fathers' feelings related to their partners' childbirth and views on their presence during labour and childbirth: A descriptive quantitative study.

    PubMed

    He, Hong-Gu; Vehviläinen-Julkunen, Katri; Qian, Xiao-Fang; Sapountzi-Krepia, Despina; Gong, Yuhua; Wang, Wenru

    2015-05-01

    This study examined Chinese fathers' feelings about their partners' delivery and views on their presence during labour and birth. A questionnaire survey was conducted with 403 fathers whose partners gave birth in one provincial hospital in China. Data were analysed by descriptive statistics, χ(2)-test and content analysis. The results indicated that more than 80% of fathers experienced feelings of pride related to fatherhood and of love towards their partners and newborns. Significant differences in fathers' feelings were found between subgroups with regard to age, education, employment, presence in the delivery room, method of birth and whether preparatory visits had been made to the hospital. The majority who answered an open-ended question on the meaning of fathers' presence in the delivery room held a positive attitude towards fathers' presence at labour and birth, as their presence could empower their partners and provide psychological support. This study indicates fathers' presence at delivery and birth is important and that younger fathers need more support. It also provides evidence for clinical practice and future interventions to improve fathers' psychological health and experiences.

  18. Quantitative description of the lie-to-sit-to-stand-to-walk transfer by a single body-fixed sensor.

    PubMed

    Bagalà, Fabio; Klenk, Jochen; Cappello, Angelo; Chiari, Lorenzo; Becker, Clemens; Lindemann, Ulrich

    2013-07-01

    Sufficient capacity and quality of performance of complex movement patterns during daily activity, such as standing up from a bed, is a prerequisite for independent living and may also be an indicator of fall risk. Until now, the transfer from lying-to-sit-to-stand-to-walk (LSSW) was investigated by functional testing, subjective rating or for activity classification of subtasks. The aim of this study was to use a single body-fixed inertial sensor to describe the complex movement of the LSSW transfer. Fifteen older patients of a geriatric rehabilitation clinic (median age 81 years) and ten young, healthy persons (median age 37 years) were instructed to stand up from bed in a continuous movement and to start walking. Data acquisition was performed using an inertial measurement unit worn on the lower back. Parameters extracted from the sensor outputs were able to correctly classify the subjects, with sensitivity and specificity between 90% and 100%. ICC(3,1) values of the descriptive parameters ranged between 0.85 and 0.95 in the cohort of older patients. The different strategies adopted to transfer from lying to standing up were estimated through an extended Kalman filter. The results obtained in this study suggest the usability of the instrumented LSSW test in clinical settings.
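
    ICC(3,1), the reliability statistic reported above, can be computed directly from a subjects × trials matrix via the Shrout-Fleiss mean-square formula. A minimal sketch with placeholder data:

    import numpy as np

    def icc_3_1(y: np.ndarray) -> float:
        """Shrout-Fleiss ICC(3,1) from an (n subjects x k trials) matrix."""
        n, k = y.shape
        grand = y.mean()
        ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # placeholder test-retest values for one extracted parameter
    trials = np.array([[10.2, 10.5], [12.1, 11.8], [9.4, 9.9], [14.0, 13.6]])
    print(f"ICC(3,1) = {icc_3_1(trials):.2f}")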

  19. Quantitative analysis

    PubMed Central

    Nevin, John A.

    1984-01-01

    Quantitative analysis permits the isolation of invariant relations in the study of behavior. The parameters of these relations can serve as higher-order dependent variables in more extensive analyses. These points are illustrated by reference to quantitative descriptions of performance maintained by concurrent schedules, multiple schedules, and signal-detection procedures. Such quantitative descriptions of empirical data may be derived from mathematical theories, which in turn can lead to novel empirical analyses so long as their terms refer to behavioral and environmental events. Thus, quantitative analysis is an integral aspect of the experimental analysis of behavior. PMID:16812400

  20. Quantitative profiling of bile acids in biofluids and tissues based on accurate mass high resolution LC-FT-MS: compound class targeting in a metabolomics workflow.

    PubMed

    Bobeldijk, Ivana; Hekman, Maarten; de Vries-van der Weij, Jitske; Coulier, Leon; Ramaker, Raymond; Kleemann, Robert; Kooistra, Teake; Rubingh, Carina; Freidig, Andreas; Verheij, Elwin

    2008-08-15

    We report a sensitive, generic method for quantitative profiling of bile acids and other endogenous metabolites in small quantities of various biological fluids and tissues. The method is based on straightforward sample preparation, separation by reversed-phase high-performance liquid chromatography-mass spectrometry (HPLC-MS) and electrospray ionisation in the negative ionisation mode (ESI-). Detection is performed in full scan using the linear ion trap Fourier transform mass spectrometer (LTQ-FTMS), generating data for many (endogenous) metabolites, not only bile acids. A validation of the method in urine, plasma and liver was performed for 17 bile acids including their taurine, sulfate and glycine conjugates. The method is linear in the 0.01-1 microM range. The accuracy in human plasma ranges from 74 to 113%, in human urine 77 to 104% and in mouse liver 79 to 140%. The precision ranges from 2 to 20% for pooled samples, even in studies with large numbers of samples (n>250). The method was successfully applied to a multi-compartmental APOE*3-Leiden mouse study, the main goal of which was to analyze the effect of increasing dietary cholesterol concentrations on hepatic cholesterol homeostasis and bile acid synthesis. Serum and liver samples from different treatment groups were profiled with the new method. Statistically significant differences between the diet groups were observed regarding total as well as individual bile acid concentrations.

  1. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA of all samples were extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
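
    The day-0-versus-day-n comparison described above is a paired-sample t-test; a minimal sketch with SciPy, using illustrative DNA-load values rather than the study's data:

    import numpy as np
    from scipy.stats import ttest_rel

    day0  = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 5.7])   # log10 DNA load, baseline
    day28 = np.array([5.3, 4.7, 6.2, 5.6, 5.0, 5.8])   # same swabs after storage

    t, p = ttest_rel(day0, day28)
    print(f"t = {t:.2f}, P = {p:.3f}")                  # P > 0.05: loads stable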

  4. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by its sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance to enable genetic engineering approaches aiming to improve strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress conditions, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and under osmotic stresses, and it provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts.
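
    Of the stability tools named above, geNorm's M statistic is simple to re-implement: for each candidate gene, M is the mean standard deviation of pairwise log2 expression ratios against every other candidate, with lower M indicating higher stability. A simplified sketch, not the published geNorm code, on an illustrative expression matrix:

    import numpy as np

    def genorm_m(expr: np.ndarray) -> np.ndarray:
        """geNorm-style M: mean SD of pairwise log2 ratios per gene (low = stable)."""
        log2e = np.log2(expr)
        n_genes = expr.shape[1]
        m = np.empty(n_genes)
        for j in range(n_genes):
            m[j] = np.mean([np.std(log2e[:, j] - log2e[:, k], ddof=1)
                            for k in range(n_genes) if k != j])
        return m

    rng = np.random.default_rng(0)
    expr = rng.lognormal(2.0, 0.3, size=(16, 5))   # 16 samples x 5 candidate genes
    print("M values:", np.round(genorm_m(expr), 3))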

  6. Effect of preservative addition on sensory and dynamic profile of Lucanian dry-sausages as assessed by quantitative descriptive analysis and temporal dominance of sensations.

    PubMed

    Braghieri, Ada; Piazzolla, Nicoletta; Galgano, Fernanda; Condelli, Nicola; De Rosa, Giuseppe; Napolitano, Fabio

    2016-12-01

    The quantitative descriptive analysis (QDA) was combined with temporal dominance of sensations (TDS) to assess the sensory properties of Lucanian dry sausages manufactured either with added nitrate, nitrite and L-ascorbic acid (NS) or without (NNS). Both QDA and TDS differentiated the two groups of sausages. NNS products were perceived with higher intensity of hardness (P<0.05) and tended to be perceived with higher intensities of flavor (P<0.10), pepper (P<0.20), and oiliness (P<0.20), while being lower in chewiness (P<0.20). TDS showed that in all the sausages hardness was the first dominant attribute; then, in NNS products flavor remained dominant until the end of tasting, whereas in NS products oiliness prevailed. In conclusion, TDS showed that the perception of some textural parameters, such as oiliness, during mastication was more dominant in NS products, whereas with conventional QDA this attribute appeared higher in sausages manufactured without preservatives. Therefore, TDS provided additional information for the description and differentiation of Lucanian sausages. PMID:27486959

  7. Qualitative and quantitative descriptions of temperature: a study of the terminology used by local television weather forecasters to describe thermal sensation.

    PubMed

    Brunskill, Jeffrey C

    2010-03-01

    This paper presents a study of the relationship between quantitative and qualitative descriptions of temperature. Online weather forecast narratives produced by local television forecasters were collected from affiliates in 23 cities throughout the northeastern, central and southern portions of the United States from August 2007 to July 2008. The narratives were collected to study the terminology and reference frames that local forecasters use to describe predicted temperatures for the following day. The main objectives were to explore the adjectives used to describe thermal conditions and the impact that geographical and seasonal variations in thermal conditions have on these descriptions. The results of this empirical study offer some insights into the structure of weather narratives and suggest that spatiotemporal variations in the weather impact how forecasters describe the temperature to their local audiences. In a broader sense, this investigation builds upon research in biometeorology, urban planning and linguistics that has explored the physiological and psychological factors that influence subjective assessments of thermal sensation and comfort. The results of this study provide a basis to reason about how thermal comfort is conveyed in meteorological communications and how experiential knowledge derived from daily observations of the weather influences how we think about and discuss the weather. PMID:19876657

  8. A gel-free MS-based quantitative proteomic approach accurately measures cytochrome P450 protein concentrations in human liver microsomes.

    PubMed

    Wang, Michael Zhuo; Wu, Judy Qiju; Dennison, Jennifer B; Bridges, Arlene S; Hall, Stephen D; Kornbluth, Sally; Tidwell, Richard R; Smith, Philip C; Voyksner, Robert D; Paine, Mary F; Hall, James Edwin

    2008-10-01

    The human cytochrome P450 (P450) superfamily consists of membrane-bound proteins that metabolize a myriad of xenobiotics and endogenous compounds. Quantification of P450 expression in various tissues under normal and induced conditions has an important role in drug safety and efficacy. Conventional immunoquantification methods have poor dynamic range, low throughput, and a limited number of specific antibodies. Recent advances in MS-based quantitative proteomics enable absolute protein quantification in a complex biological mixture. We have developed a gel-free MS-based protein quantification strategy to quantify CYP3A enzymes in human liver microsomes (HLM). Recombinant protein-derived proteotypic peptides and synthetic stable isotope-labeled proteotypic peptides were used as calibration standards and internal standards, respectively. The lower limit of quantification was approximately 20 fmol P450. In two separate panels of HLM examined (n = 11 and n = 22), CYP3A, CYP3A4 and CYP3A5 concentrations were determined reproducibly and correlated well with immunoquantified levels (r² ≥ 0.87) and marker activities (r² ≥ 0.88), including testosterone 6beta-hydroxylation (CYP3A), midazolam 1'-hydroxylation (CYP3A), itraconazole 6-hydroxylation (CYP3A4) and CYP3A5-mediated vincristine M1 formation (CYP3A5). Taken together, our MS-based method provides a specific, sensitive and reliable means of P450 protein quantification and should facilitate P450 characterization during drug development, especially when specific substrates and/or antibodies are unavailable.
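
    The calibration logic of such stable-isotope-dilution assays can be sketched in a few lines: peak-area ratios of a proteotypic peptide to its labeled internal standard are mapped to concentration through a calibration line. All values below are illustrative placeholders, not the assay's actual calibrators:

    import numpy as np

    cal_conc = np.array([20, 50, 100, 200, 500], dtype=float)  # fmol on column
    cal_ratio = np.array([0.11, 0.26, 0.52, 1.05, 2.60])       # analyte/IS areas

    slope, intercept = np.polyfit(cal_ratio, cal_conc, 1)      # calibration line

    sample_ratio = 0.74                                        # unknown HLM sample
    print(f"estimated content = {slope * sample_ratio + intercept:.0f} fmol")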

  9. The behavioral satiety sequence in pigeons (Columba livia). Description and development of a method for quantitative analysis.

    PubMed

    Spudeit, William Anderson; Sulzbach, Natalia Saretta; Bittencourt, Myla de A; Duarte, Anita Maurício Camillo; Liang, Hua; Lino-de-Oliveira, Cilene; Marino-Neto, José

    2013-10-01

    resembled that observed in rodents and primates. This pattern can be quantitatively described and compared using different suitable and coordinated behavioral measures, enabling further studies on the comparative and evolutionary aspects of the mechanisms that shape the post-consummatory behavioral flux in amniotes.

  10. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested most frequently used genes in the literature such as β-Tubulin, Histone H3, Actin, Elongation factor-1α, Glyceraldehyde-3-phosphate dehydrogenase, together with newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using geNorm and NormFinder software packages or by ΔCt method. geNorm analysis indicated three best performing genes as being sufficient for reliable normalization of RT-qPCR data. Suitable reference genes were different among sample groups, suggesting the importance of validation of gene expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting usefulness of these programs based on different algorithms. ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin; though the overall results were in good correlation with geNorm or NormFinder. Gene expression of two cold-inducible genes PpCBF2 and PpCBF4 were quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities were different between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggested that using the geometric mean value of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out

  11. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; Zeuch, M. A.; Pollock, C. J.; Jacques, A. D.

    2015-01-01

    system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.

  12. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response in a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc is simpler, more rapid and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
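
    The quantification step rests on a simple relation: the mass eluting in a fraction is the integrated RI response divided by the product of the detector constant and the universal dn/dc. A minimal sketch, with the detector constant and signal as hypothetical placeholders:

    def mass_from_rid(ri_peak_area: float, k_rid: float, dn_dc: float) -> float:
        """Eluted mass = integrated RI signal / (detector constant * dn/dc)."""
        return ri_peak_area / (k_rid * dn_dc)

    # hypothetical values: RI peak area and instrument constant in consistent units
    print(f"fraction mass = {mass_from_rid(0.042, 0.2, 0.146):.2f} mg")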

  14. TopCAT and PySESA: Open-source software tools for point cloud decimation, roughness analyses, and quantitative description of terrestrial surfaces

    NASA Astrophysics Data System (ADS)

    Hensleigh, J.; Buscombe, D.; Wheaton, J. M.; Brasington, J.; Welcker, C. W.; Anderson, K.

    2015-12-01

    The increasing use of high-resolution topography (HRT) constructed from point clouds obtained from technology such as LiDAR, SoNAR, SAR, SfM and a variety of range-imaging techniques has created a demand for custom analytical tools and software for point cloud decimation (data thinning and gridding) and spatially explicit statistical analysis of terrestrial surfaces. We will present a number of analytical and computational tools designed to quantify surface roughness and texture directly from point clouds in a variety of ways (using spatial- and frequency-domain statistics). TopCAT (Topographic Point Cloud Analysis Toolkit; Brasington et al., 2012) and PySESA (Python program for Spatially Explicit Spectral Analysis) both work by applying a small moving window to (x,y,z) data to calculate a suite of (spatial and spectral domain) statistics, which are then spatially referenced on a regular (x,y) grid at a user-defined resolution. Collectively, these tools facilitate quantitative description of surfaces and may allow, for example, fully automated texture characterization and segmentation, roughness and grain size calculation, and feature detection and classification, on very large point clouds with great computational efficiency. Using tools such as these, it may be possible to detect geomorphic change in surfaces which have undergone minimal elevation difference, for example deflation surfaces which have coarsened but undergone no net elevation change, or surfaces which have eroded and accreted, leaving behind a different textural surface expression than before. The functionalities of the two toolboxes are illustrated with example high-resolution bathymetric point cloud data collected with multibeam echosounder, and topographic data collected with LiDAR.
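
    A much-simplified sketch of the moving-window idea behind these toolkits: bin (x, y, z) points onto a regular grid and report the standard deviation of elevation within each cell as a local roughness measure. This re-implementation is illustrative, not TopCAT or PySESA code:

    import numpy as np

    def gridded_roughness(xyz: np.ndarray, cell: float) -> dict:
        """Std of z per (x, y) grid cell, a simple local roughness measure."""
        ix = np.floor(xyz[:, 0] / cell).astype(int)
        iy = np.floor(xyz[:, 1] / cell).astype(int)
        rough = {}
        for key in set(zip(ix, iy)):
            z = xyz[(ix == key[0]) & (iy == key[1]), 2]
            if z.size > 3:                    # skip sparsely populated cells
                rough[key] = z.std(ddof=1)    # deviation about the cell mean
        return rough

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 10, size=(5000, 3))  # placeholder (x, y, z) point cloud
    print(len(gridded_roughness(pts, cell=1.0)), "cells with a roughness value")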

  15. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  16. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Clouds Cores: Towards a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2004-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process.

  17. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Clouds Core: Toward a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2005-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process. Progress toward these goals during the second year of this grant is discussed.

  18. Development of a mechanism and an accurate and simple mathematical model for the description of drug release: Application to a relevant example of acetazolamide-controlled release from a bio-inspired elastin-based hydrogel.

    PubMed

    Fernández-Colino, A; Bermudez, J M; Arias, F J; Quinteros, D; Gonzo, E

    2016-04-01

    Transversality between mathematical modeling, pharmacology, and materials science is essential in order to achieve controlled-release systems with advanced properties. In this regard, the area of biomaterials provides a platform for the development of depots that are able to achieve controlled release of a drug, whereas pharmacology strives to find new therapeutic molecules and mathematical models have a connecting function, providing a rational understanding by modeling the parameters that influence the release observed. Herein we present a mechanism which, based on reasonable assumptions, explains the experimental data obtained very well. In addition, we have developed a simple and accurate “lumped” kinetics model to correctly fit the experimentally observed drug-release behavior. This lumped model provides simple analytic solutions for the mass and rate of drug release as a function of time, without limitations on time or mass of drug released, which represents an important step forward in the area of in vitro drug delivery when compared to the current state of the art in mathematical modeling. As an example, we applied the mechanism and model to the release data for acetazolamide from a recombinant polymer. Both materials were selected because of a need to develop a suitable ophthalmic formulation for the treatment of glaucoma. The in vitro release model proposed herein provides a valuable predictive tool for ensuring product performance and batch-to-batch reproducibility, thus paving the way for the development of further pharmaceutical devices. PMID:26838852
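
    The abstract does not give the lumped model's closed form, so as a stand-in the sketch below fits a simple first-order lumped expression, M(t) = M∞(1 - e^(-kt)), to illustrative cumulative-release data, from which both the mass released and the release rate follow analytically:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)       # time, h
    m = np.array([0, 12, 22, 38, 58, 70, 88], dtype=float)   # cumulative release, %

    def lumped(t, m_inf, k):
        # first-order lumped release: M(t) = M_inf * (1 - exp(-k t))
        return m_inf * (1.0 - np.exp(-k * t))

    (m_inf, k), _ = curve_fit(lumped, t, m, p0=(100.0, 0.1))
    print(f"M_inf = {m_inf:.1f}%, k = {k:.3f} 1/h, "
          f"initial rate = {m_inf * k:.1f} %/h")   # dM/dt at t = 0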

  20. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    SciTech Connect

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~ a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
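
    Under the usual Poisson assumption of random lesion placement, the mean lesion frequency follows directly from the number-average single-strand lengths of treated and control DNA; a worked sketch with illustrative lengths:

    # number-average single-strand lengths from gel dispersion analysis (nt)
    treated_ln = 2.0e6    # treated sample
    control_ln = 4.5e6    # untreated control

    phi = 1.0 / treated_ln - 1.0 / control_ln   # mean lesions per nucleotide
    print(f"{phi * 5e6:.2f} lesions per 5 Mb")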

  1. Dissociation coefficients of protein adsorption to nanoparticles as quantitative metrics for description of the protein corona: A comparison of experimental techniques and methodological relevance.

    PubMed

    Hühn, Jonas; Fedeli, Chiara; Zhang, Qian; Masood, Atif; Del Pino, Pablo; Khashab, Niveen M; Papini, Emanuele; Parak, Wolfgang J

    2016-06-01

    Protein adsorption to nanoparticles is described as a chemical reaction in which proteins attach to binding sites on the nanoparticle surface. This process is defined by a dissociation coefficient, which describes how many proteins are adsorbed per nanoparticle as a function of the protein concentration. Different techniques to experimentally determine dissociation coefficients of protein adsorption to nanoparticles are reviewed. Results of more than 130 experiments in which dissociation coefficients have been determined are compared. Data show that different methods, nanoparticle systems, and proteins can lead to significantly different dissociation coefficients. However, we observed a clear tendency toward smaller dissociation coefficients as the zeta potential of the nanoparticles shifts from less negative to more positive values. The zeta potential thus is a key parameter influencing protein adsorption to the surface of nanoparticles. Our analysis highlights the importance of characterizing the parameters governing protein-nanoparticle interaction for quantitative evaluation and objective literature comparison. PMID:26748245
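
    Treating adsorption as a binding equilibrium, the proteins bound per nanoparticle follow a Langmuir-type isotherm N(c) = Nmax·c/(KD + c), and fitting it to titration data yields the dissociation coefficient. A minimal sketch with illustrative numbers:

    import numpy as np
    from scipy.optimize import curve_fit

    c = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)  # protein conc, uM
    n = np.array([4, 11, 28, 48, 66, 74, 78], dtype=float)    # bound proteins/NP

    def langmuir(c, n_max, kd):
        # N(c) = N_max * c / (K_D + c)
        return n_max * c / (kd + c)

    (n_max, kd), _ = curve_fit(langmuir, c, n, p0=(80.0, 5.0))
    print(f"N_max = {n_max:.0f} proteins/NP, K_D = {kd:.1f} uM")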

  2. Rigour in quantitative research.

    PubMed

    Claydon, Leica Sarah

    2015-07-22

    This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  3. Habitat suitability for juvenile common sole ( Solea solea, L.) in the Bay of Biscay (France): A quantitative description using indicators based on epibenthic fauna

    NASA Astrophysics Data System (ADS)

    Le Pape, Olivier; Baulier, Loîc; Cloarec, Aurélie; Martin, Jocelyne; Le Loc'h, François; Désaunay, Yves

    2007-02-01

    This study describes the spatial distribution of young-of-the-year common sole based on beam trawl surveys conducted in late summer in the coastal and estuarine parts of the Bay of Biscay (France). Previous studies showed that habitat suitability for juvenile common sole varies according to physical factors, notably bathymetry and sediment structure. Nevertheless, the use of these descriptors alone to model habitat suitability left considerable unexplained variability in juvenile common sole distribution. Hence, the epibenthic macro- and megafauna collected during beam trawl surveys were taken into account to improve models of habitat suitability for these juvenile flatfish. Ecotrophic guilds based on life traits (behaviour, mobility and feeding) were used to develop generic indicators of trawled benthic fauna. These synthetic descriptors were used in generalized linear models of habitat suitability in order to characterize the distribution of juvenile common sole. This approach significantly improved on the description based on physical descriptors alone and demonstrated that young common sole distribution is related to the density of trawled deposit and suspension feeders, as well as carnivorous organisms. These models provide a reliable way to develop indicators of nursery habitat suitability from trawl survey data, with the aim of assessing and monitoring their quality.
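
    A hedged sketch of a habitat-suitability GLM of the kind described: juvenile density modeled from a physical descriptor plus benthic guild densities, with a Poisson family in statsmodels. Variables and data are hypothetical placeholders, not the survey data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.uniform(0, 30, n),   # bathymetry (m)
        rng.uniform(0, 5, n),    # deposit/suspension feeder density (index)
        rng.uniform(0, 5, n),    # carnivore density (index)
    ])
    lam = np.exp(0.5 - 0.05 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2])
    counts = rng.poisson(lam)    # juvenile sole per standardized haul

    glm = sm.GLM(counts, sm.add_constant(X), family=sm.families.Poisson()).fit()
    print(glm.params)            # intercept + one coefficient per descriptor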

  4. Orientation-guided two-scale approach for the segmentation and quantitative description of woven bundles of fibers from three-dimensional tomographic images

    NASA Astrophysics Data System (ADS)

    Chapoullié, Cédric; Da Costa, Jean-Pierre; Cataldi, Michel; Vignoles, Gérard L.; Germain, Christian

    2015-11-01

    This paper proposes a two-scale approach for the description of fibrous materials from tomographic data. It operates at two scales: coarse scale to describe weaving patterns and fine scale to depict fiber layout within yarns. At both scales, the proposed approach starts with the segmentation of yarns and fibers. Then, the fibrous structure (fiber diameters, fiber and yarn orientations, fiber density within yarns) is described. The segmentation algorithms are applied to a chunk of a woven ceramic-matrix composite observed at yarn and fiber scales using tomographic data from the European synchrotron radiation facility. The fiber and yarn segmentation results allow investigation of intrayarn fiber layout. The analysis of intrayarn fiber density and orientations shows the effects of the weaving process on fiber organization, in particular fiber compaction or yarn shearing. These results pave the way toward a deeper analysis of such materials. Indeed, the data collected with the proposed methods are a key starting point for realistic image synthesis. Such images may in turn be used to validate the fiber and yarn segmentation algorithms. Besides, and above all, they will allow material behavior simulation, aiming at the evaluation of the material's strengths and weaknesses inferred from its fibrous architecture.

  5. The targeted proteins in tumor cells treated with the α-lactalbumin-oleic acid complex examined by descriptive and quantitative liquid chromatography-tandem mass spectrometry.

    PubMed

    Fang, B; Zhang, M; Fan, X; Ren, F Z

    2016-08-01

    An α-lactalbumin-oleic acid (α-LA-OA) complex has exhibited selective antitumor activity in animal models and clinical trials. Although apoptosis and autophagy are activated and the functions of several organelles are disrupted in response to α-LA-OA, the detailed antitumor mechanism remains unclear. In this study, we used a novel technique, isobaric tags for relative and absolute quantitation, to analyze the proteome of tumor cells treated with α-LA-OA. We identified 112 differentially expressed proteins: 95 were upregulated, in support of tumor cell metabolism, and 17 were downregulated, representing targets of α-LA-OA. According to the differentially expressed proteins, α-LA-OA exerted its antitumor activity by disrupting cytoskeleton stability and cell motility, and by inhibiting DNA, lipid, and ATP synthesis, leading to cellular stress and activation of programmed cell death. This study provides a systematic evaluation of the antitumor activity of α-LA-OA, identifying its interacting targets and establishing the theoretical basis for the use of α-LA-OA in cancer therapy. PMID:27236751

  6. Quantitative chromatin pattern description in Feulgen-stained nuclei as a diagnostic tool to characterize the oligodendroglial and astroglial components in mixed oligo-astrocytomas.

    PubMed

    Decaestecker, C; Lopes, B S; Gordower, L; Camby, I; Cras, P; Martin, J J; Kiss, R; VandenBerg, S R; Salmon, I

    1997-04-01

    The oligoastrocytoma, as a mixed glioma, represents a nosologic dilemma with respect to precisely defining the oligodendroglial and astroglial phenotypes that constitute the neoplastic cell lineages of these tumors. In this study, cell image analysis with Feulgen-stained nuclei was used to distinguish between oligodendroglial and astrocytic phenotypes in oligodendrogliomas and astrocytomas and then applied to mixed oligoastrocytomas. Quantitative features with respect to chromatin pattern (30 variables) and DNA ploidy (8 variables) were evaluated on Feulgen-stained nuclei in a series of 71 gliomas using computer-assisted microscopy. These included 32 oligodendrogliomas (OLG group: 24 grade II and 8 grade III tumors according to the WHO classification), 32 astrocytomas (AST group: 13 grade II and 19 grade III tumors), and 7 oligoastrocytomas (OLGAST group). Initially, image analysis with multivariate statistical analyses (Discriminant Analysis) could identify each glial tumor group. Highly significant statistical differences were obtained distinguishing the morphonuclear features of oligodendrogliomas from those of astrocytomas, regardless of their histological grade. Of the 7 mixed oligoastrocytomas under study, 5 exhibited DNA ploidy and chromatin pattern characteristics similar to grade II oligodendrogliomas, 1 to grade III oligodendrogliomas, and 1 to grade II astrocytomas. Using multifactorial statistical analyses (Discriminant Analysis combined with Principal Component Analysis), it was possible to quantify the proportion of "typical" glial cell phenotypes that compose grade II and III oligodendrogliomas and grade II and III astrocytomas in each mixed glioma. Cytometric image analysis may be an important adjunct to routine histopathology for the reproducible identification of neoplasms containing a mixture of oligodendroglial and astrocytic phenotypes. PMID:9100670

  7. Drought description

    USGS Publications Warehouse

    Matalas, N.C.

    1991-01-01

    What constitutes a comprehensive description of drought, a description forming a basis for answering why a drought occurred, is outlined. The description entails two aspects that are "naturally" coupled, namely physical and economic, and treats the set of hydrologic measures of droughts in terms of their multivariate distribution, rather than in terms of a collection of the marginal distributions. © 1991 Springer-Verlag.

  8. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
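
    The approach in this abstract amounts to fitting a standard skewed-Gaussian profile to a pure single-component scan and subtracting it from a multi-band scan. The sketch below is not the authors' code; profile data, band positions, and amplitudes are hypothetical placeholders, and SciPy's skew-normal density stands in for whatever profile model the common software would fit.

        # Fit a skewed-Gaussian to a pure single-component scan, superimpose it
        # on a multi-band scan, and attribute the excess intensity to other
        # components, as the abstract describes. All data here are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import skewnorm

        def skewed_gaussian(x, amp, loc, scale, alpha):
            """Skewed-Gaussian band profile (skew-normal density scaled by amp)."""
            return amp * skewnorm.pdf(x, alpha, loc=loc, scale=scale)

        # x: pixel positions along the lane; primer_scan: densitometry of primer alone
        x = np.linspace(0, 100, 500)
        primer_scan = skewed_gaussian(x, 1200.0, 40.0, 3.0, 4.0)  # stand-in data

        # Fit the standard curve to the pure single-component band
        # (amp is the integrated area, so initialize it from the scan's sum)
        p0 = [primer_scan.sum() * (x[1] - x[0]), x[np.argmax(primer_scan)], 3.0, 2.0]
        popt, _ = curve_fit(skewed_gaussian, x, primer_scan, p0=p0)

        # mixed_scan: lane containing primer plus a slower-migrating insertion product
        mixed_scan = primer_scan + skewed_gaussian(x, 400.0, 55.0, 3.5, 4.0)

        # Superimpose the fitted primer curve; excess intensity = other components
        primer_component = skewed_gaussian(x, *popt)
        excess = np.clip(mixed_scan - primer_component, 0.0, None)

        # Relative areas (per-pixel sums) give relative molarity of components
        area_primer, area_excess = primer_component.sum(), excess.sum()
        print(f"primer fraction: {area_primer / (area_primer + area_excess):.2f}")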

  10. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
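
    As a point of reference for the baseline behavior described above, the classic Fritsch-Carlson monotone cubic (PCHIP) is available in SciPy; the higher-order, median-limited variants of this abstract are not. A minimal comparison sketch on synthetic data:

        # Compare a monotone cubic interpolant (PCHIP) against an unconstrained
        # cubic spline on data with strict local extrema; PCHIP avoids
        # overshoot but loses accuracy near the extrema.
        import numpy as np
        from scipy.interpolate import CubicSpline, PchipInterpolator

        x = np.linspace(0.0, 2.0 * np.pi, 9)
        y = np.sin(x)  # data with strict local extrema

        dense = np.linspace(x[0], x[-1], 1000)
        pchip = PchipInterpolator(x, y)  # monotone, ~2nd order near extrema
        spline = CubicSpline(x, y)       # higher order, but may overshoot

        print("max |pchip - sin|: ", np.abs(pchip(dense) - np.sin(dense)).max())
        print("max |spline - sin|:", np.abs(spline(dense) - np.sin(dense)).max())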

  11. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
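
    To make the setting concrete, here is a generic sketch (not the paper's algorithms) of explicit high-order finite differencing for linear advection, the model problem of computational aeroacoustics, using fourth-order central differences in space and classical RK4 in time on a periodic grid:

        # Propagate u_t + c u_x = 0 with 4th-order central differences and RK4.
        import numpy as np

        c, L, n = 1.0, 1.0, 64
        dx = L / n
        x = np.arange(n) * dx
        dt = 0.4 * dx / c  # conservative explicit time step

        def dudx(u):
            """4th-order central difference on a periodic grid."""
            return (np.roll(u, 2) - 8 * np.roll(u, 1)
                    + 8 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx)

        u = np.sin(2 * np.pi * x / L)
        t = 0.0
        while t < 10.0 * L / c:  # propagate for 10 periods
            k1 = -c * dudx(u)
            k2 = -c * dudx(u + 0.5 * dt * k1)
            k3 = -c * dudx(u + 0.5 * dt * k2)
            k4 = -c * dudx(u + dt * k3)
            u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t += dt

        exact = np.sin(2 * np.pi * (x - c * t) / L)
        print("max error after 10 periods:", np.abs(u - exact).max())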

  12. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  13. Anaphoric Descriptions

    ERIC Educational Resources Information Center

    Beller, Charley

    2013-01-01

    The study of definite descriptions has been a central part of research in linguistics and philosophy of language since Russell's seminal work "On Denoting" (Russell 1905). In that work Russell quickly dispatches analyses of denoting expressions with forms like "no man," "some man," "a man," and "every…

  14. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  15. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  16. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
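
    A toy version of such a predictive performance model might look like the following; the cost coefficients are hypothetical, whereas the paper calibrates its models against measurements of the fluids code.

        # Predict per-step time of a partitioned grid computation as the
        # maximum per-processor compute time plus communication/sync costs.
        from typing import List

        def predict_time(cell_counts: List[int], boundary_cells: List[int],
                         t_cell: float = 1.0e-6, t_comm: float = 5.0e-6,
                         t_sync: float = 2.0e-4) -> float:
            compute = max(n * t_cell for n in cell_counts)         # slowest rank
            communicate = max(b * t_comm for b in boundary_cells)  # halo exchange
            return compute + communicate + t_sync

        # Balanced vs. imbalanced partitions of the same 40000-cell grid
        print(predict_time([10000] * 4, [200] * 4))
        print(predict_time([16000, 8000, 8000, 8000], [320, 160, 160, 160]))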

  17. Quantitative social science

    NASA Astrophysics Data System (ADS)

    Weidlich, W.

    1987-03-01

    General concepts for the quantitative description of the dynamics of social processes are introduced. They allow for embedding social science into the conceptual framework of synergetics. Equations of motion for the socioconfiguration are derived on the stochastic and quasideterministic level. As an application the migration of interacting human populations is treated. The solutions of the nonlinear migratory equations include limit cycles and strange attractors. The empirical evaluation of interregional migratory dynamics is exemplified in the case of Germany.

  18. Descriptive thermodynamics

    NASA Astrophysics Data System (ADS)

    Ford, David; Huntsman, Steven

    2006-06-01

    Thermodynamics (in concert with its sister discipline, statistical physics) can be regarded as a data reduction scheme based on partitioning a total system into a subsystem and a bath that weakly interact with each other. Whereas conventionally, the systems investigated require this form of data reduction in order to facilitate prediction, a different problem also occurs, in the context of communication networks, markets, etc. Such “empirically accessible” systems typically overwhelm observers with the sort of information that in the case of (say) a gas is effectively unobtainable. What is required for such complex interacting systems is not prediction (this may be impossible when humans besides the observer are responsible for the interactions) but rather, description as a route to understanding. Still, the need for a thermodynamical data reduction scheme remains. In this paper, we show how an empirical temperature can be computed for finite, empirically accessible systems, and further outline how this construction allows the age-old science of thermodynamics to be fruitfully applied to them.

  19. Phenomenological description of phase inversion.

    PubMed

    Piela, K; Ooms, G; Sengers, J V

    2009-02-01

    We propose an extended Ginzburg-Landau model for a description of the ambivalence region associated with the phenomenon of phase inversion observed in dispersed water-oil flow through a pipe. In analogy to the classical mean-field theory of phase transitions, it is shown that a good quantitative representation of the ambivalence region is obtained by using the injected phase volume fraction and a friction factor as the appropriate physical parameters.

  20. Quantitative electrical imaging in permafrost rock walls

    NASA Astrophysics Data System (ADS)

    Krautblatter, M.; Kemna, A.

    2012-04-01

    Several authors provided indications of the changing stability of permafrost rockwalls in different high-mountain environments. Anticipation of the hazard induced by permafrost rock slope failure requires monitoring of thermal and hydrological regimes inside the rock mass and quantitative geophysical methods could theoretically provide certain information on both. Electrical resistivity tomography (ERT) in frozen rockwalls could become a key method for such investigations since freezing and temperature changes induce significant and recognizable changes in resistivity. Inferring reliable thermal state variables from ERT images, however, requires a quantitative approach involving calibrated temperature-resistivity (T-ρ) relationships as well as an adequate resistance error description in the ERT inversion process. Testing T-ρ relationships from a double-digit number of low-porosity sedimentary, metamorphic and igneous rocks from Alpine and Arctic permafrost rockwalls in the laboratory, we found evidence that exponential T-ρ paths developed by McGinnis et al. (1973) do not describe the resistivity behaviour of hard rocks undergoing freezing or melting correctly, as freezing occurs in confined space. We hypothesize that bilinear functions of unfrozen and frozen T-ρ paths offer a better approximation. Separate linear approximation of unfrozen, supercooled and frozen T-ρ behaviour could help to provide more accurate temperature estimates from the resistivity of permafrost rocks. Utilizing a T-ρ relationship in an imaging framework requires a quantitative ERT approach (Krautblatter et al., 2010), where the correct description of data errors and the right degree of data fitting are most crucial issues. Over-fitting the data (corresponding to underestimating the data error) should be avoided, because this typically leads to artefacts in the images - often mistaken as evidence of high spatial resolution -, as should under-fitting (overestimating the data error), which
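
    The proposed bilinear T-ρ description can be sketched as two linear fits joined at the freezing point, inverted to map imaged resistivity back to temperature. The calibration values below are hypothetical placeholders, not the laboratory data of the study:

        # Fit separate linear T-rho segments for unfrozen and frozen rock,
        # then invert log-resistivity to temperature.
        import numpy as np

        # (temperature degC, log10 resistivity) stand-in calibration data
        T_unfrozen = np.array([4.0, 2.0, 1.0, 0.5])
        r_unfrozen = np.array([3.60, 3.66, 3.70, 3.72])
        T_frozen = np.array([-0.5, -1.0, -2.0, -4.0])
        r_frozen = np.array([3.90, 4.05, 4.35, 4.95])

        m_u, b_u = np.polyfit(T_unfrozen, r_unfrozen, 1)
        m_f, b_f = np.polyfit(T_frozen, r_frozen, 1)

        def temperature_from_resistivity(log_rho):
            """Pick the frozen branch when resistivity exceeds its 0 degC value."""
            m, b = (m_f, b_f) if log_rho >= b_f else (m_u, b_u)
            return (log_rho - b) / m

        print(temperature_from_resistivity(4.35))  # expect about -2 degC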

  1. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  2. Quantitative research.

    PubMed

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  3. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
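
    A toy numeric illustration, with entirely hypothetical figures, of how the two costing approaches can diverge for the same treatment:

        # Aggregate ratio-of-cost-to-treatment costing vs. activity-based
        # costing (ABC) for one treatment.
        total_costs = 1_000_000.0    # annual costs
        total_treatments = 10_000

        # Aggregate method: every treatment carries the same average cost
        aggregate_cost = total_costs / total_treatments   # $100.00

        # ABC method: trace the resources this treatment actually consumes
        abc_cost = (
            45 / 60 * 60.0   # 45 min of nursing time at $60/hr
            + 120.0          # inventory/supplies actually used
            + 35.0           # overhead driven by this activity
        )

        print(f"aggregate: ${aggregate_cost:.2f}  ABC: ${abc_cost:.2f}")
        # A capitation bid based on the $100 average would underprice this case.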

  4. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a well-founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation will hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  5. Quantitative Thinking.

    ERIC Educational Resources Information Center

    DuBridge, Lee A.

    An appeal for more research to determine how to educate children as effectively as possible is made. Mathematics teachers can readily examine the educational problems of today in their classrooms since learning progress in mathematics can easily be measured and evaluated. Since mathematics teachers have learned to think in quantitative terms and…

  6. QUANTITATIVE MORPHOLOGY

    EPA Science Inventory

    Abstract: In toxicology, the role of quantitative assessment of brain morphology can be understood in the context of two types of treatment-related alterations. One type of alteration is specifically associated with treatment and is not observed in control animals. Measurement ...

  7. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  8. Soft Biometrics; Human Identification Using Comparative Descriptions.

    PubMed

    Reid, Daniel A; Nixon, Mark S; Stevenage, Sarah V

    2014-06-01

    Soft biometrics are a new form of biometric identification which use physical or behavioral traits that can be naturally described by humans. Unlike other biometric approaches, this allows identification based solely on verbal descriptions, bridging the semantic gap between biometrics and human description. To permit soft biometric identification the description must be accurate, yet conventional human descriptions comprising absolute labels and estimations are often unreliable. A novel method of obtaining human descriptions is introduced which utilizes comparative categorical labels to describe differences between subjects. This innovative approach has been shown to address many problems associated with absolute categorical labels; most critically, the descriptions contain more objective information and have increased discriminatory capabilities. Relative measurements of the subjects' traits can be inferred from comparative human descriptions using the Elo rating system. The resulting soft biometric signatures have been demonstrated to be robust and allow accurate recognition of subjects. Relative measurements can also be obtained from other forms of human representation. This is demonstrated using a support vector machine to determine relative measurements from gait biometric signatures, allowing retrieval of subjects from video footage by using human comparisons, bridging the semantic gap. PMID:26353282
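
    The Elo-based inference step can be sketched as follows; the subjects and comparative labels here are hypothetical, whereas the published work applies the updates to annotated trait comparisons:

        # Infer relative trait measurements from pairwise comparative labels
        # using Elo rating updates.
        def elo_update(r_a, r_b, outcome, k=32.0):
            """outcome = 1.0 if A is rated above B, 0.0 if below, 0.5 if equal."""
            expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
            delta = k * (outcome - expected_a)
            return r_a + delta, r_b - delta

        ratings = {"A": 1500.0, "B": 1500.0, "C": 1500.0}
        comparisons = [("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 1.0),
                       ("A", "B", 1.0), ("C", "B", 0.0)]

        for a, b, outcome in comparisons:
            ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)

        # Ratings now order subjects by the compared trait: A > B > C
        print(sorted(ratings.items(), key=lambda kv: -kv[1]))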

  10. Towards an accurate specific reaction parameter density functional for water dissociation on Ni(111): RPBE versus PW91.

    PubMed

    Jiang, Bin; Guo, Hua

    2016-08-01

    In search of an accurate description of the dissociative chemisorption of water on the Ni(111) surface, we report a new nine-dimensional potential energy surface (PES) based on a large number of density functional theory points using the RPBE functional. Seven-dimensional quantum dynamical calculations have been carried out on the RPBE PES, followed by site averaging and lattice effect corrections, yielding sticking probabilities that are compared with both the previous theoretical results based on a PW91 PES and experiment. It is shown that the RPBE functional increases the reaction barrier, but has otherwise a minor impact on the PES topography. Better agreement with experimental results is obtained with the new PES, but the agreement is still not quantitative. Possible sources of the remaining discrepancies are discussed.

  11. Quantitative Literacy: Geosciences and Beyond

    NASA Astrophysics Data System (ADS)

    Richardson, R. M.; McCallum, W. G.

    2002-12-01

    Quantitative literacy seems like such a natural for the geosciences, right? The field has gone from its origin as a largely descriptive discipline to one where it is hard to imagine failing to bring a full range of mathematical tools to the solution of geological problems. Although there are many definitions of quantitative literacy, we have proposed one that is analogous to the UNESCO definition of conventional literacy: "A quantitatively literate person is one who, with understanding, can both read and represent quantitative information arising in his or her everyday life." Central to this definition is the concept that a curriculum for quantitative literacy must go beyond the basic ability to "read and write" mathematics and develop conceptual understanding. It is also critical that a curriculum for quantitative literacy be engaged with a context, be it everyday life, humanities, geoscience or other sciences, business, engineering, or technology. Thus, our definition works both within and outside the sciences. What role do geoscience faculty have in helping students become quantitatively literate? Is it our role, or that of the mathematicians? How does quantitative literacy vary between different scientific and engineering fields? Or between science and nonscience fields? We will argue that successful quantitative literacy curricula must be an across-the-curriculum responsibility. We will share examples of how quantitative literacy can be developed within a geoscience curriculum, beginning with introductory classes for nonmajors (using the Mauna Loa CO2 data set) through graduate courses in inverse theory (using singular value decomposition). We will highlight six approaches to across-the-curriculum efforts from national models: collaboration between mathematics and other faculty; gateway testing; intensive instructional support; workshops for nonmathematics faculty; quantitative reasoning requirement; and individual initiative by nonmathematics faculty.

  12. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  13. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  14. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  15. Acquisition of teleological descriptions

    NASA Astrophysics Data System (ADS)

    Franke, David W.

    1992-03-01

    Teleological descriptions capture the purpose of an entity, mechanism, or activity with which they are associated. These descriptions can be used in explanation, diagnosis, and design reuse. We describe a technique for acquiring teleological descriptions expressed in the teleology language TeD. Acquisition occurs during design by observing design modifications and design verification. We demonstrate the acquisition technique in an electronic circuit design.

  16. Writing job descriptions.

    PubMed

    Schaffner, M

    1990-01-01

    The skill of writing job descriptions begins with an understanding of the advantages, as well as the basic elements, of a well written description. The end result should be approved and updated as needed. Having a better understanding of this process makes writing the job description a challenge rather than a chore.

  17. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
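
    The pseudo-free energy change term referred to above takes a simple logarithmic form in the original report (the parameter values quoted here are the commonly cited ones and should be checked against the paper):

        \Delta G_{\mathrm{SHAPE}}(i) = m \ln\left[\mathrm{SHAPE\ reactivity}(i) + 1\right] + b

    with slope m ≈ 2.6 kcal/mol and intercept b ≈ -0.8 kcal/mol; the term is added to the nearest-neighbor stacking energies for each base-paired nucleotide i during free energy minimization.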

  18. Joint Control and the Selection of Stimuli from Their Description

    ERIC Educational Resources Information Center

    Lowenkron, Barry

    2006-01-01

    This research examined the role the two constituents of joint control, the tact and the echoic, play in producing accurate selections of novel stimuli in response to their spoken descriptions. Experiment 1 examined the role of tacts. In response to unfamiliar spoken descriptions, children learned to select from among six successively presented…

  19. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role: evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  20. Quantitative representation and description of intravoxel fiber complexity in HARDI

    NASA Astrophysics Data System (ADS)

    Sun, Chang-yu; Chu, Chun-yu; Liu, Wan-yu; Hsu, Edward W.; Korenberg, Julie R.; Zhu, Yue-min

    2015-11-01

    Diffusion tensor imaging and high angular resolution diffusion imaging are often used to analyze the fiber complexity of tissues. In these imaging techniques, the most commonly calculated metric is anisotropy, such as fractional anisotropy (FA), generalized anisotropy (GA), and generalized fractional anisotropy (GFA). The basic idea underlying these metrics is to compute the deviation from free or spherical diffusion. However, in many cases the question is not really whether the diffusion is spherical. Instead, the main concern is to describe and quantify fiber complexity such as fiber crossing in a voxel. In this context, it would be more direct and effective to compute the deviation from a single fiber bundle instead of a sphere. We propose a new metric, called PEAM (PEAnut Metric), which is based on computing the deviation of orientation diffusion functions (ODFs) from a single fiber bundle ODF represented by a peanut. As an example, the proposed PEAM metric is used to classify intravoxel fiber configurations. The results on simulated data, physical phantom data and real brain data consistently showed that the proposed PEAM provides greater accuracy than FA, GA and GFA and enables parallel and complex fibers to be better distinguished.

  1. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^(-(1+α)) at the high end, where, in general, α is greater than 1 (Pareto's law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, we find the saving propensity distribution which yields a given wealth distribution for all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
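
    The forward model being inverted is the standard kinetic exchange rule with quenched saving propensities λ_i, which the following minimal simulation sketches; the parameters and the uniform λ distribution are illustrative, not the paper's inferred distribution:

        # Agents save a fraction lambda_i of their wealth and trade the rest
        # in random pairwise encounters; wealth is conserved in each trade.
        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_trades = 1000, 500_000

        wealth = np.ones(n_agents)
        lam = rng.uniform(0.0, 1.0, n_agents)  # quenched saving propensities

        for _ in range(n_trades):
            i, j = rng.integers(0, n_agents, size=2)
            if i == j:
                continue
            eps = rng.random()
            pool = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]
            wealth[i] = lam[i] * wealth[i] + eps * pool
            wealth[j] = lam[j] * wealth[j] + (1 - eps) * pool

        # Distributed saving propensities produce a power-law-like rich tail
        print("top 1% share:", np.sort(wealth)[-10:].sum() / wealth.sum())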

  2. Dual Enrollment in a Rural Environment: A Descriptive Quantitative Study

    ERIC Educational Resources Information Center

    Dodge, Mary Beth

    2012-01-01

    Dual enrollment is a federally funded program that offers high school students the opportunity to earn both high school and postsecondary credits for the same course. While the phenomenon of concurrent enrollment in postsecondary and college educational programs is not new, political support and public funding has drawn focus to the policies of…

  3. QUANTITATIVE SOIL DESCRIPTIONS FOR ECOREGIONS OF THE UNITED STATES

    EPA Science Inventory

    Researchers have defined ecological regions of the United States based on patterns in the coincidence of terrestrial, aquatic, abiotic and biotic characteristics that are associated with spatial differences in ecosystems. Ecoregions potentially facilitate regional research, monit...

  4. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Kim, Michelle Yoonk Yung (Inventor); Li, Chung-Sheng (Inventor); Mohan, Rakesh (Inventor); Smith, John Richard (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed or aggregated multimedia objects.

  5. Quantitative spectroscopy of hot stars

    NASA Technical Reports Server (NTRS)

    Kudritzki, R. P.; Hummer, D. G.

    1990-01-01

    A review of the quantitative spectroscopy (QS) of hot stars is presented, with particular attention given to the study of photospheres, optically thin winds, unified model atmospheres, and stars with optically thick winds. It is concluded that the results presented here demonstrate the reliability of QS as a unique source of accurate values of the global parameters (effective temperature, surface gravity, and elemental abundances) of hot stars.

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  11. Two pre-Willan descriptions of psoriasis.

    PubMed

    De Bersaques, Jean

    2012-01-01

    Accurate descriptions of skin lesions, and in particular of those of what we now call psoriasis vulgaris, are rare before Willan's book On Cutaneous Diseases at the very beginning of the 19th century. Here we present two instances in which such clinical descriptions are given. Benjamin Franklin wrote about his own skin lesions and their evolution. Dr. William Falconer, a physician in Bath, England, presented the clinical symptoms and his results with 83 patients with 'lepra graecorum' (the name used at that time) treated between 1772 and 1775. One can wonder why such a now frequent, obvious and distinctive disease had not attracted more attention. PMID:22902228

  12. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  13. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  14. Precocious quantitative cognition in monkeys.

    PubMed

    Ferrigno, Stephen; Hughes, Kelly D; Cantlon, Jessica F

    2016-02-01

    Basic quantitative abilities are thought to have an innate basis in humans partly because the ability to discriminate quantities emerges early in child development. If humans and nonhuman primates share this developmentally primitive foundation of quantitative reasoning, then this ability should be present early in development across species and should emerge earlier in monkeys than in humans because monkeys mature faster than humans. We report that monkeys spontaneously make accurate quantity choices by 1 year of age in a task that human children begin to perform only at 2.5 to 3 years of age. Additionally, we report that the quantitative sensitivity of infant monkeys is equal to that of the adult animals in their group and that rates of learning do not differ between infant and adult animals. This novel evidence of precocious quantitative reasoning in infant monkeys suggests that human quantitative reasoning shares its early developing foundation with other primates. The data further suggest that early developing components of primate quantitative reasoning are constrained by maturational factors related to genetic development as opposed to learning experience alone. PMID:26187058

  16. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main existing methods for multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with multi-spiral surfaces and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and coupling coalescence of the surfaces with multiple coupling point clusters is performed under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with multiple spiral surfaces is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with multi-spiral surfaces, and the resulting entity model is machined in a four-axis machining center after the end mill is set up. The algorithm is verified and then applied effectively to the transition area among the multiple spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and

  17. Provocative Opinion: Descriptive Chemistry.

    ERIC Educational Resources Information Center

    Bent, Henry A.; Bent, Brian E.

    1987-01-01

    Discusses many of the distinctions that chemists draw between theoretical chemistry and descriptive chemistry, along with the tendency for chemical educators to adopt the type of chemistry they feel is most important to teach. Uses examples to argue that theoretical chemistry and descriptive chemistry are, at the bottom line, the same. (TW)

  18. A Descriptive Experiment.

    ERIC Educational Resources Information Center

    Brand, Josef

    1979-01-01

    In this experiment in description, students in a high school honors English class were asked to select a surrealistic painting and capture it in writing. Their compositions were given to art students who tried to reproduce the paintings from the written descriptions. (SJL)

  19. Physics 3204. Course Description.

    ERIC Educational Resources Information Center

    Newfoundland and Labrador Dept. of Education.

    A description of the physics 3204 course in Newfoundland and Labrador is provided. The description includes: (1) statement of purpose, including general objectives of science education; (2) a list of six course objectives; (3) course content for units on sound, light, optical instruments, electrostatics, current electricity, Michael Faraday and…

  20. Job descriptions made easy.

    PubMed

    Miller, Larry

    2014-01-01

    The act of writing a job description can be a daunting and difficult task for many managers. This article focuses on the key concepts of What, How, and Measurable Results as they relate to an employee's job duties. When the answers to these three elements are articulated, they define the core responsibilities of any job that form the basis for an effective job description.

  1. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method assumes spatiotemporal constancy of intensity and magnitude in the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation on simulated, phantom (a contractile rubber balloon), and real sequences shows that this technique is more accurate and faster than previous methods.
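
    The spline-moment optical flow itself is not detailed in the abstract, but the downstream step, strain rate from a dense velocity field, is standard. Below is a minimal sketch under that assumption, not the authors' implementation.

```python
# Minimal sketch: linear strain-rate components from a 2D velocity field, as
# produced by any optical-flow stage. Not the authors' spline-moment method.
import numpy as np

def strain_rate(vx, vy, dx=1.0):
    """Strain-rate tensor components of a velocity field sampled on a grid."""
    exx = np.gradient(vx, dx, axis=1)                    # d(vx)/dx
    eyy = np.gradient(vy, dx, axis=0)                    # d(vy)/dy
    exy = 0.5 * (np.gradient(vx, dx, axis=0)
                 + np.gradient(vy, dx, axis=1))          # shear component
    return exx, eyy, exy

# Toy field: uniform stretching along x gives a constant positive strain rate.
y, x = np.mgrid[0:64, 0:64].astype(float)
exx, eyy, exy = strain_rate(0.01 * x, np.zeros_like(x))
print(round(float(exx.mean()), 4))                       # ~0.01 per frame
```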

  2. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rpm), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. The above results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once a failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and top coat.

  3. Toward an Accurate Density-Functional Tight-Binding Description of Zinc-Containing Compounds.

    PubMed

    Moreira, Ney H; Dolgonos, Grygoriy; Aradi, Bálint; da Rosa, Andreia L; Frauenheim, Thomas

    2009-03-10

    An extended self-consistent charge density-functional tight-binding (SCC-DFTB) parametrization for Zn-X (X = H, C, N, O, S, and Zn) interactions has been derived. The performance of this new parametrization has been validated by calculating the structural and energetic properties of zinc solid phases such as bulk Zn, ZnO, and ZnS; ZnO surfaces and nanostructures; adsorption of small species (H, CO2, and NH3) on ZnO surfaces; and zinc-containing complexes mimicking the biological environment. Our results show that the derived parameters are universal and fully transferable, describing all the above-mentioned systems with accuracies comparable to those of first-principles DFT results. PMID:26610226

  4. Hardware description languages

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top-down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.

  5. Description scheme for video editing work

    NASA Astrophysics Data System (ADS)

    Ruiloba, Rosa I.; Joly, Philippe

    2001-03-01

    This article presents a Description Scheme (DS) to describe audio-visual documents from the video editing point of view. This DS is based on editing techniques used in the video editing domain. The main objective of this DS is to provide a complete, modular, and extensible description of the structure of video documents based on the editing process. This VideoEditing DS is generic in the sense that it may be used in a large number of applications, such as video document indexing and analysis, description of Edit Decision Lists, and elaboration of editing patterns. It is based on accurate and complete definitions of shots and transition effects required for video document analysis applications. The VideoEditing DS allows three levels of description: analytic, synthetic, and semantic. In the DS, the higher (resp. lower) the element of description, the more analytic (resp. synthetic) the information. This DS allows description of the editing work done by editing boards, using more detailed descriptors of the Shot and Transition DSs. These elements are provided to define editing patterns that allow several possible reconstructions of movies depending on, for example, the target audience. Part of a video description made with this DS may be produced automatically by video-to-shot segmentation algorithms (analytic DSs) or by editing software at the same time the editing work is done. This DS answers the needs related to the exchange of editing work descriptions between editing software packages. At the same time, the same DS provides an analytic description of the editing work which is complementary to existing standards for Edit Decision Lists such as SMPTE or AAF.

  6. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output, targeting users and policy makers as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation of the language often employed in communication of climate model output: a language which accurately states that models are “better”, have “improved”, and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output…

  7. Behavioral Assembly Required: Particularly for Quantitative Courses

    ERIC Educational Resources Information Center

    Mazen, Abdelmagid

    2008-01-01

    This article integrates behavioral approaches into the teaching and learning of quantitative subjects with application to statistics. Focusing on the emotional component of learning, the article presents a system dynamic model that provides descriptive and prescriptive accounts of learners' anxiety. Metaphors and the metaphorizing process are…

  8. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  9. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  10. CRAC2 model description

    SciTech Connect

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  11. A new HPLC method for azithromycin quantitation.

    PubMed

    Zubata, Patricia; Ceresole, Rita; Rosasco, Maria Ana; Pizzorno, Maria Teresa

    2002-02-01

    A simple liquid chromatographic method was developed for the estimation of azithromycin as a raw material and in pharmaceutical forms. The sample was chromatographed on a reversed-phase C18 column and eluents were monitored at a wavelength of 215 nm. The method was accurate, precise, and sufficiently selective. It is applicable for quantitation, stability, and dissolution tests.

  12. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

    When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are presented.

  13. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra. PMID:26789115

  14. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra.

  15. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  16. Quantitative Literacy Provision in the First Year of Medical Studies

    ERIC Educational Resources Information Center

    Frith, V.

    2011-01-01

    This article presents a description of and motivation for the quantitative literacy (numeracy) intervention in the first year of medical studies at a South African university. This intervention is a response to the articulation gap between the quantitative literacy of many first-year medical students and the demands of their curriculum.…

  17. Teaching Descriptive Style.

    ERIC Educational Resources Information Center

    Brashers, H. C.

    1968-01-01

    As the inexperienced writer becomes aware of the issues involved in the composition of effective descriptive prose, he also develops a consistent control over his materials. The persona he chooses, if coherently thought out, can function as an index of many choices, helping him to manipulate the tone, intent, and mood of his style; to regulate…

  18. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient implementation of missing contour points in the problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases, and its results are compared with a recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of vessels in problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that of manually obtained boundaries.

  19. Recommended procedures and techniques for the petrographic description of bituminous coals

    USGS Publications Warehouse

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    Modern coal petrology requires rapid and precise description of great numbers of coal core or bench samples in order to acquire the information required to understand and predict vertical and lateral variation of coal quality for correlation with coal-bed thickness, depositional environment, suitability for technological uses, etc. Procedures for coal description vary in accordance with the objectives of the description. To achieve our aim of acquiring the maximum amount of quantitative information within the shortest period of time, we have adopted a combined megascopic-microscopic procedure. Megascopic analysis is used to identify the distinctive lithologies present, and microscopic analysis is required only to describe representative examples of the mixed lithologies observed. This procedure greatly decreases the number of microscopic analyses needed for adequate description of a sample. For quantitative megascopic description of coal microlithotypes, microlithotype assemblages, and lithotypes, we use (V) for vitrite or vitrain, (E) for liptite, (I) for inertite or fusain, (M) for mineral layers or lenses other than iron sulfide, (S) for iron sulfide, and (X1), (X2), etc. for mixed lithologies. Microscopic description is expressed in terms of V representing the vitrinite maceral group, E the exinite group, I the inertinite group, and M mineral components. Volume percentages are expressed as subscripts. Thus (V)₂₀(V₈₀E₁₀I₅M₅)₈₀ indicates a lithotype or assemblage of microlithotypes consisting of 20 vol.% vitrite and 80% of a mixed lithology having a modal maceral composition V₈₀E₁₀I₅M₅. This bulk composition can alternatively be recalculated and described as V₈₄E₈I₄M₄. To generate these quantitative data rapidly and accurately, we utilize an automated image analysis system (AIAS). Plots of VEIM data on easily constructed ternary diagrams provide readily comprehended illustrations of the range of modal composition of the lithologic units making up a given coal…
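
    The recalculation in the abstract is simple weighted averaging of modal compositions, shown here as a short sketch (the variable names are illustrative):

```python
# Bulk modal composition from the abstract's example: 20 vol.% vitrite (pure V)
# plus 80 vol.% of a mixed lithology V80 E10 I5 M5 recalculates to V84 E8 I4 M4.
fractions = [
    (0.20, {"V": 100, "E": 0,  "I": 0, "M": 0}),   # vitrite layer
    (0.80, {"V": 80,  "E": 10, "I": 5, "M": 5}),   # mixed lithology
]

bulk = {c: sum(w * modal[c] for w, modal in fractions) for c in "VEIM"}
print(bulk)  # {'V': 84.0, 'E': 8.0, 'I': 4.0, 'M': 4.0}
```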

  20. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  1. Quantitative Surface Chirality Detection with Sum Frequency Generation Vibrational Spectroscopy: Twin Polarization Angle Approach

    SciTech Connect

    Wei, Feng; Xu, Yanyan; Guo, Yuan; Liu, Shi-lin; Wang, Hongfei

    2009-12-27

    Here we report a novel twin polarization angle (TPA) approach to quantitative chirality detection with surface sum-frequency generation vibrational spectroscopy (SFG-VS). Generally, the achiral contribution dominates the surface SFG-VS signal, and the pure chiral signal is usually two or three orders of magnitude smaller. Therefore, it has been difficult to make quantitative detection and analysis of the chiral contributions to the surface SFG-VS signal. In the TPA method, by varying together the polarization angles of the incoming visible light and the sum frequency signal at fixed s or p polarization of the incoming infrared beam, the polarization dependent SFG signal can give not only a direct signature of the chiral contribution in the total SFG-VS signal, but also an accurate measurement of the chiral and achiral components in the surface SFG signal. A general description of the TPA method is presented, and an experimental test of the TPA approach on the S- and R-limonene chiral liquid surfaces is also presented. The most accurate degree of chiral excess values thus obtained for the 2878 cm⁻¹ spectral peak of the S- and R-limonene liquid surfaces are (23.7±0.4)% and (25.4±1.3)%, respectively.

  2. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    PubMed

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market.

  3. Quantitative measures for redox signaling.

    PubMed

    Pillay, Ché S; Eagling, Beatrice D; Driscoll, Scott R E; Rohwer, Johann M

    2016-07-01

    Redox signaling is now recognized as an important regulatory mechanism for a number of cellular processes including the antioxidant response, phosphokinase signal transduction, and redox metabolism. While there has been considerable progress in identifying the cellular machinery involved in redox signaling, quantitative measures of redox signals have been lacking, limiting efforts aimed at understanding and comparing redox signaling under normoxic and pathogenic conditions. Here we have outlined some of the accepted principles for redox signaling, including the description of hydrogen peroxide as a signaling molecule and the role of kinetics in conferring specificity to these signaling events. Based on these principles, we then develop a working definition for redox signaling and review a number of quantitative methods that have been employed to describe signaling in other systems. Using computational modeling and published data, we show how time- and concentration-dependent analyses, in particular, could be used to quantitatively describe redox signaling and therefore provide important insights into the functional organization of redox networks. Finally, we consider some of the key challenges with implementing these methods. PMID:27151506

  4. Spacelab J experiment descriptions

    SciTech Connect

    Miller, T.Y.

    1993-08-01

    Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm. Separate abstracts have been prepared for articles from this report.

  5. Spacelab J experiment descriptions

    NASA Technical Reports Server (NTRS)

    Miller, Teresa Y. (Editor)

    1993-01-01

    Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm.

  6. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    ERIC Educational Resources Information Center

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has exponentially increased. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case…

  7. Management control system description

    SciTech Connect

    Bence, P. J.

    1990-10-01

    This Management Control System (MCS) description describes the processes used to manage the cost and schedule of work performed by Westinghouse Hanford Company (Westinghouse Hanford) for the US Department of Energy, Richland Operations Office (DOE-RL), Richland, Washington. Westinghouse Hanford will maintain and use formal cost and schedule management control systems, as presented in this document, in performing work for the DOE-RL. This MCS description is a controlled document and will be modified or updated as required. This document must be approved by the DOE-RL; thereafter, any significant change will require DOE-RL concurrence. Westinghouse Hanford is the DOE-RL operations and engineering contractor at the Hanford Site. Activities associated with this contract (DE-AC06-87RL10930) include operating existing plant facilities, managing defined projects and programs, and planning future enhancements. This document is designed to comply with Section I-13 of the contract by providing a description of Westinghouse Hanford's cost and schedule control systems used in managing the above activities. 5 refs., 22 figs., 1 tab.

  8. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer, together with an air purge, allows accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has a high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  9. Vectorial Kerr magnetometer for simultaneous and quantitative measurements of the in-plane magnetization components.

    PubMed

    Jiménez, E; Mikuszeit, N; Cuñado, J L F; Perna, P; Pedrosa, J; Maccariello, D; Rodrigo, C; Niño, M A; Bollero, A; Camarero, J; Miranda, R

    2014-05-01

    A vectorial magneto-optic Kerr effect (v-MOKE) setup with simultaneous and quantitative determination of the two in-plane magnetization components is described. The setup provides both polarization rotations and reflectivity changes at the same time for a given sample orientation with respect to a variable external magnetic field, and allows full angular studies. A classical description based on the Jones formalism is used to calculate the setup's properties. The use of different incoming light polarizations and/or MOKE geometries is discussed, as well as errors due to misalignment and their solutions. To illustrate the capabilities of the setup, a detailed study of a model four-fold anisotropy system is presented. Among other things, the setup allows study of the angular dependence of hysteresis phenomena, remanences, critical fields, and magnetization reversal processes, as well as accurate determination of the easy and hard magnetization directions, domain wall orientations, and magnetic anisotropies.
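
    The Jones-formalism analysis mentioned in the abstract can be sketched for the simplest geometry: a small complex Kerr angle acting on p-polarized light, read out through a near-crossed analyzer. The analyzer angle and the small-angle reflection matrix below are illustrative assumptions, not the published setup model.

```python
# Minimal Jones-calculus sketch: a small Kerr rotation theta_k (and ellipticity
# eps_k) rotates p-polarized light; a near-crossed analyzer converts the
# rotation into an intensity change of opposite sign for opposite magnetization.
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def kerr_intensity(theta_k, eps_k=0.0, analyzer=np.radians(2.0)):
    e_in = np.array([1.0, 0.0])                      # p-polarized input
    k = theta_k + 1j * eps_k                         # complex Kerr angle
    sample = np.array([[1.0, k], [-k, 1.0]])         # small-angle reflection
    pol_y = np.diag([0.0, 1.0])                      # analyzer axis near s
    analyzer_mat = rot(analyzer) @ pol_y @ rot(-analyzer)
    e_out = analyzer_mat @ (sample @ e_in)
    return float(np.sum(np.abs(e_out) ** 2))

# Opposite in-plane magnetizations flip theta_k; the intensity asymmetry
# recovers the rotation and hence one magnetization component.
print(kerr_intensity(+1e-3), kerr_intensity(-1e-3))
```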

  10. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the risk of drift for the textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  11. Continuum description of avalanches in granular media.

    SciTech Connect

    Aranson, I. S.; Tsimring, L. S.

    2000-12-05

    A continuum theory of partially fluidized granular flows is proposed. The theory is based on a combination of the mass and momentum conservation equations with the order parameter equation which describes the transition between flowing and static components of the granular system. We apply this model to the dynamics of avalanches in chutes. The theory provides a quantitative description of recent observations of granular flows on rough inclined planes (Daerr and Douady 1999): layer bistability, and the transition from triangular avalanches propagating downhill at small inclination angles to balloon-shaped avalanches also propagating uphill for larger angles.
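
    The order parameter equation is not written out in the abstract; as a hedged illustration, partial-fluidization theories of this kind use a Ginzburg–Landau-type relaxation with a bistable potential, schematically:

```latex
% Schematic order-parameter equation (illustrative form, not copied from the
% paper): rho -> 1 is the static phase, rho -> 0 the fluidized phase, and the
% control parameter delta (set by the inclination angle) tunes bistability.
\tau\,\partial_t \rho \;=\; \ell^{2}\nabla^{2}\rho \;+\; \rho\,(1-\rho)\,(\rho-\delta),
\qquad 0<\delta<1 .
```

    For 0 < δ < 1 both uniform states are locally stable, which corresponds to the layer bistability observed by Daerr and Douady.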

  12. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  13. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  14. Quantitative measurement and analysis for detection and treatment planning of developmental dysplasia of the hip

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Lu, Hongbing; Chen, Hanyong; Zhao, Li; Shi, Zhengxing; Liang, Zhengrong

    2009-02-01

    Developmental dysplasia of the hip is a congenital hip joint malformation affecting the proximal femurs and acetabulum, which may be subluxatable, dislocatable, or dislocated. Conventionally, physicians have based diagnosis and treatment only on findings from two-dimensional (2D) images, manually calculating clinical parameters. However, the anatomical complexity of the disease and the limitations of current standard procedures make accurate diagnosis quite difficult. In this study, we developed a system that provides quantitative measurement of 3D clinical indexes based on computed tomography (CT) images. To extract bone structure from surrounding tissues more accurately, the system first segments the bone using a knowledge-based fuzzy clustering method, formulated by modifying the objective function of the standard fuzzy c-means algorithm with an additive adaptation penalty. The second part of the system automatically calculates the clinical indexes, which are extended from 2D to 3D for accurate description of the spatial relationship between the femurs and acetabulum. To evaluate system performance, an experimental study of 22 patients with unilaterally or bilaterally affected hips was performed. The 3D acetabular index (AI) results automatically provided by the system were validated by comparison with 2D results measured manually by surgeons. The correlation between the two results was found to be 0.622 (p<0.01).
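
    The knowledge-based penalty term is not given in the abstract, so only the unmodified baseline can be sketched: a standard fuzzy c-means iteration on intensity data. All names and the toy data below are illustrative.

```python
# Minimal standard fuzzy c-means sketch. The paper modifies this objective with
# an additive knowledge-based adaptation penalty, which the abstract does not
# specify, so only the baseline iteration is shown.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: x is (n, d); returns memberships u (c, n) and
    cluster centers v (c, d)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.shape[0]))
    u /= u.sum(axis=0)                                    # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        v = (um @ x) / um.sum(axis=1, keepdims=True)      # center update
        d = np.linalg.norm(x[None, :, :] - v[:, None, :], axis=2) + 1e-12
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)),
                         axis=1)                          # membership update
    return u, v

# Toy 1D "CT intensity" data with two modes (soft tissue vs. bone).
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(100, 5, 200), rng.normal(300, 10, 200)])[:, None]
u, v = fcm(x)
print(np.sort(v.ravel()).round(1))                        # approx [100. 300.]
```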

  15. Measuring Joint Stimulus Control by Complex Graph/Description Correspondences

    ERIC Educational Resources Information Center

    Fields, Lanny; Spear, Jack

    2012-01-01

    Joint stimulus control occurs when responding is determined by the correspondence of elements of a complex sample and a complex comparison stimulus. In academic settings, joint stimulus control of behavior would be evidenced by the selection of an accurate description of a complex graph in which each element of a graph corresponded to particular…

  16. Trophic relationships in an estuarine environment: A quantitative fatty acid analysis signature approach

    NASA Astrophysics Data System (ADS)

    Magnone, Larisa; Bessonart, Martin; Gadea, Juan; Salhi, María

    2015-12-01

    In order to better understand the functioning of aquatic environments, it is necessary to obtain accurate diet estimations in food webs. Their description should incorporate information about energy flow and the relative importance of trophic pathways. Fatty acids have been extensively used in qualitative studies of trophic relationships in food webs. Recently, a new method to quantitatively estimate the diet of a single predator has been developed. In this study, a model of the aquatic food web was generated through quantitative fatty acid signature analysis (QFASA) to identify the trophic interactions among the species in Rocha Lagoon. The biological sampling over two consecutive annual periods was comprehensive enough to identify all functional groups in the aquatic food web (except birds and mammals). Heleobia australis seems to play a central role in this estuarine ecosystem: as both a grazer and prey to several other species, H. australis probably transfers a large amount of energy to upper trophic levels. Most of the species in Rocha Lagoon have a wide range of prey items in their diet, reflecting a complex food web, which is characteristic of extremely dynamic environments such as estuaries. QFASA is a model for tracing and quantitatively estimating trophic pathways among species in an estuarine food web. The results obtained in the present work are a valuable contribution to the understanding of trophic relationships in Rocha Lagoon.
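
    A QFASA-style estimate can be sketched as constrained optimization: find non-negative diet proportions, summing to one, whose mixture of prey fatty acid signatures best matches the predator's signature. The sketch below uses a plain least-squares distance and synthetic signatures; published QFASA uses a different distance measure and calibration coefficients, so this is an illustration of the idea only.

```python
# Minimal QFASA-style diet estimate: proportions p >= 0 with sum(p) = 1 whose
# mixed fatty acid signature best matches the predator's. Least squares is a
# simplification; real QFASA uses a KL-type distance and calibration factors.
import numpy as np
from scipy.optimize import minimize

prey = np.array([[0.50, 0.30, 0.20],     # rows: prey species; columns: fatty
                 [0.10, 0.60, 0.30],     # acid proportions (each row sums
                 [0.25, 0.25, 0.50]])    # to 1)
predator = 0.6 * prey[0] + 0.4 * prey[2]  # synthetic "true" diet

def loss(p):
    return np.sum((predator - p @ prey) ** 2)

n = prey.shape[0]
res = minimize(loss, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print(res.x.round(3))  # approx [0.6, 0.0, 0.4]
```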

  17. A Descriptive Analysis of High School Student Motivators for Success

    ERIC Educational Resources Information Center

    Booker, Janet Maria

    2011-01-01

    The purpose of the quantitative descriptive study was to gain an understanding of the motivating factors leading high school students from rural and urban schools to receive a diploma. A revised version of the High School Motivation Scale (Close, 2001; Solberg et al., 2007) generated from SurveyMonkey.com was administered to high school graduates…

  18. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable for systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time…

  19. Accurate Drawbead Modeling in Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Sester, M.; Burchitz, I.; Saenz de Argandona, E.; Estalayo, F.; Carleer, B.

    2016-08-01

    An adaptive line bead model that continually updates according to the changing conditions during the forming process has been developed. In these calculations, the adaptive line bead's geometry is treated as a 3D object in which relevant phenomena such as the hardening curve, yield surface, through-thickness stress effects, and contact description are incorporated. The effectiveness of the adaptive drawbead model is illustrated by an industrial example.

  20. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years, signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
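
    A first-order version of this kind of correction can be sketched as follows; the flat-ellipsoid reference, the sine-ratio area term, and the gain handling are common simplifications and are assumptions here, not the paper's full method.

```python
# Minimal sketch: first-order radiometric terrain correction of sigma0.
# Ellipsoid-referenced backscatter is rescaled by the DEM-derived local
# scattering area, and a residual antenna-pattern gain (in dB) is removed.
import numpy as np

def sigma0_corrected(sigma0_ellipsoid, theta_ellipsoid, theta_local, gain_db=0.0):
    area_ratio = np.sin(theta_local) / np.sin(theta_ellipsoid)
    return sigma0_ellipsoid * area_ratio / 10.0 ** (gain_db / 10.0)

# A foreslope facing the radar (smaller local incidence) scatters from a larger
# physical area, so the area-normalized sigma0 is reduced accordingly.
print(sigma0_corrected(0.05, np.radians(35.0), np.radians(20.0), gain_db=0.5))
```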

  1. Automatic and Accurate Shadow Detection Using Near-Infrared Information.

    PubMed

    Rüfenacht, Dominic; Fredembach, Clément; Süsstrunk, Sabine

    2014-08-01

    We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
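
    The two ingredients named in the abstract, a darkness test in both bands and a visible-to-NIR ratio refinement, can be sketched as below; the thresholds and the direction of the ratio test are illustrative assumptions, not the published values.

```python
# Minimal sketch of NIR-assisted shadow detection: candidate pixels are dark in
# both the visible and NIR images, then the candidate map is refined with the
# visible/NIR ratio to reject dark objects. Thresholds are illustrative.
import numpy as np

def shadow_map(vis, nir, dark_thresh=0.3, ratio_thresh=1.0):
    """vis, nir: (H, W) images scaled to [0, 1]; returns a boolean shadow mask."""
    candidates = (vis < dark_thresh) & (nir < dark_thresh)
    # Assumed refinement: sky-lit shadow keeps relatively more visible (blue)
    # light than NIR, so a high visible/NIR ratio supports the shadow label.
    ratio = vis / (nir + 1e-6)
    return candidates & (ratio > ratio_thresh)

rng = np.random.default_rng(0)
vis, nir = rng.random((4, 4)), rng.random((4, 4))
print(shadow_map(vis, nir))
```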

  2. The SILAC Fly Allows for Accurate Protein Quantification in Vivo*

    PubMed Central

    Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias

    2010-01-01

    Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996

  3. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates

    NASA Astrophysics Data System (ADS)

    Yogurtcu, Osman N.; Johnson, Margaret E.

    2015-08-01

    The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of the intrinsic reaction rate (ka) and diffusion constant (D) where ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate behavior predicted from Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm, which we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration-dependent rate constant for these chemical kinetics simulations, which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute…

  4. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  5. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  6. The Case for Descriptive Writing.

    ERIC Educational Resources Information Center

    Hauck, Marian K.

    An approach to teaching descriptive writing and its values are discussed. Benefits derived from a descriptive writing unit are said to be the following: (1) Descriptive writing is fun; (2) It enables the instructor to demonstrate that the first word that pops into the writer's mind is often not the best one; (3) There is no easier way in which to…

  7. Three Approaches to Descriptive Research.

    ERIC Educational Resources Information Center

    Svensson, Lennart

    This report compares three approaches to descriptive research, focusing on the kinds of descriptions developed and on the methods used to develop the descriptions. The main emphasis in all three approaches is on verbal data. In these approaches the importance of interpretation and its intuitive nature are emphasized. The three approaches, however,…

  8. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  9. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement and evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
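
    The closure idea can be demonstrated with a toy simulation: measuring an N-tooth artifact at all N relative orientations lets the artifact's deviations and the instrument's errors be separated by averaging. The noise-free scheme below is a sketch of the principle only, with illustrative names; real procedures must also handle measurement noise.

```python
# Minimal closure (multi-position) simulation: each reading is the sum of a
# gear pitch deviation and an instrument error; rotating the artifact through
# all N positions decouples the two by averaging.
import numpy as np

rng = np.random.default_rng(0)
N = 12
gear = rng.normal(0.0, 1.0, N); gear -= gear.mean()           # artifact errors
machine = rng.normal(0.0, 1.0, N); machine -= machine.mean()  # instrument errors

# Reading at instrument position j with the artifact rotated by r teeth:
m = np.array([[gear[(j + r) % N] + machine[j] for j in range(N)]
              for r in range(N)])

# Averaging over rotations cancels the zero-mean gear terms ...
machine_est = m.mean(axis=0)
# ... and re-indexing then isolates the gear deviations.
gear_est = np.array([np.mean([m[(k - j) % N, j] - machine_est[j]
                              for j in range(N)]) for k in range(N)])
assert np.allclose(machine_est, machine) and np.allclose(gear_est, gear)
```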

  10. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  11. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  12. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet, owing to the ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production suitable for accurately determining the pressure of pure ozone while avoiding the problem of decomposition. We use an inert gas in a specially designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone, with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  14. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For accurate analysis, the composition of the sampled gas must be representative of the whole and related to flow. When it is, measurement and sampling techniques are effectively married: gas volumes are accurately accounted for, and adjustments to composition can be made.

  15. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the imprecision of the optical model on top of modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  16. Recent advances in quantitative neuroproteomics.

    PubMed

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2013-06-15

    The field of proteomics is undergoing rapid development in a number of different areas, including improvements in mass spectrometric platforms, peptide identification algorithms, and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification analysis, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges for quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function, to neuropsychiatric disorders including schizophrenia and drug addiction, and to neurodegenerative diseases including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide gel electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM) and label-free (MRM, SWATH) as well as absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as cerebrospinal fluid (CSF). The biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to shed light on these questions.

  17. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  18. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  19. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes, and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to modeling electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
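
    For orientation, the finite-difference machinery referred to above solves the standard variable-dielectric Poisson equation; in an EPIC-type picture the molecular interior carries a high dielectric constant and the exterior is vacuum. Schematically (generic continuum electrostatics, not the paper's specific parametrization):

```latex
% Variable-dielectric Poisson equation (Gaussian units):
\[ \nabla \cdot \bigl[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \bigr]
   = -4\pi \rho(\mathbf{r}) \]
% The molecular polarizability tensor then follows from the induced dipole
% moment per unit applied field:
\[ \alpha_{ij} = \frac{\partial \mu_i^{\mathrm{ind}}}{\partial E_j^{\mathrm{ext}}} \]
```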

  20. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
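
    The encoding behind phase-shift velocimetry is compact enough to quote: a bipolar gradient pair of amplitude g and duration δ, separated by observation time Δ, gives moving spins a phase proportional to their velocity. This is the textbook PFG relation, not this paper's specific pulse sequence:

```latex
% Phase acquired by spins moving at velocity v along the gradient axis:
\[ \phi = \gamma \, g \, \delta \, \Delta \, v
   \quad \Longrightarrow \quad
   v = \frac{\phi}{\gamma \, g \, \delta \, \Delta} \]
% (\gamma is the gyromagnetic ratio). Asymmetric intra-voxel displacement
% distributions bias the measured phase \phi -- the error source analysed
% in the abstract above.
```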

  1. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  3. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing RAZ analysis can accurately detect cardiomyopathy.

  4. Direction and Description

    NASA Astrophysics Data System (ADS)

    Ben-Menahem, Yemima

    This paper deals with the dependence of directionality in the course of events, or of our claims concerning such directionality, on the modes of description we use in speaking of the events in question. I argue that criteria of similarity and individuation play a crucial role in assessments of directionality. This is an extension of Davidson's claim regarding the difference between causal and explanatory contexts. The argument is based on a characterisation of notions of necessity and contingency that differ from their modal logic counterparts on the one hand, and from causality and chance on the other. I show that some types of directionality are perfectly compatible with both determinism and indeterminism at the microscopic level, and that there is no likelihood of, or advantage to, reducing such directionality to other laws or causal processes.

  5. Symmetrical gait descriptions

    NASA Astrophysics Data System (ADS)

    Dunajewski, Adam; Dusza, Jacek J.; Rosado Muñoz, Alfredo

    2014-11-01

    The article presents a proposal for the description of human gait as a periodic and symmetric process. Firstly, the data for this research were obtained in the Laboratory of Group SATI in the School of Engineering of the University of Valencia. Then, a periodic model, the Mean Double Step (MDS), was constructed. Finally, on the basis of the MDS, the symmetrical models, the Left Mean Double Step and the Right Mean Double Step (LMDS and RMDS), could be created. The method of various functional extensions was used. Symmetrical gait models can be used to calculate coefficients of asymmetry at any time or phase of the gait. In this way it is possible to create an asymmetry function, which better describes human gait dysfunction. The paper also describes an algorithm for calculating the symmetric models and shows exemplary results based on the experimental data.

  6. Task Description Language

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Apfelbaum, David

    2005-01-01

    Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.

  7. Description of Jet Breakup

    NASA Technical Reports Server (NTRS)

    Papageorgiou, Demetrios T.

    1996-01-01

    In this article we review recent results on the breakup of cylindrical jets of a Newtonian fluid. Capillary forces provide the main driving mechanism and our interest is in the description of the flow as the jet pinches to form drops. The approach is to describe such topological singularities by constructing local (in time and space) similarity solutions from the governing equations. This is described for breakup according to the Euler, Stokes or Navier-Stokes equations. It is found that slender jet theories can be applied when viscosity is present, but for inviscid jets the local shape of the jet at breakup is most likely of a non-slender geometry. Systems of one-dimensional models of the governing equations are solved numerically in order to illustrate these differences.
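
    As a concrete instance of the similarity construction mentioned above, the viscous (Navier-Stokes) pinch-off of a slender jet is conventionally described with the ansatz below, built from the viscous length and time scales. This is the standard Eggers-type scaling, quoted here for orientation rather than taken from this article:

```latex
% Viscous length and time scales built from density \rho, kinematic
% viscosity \nu, and surface tension \gamma:
\[ \ell_\nu = \frac{\rho\,\nu^{2}}{\gamma}, \qquad
   t_\nu = \frac{\rho^{2}\nu^{3}}{\gamma^{2}} \]
% Local similarity ansatz near the pinch point (z_0, t_0), with
% \tau = (t_0 - t)/t_\nu and \xi = (z - z_0)/(\ell_\nu\,\tau^{1/2}):
\[ h(z,t) = \ell_\nu\,\tau\,H(\xi), \qquad
   v(z,t) = \frac{\ell_\nu}{t_\nu}\,\tau^{-1/2}\,V(\xi) \]
% so the minimum neck radius vanishes linearly in time, h_min ~ (t_0 - t).
```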

  8. YUCCA MOUNTAIN SITE DESCRIPTION

    SciTech Connect

    A.M. Simmons

    2004-04-16

    The "Yucca Mountain Site Description" summarizes, in a single document, the current state of knowledge and understanding of the natural system at Yucca Mountain. It describes the geology; geochemistry; past, present, and projected future climate; regional hydrologic system; and flow and transport within the unsaturated and saturated zones at the site. In addition, it discusses factors affecting radionuclide transport, the effect of thermal loading on the natural system, and tectonic hazards. The "Yucca Mountain Site Description" is broad in nature. It summarizes investigations carried out as part of the Yucca Mountain Project since 1988, but it also includes work done at the site in earlier years, as well as studies performed by others. The document has been prepared under the Office of Civilian Radioactive Waste Management quality assurance program for the Yucca Mountain Project. Yucca Mountain is located in Nye County in southern Nevada. The site lies in the north-central part of the Basin and Range physiographic province, within the northernmost subprovince commonly referred to as the Great Basin. The basin and range physiography reflects the extensional tectonic regime that has affected the region during the middle and late Cenozoic Era. Yucca Mountain was initially selected for characterization, in part, because of its thick unsaturated zone, its arid to semiarid climate, and the existence of a rock type that would support excavation of stable openings. In 1987, the United States Congress directed that Yucca Mountain be the only site characterized to evaluate its suitability for development of a geologic repository for high-level radioactive waste and spent nuclear fuel.

  9. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity function.
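
    The mass dependence referred to above is easy to make concrete: the Einstein radius grows as the square root of the lens mass, and the einstein timescale is that radius divided by the relative transverse velocity. A back-of-envelope sketch with hypothetical numbers, not survey values:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
KPC = 3.086e19         # m
M_SUN = 1.989e30       # kg

def einstein_timescale(m_lens_msun, d_lens_kpc, d_src_kpc, v_rel_kms):
    """Einstein-radius crossing time for a point lens.

    R_E = sqrt(4 G M / c^2 * D_l (D_s - D_l) / D_s), t_E = R_E / v_rel.
    """
    d_l, d_s = d_lens_kpc * KPC, d_src_kpc * KPC
    r_e = math.sqrt(4 * G * m_lens_msun * M_SUN / C**2
                    * d_l * (d_s - d_l) / d_s)
    return r_e / (v_rel_kms * 1e3) / 86400.0  # days

# Hypothetical M31 self-lensing event: 0.5 solar-mass lens at 700 kpc,
# source in M31 (770 kpc), relative transverse velocity 200 km/s.
print(f"t_E = {einstein_timescale(0.5, 700.0, 770.0, 200.0):.1f} days")
```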

  10. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
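
    The median function mentioned above gives a compact way to write monotonicity-preserving slope limiting, via median(a, b, c) = a + minmod(b − a, c − a). Below is a small sketch of a MUSCL-type limited linear reconstruction in this style; it follows the common textbook form, not necessarily the paper's exact constraint:

```python
import numpy as np

def minmod(a, b):
    """Zero if signs differ, else the argument smaller in magnitude."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def median(a, b, c):
    """Middle value of three numbers, written via minmod."""
    return a + minmod(b - a, c - a)

def muscl_slopes(u):
    """Limited cell slopes for a piecewise-linear (MUSCL) reconstruction."""
    du_minus = u[1:-1] - u[:-2]   # backward differences
    du_plus = u[2:] - u[1:-1]     # forward differences
    central = 0.5 * (du_minus + du_plus)
    # Constrain the central slope to the monotone interval; this is the
    # classic MC-type limiter minmod(central, 2 du-, 2 du+) in median form.
    return median(np.zeros_like(central), central,
                  2.0 * minmod(du_minus, du_plus))

u = np.array([0.0, 0.0, 0.3, 1.0, 1.0, 1.0])  # smeared step profile
print(muscl_slopes(u))  # limited slopes in the interior cells
```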

  11. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature, and then they are suddenly immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out using the same thermometers. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
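
    The inertia correction used with the industrial thermometer is compact enough to sketch for the first-order case: modeling the sensor as a first-order lag with time constant τ, the fluid temperature follows from the indicated temperature and its time derivative. The values below are illustrative, not the paper's:

```python
import numpy as np

def correct_first_order(t_indicated, dt, tau):
    """Recover fluid temperature from a first-order sensor model.

    Model: tau * dT_ind/dt + T_ind = T_fluid, so
           T_fluid = T_ind + tau * dT_ind/dt.
    In practice the derivative amplifies noise and must be smoothed.
    """
    dTdt = np.gradient(t_indicated, dt)  # numerical time derivative
    return t_indicated + tau * dTdt

# Synthetic check: a step from 20 C to 100 C seen through a tau = 8 s sensor.
dt, tau = 0.1, 8.0
t = np.arange(0.0, 60.0, dt)
t_ind = 100.0 - (100.0 - 20.0) * np.exp(-t / tau)  # first-order response
t_fluid = correct_first_order(t_ind, dt, tau)
print(t_ind[50], t_fluid[50])  # indicated value lags; corrected is ~100 C
```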

  13. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH⁰(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol⁻¹).
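
    For orientation, a bond separation reaction breaks a molecule into the simplest parent molecules containing the same bond types, so electronic-structure errors largely cancel between the two sides. A standard textbook example, not taken from this paper:

```latex
% Isodesmic bond separation of propane: each C-C bond is paired with CH4
% so that bond types and counts are conserved on the two sides.
\[ \mathrm{CH_3CH_2CH_3} + \mathrm{CH_4} \;\longrightarrow\; 2\,\mathrm{CH_3CH_3} \]
% The target heat of formation then combines the computed reaction energy
% with accurate experimental heats of formation of the small reference
% species (here CH4 and C2H6).
```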

  14. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  15. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  16. Quantitative analysis of blood vessel geometry

    NASA Astrophysics Data System (ADS)

    Fuhrman, Michael G.; Abdul-Karim, Othman; Shah, Sujal; Gilbert, Steven G.; Van Bibber, Richard

    2001-07-01

    Re-narrowing or restenosis of a human coronary artery occurs within six months in one third of balloon angioplasty procedures. Accurate and repeatable quantitative analysis of vessel shape is important to characterize the progression and type of restenosis, and to evaluate effects new therapies might have. The combination of complicated geometry, image variability, and the need for high resolution and large image size makes visual/manual analysis slow, difficult, and prone to error. The image processing and analysis described here was developed to automate feature extraction of the lumen, internal elastic lamina, neointima, external elastic lamina, and tunica adventitia, and to enable an objective, quantitative definition of blood vessel geometry. The quantitative geometrical analysis enables the measurement of several features, including perimeter, area, and other metrics of vessel damage. Automation of feature extraction creates a high-throughput capability that enables analysis of serial sections for more accurate measurement of restenosis dimensions. Measurement results are input into a relational database where they can be statistically analyzed and compared across studies. As part of the integrated process, results are also imprinted on the images themselves to facilitate auditing of the results. The analysis is fast, repeatable and accurate while allowing the pathologist to control the measurement process.
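
    Once a boundary such as the lumen or external elastic lamina has been extracted as a closed polygon, the basic metrics reduce to elementary geometry. The sketch below computes area via the shoelace formula and perimeter from segment lengths; it is a generic illustration with a synthetic contour, not the system described above:

```python
import numpy as np

def polygon_metrics(pts):
    """Area (shoelace formula) and perimeter of a closed polygon.

    pts: (N, 2) array of boundary vertices in order; closure is implicit.
    """
    x, y = pts[:, 0], pts[:, 1]
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * y_next - x_next * y))
    perimeter = np.sum(np.hypot(x_next - x, y_next - y))
    return area, perimeter

# Hypothetical lumen contour: a coarse circle of radius 50 pixels.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
contour = np.column_stack([50 * np.cos(theta), 50 * np.sin(theta)])
area, perim = polygon_metrics(contour)
print(area, perim)  # close to pi*50^2 ~ 7854 and 2*pi*50 ~ 314
```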

  17. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
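
    For context, the shape quantity at stake can be illustrated with the standard real-space quadrupole-moment definition of ellipticity; the paper itself works in Fourier space, so this unweighted estimator is only a simplified stand-in, and the source Poisson noise it is sensitive to is exactly the bias discussed above:

```python
import numpy as np

def ellipticity(image):
    """Unweighted quadrupole-moment ellipticity of a galaxy image.

    Q_ij = sum I(x) (x_i - xbar_i)(x_j - xbar_j) / sum I(x);
    e1 = (Qxx - Qyy) / (Qxx + Qyy),  e2 = 2 Qxy / (Qxx + Qyy).
    """
    yy, xx = np.indices(image.shape, dtype=float)
    flux = image.sum()
    xbar, ybar = (xx * image).sum() / flux, (yy * image).sum() / flux
    dx, dy = xx - xbar, yy - ybar
    qxx = (image * dx * dx).sum() / flux
    qyy = (image * dy * dy).sum() / flux
    qxy = (image * dx * dy).sum() / flux
    return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)

# Elliptical Gaussian test source, wider along x than y.
y, x = np.indices((64, 64), dtype=float)
img = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 4.2) ** 2) / 2.0)
print(ellipticity(img))  # e1 > 0 (elongated along x), e2 ~ 0
```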

  18. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
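
    Characteristic relative permeability curves of the kind discussed here are commonly parametrized by Corey-type power laws in the normalized saturation. A minimal sketch follows; the parameter values are placeholders, not fits to the Berea or Otway cores:

```python
import numpy as np

def corey_relperm(sw, swr=0.2, snr=0.1, krw0=0.4, krn0=0.9, nw=4.0, nn=2.0):
    """Corey-type relative permeability curves for two-phase flow.

    sw: wetting-phase saturation; swr, snr: residual saturations;
    krw0, krn0: endpoint permeabilities; nw, nn: Corey exponents.
    """
    s_eff = np.clip((sw - swr) / (1.0 - swr - snr), 0.0, 1.0)
    krw = krw0 * s_eff ** nw            # wetting phase rises with sw
    krn = krn0 * (1.0 - s_eff) ** nn    # non-wetting phase falls with sw
    return krw, krn

sw = np.linspace(0.2, 0.9, 8)
krw, krn = corey_relperm(sw)
print(krw)
print(krn)
```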

  19. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  20. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity, which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  1. Benchmark data base for accurate van der Waals interaction in inorganic fragments

    NASA Astrophysics Data System (ADS)

    Brndiar, Jan; Stich, Ivan

    2012-02-01

    A range of inorganic materials, such as Sb, As, P, S, and Se, is built from van der Waals (vdW) interacting units forming the crystals, which neither the standard DFT GGA description nor cheap quantum chemistry methods, such as MP2, describe correctly. We use this database, for which we have performed ultra-accurate CCSD(T) calculations in the complete basis set limit, to test alternative approximate theories, such as Grimme [1], Langreth-Lundqvist [2], and Tkatchenko-Scheffler [3]. While none of these theories gives an entirely correct description, Grimme consistently provides more accurate results than Langreth-Lundqvist, which tends to overestimate the distances and underestimate the interaction energies for this set of systems. By contrast, Tkatchenko-Scheffler appears to yield a surprisingly accurate, computationally cheap, and convenient description, applicable also to systems with appreciable charge transfer. [1] S. Grimme, J. Comp. Chem. 27, 1787 (2006). [2] K. Lee et al., Phys. Rev. B 82, 081101(R) (2010). [3] A. Tkatchenko and M. Scheffler, Phys. Rev. Lett. 102, 073005 (2009).
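
    Of the dispersion corrections compared above, the Grimme-type scheme is the simplest to sketch: a damped pairwise −C6/R⁶ term added to the DFT energy. Below is a schematic D2-like implementation; the coefficients and damping parameter are illustrative placeholders, not Grimme's published values:

```python
import numpy as np

def d2_dispersion(coords, c6, r_vdw, s6=0.75, d=20.0):
    """Pairwise Grimme-D2-style dispersion energy.

    E = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij), with
    f_damp = 1 / (1 + exp(-d (R_ij / R0_ij - 1))),
    C6_ij = sqrt(C6_i * C6_j), R0_ij = r_vdw_i + r_vdw_j.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])
            r0 = r_vdw[i] + r_vdw[j]
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            e -= s6 * c6_ij / r**6 * f_damp
    return e

# Two hypothetical atoms 4 Bohr apart (all quantities in atomic units).
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 4.0]])
print(d2_dispersion(coords, c6=[30.0, 30.0], r_vdw=[1.6, 1.6]))
```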

  2. Microgravity Environment Description Handbook

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard; McPherson, Kevin; Hrovat, Kenneth; Moskowitz, Milton; Rogers, Melissa J. B.; Reckart, Timothy

    1997-01-01

    The Microgravity Measurement and Analysis Project (MMAP) at the NASA Lewis Research Center (LeRC) manages the Space Acceleration Measurement System (SAMS) and the Orbital Acceleration Research Experiment (OARE) instruments to measure the microgravity environment on orbiting space laboratories. These laboratories include the Spacelab payloads on the shuttle, the SPACEHAB module on the shuttle, the middeck area of the shuttle, and Russia's Mir space station. Experiments are performed in these laboratories to investigate scientific principles in the near-absence of gravity. The microgravity environment desired for most experiments would have zero acceleration across all frequency bands or a true weightless condition. This is not possible due to the nature of spaceflight where there are numerous factors which introduce accelerations to the environment. This handbook presents an overview of the major microgravity environment disturbances of these laboratories. These disturbances are characterized by their source (where known), their magnitude, frequency and duration, and their effect on the microgravity environment. Each disturbance is characterized on a single page for ease in understanding the effect of a particular disturbance. The handbook also contains a brief description of each laboratory.

  3. CGL description revisited

    NASA Astrophysics Data System (ADS)

    Hunana, P.; Zank, G. P.; Goldstein, M. L.; Webb, G. M.; Adhikari, L.

    2016-03-01

    Solar wind observational studies have emphasized that solar wind plasma data are bounded by the mirror and firehose instabilities, and it is often believed that these instabilities are of a purely kinetic nature. The simplest fluid model that generalizes magnetohydrodynamics with anisotropic temperatures is the Chew-Goldberger-Low (CGL) model. Here we briefly revisit the CGL description and discuss its (otherwise well-documented) linear firehose and mirror instability thresholds; namely, that the firehose instability threshold is identical to the one found from linear kinetic theory and that the mirror threshold contains a factor of 6 error. We consider a simple higher-order fluid model with time-dependent heat flux equations and show that the mirror instability threshold is correctly reproduced. We also present fully nonlinear three-dimensional simulations of freely decaying turbulence for the Hall-CGL model with isothermal electrons. The spatial resolution of these simulations is 512³, and the formation of a spectral break in the magnetic and velocity field spectra around the proton inertial length is found.
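
    For reference, the thresholds at issue take the following standard forms in terms of the parallel and perpendicular plasma beta; these are quoted from the general literature, the last line being the CGL result whose factor of 6 discrepancy the abstract refers to:

```latex
% Firehose threshold (identical in CGL and in linear kinetic theory):
\[ \beta_\parallel - \beta_\perp > 2 \]
% Mirror threshold from linear kinetic theory:
\[ \beta_\perp \left( \frac{\beta_\perp}{\beta_\parallel} - 1 \right) > 1 \]
% Mirror threshold in CGL -- too large by the factor of 6 noted above:
\[ \beta_\perp \left( \frac{\beta_\perp}{\beta_\parallel} - 1 \right) > 6 \]
```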

  4. Quantitative confocal microscopy: beyond a pretty picture.

    PubMed

    Jonkman, James; Brown, Claire M; Cole, Richard W

    2014-01-01

    Quantitative optical microscopy has become the norm, with the confocal laser-scanning microscope being the workhorse of many imaging laboratories. Generating quantitative data requires a greater emphasis on the accurate operation of the microscope itself, along with proper experimental design and adequate controls. The microscope, which is more accurately an imaging system, cannot be treated as a "black box" with the collected data viewed as infallible. There needs to be regularly scheduled performance testing that will ensure that quality data are being generated. This regular testing also allows for the tracking of metrics that can point to issues before they result in instrument malfunction and downtime. In turn, images must be collected in a manner that is quantitative with maximal signal to noise (which can be difficult depending on the application) without data clipping. Images must then be processed to correct for background intensities, fluorophore cross talk, and uneven field illumination. With advanced techniques such as spectral imaging, Förster resonance energy transfer, and fluorescence-lifetime imaging microscopy, experimental design needs to be carefully planned out and include all appropriate controls. Quantitative confocal imaging in all of these contexts and more will be explored within the chapter. PMID:24974025
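
    One correction named above, uneven field illumination, has a standard remedy: flat-field correction with dark and bright reference images. A generic sketch follows; the image names and values are hypothetical:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Correct background offset and uneven illumination.

    corrected = (raw - dark) / normalized(flat - dark);
    dark: image with no light, flat: image of a uniform fluorescent field.
    """
    gain = flat.astype(float) - dark
    gain /= gain.mean()                      # normalize illumination profile
    return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)

# Synthetic demo: a uniform specimen seen through a vignetting profile.
y, x = np.indices((128, 128), dtype=float)
vignette = 1.0 - 0.4 * (((x - 64) ** 2 + (y - 64) ** 2) / 64.0 ** 2)
dark = np.full((128, 128), 100.0)
flat = dark + 1000.0 * vignette
raw = dark + 500.0 * vignette                # uniform sample, uneven field
print(flat_field_correct(raw, dark, flat).std())  # ~0: field is flattened
```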

  5. Accurate, Fully-Automated NMR Spectral Profiling for Metabolomics

    PubMed Central

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C.; Mandal, Rupasri; Grant, Jason R.; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S.

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications. PMID:25775548

  6. Accurate, fully-automated NMR spectral profiling for metabolomics.

    PubMed

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C; Mandal, Rupasri; Grant, Jason R; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR.

  7. Filling the gaps: A robust description of adhesive birth-death-movement processes

    NASA Astrophysics Data System (ADS)

    Johnston, Stuart T.; Baker, Ruth E.; Simpson, Matthew J.

    2016-04-01

    Existing continuum descriptions of discrete adhesive birth-death-movement processes provide accurate predictions of the average discrete behavior for limited parameter regimes. Here we present an alternative continuum description in terms of the dynamics of groups of contiguous occupied and vacant lattice sites. Our method provides more accurate predictions, is valid in parameter regimes that could not be described by previous continuum descriptions, and provides information about the spatial clustering of occupied sites. Furthermore, we present a simple analytic approximation of the spatial clustering of occupied sites at late time, when the system reaches its steady-state configuration.
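
    To make the discrete process concrete, the sketch below simulates a minimal one-dimensional birth-death-movement exclusion process on a periodic lattice. The rates and sizes are arbitrary illustrative choices, and the adhesion mechanism of the paper is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(lattice, p_move=0.95, p_birth=0.02, p_death=0.01):
    """One Monte Carlo sweep of a 1D birth-death-movement exclusion process."""
    n = lattice.size
    for _ in range(n):
        i = rng.integers(n)
        if not lattice[i]:
            continue                              # only occupied sites act
        target = (i + rng.choice((-1, 1))) % n    # random nearest neighbour
        r = rng.random()
        if r < p_death:
            lattice[i] = 0                        # death
        elif r < p_death + p_birth:
            if not lattice[target]:
                lattice[target] = 1               # birth into an empty site
        elif r < p_death + p_birth + p_move:
            if not lattice[target]:               # movement with exclusion
                lattice[target], lattice[i] = 1, 0

lattice = (rng.random(200) < 0.1).astype(int)     # 10% initial occupancy
for _ in range(2000):
    sweep(lattice)
print(lattice.mean())  # relaxes toward the mean-field density 1 - p_death/p_birth
```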

  8. Trusting Description: Authenticity, Accountability, and Archival Description Standards

    ERIC Educational Resources Information Center

    MacNeil, Heather

    2009-01-01

    It has been suggested that one of the purposes of archival description is to establish grounds for presuming the authenticity of the records being described. The article examines the implications of this statement by examining the relationship between and among authenticity, archival description, and archival accountability, assessing how this…

  9. An accurate equation of state for fluids and solids.

    PubMed

    Parsafar, G A; Spohr, H V; Patey, G N

    2009-09-01

    A simple functional form for a general equation of state based on an effective near-neighbor pair interaction of an extended Lennard-Jones (12,6,3) type is given and tested against experimental data for a wide variety of fluids and solids. Computer simulation results for ionic liquids are used for further evaluation. For fluids, there appears to be no upper density limitation on the equation of state. The lower density limit for isotherms near the critical temperature is the critical density. The equation of state gives a good description of all types of fluids, nonpolar (including long-chain hydrocarbons), polar, hydrogen-bonded, and metallic, at temperatures ranging from the triple point to the highest temperature for which there is experimental data. For solids, the equation of state is very accurate for all types considered, including covalent, molecular, metallic, and ionic systems. The experimental pvT data available for solids does not reveal any pressure or temperature limitations. An analysis of the importance and possible underlying physical significance of the terms in the equation of state is given. PMID:19678647
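
    The "(12,6,3)" label refers to the powers retained in the effective near-neighbor pair interaction; schematically, it takes the form below, with the coefficients treated as generic fitting parameters (the paper's working equation of state is derived from this form rather than identical to it):

```latex
% Extended Lennard-Jones (12,6,3) form of the effective near-neighbor
% pair interaction, with a, b, c as fitting coefficients:
\[ u(r) = \frac{a}{r^{12}} + \frac{b}{r^{6}} + \frac{c}{r^{3}} \]
% The r^{-12} and r^{-6} terms are the familiar LJ repulsion and
% dispersion; the r^{-3} term absorbs longer-ranged, averaged many-body
% and electrostatic contributions.
```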

  10. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  11. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  12. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of the use of Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures used, and preliminary results are discussed in detail. Continued research and equipment requirements for the extension of the effective range of the Fresnel diffraction system are also described.

  15. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  16. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  17. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  18. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  19. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented so as to form part of a telescope control system.

  20. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values; this can have a significant impact on remote sensing applications.

  1. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

  2. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  3. Reference module selection criteria for accurate testing of photovoltaic (PV) panels

    SciTech Connect

    Roy, J.N.; Gariki, Govardhan Rao; Nagalakhsmi, V.

    2010-01-15

It is shown that for accurate testing of PV panels the correct selection of reference modules is important. A detailed description of the test methodology is given. Three different types of reference modules, having different I_SC (short-circuit current) and power (in Wp), have been used for this study. These reference modules have been calibrated by NREL. It has been found that for accurate testing, both the I_SC and power of the reference module must be similar to or exceed those of the modules under test. If the corresponding values of the test modules fall below a particular limit, the measurements may not be accurate. The experimental results obtained have been modeled by using a simple equivalent circuit model and associated I-V equations. (author)

  4. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.

  5. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed. Travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
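    A minimal sketch of the boundedly rational route-choice rule described above (the threshold name `br`, the travel-time arguments, and the random tie-breaking are illustrative assumptions, not the authors' implementation):

```python
import random

def choose_route(time_a: float, time_b: float, br: float) -> str:
    """Boundedly rational choice between two routes.

    If the (possibly delayed) reported travel times differ by less than
    the threshold `br`, the traveler is indifferent and picks either
    route with equal probability; otherwise the route with the smaller
    reported travel time is chosen.
    """
    if abs(time_a - time_b) < br:
        return random.choice(["A", "B"])  # indifference band
    return "A" if time_a < time_b else "B"

# Example: reported times 12.3 and 12.6 with BR = 0.5 -> random choice.
print(choose_route(12.3, 12.6, br=0.5))
```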

  6. Quantitative optical phase microscopy.

    PubMed

    Barty, A; Nugent, K A; Paganin, D; Roberts, A

    1998-06-01

    We present a new method for the extraction of quantitative phase data from microscopic phase samples by use of partially coherent illumination and an ordinary transmission microscope. The technique produces quantitative images of the phase profile of the sample without phase unwrapping. The technique is able to recover phase even in the presence of amplitude modulation, making it significantly more powerful than existing methods of phase microscopy. We demonstrate the technique by providing quantitatively correct phase images of well-characterized test samples and show that the results obtained for more-complex samples correlate with structures observed with Nomarski differential interference contrast techniques.
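    As background, not quoted in the abstract: quantitative phase retrieval of this style, using partially coherent bright-field images and requiring no phase unwrapping, is commonly formulated through the transport-of-intensity equation, shown here only as a standard reference relation (with k = 2π/λ and z along the optical axis):

```latex
\nabla_{\perp} \cdot \bigl( I(\mathbf{r}_{\perp}) \, \nabla_{\perp} \varphi(\mathbf{r}_{\perp}) \bigr)
\;=\; -\,k \, \frac{\partial I(\mathbf{r}_{\perp})}{\partial z}
```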

  7. Rapid Accurate Identification of Bacterial and Viral Pathogens

    SciTech Connect

    Dunn, John

    2007-03-09

The goals of this program were to develop two assays for rapid, accurate identification of pathogenic organisms at the strain level. The first assay, "Quantitative Genome Profiling" or QGP, is a real-time PCR assay with a restriction enzyme-based component. Its underlying concept is that certain enzymes should cleave genomic DNA at many sites and that in some cases these cuts will interrupt the connection on the genomic DNA between flanking PCR primer pairs, thereby eliminating selected PCR amplifications. When this occurs, the appearance of the real-time PCR threshold (Ct) signal during DNA amplification is totally eliminated or, if cutting is incomplete, greatly delayed compared to an uncut control. This temporal difference in appearance of the Ct signal relative to undigested control DNA provides a rapid, high-throughput approach for DNA-based identification of different but closely related pathogens, depending upon the nucleotide sequence of the target region. The second assay we developed uses the nucleotide sequence of pairs of short identifier tags (~21 bp) to identify DNA molecules. Subtle differences in linked tag pair combinations can also be used to distinguish between closely related isolates.

  8. Accurate multiple network alignment through context-sensitive random walk

    PubMed Central

    2015-01-01

Background Comparative network analysis can provide an effective means of analyzing large-scale biological networks and gaining novel insights into their structure and organization. Global network alignment aims to predict the best overall mapping between a given set of biological networks, thereby identifying important similarities as well as differences among the networks. It has been shown that network alignment methods can be used to detect pathways or network modules that are conserved across different networks. To date, a number of network alignment algorithms have been proposed based on different formulations and approaches, many of them focusing on pairwise alignment. Results In this work, we propose a novel multiple network alignment algorithm based on a context-sensitive random walk model. The random walker employed in the proposed algorithm switches between two different modes, namely, an individual walk on a single network and a simultaneous walk on two networks. The switching decision is made in a context-sensitive manner by examining the current neighborhood, which is effective for quantitatively estimating the degree of correspondence between nodes that belong to different networks, in a manner that sensibly integrates node similarity and topological similarity. The resulting node correspondence scores are then used to predict the maximum expected accuracy (MEA) alignment of the given networks. Conclusions Performance evaluation based on synthetic networks as well as real protein-protein interaction networks shows that the proposed algorithm can construct more accurate multiple network alignments compared to other leading methods. PMID:25707987
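    A minimal, illustrative sketch of the two-mode walk idea (the adjacency representation, similarity threshold, restart probability, and visit-count scoring are all our assumptions; the published algorithm derives proper transition probabilities and feeds the resulting scores into an MEA alignment step):

```python
import random
from collections import defaultdict

def context_sensitive_walk(adj1, adj2, sim, n_steps=100_000,
                           sim_threshold=0.5, restart=0.1):
    """Toy context-sensitive random walk over two networks.

    adj1, adj2: dict node -> list of neighbors (every node is assumed
    to have at least one neighbor). sim: dict (u, v) -> similarity in
    [0, 1] between nodes of the two networks. Returns visit counts of
    simultaneous states (u, v), usable as correspondence scores.
    """
    nodes1, nodes2 = list(adj1), list(adj2)
    u, v = random.choice(nodes1), random.choice(nodes2)
    scores = defaultdict(int)
    for _ in range(n_steps):
        if random.random() < restart:  # restart keeps the walk ergodic
            u, v = random.choice(nodes1), random.choice(nodes2)
        # Context check: similar neighbor pairs enable a simultaneous step.
        pairs = [(a, b) for a in adj1[u] for b in adj2[v]
                 if sim.get((a, b), 0.0) >= sim_threshold]
        if pairs:                      # simultaneous walk on both networks
            u, v = random.choice(pairs)
            scores[(u, v)] += 1
        else:                          # individual walk on each network
            u = random.choice(adj1[u])
            v = random.choice(adj2[v])
    return scores
```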

  9. Subvoxel accurate graph search using non-Euclidean graph space.

    PubMed

    Abràmoff, Michael D; Wu, Xiaodong; Lee, Kyungmoo; Tang, Li

    2014-01-01

Graph search is attractive for the quantitative analysis of volumetric medical images, and especially for layered tissues, because it allows globally optimal solutions in low-order polynomial time. However, because nodes of graphs typically encode evenly distributed voxels of the volume with arcs connecting orthogonally sampled voxels in Euclidean space, segmentation cannot achieve greater precision than a single unit, i.e. the distance between two adjoining nodes, and partial volume effects are ignored. We generalize the graph to non-Euclidean space by allowing non-equidistant spacing between nodes, so that subvoxel-accurate segmentation is achievable. Because the number of nodes and edges in the graph remains the same, running time and memory use are similar, while all the advantages of graph search, including global optimality and computational efficiency, are retained. A deformation field calculated from the volume data adaptively changes regional node density so that node density varies with the inverse of the expected cost. We validated our approach using optical coherence tomography (OCT) images of the retina and 3-D MR images of the arterial wall, and achieved statistically significant increases in accuracy. Our approach allows improved accuracy in volume data acquired with the same hardware, and also preserves accuracy with lower-resolution, more cost-effective image acquisition equipment. The method is not limited to any specific imaging modality and is readily extensible to higher dimensions.
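    A small sketch of the non-equidistant node placement described above, under the stated rule that node density varies with the inverse of the expected cost (the quantile-based placement and all parameter names are our illustrative choices, not the authors' implementation):

```python
import numpy as np

def subvoxel_node_positions(cost: np.ndarray, n_nodes: int) -> np.ndarray:
    """Place graph nodes along one image column with non-equidistant spacing.

    Node density is proportional to 1 / expected cost, so nodes crowd
    into low-cost regions where the sought surface is likely to pass,
    enabling subvoxel localization without increasing the node count.
    """
    z = np.arange(cost.size, dtype=float)       # voxel coordinates
    density = 1.0 / np.maximum(cost, 1e-9)      # density ~ inverse cost
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalized CDF in [0, 1]
    quantiles = np.linspace(0.0, 1.0, n_nodes)
    return np.interp(quantiles, cdf, z)         # non-Euclidean node spacing

# Example: low cost (likely boundary) near z = 50 attracts nodes there.
cost = 1.0 + np.abs(np.arange(100) - 50) / 10.0
print(subvoxel_node_positions(cost, 10).round(2))
```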

  10. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

Accuracy, validity, and the lack of positional information relating the dental root to the jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined by the anatomical reference points. The matching mathematical model of the teeth pattern and the principle of the specific pose transformation of rigid bodies are fully utilized. The positional relation between the dental root and the alveolar bone is considered during the design process. Finally, the relative pose relationships among the various teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relation between root and jaw is well preserved. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.

  11. Accurate masses for dispersion-supported galaxies

    NASA Astrophysics Data System (ADS)

    Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.

    2010-08-01

We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ_los^2⟩ r_1/2 ≃ 4 G^-1 ⟨σ_los^2⟩ R_e, where ⟨σ_los^2⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of mass approximately 3 × 10^9 M_⊙, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≃ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≃ 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ≃ 800 for galaxy cluster spheroids.
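    Typeset for readability, the estimator quoted above reads (a pure re-rendering of the abstract's formula, no new content):

```latex
M_{1/2} \;=\; 3\,G^{-1}\,\bigl\langle \sigma_{\mathrm{los}}^{2} \bigr\rangle\, r_{1/2}
\;\simeq\; 4\,G^{-1}\,\bigl\langle \sigma_{\mathrm{los}}^{2} \bigr\rangle\, R_{e},
```

    where r_{1/2} is the 3D deprojected half-light radius and R_e the 2D projected half-light radius.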

  12. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)

  13. On Quantitative Rorschach Scales.

    ERIC Educational Resources Information Center

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  14. RECENT ADVANCES IN QUANTITATIVE NEUROPROTEOMICS

    PubMed Central

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2014-01-01

The field of proteomics is undergoing rapid development in a number of different areas, including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification analysis, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders, including schizophrenia and drug addiction, and neurodegenerative diseases, including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). While the biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to

  15. Semiclassical description of autocorrelations in nuclear masses

    SciTech Connect

    Garcia-Garcia, Antonio M.; Hirsch, Jorge G.; Frank, Alejandro

    2006-08-15

    Nuclear mass autocorrelations are investigated as a function of the number of nucleons. The fluctuating part of these autocorrelations is modeled by a parameter free model in which the nucleons are confined in a rigid sphere. Explicit results are obtained by using periodic orbit theory. Despite the simplicity of the model we have found a remarkable quantitative agreement of the mass autocorrelations for all nuclei in the nuclear data chart. In order to achieve a similar degree of agreement for the nuclear masses themselves it is necessary to consider additional variables such as multipolar corrections to the spherical shape and an effective number of nucleons. Our findings suggest that higher order effects like nuclear deformations or residual interactions have little relevance in the description of the fluctuations of the nuclear autocorrelations.

  16. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown of the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
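    As background, not quoted in the abstract: Doppler-broadening thermometry of this kind ties the Boltzmann constant to the measured Gaussian linewidth. In one standard convention, with Δν_D the 1/e half-width of the Doppler profile, ν_0 the transition frequency, m the atomic mass and T the gas temperature:

```latex
\Delta\nu_{D} \;=\; \frac{\nu_{0}}{c}\,\sqrt{\frac{2 k_{B} T}{m}}
\qquad\Longrightarrow\qquad
k_{B} \;=\; \frac{m c^{2}}{2\,T}\left(\frac{\Delta\nu_{D}}{\nu_{0}}\right)^{2},
```

    so a shot-noise-limited lineshape measurement at known T and m yields k_B.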

  17. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
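    For context, thermodynamic integration, one of the path-based methods named above, computes the free energy difference along such a path via the textbook relation (not a quotation from this article):

```latex
\Delta F \;=\; \int_{0}^{1} \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda} \, d\lambda ,
```

    where H(λ) interpolates between the Hamiltonians of the initial (λ = 0) and final (λ = 1) states and ⟨·⟩_λ denotes an equilibrium average at fixed λ.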

  18. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  19. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
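    To make the O(S)-per-pixel cost concrete, here is a minimal NumPy sketch of the direct bilateral filter that the paper's fast algorithm approximates (the O(1) algorithm itself is not reproduced; function and parameter names are ours):

```python
import numpy as np

def bilateral_filter(img: np.ndarray, sigma_s: float, sigma_r: float,
                     radius: int) -> np.ndarray:
    """Direct (brute-force) bilateral filter for a 2D grayscale image.

    Cost is O(S) per pixel with S = (2*radius + 1)**2, the support of
    the spatial Gaussian; the range kernel is also Gaussian.
    """
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="reflect")
    out = np.zeros((h, w))
    # Precompute the Gaussian spatial kernel over the support window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng      # combined edge-preserving weight
            out[i, j] = (weights * window).sum() / weights.sum()
    return out
```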

  20. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  1. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz-enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  2. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  3. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10(-12) at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10(-7) cm(-1), which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  4. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solutions based on the acquired images. Hence, MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
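    A minimal sketch of the spherical-constraint calibration mentioned above, assuming a simple per-axis offset-and-scale error model (the error model, function names, and use of SciPy are our illustrative assumptions, not the authors' procedure): a stationary sensor must measure a gravity vector of magnitude g in every orientation, which constrains the calibration parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_accelerometer(readings: np.ndarray, g: float = 9.81) -> np.ndarray:
    """Fit per-axis offsets b and scales s so that corrected static
    readings (r - b) / s all have magnitude g (spherical constraint).

    readings: (N, 3) raw static measurements taken in many different
    orientations (the spread of orientations is what makes the fit
    well conditioned). Returns [bx, by, bz, sx, sy, sz].
    """
    def residuals(p):
        b, s = p[:3], p[3:]
        corrected = (readings - b) / s
        return np.linalg.norm(corrected, axis=1) - g

    p0 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # ideal-sensor start
    return least_squares(residuals, p0).x
```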

  5. Quantitative receptor autoradiography

    SciTech Connect

    Boast, C.A.; Snowhill, E.W.; Altar, C.A.

    1986-01-01

Quantitative receptor autoradiography addresses the topic of technical and scientific advances in the sphere of quantitative autoradiography. The volume opens with an overview of the field from a historical and critical perspective. Following is a detailed discussion of in vitro data obtained from a variety of neurotransmitter systems. The next section explores applications of autoradiography, and the final two chapters consider experimental models. Methodological considerations are emphasized, including the use of computers for image analysis.

  6. Intersexes in swine: a problem in descriptive anatomy.

    PubMed Central

    Halina, W G; Barrales, D W; Partlow, G D; Fisher, K R

    1984-01-01

Accurate anatomical descriptions of ten intersex pigs were compiled through dissection and histological examination in order to identify specific groups of reproductive anomalies. Six different anatomical phenotypes were identified: four varieties of male pseudohermaphrodite, one type of female pseudohermaphrodite and one type of true hermaphrodite. The intersex phenomenon is complicated by the number of distinct anatomical phenotypes represented broadly by the term hermaphrodite. Therefore, accurate anatomical descriptions and precise terminology are prerequisites to defining the etiology of hermaphroditism and defining the modes of inheritance. PMID:6478301

  7. Accurate measurement of the relative abundance of different DNA species in complex DNA mixtures.

    PubMed

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-06-01

A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation to accurately quantify the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27 in the form of the relative abundances of their transcripts, as well as the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters during transcription.

  8. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    PubMed Central

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation to accurately quantify the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27 in the form of the relative abundances of their transcripts, as well as the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters during transcription. PMID:22334570

  9. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular-dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular-dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular-dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution.
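    As a toy illustration of the χ²-fitting idea (our own stripped-down model: a bare point source rotating with the drum, an inverse-square count-rate falloff, and an assumed detector distance, with none of the paper's collimator or matrix-attenuation modeling):

```python
import numpy as np
from scipy.optimize import curve_fit

R_DET = 50.0  # cm, assumed detector-to-rotation-axis distance

def count_rate(theta, activity, r_src):
    # Source-detector distance squared for a source at radius r_src
    # rotated to angle theta; count rate ~ activity / distance**2.
    d2 = R_DET**2 + r_src**2 - 2 * R_DET * r_src * np.cos(theta)
    return activity / d2

# Synthetic "measured" angular distribution with 2% noise.
theta = np.linspace(0, 2 * np.pi, 90)
measured = count_rate(theta, 4.0e4, 12.0)
measured += np.random.normal(0, measured.max() * 0.02, theta.size)

# Least-squares (chi-square) fit of activity and source radius.
popt, _ = curve_fit(count_rate, theta, measured, p0=[1.0e4, 5.0])
print("fitted activity ~ %.3g, source radius ~ %.1f cm" % tuple(popt))
```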

  10. Recommended procedures and methodology of coal description

    USGS Publications Warehouse

    Chao, E.C.; Minkin, J.A.; Thompson, C.L.

    1983-01-01

This document is the result of a workshop on coal description held for the Branch of Coal Resources of the U.S. Geological Survey in March 1982. It has been prepared to aid and encourage the field-oriented coal scientist to participate directly in petrographic coal-description activities. The objectives and past and current practices of coal description vary widely. These are briefly reviewed and illustrated with examples. Sampling approaches and techniques for collecting columnar samples of fresh coal are also discussed. The recommended procedures and methodology emphasize the fact that obtaining a good megascopic description of a coal bed is much better done in the laboratory, with a binocular microscope and under good lighting conditions, after the samples have been cut and quickly prepared. For better observation and cross-checking using a petrographic microscope for identification purposes, an in-place polishing procedure (requiring less than 2 min) is routinely used. Methods for using both the petrographic microscope and an automated image analysis system are also included for geologists who have access to such instruments. To describe the material characteristics of a coal bed in terms of microlithotypes or lithotypes, a new nomenclature of (V), (E), (I), (M), (S), (X1), (X2), and so on is used. The microscopic description of the modal composition of a megascopically observed lithologic type is expressed in terms of (VEIM); subscripts are used to denote the volume percentage of each constituent present. To describe a coal-bed profile, semiquantitative data (without microscopic study) and quantitative data (with microscopic study) are presented in ready-to-understand form. The average total composition of any thickness interval or of the entire coal bed can be plotted on a triangular diagram having V, E, and I+M+S as the apices. The modal composition of any mixed lithologies such as (X1), (X2), and so on can also be plotted on such a ternary diagram.

  11. Fractal-based description of natural scenes.

    PubMed

    Pentland, A P

    1984-06-01

This paper addresses the problems of 1) representing natural shapes such as mountains, trees, and clouds, and 2) computing their description from image data. To solve these problems, we must be able to relate natural surfaces to their images; this requires a good model of natural surface shapes. Fractal functions are a good choice for modeling 3-D natural surfaces because 1) many physical processes produce a fractal surface shape, 2) fractals are widely used as a graphics tool for generating natural-looking shapes, and 3) a survey of natural imagery has shown that the 3-D fractal surface model, transformed by the image formation process, furnishes an accurate description of both textured and shaded image regions. The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable. Furthermore, this characterization is stable over transformations of scale and linear transforms of intensity. The 3-D fractal model has been successfully applied to the problems of 1) texture segmentation and classification, 2) estimation of 3-D shape information, and 3) distinguishing between perceptually "smooth" and perceptually "textured" surfaces in the scene.
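    Since the paper rests on measuring fractal properties from image data, here is a short self-contained sketch of the classic box-counting estimator of fractal dimension (an illustrative standard technique; the paper itself derives fractal dimension from image statistics rather than this exact procedure):

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting fractal dimension of a square binary
    image (assumed non-empty): count occupied boxes N(s) at dyadic box
    sizes s and fit log N(s) = -D log s + c; D is the slope magnitude.
    """
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        k = n // s
        # Partition into k x k blocks of side s; count non-empty blocks.
        blocks = mask[:k * s, :k * s].reshape(k, s, k, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Example: a filled square should give a dimension close to 2.
print(box_counting_dimension(np.ones((256, 256), dtype=bool)))
```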

  12. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed (the pseudo-Thellier protocol) which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  13. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal, we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.

  14. Accurate Thermal Conductivities from First Principles

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian

    2015-03-01

    In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
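    For reference, the Green-Kubo formalism invoked above relates the thermal conductivity to the equilibrium heat-flux autocorrelation. In a common isotropic convention (flux-normalization conventions vary between authors; this is a textbook form, not a quotation from the talk):

```latex
\kappa \;=\; \frac{V}{3\,k_{B} T^{2}} \int_{0}^{\infty} \bigl\langle \mathbf{j}(t) \cdot \mathbf{j}(0) \bigr\rangle \, dt ,
```

    where j is the heat-flux density, V the cell volume, T the temperature and k_B the Boltzmann constant.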

  15. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.

  16. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length, and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose, increasing for 9 Gy up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green, and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.

  17. Quantitation and detection of vanadium in biologic and pollution materials

    NASA Technical Reports Server (NTRS)

    Gordon, W. A.

    1974-01-01

    A review is presented of special considerations and methodology for determining vanadium in biological and air pollution materials. In addition to descriptions of specific analysis procedures, general sections are included on quantitation of analysis procedures, sample preparation, blanks, and methods of detection of vanadium. Most of the information presented is applicable to the determination of other trace elements in addition to vanadium.

  18. Toward a New Classification of Nonexperimental Quantitative Research.

    ERIC Educational Resources Information Center

    Johnson, Burke

    2001-01-01

    Reviews the present treatment of nonexperimental methods of educational research, proposing a new, two-dimensional classification of nonexperimental quantitative research. The first dimension is based on the primary research objective (i.e., description, prediction, and explanation). The second dimension is called the time dimension (i.e.,…

  19. Nonexperimental Quantitative Research and Its Role in Guiding Instruction

    ERIC Educational Resources Information Center

    Cook, Bryan G.; Cook, Lysandra

    2008-01-01

    Different research designs answer different questions. Educators cannot use nonexperimental quantitative research designs, such as descriptive surveys and correlational research, to determine definitively that an intervention causes improved student outcomes and is an evidence-based practice. However, such research can (a) inform educators about a…

  20. Quantitative aspects of septicemia.

    PubMed Central

    Yagupsky, P; Nolte, F S

    1990-01-01

    For years, quantitative blood cultures found only limited use as aids in the diagnosis and management of septic patients because the available methods were cumbersome, labor-intensive, and practical only for relatively small volumes of blood. The development and subsequent commercial availability of lysis-centrifugation direct plating methods for blood cultures have addressed many of the shortcomings of the older methods. The lysis-centrifugation method has demonstrated good performance relative to broth-based blood culture methods. As a result, quantitative blood cultures have found widespread use in clinical microbiology laboratories. Most episodes of clinically significant bacteremia in adults are characterized by low numbers of bacteria per milliliter of blood. In children, the magnitude of bacteremia is generally much higher, with the highest numbers of bacteria found in the blood of septic neonates. The magnitude of bacteremia correlates with the severity of disease in children and with mortality rates in adults, but other factors play more important roles in determining the patient's outcome. Serial quantitative blood cultures have been used to monitor the in vivo efficacy of antibiotic therapy in patients with slowly resolving sepsis, such as disseminated Mycobacterium avium-M. intracellulare complex infections. Quantitative blood culture methods were used in early studies of bacterial endocarditis, and the results significantly contributed to our understanding of the pathophysiology of this disease. Comparison of paired quantitative blood cultures obtained from a peripheral vein and the central venous catheter has been used to help identify patients with catheter-related sepsis and is the only method that does not require removal of the catheter to establish the diagnosis. Quantitation of bacteria in the blood can also help distinguish contaminated from truly positive blood cultures; however, no quantitative criteria can invariably differentiate

  1. Parsimonious description for predicting high-dimensional dynamics

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2015-10-01

    When we observe a system, we often cannot observe all of its variables and may have only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful for reconstructing the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, practically there is a limitation because the calculation of higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster, because we can reuse the calculated distances, and more accurately, because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy.
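
    The reuse of calculated distances can be captured by a simple recursion: writing d2[t, s] for the squared weighted distance between the delay vectors ending at times t and s, one has d2[t, s] = |x[t] - x[s]|^2 + lam * d2[t-1, s-1] for a decay factor 0 < lam < 1. The Python sketch below implements this recursion; the authors' exact weighting convention may differ, so treat it as an illustration of the idea rather than their algorithm.

    ```python
    import numpy as np

    def weighted_delay_distances(x, lam=0.9):
        """Pairwise squared distances between exponentially weighted,
        'virtually infinite-dimensional' delay vectors of a scalar series x."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        d2 = np.zeros((n, n))
        for t in range(n):
            for s in range(n):
                # reuse the distance one step back in both time indices
                prev = d2[t - 1, s - 1] if t > 0 and s > 0 else 0.0
                d2[t, s] = (x[t] - x[s]) ** 2 + lam * prev
        return d2
    ```

    Nearest neighbours under d2 can then drive local (analogue) prediction of the next measurement.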

  2. Parsimonious description for predicting high-dimensional dynamics

    PubMed Central

    Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2015-01-01

    When we observe a system, we often cannot observe all of its variables and may have only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful for reconstructing the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, practically there is a limitation because the calculation of higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster, because we can reuse the calculated distances, and more accurately, because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518

  3. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
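
    A minimal sketch of the underlying idea, assuming the standard dichromatic reflection model: each pixel is a mixture rgb = m_b * body + m_s * specular of a body (diffuse) colour and the illuminant colour, so recovering the per-pixel geometry factors (m_b, m_s) lets a target body colour be substituted while shading and highlights are preserved. The least-squares recovery below is an illustrative assumption, not the estimation procedure of this study.

    ```python
    import numpy as np

    def recolor_dichromatic(rgb, body, specular, new_body):
        """Replace an object's body colour while keeping shading/highlights,
        assuming rgb = m_b * body + m_s * specular at every pixel."""
        body, specular, new_body = map(np.asarray, (body, specular, new_body))
        A = np.stack([body, specular], axis=1)            # 3 x 2 mixing matrix
        m, *_ = np.linalg.lstsq(A, rgb.reshape(-1, 3).T, rcond=None)
        m_b, m_s = m                                      # per-pixel factors
        out = np.outer(new_body, m_b) + np.outer(specular, m_s)
        return np.clip(out.T.reshape(rgb.shape), 0.0, 1.0)
    ```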

  4. Accurate color synthesis of three-dimensional objects in an image.

    PubMed

    Xin, John H; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing. PMID:15139423

  5. Dynamic, Quantitative Assays of Phagosomal Function

    PubMed Central

    Podinovskaia, Maria; VanderVen, Brian C.; Yates, Robin M.; Glennie, Sarah; Fullerton, Duncan; Mwandumba, Henry C.; Russell, David G.

    2013-01-01

    Much of the activity of the macrophage as an effector cell is performed within its phagocytic compartment. This ranges from the degradation of tissue debris as part of its homeostatic function, to the generation of the superoxide burst as part of its microbicidal response to infection. We have developed a range of real-time readouts of phagosomal function that enable these activities to be rigorously quantified. This chapter contains the description of several of these assays assessed by different methods of quantitation, including a Fluorescence Resonance Energy Transfer (FRET) assay for phagosome/lysosome fusion measured by spectrofluorometer, a fluorogenic assay for the superoxide burst measured by flow cytometry, and a fluorogenic assay for bulk proteolysis measured by confocal microscope. These assays illustrate both the range of parameters that can be quantified and the flexibility of instrumentation that can be exploited for their quantitation. PMID:24510516

  6. Quantitative aspects of the Galperin L parameter

    NASA Astrophysics Data System (ADS)

    Kosik, J. C.

    2007-12-01

    A new geomagnetic parameter, the Galperin L parameter, was suggested twenty years ago by Y. Galperin and introduced into the CNES Maglib for French-Russian projects exploring the distant magnetosphere. The definition and the advantages of the Galperin L parameter are recalled in this brief paper. Unforeseen possibilities in the use of this parameter for mathematical models of the magnetosphere are stressed using past results obtained with the Mead model. The Galperin L parameter is shown to add, in the synchronous region, a quantitative capability to the qualitative description (labelling) of the magnetosphere. More work will be necessary to adapt past mathematical models to present numerical models and extend the domain of the quantitative applications of the Galperin L parameter.

  7. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  8. Descripcion y Medida del Bilinguismo a Nivel Colectivo. [Description and Measurement of Bilingualism at the Collective Level]

    ERIC Educational Resources Information Center

    Siguan, Miguel

    1976-01-01

    A presentation of a rigorous method allowing an accurate description of collective bilingualism in any given population, including both the speaker's degree of language command and the patterns of linguistic behavior in each of the languages. [In Spanish] (NQ)

  9. Quantitative Glycomics Strategies*

    PubMed Central

    Mechref, Yehia; Hu, Yunli; Desantos-Garcia, Janie L.; Hussein, Ahmed; Tang, Haixu

    2013-01-01

    The correlations between protein glycosylation and many biological processes and diseases are increasing the demand for quantitative glycomics strategies enabling sensitive monitoring of changes in the abundance and structure of glycans. This is currently attained through multiple strategies employing several analytical techniques such as capillary electrophoresis, liquid chromatography, and mass spectrometry. The detection and quantification of glycans often involve labeling with ionic and/or hydrophobic reagents. This step is needed in order to enhance detection in spectroscopic and mass spectrometric measurements. Recently, labeling with stable isotopic reagents has also been presented as a very viable strategy enabling relative quantitation. The different strategies available for reliable and sensitive quantitative glycomics are herein described and discussed. PMID:23325767

  10. Quantitative photoacoustic tomography

    PubMed Central

    Yuan, Zhen; Jiang, Huabei

    2009-01-01

    In this paper, several algorithms that allow for quantitative photoacoustic reconstruction of tissue optical, acoustic and physiological properties are described in a finite-element method based framework. These quantitative reconstruction algorithms are compared, and the merits and limitations associated with these methods are discussed. In addition, a multispectral approach is presented for concurrent reconstructions of multiple parameters including deoxyhaemoglobin, oxyhaemoglobin and water concentrations as well as acoustic speed. Simulation and in vivo experiments are used to demonstrate the effectiveness of the reconstruction algorithms presented. PMID:19581254

  11. Flow unit concept - integrated approach to reservoir description for engineering projects

    SciTech Connect

    Ebanks, W.J. Jr.

    1987-05-01

    The successful application of secondary and tertiary oil recovery technology requires an accurate understanding of the internal architecture of the reservoir. Engineers have difficulty incorporating geological heterogeneity in their numerical models for simulating reservoir behavior. The concept of flow units has been developed to integrate geological and engineering data into a system for reservoir description. A flow unit is a volume of the total reservoir rock within which geological and petrophysical properties that affect fluid flow are internally consistent and predictably different from properties of other rock volumes (i.e., flow units). Flow units are defined by geological properties, such as texture, mineralogy, sedimentary structures, bedding contacts, and the nature of permeability barriers, combined with quantitative petrophysical properties, such as porosity, permeability, capillarity, and fluid saturations. Studies in the subsurface and in surface outcrops have shown that flow units do not always coincide with geologic lithofacies. The flow unit approach provides a means of uniquely subdividing reservoirs into volumes that approximate the architecture of a reservoir at a scale consistent with reservoir simulations. Thus, reservoir engineers can incorporate critical geological information into a reservoir simulation without greatly increasing the complexity of their models. This approach has advantages over more traditional methods of reservoir zonation whereby model layers are determined on the basis of vertical distributions of permeability and porosity from core analyses and wireline logs.

  12. Multiscale schemes for the predictive description and virtual engineering of materials.

    SciTech Connect

    von Lilienfeld-Toal, Otto Anatole

    2010-09-01

    This report documents research carried out by the author throughout his 3-year Truman fellowship. The overarching goal consisted of developing multiscale schemes which permit not only the predictive description but also the computational design of improved materials. Identifying new materials through changes in atomic composition and configuration requires the use of versatile first-principles methods, such as density functional theory (DFT). The predictive reliability of DFT has been investigated with respect to pseudopotential construction, band gaps, van der Waals forces, and nuclear quantum effects. Continuous variation of chemical composition and derivation of accurate energy gradients in compound space has been developed within a DFT framework for free energies of solvation, reaction energetics, and frontier orbital eigenvalues. Similar variations have been leveraged within classical molecular dynamics in order to address thermal properties of molten salt candidates for heat transfer fluids used in solar thermal power facilities. Finally, a combination of DFT and statistical methods has been used to devise quantitative structure-property relationships for the rapid prediction of charge mobilities in polyaromatic hydrocarbons.

  13. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  14. Evaluating quantitative proton-density-mapping methods.

    PubMed

    Mezer, Aviv; Rokem, Ariel; Berman, Shai; Hastie, Trevor; Wandell, Brian A

    2016-10-01

    Quantitative magnetic resonance imaging (qMRI) aims to quantify tissue parameters by eliminating instrumental bias. We describe qMRI theory, simulations, and software designed to estimate proton density (PD), the apparent local concentration of water protons in the living human brain. First, we show that, in the absence of noise, multichannel coil data contain enough information to separate PD and coil sensitivity, a limiting instrumental bias. Second, we show that, in the presence of noise, regularization by a constraint on the relationship between T1 and PD produces accurate coil sensitivity and PD maps. The ability to measure PD quantitatively has applications in the analysis of in-vivo human brain tissue and enables multisite comparisons between individuals and across instruments. Hum Brain Mapp 37:3623-3635, 2016. © 2016 Wiley Periodicals, Inc.

  15. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    SciTech Connect

    Bevill, Aaron M; Bledsoe, Keith C

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
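
    A generic form of such a chi-squared scan is sketched below in Python: candidate holdup masses are accepted while their goodness-of-fit statistic stays within the one-parameter cutoff above the minimum. The Gaussian error model and single-parameter cutoff are simplifying assumptions for illustration; the algorithm in this work is more elaborate.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def holdup_confidence_interval(measured, forward_model, masses,
                                   sigma, level=0.95):
        """Confidence interval on holdup mass from a chi-squared scan;
        forward_model(m) predicts the imager data for holdup mass m."""
        masses = np.asarray(masses, dtype=float)
        stats = np.array([np.sum(((measured - forward_model(m)) / sigma) ** 2)
                          for m in masses])
        cutoff = stats.min() + chi2.ppf(level, df=1)
        accepted = masses[stats <= cutoff]
        return accepted.min(), accepted.max()
    ```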

  16. Evaluating quantitative proton-density-mapping methods.

    PubMed

    Mezer, Aviv; Rokem, Ariel; Berman, Shai; Hastie, Trevor; Wandell, Brian A

    2016-10-01

    Quantitative magnetic resonance imaging (qMRI) aims to quantify tissue parameters by eliminating instrumental bias. We describe qMRI theory, simulations, and software designed to estimate proton density (PD), the apparent local concentration of water protons in the living human brain. First, we show that, in the absence of noise, multichannel coil data contain enough information to separate PD and coil sensitivity, a limiting instrumental bias. Second, we show that, in the presence of noise, regularization by a constraint on the relationship between T1 and PD produces accurate coil sensitivity and PD maps. The ability to measure PD quantitatively has applications in the analysis of in-vivo human brain tissue and enables multisite comparisons between individuals and across instruments. Hum Brain Mapp 37:3623-3635, 2016. © 2016 Wiley Periodicals, Inc. PMID:27273015

  17. Quantitative Decision Making.

    ERIC Educational Resources Information Center

    Baldwin, Grover H.

    The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…

  18. Quantitative Simulation Games

    NASA Astrophysics Data System (ADS)

    Černý, Pavol; Henzinger, Thomas A.; Radhakrishna, Arjun

    While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of "fit" or "desirability." In this article, we explore several ways in which the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the "robustness" of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the "seriousness" of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit-average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error-correcting codes.

  19. Nanoliter high throughput quantitative PCR

    PubMed Central

    Morrison, Tom; Hurley, James; Garcia, Javier; Yoder, Karl; Katz, Arrin; Roberts, Douglas; Cho, Jamie; Kanigan, Tanya; Ilyin, Sergey E.; Horowitz, Daniel; Dixon, James M.; Brenan, Colin J.H.

    2006-01-01

    Understanding biological complexity arising from patterns of gene expression requires accurate and precise measurement of RNA levels across large numbers of genes simultaneously. Real time PCR (RT-PCR) in a microtiter plate is the preferred method for quantitative transcriptional analysis but scaling RT-PCR to higher throughputs in this fluidic format is intrinsically limited by cost and logistic considerations. Hybridization microarrays measure the transcription of many thousands of genes simultaneously yet are limited by low sensitivity, dynamic range, accuracy and sample throughput. The hybrid approach described here combines the superior accuracy, precision and dynamic range of RT-PCR with the parallelism of a microarray in an array of 3072 real time, 33 nl polymerase chain reactions (RT-PCRs) the size of a microscope slide. RT-PCR is demonstrated with an accuracy and precision equivalent to the same assay in a 384-well microplate but in a 64-fold smaller reaction volume, a 24-fold higher analytical throughput and a workflow compatible with standard microplate protocols. PMID:17000636

  20. Towards an accurate treatment of σ∗ ← σ transitions: Moving onto N6•−

    NASA Astrophysics Data System (ADS)

    Dumont, Élise; Ferré, Nicolas; Monari, Antonio

    2013-08-01

    Dimeric σ∗ radical anions are ubiquitous, and their formation, spectroscopy and outcome can often be elucidated by density functional theory. But for shorter interfragment distances, three-electron two-center systems such as the hexanitrogen radical anion can resist a single-determinant description. For N6•−, we show that multireference configuration interaction calculations are required to recover its characteristic electronic excitation energy, while TDDFT fails even with modern exchange-correlation functionals. The effects of vibronic couplings on the absorption spectrum are delineated based on a full quantum mechanical dynamical treatment; this study opens the door towards an accurate description of the subtle solvatochromism of hemi-bonded systems.

  1. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  2. Accurate and efficient method for many-body van der Waals interactions.

    PubMed

    Tkatchenko, Alexandre; DiStasio, Robert A; Car, Roberto; Scheffler, Matthias

    2012-06-01

    An efficient method is developed for the microscopic description of the frequency-dependent polarizability of finite-gap molecules and solids. This is achieved by combining the Tkatchenko-Scheffler van der Waals (vdW) method [Phys. Rev. Lett. 102, 073005 (2009)] with the self-consistent screening equation of classical electrodynamics. This leads to a seamless description of polarization and depolarization for the polarizability tensor of molecules and solids. The screened long-range many-body vdW energy is obtained from the solution of the Schrödinger equation for a system of coupled oscillators. We show that the screening and the many-body vdW energy play a significant role even for rather small molecules, becoming crucial for an accurate treatment of conformational energies for biomolecules and binding of molecular crystals. The computational cost of the developed theory is negligible compared to the underlying electronic structure calculation.

  3. Quantitative imaging of a non-combusting diesel spray using structured laser illumination planar imaging

    NASA Astrophysics Data System (ADS)

    Berrocal, E.; Kristensson, E.; Hottenbach, P.; Aldén, M.; Grünefeld, G.

    2012-12-01

    Due to their transient nature, high degree of atomization, and rapid generation of fine evaporating droplets, diesel sprays have been, and still remain, among the most challenging sprays to be fully analyzed and understood by means of non-intrusive diagnostics. The main limitation of laser techniques for quantitative measurements of diesel sprays is the detection of multiple light scattering resulting from the high optical density of such a scattering medium. A second limitation is the extinction of the incident laser radiation as it crosses the spray, as well as the attenuation of the signal to be detected. These issues have strongly motivated, during the past decade, the use of X-rays instead of visible light for dense spray diagnostics. However, we demonstrate in this paper that, based on an affordable Nd:YAG laser system, structured laser illumination planar imaging (SLIPI) can provide an accurate quantitative description of a non-reacting diesel spray injected at 1,100 bar within a room-temperature vessel pressurized at 18.6 bar. The technique is used at a λ = 355 nm excitation wavelength with a 1.0 mol% TMPD dye concentration for simultaneous LIF/Mie imaging. Furthermore, a novel dual-SLIPI configuration is tested with Mie scattering detection only. The results confirm that maps of both the droplet Sauter mean diameter and the extinction coefficient can be obtained by such complementary approaches. These new insights are provided in this article at late times after the start of injection. It is demonstrated that the application of SLIPI to diesel sprays provides valuable quantitative information that was not previously accessible.
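
    In conventional SLIPI, three sub-images are recorded with the illumination line pattern shifted by one third of its period; only singly scattered light retains the modulation, so the unmodulated multiple-scattering background cancels in the root mean square of the pairwise differences. A minimal Python sketch of this standard three-phase demodulation follows; this study's dual-SLIPI variant differs in its details.

    ```python
    import numpy as np

    def slipi_demodulate(i0, i120, i240):
        """Extract the modulated (singly scattered) component from three
        phase-shifted SLIPI sub-images; the constant background cancels."""
        return (np.sqrt(2.0) / 3.0) * np.sqrt((i0 - i120) ** 2 +
                                              (i0 - i240) ** 2 +
                                              (i120 - i240) ** 2)
    ```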

  4. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  5. Quantitative transverse flow measurement using OCT speckle decorrelation analysis

    PubMed Central

    Liu, Xuan; Huang, Yong; Ramella-Roman, Jessica C.; Mathews, Scott A.; Kang, Jin U.

    2014-01-01

    We propose an inter-Ascan speckle decorrelation based method that can quantitatively assess blood flow normal to the direction of the OCT imaging beam. To validate this method, we performed a systematic study using both phantom and in vivo animal models. Results show that our speckle analysis method can accurately extract transverse flow speed with high spatial and temporal resolution. PMID:23455305
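
    The core quantity is the correlation between adjacent A-scans: over the fixed inter-A-scan time, faster transverse flow decorrelates the speckle more, and a calibration curve then maps decorrelation to speed. The Python sketch below computes this decorrelation for an intensity B-scan with depth along the first axis; the normalization and the mapping to velocity are illustrative assumptions, not the authors' exact processing.

    ```python
    import numpy as np

    def interascan_decorrelation(bscan):
        """Decorrelation between successive A-scans of an intensity B-scan.

        bscan : (n_depth, n_ascans) array; returns one value per adjacent pair.
        """
        a = (bscan - bscan.mean(axis=0)) / bscan.std(axis=0)
        rho = np.mean(a[:, :-1] * a[:, 1:], axis=0)  # correlation per pair
        return 1.0 - rho                             # larger = faster flow
    ```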

  6. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; hence, boundaries between these geological units correspond to dramatic faunal and/or floral turnovers, and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  7. NEW TARGET AND CONTROL ASSAYS FOR QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS OF ENTEROCOCCI IN WATER

    EPA Science Inventory

    Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...

  8. Towards Efficient and Accurate Description of Many-Electron Problems: Developments of Static and Time-Dependent Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi

    Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials for application in a variety of scientific contexts from fundamental research to energy conversion. This dissertation aims to provide insights into this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we will present new methods and applications of excited-state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalisms, and new developments of multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT based developments and applications can be categorized into three broad and coherently integrated research areas: (1) modeling of the interaction between molecules and external electromagnetic perturbations. In this part we will first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then we will present a novel, efficient method for calculating molecular nonlinear optical properties, and last we will study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling of excited-state charge transfer in molecules. In this part, we will investigate the mechanisms of bridge-mediated electron transfer, and then we will introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) developments of first-principles spin-dependent many-electron dynamics. In this part, we will present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, and then we will generalize it to the two-component TDDFT framework and combine it with the Ehrenfest molecular dynamics approach for modeling the interaction between electron spins and nuclear motion. All these developments and applications will open up new computational and theoretical tools to be applied to the development and understanding of chemical reactions, nonlinear optics, electromagnetism, and spintronics. Lastly, we present a new algorithm for large-scale MCSCF calculations that can utilize massively parallel machines while still maintaining optimal performance for each single processor. This will greatly improve the efficiency of MCSCF calculations for studying chemical dissociation and high-accuracy quantum-mechanical simulations.

  9. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide a more accurate means of scoring embryo viability.

  10. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT-optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  11. Density Relaxation in Time-Dependent Density Functional Theory: Combining Relaxed Density Natural Orbitals and Multireference Perturbation Theories for an Improved Description of Excited States.

    PubMed

    Ronca, Enrico; Angeli, Celestino; Belpassi, Leonardo; De Angelis, Filippo; Tarantelli, Francesco; Pastore, Mariachiara

    2014-09-01

    Making use of the recently developed excited state charge displacement analysis [E. Ronca et al., J. Chem. Phys. 140, 054110 (2014)], suited to quantitatively characterizing the charge fluxes accompanying an electronic excitation, we investigate the role of density relaxation effects in the overall description of electronically excited states of different nature, namely, valence, ionic, and charge transfer (CT), considering a large set of prototypical small and medium-sized molecular systems. By comparing the response densities provided by time-dependent density functional theory (TDDFT) and the corresponding relaxed densities obtained by applying the Z-vector postlinear-response approach [N. C. Handy and H. F. Schaefer, J. Chem. Phys. 81, 5031 (1984)] with those obtained by highly correlated state-of-the-art wave function calculations, we show that the inclusion of the relaxation effects is imperative to get an accurate description of the considered excited states. We also examine what happens to the quality of the response function when an increasing amount of Hartree-Fock (HF) exchange is included in the functional, showing that the usually improved excitation energies in the case of CT states are not always the consequence of an improved description of their overall properties. Remarkably, we find that the relaxation of the response densities is always able to reproduce, independently of the extent of HF exchange in the functional, the benchmark wave function densities. Finally, we propose a novel and computationally convenient strategy, based on the use of the natural orbitals derived from the relaxed TDDFT density, to build the zero-order wave function for multireference perturbation theory calculations. For a significant set of different excited states, the proposed approach provided accurate excitation energies, comparable to those obtained by computationally demanding ab initio calculations.

  12. CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT

    SciTech Connect

    J.F. Beesley

    2005-04-21

    The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.

  13. Micropolar continuum in spatial description

    NASA Astrophysics Data System (ADS)

    Ivanova, Elena A.; Vilchevskaya, Elena N.

    2016-11-01

    Within the spatial description, it is customary to refer thermodynamic state quantities to an elementary volume fixed in space containing an ensemble of particles. During its evolution, the elementary volume is occupied by different particles, each having its own mass, tensor of inertia, angular and linear velocities. The aim of the present paper is to answer the question of how to determine the inertial and kinematic characteristics of the elementary volume. In order to model structural transformations due to the consolidation or defragmentation of particles or anisotropic changes, one should consider the fact that the tensor of inertia of the elementary volume may change. This means that an additional constitutive equation must be formulated. The paper suggests kinetic equations for the tensor of inertia of the elementary volume. It also discusses the specificity of the inelastic polar continuum description within the framework of the spatial description.

  14. Quantitative aspects of inductively coupled plasma mass spectrometry.

    PubMed

    Bulska, Ewa; Wagner, Barbara

    2016-10-28

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'. PMID:27644971
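
    As a minimal illustration of the external-calibration approach with pure standards, the Python sketch below fits a straight calibration line through standard solutions and inverts it for an unknown. The concentrations and count rates are made-up numbers; internal-standard normalisation or matrix matching would precede the same fit in practice.

    ```python
    import numpy as np

    def external_calibration(std_conc, std_counts, sample_counts):
        """Concentration of an unknown from a linear ICP-MS calibration:
        counts = a * concentration + b, fitted through the standards."""
        a, b = np.polyfit(std_conc, std_counts, 1)
        return (sample_counts - b) / a

    # e.g. standards at 0, 1, 5 and 10 ng/mL of the analyte isotope
    conc = external_calibration([0, 1, 5, 10], [55, 1060, 5020, 10080], 3500)
    ```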

  15. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.

  16. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR
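
    The triple-energy-window estimate used in option (2) is commonly computed as a trapezoid under two narrow windows flanking the photopeak; a minimal Python sketch follows, with window widths in keV and all counts per projection pixel (the variable names and example numbers are illustrative, not values from this study).

    ```python
    def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
        """Scatter counts inside the photopeak window, estimated from two
        narrow flanking windows (trapezoidal approximation)."""
        return 0.5 * (c_low / w_low + c_up / w_up) * w_peak

    # e.g. 6 keV flanking windows around a 20%-wide I-131 364 keV photopeak
    primary = 1.0e4 - tew_scatter(c_low=900.0, c_up=400.0,
                                  w_low=6.0, w_up=6.0, w_peak=72.8)
    ```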

  17. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated

  18. The Quantitative Preparation of Future Geoscience Graduate Students

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Hancock, G. S.

    2006-12-01

    Modern geoscience is a highly quantitative science. In February, a small group of faculty and graduate students from across the country met to discuss the quantitative preparation of geoscience majors for graduate school. The group included ten faculty supervising graduate students in quantitative areas spanning the earth, atmosphere, and ocean sciences; five current graduate students in these areas; and five faculty teaching undergraduate students in the spectrum of institutions preparing students for graduate work. Discussion focused on four key areas: Are incoming graduate students adequately prepared for the quantitative aspects of graduate geoscience programs? What are the essential quantitative skills that are required for success in graduate school? What are perceived as the important courses to prepare students for the quantitative aspects of graduate school? What programs/resources would be valuable in helping faculty/departments improve the quantitative preparation of students? The participants concluded that strengthening the quantitative preparation of undergraduate geoscience majors would increase their opportunities in graduate school. While specifics differed amongst disciplines, special importance was placed on developing the ability to use quantitative skills to solve geoscience problems. This requires the ability to pose problems so they can be addressed quantitatively, understand the relationship between quantitative concepts and physical representations, visualize mathematics, test the reasonableness of quantitative results, creatively move forward from existing models/techniques/approaches, and move between quantitative and verbal descriptions. A list of important quantitative competencies desirable in incoming graduate students includes mechanical skills in basic mathematics, functions, multi-variate analysis, statistics and calculus, as well as skills in logical analysis and the ability to learn independently in quantitative ways

  19. Berkeley Quantitative Genome Browser

    2008-02-29

    The Berkeley Quantitative Genome Browser provides graphical browsing functionality for genomic data organized, at a minimum, by sequence and position. While supporting the annotation browsing features typical of many other genomic browsers, additional emphasis is placed on viewing and utilizing quantitative data. Data may be read from GFF, SGR, FASTA, or any column-delimited format. Once the data has been read into the browser's buffer, it may be searched, filtered, or subjected to mathematical transformation. The browser also supplies some graphical design manipulation functionality geared towards preparing figures for presentations or publication. A plug-in mechanism enables development outside the core functionality that adds more advanced or esoteric analysis capabilities. BBrowse's development and distribution is open-source and has been built to run on Linux, OSX and MS Windows operating systems.

  20. Berkeley Quantitative Genome Browser

    SciTech Connect

    Hechmer, Aaron

    2008-02-29

    The Berkeley Quantitative Genome Browser provides graphical browsing functionality for genomic data organized, at a minimum, by sequence and position. While supporting the annotation browsing features typical of many other genomic browsers, additional emphasis is placed on viewing and utilizing quantitative data. Data may be read from GFF, SGR, FASTA or any column-delimited format. Once the data has been read into the browser's buffer, it may be searched, filtered or subjected to mathematical transformation. The browser also supplies some graphical design manipulation functionality geared towards preparing figures for presentations or publication. A plug-in mechanism enables development outside the core functionality that adds more advanced or esoteric analysis capabilities. BBrowse's development and distribution is open-source and has been built to run on Linux, OSX and MS Windows operating systems.
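
    To illustrate the kind of column-delimited quantitative-data handling described (read, search/filter, transform), here is a minimal Python sketch over an SGR-style file (chromosome, position, score). The file name and threshold are hypothetical, and this is not the browser's own code:

```python
# Read an SGR file (tab-delimited: chrom, position, score), filter rows by
# score, and apply a mathematical transformation -- a toy version of the
# workflow the browser description outlines.
import math

def read_sgr(path):
    """Yield (chrom, position, score) tuples from a tab-delimited SGR file."""
    with open(path) as fh:
        for line in fh:
            if not line.strip():
                continue
            chrom, pos, score = line.rstrip("\n").split("\t")
            yield chrom, int(pos), float(score)

records = list(read_sgr("signal.sgr"))                       # hypothetical file
filtered = [r for r in records if r[2] > 1.0]                # search/filter step
log_scores = [(c, p, math.log2(s)) for c, p, s in filtered]  # transform step
```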

  1. Primary enzyme quantitation

    DOEpatents

    Saunders, G.C.

    1982-03-04

    The disclosure relates to the quantitation of a primary enzyme concentration by utilizing a substrate for the primary enzyme labeled with a second enzyme, which is an indicator enzyme. Enzyme catalysis of the substrate occurs and results in release of the indicator enzyme in an amount directly proportional to the amount of primary enzyme present. By quantifying the free indicator enzyme, one determines the amount of primary enzyme present.

  2. Langley Atmospheric Information Retrieval System (LAIRS): System description and user's guide

    NASA Technical Reports Server (NTRS)

    Boland, D. E., Jr.; Lee, T.

    1982-01-01

    This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS Program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.

  3. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high performance computing. JenPep (http://www.jenner.ar.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934
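
    As a sketch of the Free-Wilson idea mentioned above, the following fragment scores a peptide as a sum of per-position amino acid contributions plus 1-2 and 1-3 residue-interaction terms. All coefficients and the example peptide are invented for illustration; in practice they would be fitted by regression to measured binding data:

```python
# Free-Wilson-style additive model: predicted binding affinity is a sum of
# per-position residue contributions plus adjacent (1-2) and next-adjacent
# (1-3) residue-pair interaction terms. Coefficient values are hypothetical.

position_term = {           # (position, residue) -> fitted contribution
    (0, "L"): 0.8, (1, "M"): 0.5, (8, "V"): 0.9,
}
pair_term = {               # (pos_i, res_i, pos_j, res_j) -> interaction term
    (0, "L", 1, "M"): 0.2,  # a 1-2 interaction
    (0, "L", 2, "F"): -0.1, # a 1-3 interaction
}

def predict_affinity(peptide, intercept=0.0):
    score = intercept
    for i, aa in enumerate(peptide):
        score += position_term.get((i, aa), 0.0)
        for offset in (1, 2):                   # 1-2 and 1-3 interactions
            j = i + offset
            if j < len(peptide):
                score += pair_term.get((i, aa, j, peptide[j]), 0.0)
    return score

print(predict_affinity("LMFTELQKV"))  # hypothetical 9-mer epitope
```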

  4. Modern quantitative schlieren techniques

    NASA Astrophysics Data System (ADS)

    Hargather, Michael; Settles, Gary

    2010-11-01

    Schlieren optical techniques have traditionally been used to qualitatively visualize refractive flowfields in transparent media. Modern schlieren optics, however, are increasingly focused on obtaining quantitative information such as temperature and density fields in a flow -- once the sole purview of interferometry -- without the need for coherent illumination. Quantitative data are obtained from schlieren images by integrating the measured refractive-index gradient to obtain the refractive-index field in an image. Ultimately this is converted to a density or temperature field using the Gladstone-Dale relationship, an equation of state, and geometry assumptions for the flowfield of interest. Several quantitative schlieren methods are reviewed here, including background-oriented schlieren (BOS), schlieren using a weak lens as a "standard," and "rainbow schlieren." Results are presented for the application of these techniques to measure density and temperature fields across a supersonic turbulent boundary layer and a low-speed free-convection boundary layer in air. The modern equipment that makes this possible, including digital cameras, LED light sources, and computer software, is also discussed.
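
    The final conversion step described above can be illustrated in a few lines. Assuming air, the Gladstone-Dale relation n - 1 = K*rho and the ideal-gas law give density and temperature from a refractive-index field; the index values below are hypothetical:

```python
# Convert a refractive-index field to density via the Gladstone-Dale
# relation, then to temperature with the ideal-gas equation of state.
# The index values are illustrative; K and R are standard values for air.
import numpy as np

K_GD = 2.26e-4      # Gladstone-Dale constant for air, m^3/kg (visible light)
R_AIR = 287.05      # specific gas constant for air, J/(kg K)
P_ATM = 101325.0    # ambient pressure, Pa (assumed uniform)

n_field = np.array([1.000262, 1.000240, 1.000210])  # hypothetical indices
rho = (n_field - 1.0) / K_GD                        # Gladstone-Dale: n - 1 = K*rho
T = P_ATM / (rho * R_AIR)                           # ideal-gas law
print(rho, T)                                       # ~1.16 kg/m^3, ~305 K, etc.
```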

  5. Predicting in vivo glioma growth with the reaction diffusion equation constrained by quantitative magnetic resonance imaging data.

    PubMed

    Hormuth, David A; Weis, Jared A; Barnes, Stephanie L; Miga, Michael I; Rericha, Erin C; Quaranta, Vito; Yankeelov, Thomas E

    2015-06-04

    Reaction-diffusion models have been widely used to model glioma growth. However, it has not been shown how accurately this model can predict future tumor status using model parameters (i.e., tumor cell diffusion and proliferation) estimated from quantitative in vivo imaging data. To this end, we used in silico studies to develop the methods needed to accurately estimate tumor specific reaction-diffusion model parameters, and then tested the accuracy with which these parameters can predict future growth. The analogous study was then performed in a murine model of glioma growth. The parameter estimation approach was tested using an in silico tumor 'grown' for ten days as dictated by the reaction-diffusion equation. Parameters were estimated from early time points and used to predict subsequent growth. Prediction accuracy was assessed at global (total volume and Dice value) and local (concordance correlation coefficient, CCC) levels. Guided by the in silico study, rats (n = 9) with C6 gliomas, imaged with diffusion weighted magnetic resonance imaging, were used to evaluate the model's accuracy for predicting in vivo tumor growth. The in silico study resulted in low global (tumor volume error <8.8%, Dice >0.92) and local (CCC values >0.80) level errors for predictions up to six days into the future. The in vivo study showed higher global (tumor volume error >11.7%, Dice <0.81) and higher local (CCC <0.33) level errors over the same time period. The in silico study shows that model parameters can be accurately estimated and used to accurately predict future tumor growth at both the global and local scale. However, the poor predictive accuracy in the experimental study suggests the reaction-diffusion equation is an incomplete description of in vivo C6 glioma biology and may require further modeling of intra-tumor interactions including segmentation of (for example) proliferative and necrotic regions.
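
    For readers unfamiliar with the model class, a minimal 1-D sketch of the reaction-diffusion (Fisher-KPP) equation used in such studies, dN/dt = D*d2N/dx2 + k*N*(1 - N/theta), integrated by explicit finite differences. The parameters and grid are illustrative, not the fitted values from this work:

```python
# Explicit finite-difference integration of the 1-D reaction-diffusion
# (Fisher-KPP) tumor growth model. Parameter values are hypothetical and
# chosen only to keep the explicit scheme stable (D*dt/dx^2 << 0.5).
import numpy as np

D, k, theta = 0.01, 0.5, 1.0     # diffusion, proliferation, carrying capacity
dx, dt, steps = 0.1, 0.01, 1000  # grid spacing and time step

N = np.zeros(200)
N[95:105] = 0.5                  # small initial tumor seed

for _ in range(steps):
    lap = (np.roll(N, 1) - 2 * N + np.roll(N, -1)) / dx**2  # periodic Laplacian
    N = N + dt * (D * lap + k * N * (1 - N / theta))        # diffusion + logistic growth

print(f"total tumor burden after {steps * dt:.1f} time units: {N.sum() * dx:.3f}")
```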

  6. Predicting in vivo glioma growth with the reaction diffusion equation constrained by quantitative magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Hormuth, David A., II; Weis, Jared A.; Barnes, Stephanie L.; Miga, Michael I.; Rericha, Erin C.; Quaranta, Vito; Yankeelov, Thomas E.

    2015-07-01

    Reaction-diffusion models have been widely used to model glioma growth. However, it has not been shown how accurately this model can predict future tumor status using model parameters (i.e., tumor cell diffusion and proliferation) estimated from quantitative in vivo imaging data. To this end, we used in silico studies to develop the methods needed to accurately estimate tumor specific reaction-diffusion model parameters, and then tested the accuracy with which these parameters can predict future growth. The analogous study was then performed in a murine model of glioma growth. The parameter estimation approach was tested using an in silico tumor ‘grown’ for ten days as dictated by the reaction-diffusion equation. Parameters were estimated from early time points and used to predict subsequent growth. Prediction accuracy was assessed at global (total volume and Dice value) and local (concordance correlation coefficient, CCC) levels. Guided by the in silico study, rats (n = 9) with C6 gliomas, imaged with diffusion weighted magnetic resonance imaging, were used to evaluate the model’s accuracy for predicting in vivo tumor growth. The in silico study resulted in low global (tumor volume error <8.8%, Dice >0.92) and local (CCC values >0.80) level errors for predictions up to six days into the future. The in vivo study showed higher global (tumor volume error >11.7%, Dice <0.81) and higher local (CCC <0.33) level errors over the same time period. The in silico study shows that model parameters can be accurately estimated and used to accurately predict future tumor growth at both the global and local scale. However, the poor predictive accuracy in the experimental study suggests the reaction-diffusion equation is an incomplete description of in vivo C6 glioma biology and may require further modeling of intra-tumor interactions including segmentation of (for example) proliferative and necrotic regions.

  7. Quantitative analysis of myocardial tissue with digital autofluorescence microscopy

    PubMed Central

    Jensen, Thomas; Holten-Rossing, Henrik; Svendsen, Ida M H; Jacobsen, Christina; Vainer, Ben

    2016-01-01

    Background: The opportunity offered by whole slide scanners of automated histological analysis implies an ever-increasing importance of digital pathology. To go beyond the importance of conventional pathology, however, digital pathology may need a basic histological starting point similar to that of hematoxylin and eosin staining in conventional pathology. This study presents an automated fluorescence-based microscopy approach providing highly detailed morphological data from unstained microsections. These data may provide a basic histological starting point from which further digital analysis including staining may benefit. Methods: This study explores the inherent tissue fluorescence, also known as autofluorescence, as a means to quantitate cardiac tissue components in histological microsections. Data acquisition using a commercially available whole slide scanner and an image-based quantitation algorithm are presented. Results: It is shown that the autofluorescence intensity of unstained microsections at two different wavelengths is a suitable starting point for automated digital analysis of myocytes, fibrous tissue, lipofuscin, and the extracellular compartment. The output of the method is absolute quantitation along with accurate outlines of the above-mentioned components. The digital quantitations are verified by comparison to point grid quantitations performed on the microsections after Van Gieson staining. Conclusion: The presented method is aptly described as a prestain multicomponent quantitation and outlining tool for histological sections of cardiac tissue. The main perspective is the opportunity for combination with digital analysis of stained microsections, for which the method may provide an accurate digital framework. PMID:27141321
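
    As a toy illustration of two-wavelength quantitation (not the published algorithm, whose rules are more involved), the sketch below assigns each pixel a tissue class from its intensities in two autofluorescence channels and reports area fractions. The channels, class rules, and thresholds are all synthetic stand-ins:

```python
# Two-channel intensity classification of tissue components, followed by
# absolute quantitation as area fractions. All thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
ch1 = rng.random((64, 64))   # stand-in for autofluorescence at wavelength 1
ch2 = rng.random((64, 64))   # stand-in for autofluorescence at wavelength 2

labels = np.zeros(ch1.shape, dtype=np.uint8)   # 0 = extracellular compartment
labels[(ch1 > 0.6) & (ch2 > 0.6)] = 1          # 1 = lipofuscin (bright in both)
labels[(ch1 > 0.4) & (ch2 <= 0.6)] = 2         # 2 = myocytes
labels[(ch1 <= 0.4) & (ch2 > 0.5)] = 3         # 3 = fibrous tissue

# Absolute quantitation: the area fraction of each tissue component.
fractions = np.bincount(labels.ravel(), minlength=4) / labels.size
print(dict(zip(["extracellular", "lipofuscin", "myocytes", "fibrous"],
               fractions.round(3))))
```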

  8. Natural Language Description of Emotion

    ERIC Educational Resources Information Center

    Kazemzadeh, Abe

    2013-01-01

    This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…

  9. Developmental Kindergarten: Definition and Description.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Education, Richmond.

    This paper sets forth a definition and operational description of a developmental program that should be of use as a guide, especially to Virginia's teachers and administrators. Also included in the paper are kindergarten curriculum objectives in the areas of language arts, mathematics, science, art, social studies, family life, health, mental…

  10. Integrating values in job descriptions.

    PubMed

    Craig, R P

    1987-12-01

    The Mission Services Division of the Sisters of Charity of the Incarnate Word Health Care Corporation, Houston, has established training sessions to help various facilities develop criteria-based job descriptions that integrate values. A major problem with traditional job descriptions is that they do not contain enough information for value integration to occur. Each facet of the job description--the responsibility statement, the task statement, and the standard--can integrate the facilities' values in explicit and implicit ways. Such integration reduces the possibility of a supervisor arbitrarily defining the qualitative aspects of how an employee performs the job and provides a better method for evaluating the quality of the employee's performance. The first step in value integration is to identify the organization's values. Next, illustrative behaviors are identified to emphasize value integration in both activity-based task statements and results-based standards. The final step is to integrate the values in the job description, which makes the value operational and bridges the gap between commitment to values and behavior that exemplifies those values. Although values cannot be measured as objectively as the successful accomplishment of a procedure with a specified method of measurement, evaluation of values is not fruitless. When the employee and supervisor agree on specific qualitative aspects of patient care or other tasks, the consistency of the qualitative aspects of the job can be evaluated.

  11. Accurate identification of waveform of evoked potentials by component decomposition using discrete cosine transform modeling.

    PubMed

    Bai, O; Nakamura, M; Kanda, M; Nagamine, T; Shibasaki, H

    2001-11-01

    This study introduces a method for accurate identification of the waveform of evoked potentials by decomposing the component responses. The decomposition was achieved by zero-pole modeling of the evoked potentials in the discrete cosine transform (DCT) domain. It was found that the DCT coefficients of a component response in the evoked potentials could be modeled sufficiently by a second-order transfer function in the DCT domain. The decomposition of the component responses was approached by using partial expansion of the estimated model for the evoked potentials, and the effectiveness of the decomposition method was evaluated both qualitatively and quantitatively. Even when the different component responses overlap, the proposed method enables accurate identification of the evoked potentials, which is useful for clinical and neurophysiological investigations.

  12. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    PubMed Central

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-01-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements. PMID:26911709

  13. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the first time that a complete, mathematically grounded treatment of this optical phenomenon has been presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
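
    The Fourier-based integration of partial derivatives mentioned above is commonly implemented along the lines of a Frankot-Chellappa-style least-squares scheme. A generic sketch (not the authors' code) with a synthetic surface for a round-trip check:

```python
# Least-squares integration of a gradient field (p = dz/dx, q = dz/dy) in
# the Fourier domain, in the spirit of Frankot-Chellappa. The surface used
# for the round-trip check is synthetic.
import numpy as np

def integrate_gradients(p, q):
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi          # frequency along x (columns)
    wy = np.fft.fftfreq(rows) * 2 * np.pi          # frequency along y (rows)
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                              # avoid division by zero at DC
    Z = -1j * (WX * np.fft.fft2(p) + WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                  # mean (piston) term is arbitrary
    return np.real(np.fft.ifft2(Z))

# Round-trip check on a smooth periodic test surface.
y, x = np.mgrid[0:128, 0:128] / 128.0
z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
p = np.gradient(z_true, axis=1)                    # dz/dx
q = np.gradient(z_true, axis=0)                    # dz/dy
z_rec = integrate_gradients(p, q)
print(np.max(np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean()))))
```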

  14. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the first time that a complete, mathematically grounded treatment of this optical phenomenon has been presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams.

  15. Tracer testing for reservoir description

    SciTech Connect

    Brigham, W.E.; Abbaszadeh-Dehghani, M.

    1987-05-01

    When a reservoir is studied in detail for an EOR project, well-to-well tracers should be used as a tool to help understand the reservoir in a quantitative way. Tracers complement the more traditional reservoir evaluation tools. This paper discusses the concepts underlying tracer testing, the analysis methods used to produce quantitative results, and the meaning of these results in terms of a conceptual picture of the reservoir. Some of the limitations of these analysis methods are discussed, along with ongoing research on tracer flow.

  16. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
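
    The far-field calculation reduces to simple arithmetic: the methane emission rate is the tracer release rate scaled by the ratio of the background-subtracted concentrations, with a molar-mass factor when rates are expressed in mass units. A worked example with invented numbers:

```python
# Tracer dilution flux calculation: methane rate = tracer rate x ratio of
# background-subtracted mole fractions x molar-mass correction. All
# measurement values below are illustrative, not data from the study.

MW_CH4, MW_C2H2 = 16.04, 26.04        # g/mol: methane and acetylene tracer

q_tracer_kg_h = 2.0                   # known acetylene release rate, kg/h
ch4_plume, ch4_bg = 2.45, 1.90        # measured methane, ppm
tracer_plume, tracer_bg = 0.080, 0.0  # measured acetylene, ppm

mole_ratio = (ch4_plume - ch4_bg) / (tracer_plume - tracer_bg)
q_ch4_kg_h = q_tracer_kg_h * mole_ratio * (MW_CH4 / MW_C2H2)
print(f"estimated methane emission: {q_ch4_kg_h:.1f} kg/h")  # ~8.5 kg/h
```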

  17. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were measured with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that low-frequency components were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that aperture reduction differs from place to place owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes in hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity with increasing normal stress and different values of
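
    The spectral-analysis step can be sketched in a few lines: take the FFT of a roughness profile sampled at the 2.5 μm CLSM spacing and read off the dominant spatial frequency. The synthetic profile below is a stand-in for a measured scan line:

```python
# FFT-based spectral analysis of a fracture-roughness profile. The profile
# here is synthetic: a long-wavelength undulation plus fine-scale noise.
import numpy as np

rng = np.random.default_rng(0)
dx_um = 2.5                              # CLSM sampling interval along a scan line
x = np.arange(4096) * dx_um
profile = 5.0 * np.sin(2 * np.pi * x / 2000.0) + 0.3 * rng.standard_normal(x.size)

amp = np.abs(np.fft.rfft(profile - profile.mean())) / x.size
freq = np.fft.rfftfreq(x.size, d=dx_um)  # spatial frequency, cycles/um
dominant = freq[np.argmax(amp)]
print(f"dominant spatial frequency: {dominant:.6f} cycles/um "
      f"(wavelength ~{1.0 / dominant:.0f} um)")
```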

  18. Quantitative SPECT techniques.

    PubMed

    Watson, D D

    1999-07-01

    Quantitative imaging involves first, a set of measurements that characterize an image. There are several variations of technique, but the basic measurements that are used for single photon emission computed tomography (SPECT) perfusion images are reasonably standardized. Quantification currently provides only relative tracer activity within the myocardial regions defined by an individual SPECT acquisition. Absolute quantification is still a work in progress. Quantitative comparison of absolute changes in tracer uptake comparing a stress and rest study or preintervention and postintervention study would be useful and could be done, but most commercial systems do not maintain the data normalization that is necessary for this. Measurements of regional and global function are now possible with electrocardiography (ECG) gating, and this provides clinically useful adjunctive data. Techniques for measuring ventricular function are evolving and promise to provide clinically useful accuracy. The computer can classify images as normal or abnormal by comparison with a normal database. The criteria for this classification involve more than just checking the normal limits. The images should be analyzed to measure how far they deviate from normal, and this information can be used in conjunction with pretest likelihood to indicate the level of statistical certainty that an individual patient has a true positive or true negative test. The interface between the computer and the clinician interpreter is an important part of the process. Especially when both perfusion and function are being determined, the ability of the interpreter to correctly assimilate the data is essential to the use of the quantitative process. As we become more facile with performing and recording objective measurements, the significance of the measurements in terms of risk evaluation, viability assessment, and outcome should be continually enhanced. PMID:10433336

  19. Quantitative rainbow schlieren deflectometry

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Klimek, Robert B.; Buchele, Donald R.

    1995-01-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment.

  20. Quantitative imaging with a mobile phone microscope.

    PubMed

    Skandarajah, Arunan; Reber, Clay D; Switz, Neil A; Fletcher, Daniel A

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.
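
    Two of the corrections discussed, response linearization and flat-field correction of uneven illumination, might look roughly as follows. The gamma value and the images are stand-ins; a real pipeline would calibrate both against known targets:

```python
# Undo a gamma-type nonlinear camera response, then flat-field correct by
# dividing by a blank-field image. GAMMA and both images are hypothetical.
import numpy as np

GAMMA = 2.2                                 # assumed display-style encoding gamma

def linearize(img8):
    """Map 8-bit pixel values to approximately linear intensity in [0, 1]."""
    return (img8.astype(np.float64) / 255.0) ** GAMMA

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, (480, 640), dtype=np.uint8)     # sample image
flat = rng.integers(200, 256, (480, 640), dtype=np.uint8)  # blank-field shot

corrected = linearize(raw) / linearize(flat)   # flat-field: divide by blank field
corrected /= corrected.max()                   # renormalize to [0, 1]
```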

  1. Quantitative imaging with a mobile phone microscope.

    PubMed

    Skandarajah, Arunan; Reber, Clay D; Switz, Neil A; Fletcher, Daniel A

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  2. Quantitative Imaging with a Mobile Phone Microscope

    PubMed Central

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  3. A comparison of risk assessment techniques from qualitative to quantitative

    SciTech Connect

    Altenbach, T.J.

    1995-02-13

    Risk assessment techniques vary from purely qualitative approaches, through semi-qualitative regimes, to the more traditional quantitative methods. Constraints such as time, money, manpower, skills, management perceptions, risk result communication to the public, and political pressures all affect the manner in which risk assessments are carried out. This paper surveys some risk matrix techniques, examining the uses and applicability of each. Limitations and problems for each technique are presented and compared to the others. Risk matrix approaches vary from purely qualitative axis descriptions of accident frequency vs consequences, to fully quantitative axis definitions using multi-attribute utility theory to equate different types of risk from the same operation.

  4. Collective Cell Motion in an Epithelial Sheet Can Be Quantitatively Described by a Stochastic Interacting Particle Model

    PubMed Central

    Cochet, Olivier; Grasland-Mongrain, Erwan; Silberzan, Pascal; Hakim, Vincent

    2013-01-01

    Modelling the displacement of thousands of cells that move in a collective way is required for the simulation and the theoretical analysis of various biological processes. Here, we tackle this question in the controlled setting where the motion of Madin-Darby Canine Kidney (MDCK) cells in a confluent epithelium is triggered by the unmasking of free surface. We develop a simple model in which cells are described as point particles with a dynamic based on the two premises that, first, cells move in a stochastic manner and, second, tend to adapt their motion to that of their neighbors. Detailed comparison to experimental data shows that the model provides a quantitatively accurate description of cell motion in the epithelium bulk at early times. In addition, inclusion of model “leader” cells with modified characteristics accounts for the digitated shape of the interface which develops over the subsequent hours, provided that leader cells invade free surface more easily than other cells and coordinate their motion with their followers. The previously described progression of the epithelium border is reproduced by the model and quantitatively explained. PMID:23505356
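
    A minimal sketch of this model class, in the spirit of Vicsek-style alignment models: point particles with noisy headings that partially adopt the mean heading of their neighbors. All parameter values are illustrative, not the fitted values from the paper:

```python
# Stochastic interacting-particle model: each particle aligns with the mean
# heading of neighbors within `radius`, plus angular noise of amplitude eta.
import numpy as np

rng = np.random.default_rng(0)
n, radius, eta, speed, steps = 300, 1.0, 0.3, 0.05, 200

pos = rng.uniform(0, 10, size=(n, 2))          # positions in a 10x10 periodic box
theta = rng.uniform(-np.pi, np.pi, size=n)     # headings

for _ in range(steps):
    d = pos[:, None, :] - pos[None, :, :]
    d -= 10 * np.round(d / 10)                 # periodic minimum-image distances
    neighbors = (d**2).sum(-1) < radius**2     # includes self, so sums are > 0
    mean_sin = (neighbors * np.sin(theta)).sum(1) / neighbors.sum(1)
    mean_cos = (neighbors * np.cos(theta)).sum(1) / neighbors.sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, n)
    pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % 10

order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter after {steps} steps: {order:.2f}")
```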

  5. Sampling Soil for Characterization and Site Description

    NASA Technical Reports Server (NTRS)

    Levine, Elissa

    1999-01-01

    The sampling scheme for soil characterization within the GLOBE program is uniquely different from the sampling methods of the other protocols. The strategy is based on an understanding of the five soil-forming factors (parent material, climate, biota, topography, and time) at each study site, and how these interact to produce a soil profile with unique characteristics and unique input and control into the atmospheric, biological, and hydrological systems. Soil profile characteristics, as opposed to soil moisture and temperature, vegetative growth, and atmospheric and hydrologic conditions, change very slowly, on time scales ranging from seasons to many thousands of years depending on the parameter being measured. Thus, soil information, including profile description and lab analysis, is collected only one time for each profile at a site. These data serve two purposes: 1) to supplement existing spatial information about soil profile characteristics across the landscape at local, regional, and global scales, and 2) to provide specific information within a given area about the basic substrate to which elements within the other protocols are linked. Because of the intimate link between soil properties and these other environmental elements, the static soil properties at a given site are needed to accurately interpret and understand the continually changing dynamics of soil moisture and temperature, vegetation growth and phenology, atmospheric conditions, and chemistry and turbidity in surface waters. Both the spatial and specific soil information can be used for modeling purposes to assess and make predictions about global change.

  6. Journal bearing impedance descriptions for rotordynamic applications

    NASA Technical Reports Server (NTRS)

    Childs, D.; Moes, H.; Van Leeuwen, H.

    1976-01-01

    The paper deals with the development of analytic descriptions for plain circumferentially-symmetric fluid journal bearings, which are suitable for use in rotor dynamic analysis. The bearing impedance vector is introduced, which defines the bearing reaction force components as a function of the bearing motion. Impedances are derived directly for the Ocvirk (short) and Sommerfeld (long) bearings, and the relationships between the impedance vector and the more familiar mobility vector are developed and used to derive analytic impedance for finite-length bearings. The static correctness of the finite-length cavitating impedance is verified. Analytic stiffness and damping coefficient definitions are derived in terms of an impedance vector for small motion around an equilibrium position and demonstrated for the finite-length cavitating impedance. Nonlinear transient rotordynamic simulations are presented for the short pi and 2-pi impedances and the finite-length cavitating impedance. It is shown that finite-length impedance yields more accurate results for substantially less computer time than the short-bearing numerical-pressure-integration approach.

  7. Sample normalization methods in quantitative metabolomics.

    PubMed

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis, including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on the quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real-world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available and that sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts.
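
    One of the simplest strategies in this family is total-sum normalization: rescale each sample so its summed intensity matches a common total, so that samples become comparable despite different total amounts. A minimal sketch, where the matrix is a stand-in for MS metabolite intensities:

```python
# Total-sum normalization of a (samples x features) intensity matrix. The
# values are synthetic: sample B carries roughly twice the total amount of
# sample A, which normalization removes.
import numpy as np

intensities = np.array([[120.0, 30.0, 50.0],     # sample A
                        [240.0, 58.0, 102.0]])   # sample B (~2x total amount)

row_sums = intensities.sum(axis=1, keepdims=True)
normalized = intensities / row_sums * row_sums.mean()  # rescale to a common total
print(normalized)  # the two rows are now directly comparable
```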

  8. HGVS Recommendations for the Description of Sequence Variants: 2016 Update.

    PubMed

    den Dunnen, Johan T; Dalgleish, Raymond; Maglott, Donna R; Hart, Reece K; Greenblatt, Marc S; McGowan-Jordan, Jean; Roux, Anne-Francoise; Smith, Timothy; Antonarakis, Stylianos E; Taschner, Peter E M

    2016-06-01

    The consistent and unambiguous description of sequence variants is essential to report and exchange information on the analysis of a genome. In particular, DNA diagnostics critically depends on accurate and standardized description and sharing of the variants detected. The sequence variant nomenclature system proposed in 2000 by the Human Genome Variation Society has been widely adopted and has developed into an internationally accepted standard. The recommendations are currently commissioned through a Sequence Variant Description Working Group (SVD-WG) operating under the auspices of three international organizations: the Human Genome Variation Society (HGVS), the Human Variome Project (HVP), and the Human Genome Organization (HUGO). Requests for modifications and extensions go through the SVD-WG following a standard procedure including a community consultation step. Version numbers are assigned to the nomenclature system to allow users to specify the version used in their variant descriptions. Here, we present the current recommendations, HGVS version 15.11, and briefly summarize the changes that were made since the 2000 publication. Most focus has been on removing inconsistencies and tightening definitions allowing automatic data processing. An extensive version of the recommendations is available online, at http://www.HGVS.org/varnomen. PMID:26931183

  9. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

    Hyperspectral imaging is a non-destructive optical analysis technique that can, for instance, be used to obtain information from cultural heritage objects that is unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and to study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). Because a wavelength-tunable, narrow-bandwidth light source is used, the light energy illuminating the measured object is minimal, so that light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions of interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document where different types of ink have been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects on artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms.
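
    The basic ROI statistics described can be computed directly on a calibrated cube. Below, a synthetic 70-band cube (365-1100 nm, matching the instrument) and a hypothetical region of interest stand in for real data:

```python
# Extract a mean spectral reflectance curve (and its spread) from a
# user-defined region of interest in a hyperspectral cube. The cube and
# the ROI coordinates are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(365, 1100, 70)        # nm, one per spectral band
cube = rng.random((70, 256, 256))               # bands x height x width, reflectance

roi = np.zeros((256, 256), dtype=bool)          # user-defined region of interest
roi[100:140, 80:160] = True                     # e.g., an area of degraded ink

mean_spectrum = cube[:, roi].mean(axis=1)       # mean reflectance per band
std_spectrum = cube[:, roi].std(axis=1)         # simple statistical descriptor
print(wavelengths[mean_spectrum.argmin()])      # band of strongest absorption
```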

  10. Completion of structured patient descriptions by semantic mining.

    PubMed

    Tchraktchiev, Dimitar; Angelova, Galia; Boytcheva, Svetla; Angelov, Zhivko; Zacharieva, Sabina

    2011-01-01

    This paper presents experiments in automatic information extraction of medication events, diagnoses, and laboratory tests from hospital patient records, in order to increase the completeness of the description of the episode of care. Each patient record in our hospital information system contains structured data and text descriptions, including full discharge letters. From these letters, we automatically extract information about the medication just before and at the time of hospitalization, especially for the drugs prescribed to the patient but not delivered by the hospital pharmacy; we also extract values of lab tests not performed and not registered in our laboratory, as well as all non-encoded diagnoses described only in the free text of discharge letters. Thus we increase the availability of suitable and accurate information about the hospital stay and the outpatient segment of care before the hospitalization. Information extraction also helps to understand the clinical and organizational decisions concerning the patient without increasing the complexity of the structured health record.

  11. Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network

    NASA Astrophysics Data System (ADS)

    Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang

    Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made it possible to understand neurological disorders at a whole-brain connectivity level. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.
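
    The classification stage might be prototyped as below: per-ROI features pooled across the six networks, univariate feature sieving, and a linear SVM under cross-validation. The data are random stand-ins with the dimensions implied by the description (90 ROIs x 6 networks), not the study's dataset:

```python
# Feature sieving + linear SVM with cross-validation, a generic analogue of
# the pipeline described. Placing selection inside the pipeline keeps it
# within each training fold and avoids information leakage.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 40
X = rng.normal(size=(n_subjects, 90 * 6))   # ROI-wise stats from 6 WM networks
y = rng.integers(0, 2, size=n_subjects)     # 0 = control, 1 = MCI (random here)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),   # sieve discriminant features
                    SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```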

  12. Spacelab Mission 3 experiment descriptions

    NASA Technical Reports Server (NTRS)

    Hill, C. K. (Editor)

    1982-01-01

    The Spacelab 3 mission is the first operational flight of Spacelab aboard the shuttle transportation system. The primary objectives of this mission are to conduct application, science, and technology experimentation that requires the low gravity environment of Earth orbit and an extended duration, stable vehicle attitude with emphasis on materials processing. This document provides descriptions of the experiments to be performed during the Spacelab 3 mission.

  13. GROUNDWATER PROTECTION MANAGEMENT PROGRAM DESCRIPTION.

    SciTech Connect

    PAQUETTE,D.E.; BENNETT,D.B.; DORSCH,W.R.; GOODE,G.A.; LEE,R.J.; KLAUS,K.; HOWE,R.F.; GEIGER,K.

    2002-05-31

    The Department of Energy Order 5400.1, General Environmental Protection Program, requires the development and implementation of a groundwater protection program. The BNL Groundwater Protection Management Program Description provides an overview of how the Laboratory ensures that plans for groundwater protection, monitoring, and restoration are fully defined, integrated, and managed in a cost-effective manner that is consistent with federal, state, and local regulations.

  14. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  15. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  16. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  17. Closed terminologies in description logics

    SciTech Connect

    Weida, R.A. |

    1996-12-31

    We introduce a predictive concept recognition methodology for description logics based on a new closed terminology assumption. During knowledge engineering, our system adopts the standard open terminology assumption as it automatically classifies concept descriptions into a taxonomy via subsumption inferences. However, for applications like configuration, the terminology becomes fixed during problem solving. Then, closed terminology reasoning is more appropriate. In our interactive configuration application, a user incrementally specifies an individual computer system in collaboration with a configuration engine. Choices can be made in any order and at any level of abstraction. We distinguish between abstract and concrete concepts to formally define when an individual's description may be considered finished. We also take advantage of the closed terminology assumption, together with the terminology's subsumption-based organization, to efficiently track the types of systems and components consistent with current choices, infer additional constraints on current choices, and appropriately guide future choices. Thus, we can help focus the efforts of both user and configuration engine.

  18. Using GPS To Teach More Than Accurate Positions.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Guth, Peter L.

    2002-01-01

    Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope" assumptions, and…

  19. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate (1) relative free energies and (2) free energies along reaction paths, using simple test cases relevant to enzymes.
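
    As a self-contained illustration of the kind of estimator such methods build on (deliberately simpler than the non-Boltzmann Bennett machinery itself), here is the one-sided Zwanzig exponential-averaging estimate of a free energy difference from sampled energy gaps; the energy-gap data are synthetic:

```python
# Zwanzig (free energy perturbation) estimate: dA = -kT ln < exp(-beta dU) >_A,
# where dU = U_B - U_A is sampled in state A. A log-sum-exp keeps the average
# numerically stable. The Gaussian "samples" below are synthetic stand-ins.
import numpy as np

BETA = 1.0 / (0.0019872041 * 300.0)   # 1/(kB*T) in (kcal/mol)^-1 at 300 K

rng = np.random.default_rng(0)
delta_u = rng.normal(2.0, 0.5, size=5000)   # hypothetical U_B - U_A, kcal/mol

log_mean = np.logaddexp.reduce(-BETA * delta_u) - np.log(delta_u.size)
delta_A = -log_mean / BETA
# For Gaussian dU the analytic result is mu - beta*sigma^2/2 ~ 1.79 kcal/mol.
print(f"estimated dA = {delta_A:.2f} kcal/mol")
```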

  20. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate (1) relative free energies and (2) free energies along reaction paths, using simple test cases relevant to enzymes. PMID:27498635

  1. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics is defined to quantitatively describe SAXS data in an objective manner using statistical evaluations. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  2. Quantitative radionuclide angiocardiography

    SciTech Connect

    Scholz, P.M.; Rerych, S.K.; Moran, J.F.; Newman, G.E.; Douglas, J.M.; Sabiston, D.C. Jr.; Jones, R.H.

    1980-01-01

    This study introduces a new method for calculating actual left ventricular volumes and cardiac output from data recorded during a single transit of a radionuclide bolus through the heart, and describes in detail current radionuclide angiocardiography methodology. A group of 64 healthy adults with a wide age range were studied to define the normal range of hemodynamic parameters determined by the technique. Radionuclide angiocardiograms were performed in patients undergoing cardiac catheterization to validate the measurements. In 33 patients studied by both techniques on the same day, a close correlation was documented for measurement of ejection fraction and end-diastolic volume. To validate the method of volumetric cardiac output calculation, 33 simultaneous radionuclide and indocyanine green dye determinations of cardiac output were performed in 18 normal young adults. These independent comparisons of radionuclide measurements with two separate methods document that initial transit radionuclide angiocardiography accurately assesses left ventricular function.
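
    The count-based volume calculations themselves are not given in this record; the sketch below shows only the standard downstream arithmetic on the measured quantities, with hypothetical values:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml

def cardiac_output_l_min(edv_ml, esv_ml, heart_rate_bpm):
    """Volumetric cardiac output = stroke volume x heart rate."""
    stroke_volume_ml = edv_ml - esv_ml
    return stroke_volume_ml * heart_rate_bpm / 1000.0

# Hypothetical values for a normal adult
print(ejection_fraction(120.0, 45.0))            # ~0.62
print(cardiac_output_l_min(120.0, 45.0, 70))     # ~5.2 L/min
```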

  3. Quantitative velocity modulation spectroscopy

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; McCall, Benjamin J.

    2016-05-01

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined.
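
    Once the velocity modulation amplitude (drift velocity) and electric field strength are known, the ion mobility follows from its definition K = v_d/E; a minimal sketch with hypothetical positive-column values:

```python
def ion_mobility(drift_velocity_cm_per_s, field_V_per_cm):
    """Ion mobility K = v_d / E, in cm^2 V^-1 s^-1."""
    return drift_velocity_cm_per_s / field_V_per_cm

# Hypothetical low-pressure glow-discharge values
print(ion_mobility(1.0e5, 50.0))   # -> 2000 cm^2/(V s)
```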

  4. Quantitative MRI techniques of cartilage composition

    PubMed Central

    Matzat, Stephen J.; van Tiel, Jasper; Gold, Garry E.

    2013-01-01

    Due to aging populations and increasing rates of obesity in the developed world, the prevalence of osteoarthritis (OA) is continually increasing. Decreasing the societal and patient burden of this disease motivates research in prevention, early detection of OA, and novel treatment strategies against OA. One key facet of this effort is the need to track the degradation of tissues within joints, especially cartilage. Currently, conventional imaging techniques provide accurate means to detect morphological deterioration of cartilage in the later stages of OA, but these methods are not sensitive to the subtle biochemical changes during early disease stages. Novel quantitative techniques with magnetic resonance imaging (MRI) provide direct and indirect assessments of cartilage composition, and thus allow for earlier detection and tracking of OA. This review describes the most prominent quantitative MRI techniques to date—dGEMRIC, T2 mapping, T1rho mapping, and sodium imaging. Other, less-validated methods for quantifying cartilage composition are also described—ultrashort echo time (UTE), gagCEST, and diffusion-weighted imaging (DWI). For each technique, this article discusses the proposed biochemical correlates, as well as its advantages and limitations for clinical and research use. The article concludes with a detailed discussion of how the field of quantitative MRI has progressed to provide information regarding two specific patient populations through clinical research—patients with anterior cruciate ligament rupture and patients with impingement in the hip. While quantitative imaging techniques continue to rapidly evolve, specific challenges for each technique as well as challenges to clinical applications remain. PMID:23833729
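
    Among the listed techniques, T2 mapping reduces to a per-voxel mono-exponential fit of signal against echo time; a minimal sketch with synthetic (non-clinical) values:

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(te, s0, t2):
    """Mono-exponential signal decay used in T2 mapping."""
    return s0 * np.exp(-te / t2)

# Hypothetical per-voxel signals at several echo times (ms)
te = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 80.0])
sig = np.array([880.0, 690.0, 545.0, 430.0, 270.0, 170.0])
(p_s0, p_t2), _ = curve_fit(monoexp, te, sig, p0=(900.0, 40.0))
print(f"T2 ~ {p_t2:.1f} ms")
```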

  5. Rigid reflection-asymmetric rotor description of the nucleus ²²⁷Ac

    SciTech Connect

    Leander, G.A.; Chen, Y.S.

    1987-03-01

    A model based on a static quadrupole and octupole deformation of the intrinsic nuclear shape gives an accurate description of the low-energy level spectrum and wave functions in ²²⁷Ac. Major discrepancies between strong-coupling theory and experiment are removed by taking into account the nonadiabaticity of the nucleonic motion.

  6. Effects of a Training Package to Improve the Accuracy of Descriptive Analysis Data Recording

    ERIC Educational Resources Information Center

    Mayer, Kimberly L.; DiGennaro Reed, Florence D.

    2013-01-01

    Functional behavior assessment is an important precursor to developing interventions to address a problem behavior. Descriptive analysis, a type of functional behavior assessment, is effective in informing intervention design only if the gathered data accurately capture relevant events and behaviors. We investigated a training procedure to improve…

  7. Application of the Rangeland Hydrology and Erosion Model to Ecological Site Descriptions and Management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The utility of Ecological Site Descriptions (ESDs) and State-and-Transition Models (STMs) concepts in guiding rangeland management hinges on their ability to accurately describe and predict community dynamics and the associated consequences. For many rangeland ecosystems, plant community dynamics ar...

  8. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    PubMed

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene was different depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in the three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples in a single set did not produce the selection of any stably expressing primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit. PMID:26897117
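
    geNorm, named in this record, ranks candidate references by a stability value M, the mean standard deviation of pairwise log-ratios across samples; a minimal re-implementation sketch with synthetic expression values:

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M for each candidate gene.
    expr: (n_samples, n_genes) array of relative expression quantities.
    M_j is the mean std-dev of log2 expression ratios of gene j against
    every other candidate; lower M means more stable."""
    log2e = np.log2(expr)
    n = log2e.shape[1]
    m = np.empty(n)
    for j in range(n):
        vjk = [np.std(log2e[:, j] - log2e[:, k], ddof=1)
               for k in range(n) if k != j]
        m[j] = np.mean(vjk)
    return m

# Hypothetical relative quantities: 4 candidate references, 6 samples
rng = np.random.default_rng(3)
expr = np.exp(rng.normal(0.0, [0.1, 0.15, 0.3, 0.5], size=(6, 4)))
print(genorm_m(expr))   # the first columns should score as most stable
```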

  10. The Classroom Practice of Creative Arts Education in NSW Primary Schools: A Descriptive Account

    ERIC Educational Resources Information Center

    Power, Bianca; Klopper, Christopher

    2011-01-01

    This article documents the current classroom practice of creative arts education of respondent classroom teachers in the New South Wales Greater Western Region, Australia. The study provides a descriptive account of classroom practice in creative arts education through the employment of a quantitative methodology. A questionnaire was designed and…

  11. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873

  12. ELECTRICAL SUPPORT SYSTEM DESCRIPTION DOCUMENT

    SciTech Connect

    S. Roy

    2004-06-24

    The purpose of this revision of the System Design Description (SDD) is to establish requirements that drive the design of the electrical support system and their bases to allow the design effort to proceed to License Application. This SDD is a living document that will be revised at strategic points as the design matures over time. This SDD identifies the requirements and describes the system design as they exist at this time, with emphasis on those attributes of the design provided to meet the requirements. This SDD has been developed to be an engineering tool for design control. Accordingly, the primary audience/users are design engineers. This type of SDD both "leads" and "trails" the design process. It leads the design process with regard to the flow down of upper tier requirements onto the system. Knowledge of these requirements is essential in performing the design process. The SDD trails the design with regard to the description of the system. The description provided in the SDD is a reflection of the results of the design process to date. Functional and operational requirements applicable to electrical support systems are obtained from the "Project Functional and Operational Requirements" (F&OR) (Siddoway 2003). Other requirements to support the design process have been taken from higher-level requirements documents such as the "Project Design Criteria Document" (PDC) (Doraswamy 2004), and fire hazards analyses. The above-mentioned documents address "Project Requirements Document" (PRD) (Canori and Leitner 2003) requirements. This SDD contains several appendices that include supporting information. Appendix B lists key system charts, diagrams, drawings, and lists, and Appendix C includes a list of system procedures.

  13. Standardizing the microsystems technology description

    NASA Astrophysics Data System (ADS)

    Liateni, Karim; Thomas, Gabriel; Hui Bon Hoa, Christophe; Bensaude, David

    2002-04-01

    The microsystems industry promises rapid and widespread growth in the coming years. The automotive, network, telecom and electronics industries take advantage of this technology by including it in their products, thus getting better integration and high energy performance. Microsystems-related software and data exchange have inherited IC-technology practices and standards, which do not fit the advanced level of design currently needed by microsystems designers. A typical design flow to validate a microsystem device involves several software tools from disconnected areas, such as layout editors, FEM simulators, and HDL modeling and simulation tools. However, a fabricated microsystem is obtained through execution of a layered process. Process characteristics will be used at each level of the design and analysis. Basically, the designer will have to customize each of his tools to the process. The project introduced here intends to unify the process description language and speed up the critical and tedious CAD customization task. We gather all the information related to the technology of a microsystem process in a single file. It is based on the XML standard format so as to receive worldwide attention. This format is called XML-MTD, standing for XML Microsystems Technology Description. Built around XML, it is an ASCII format which gives the ability to handle a comprehensive database for technology data. This format is open, released under a general public license, but the aim is to manage the format within an XML-MTD consortium of leading and well-established EDA companies and foundries. In this way, it will benefit from their experience. For automated configuration of design and analysis tools with regard to process-dependent information, we ship the Technology Manager software. Technology Manager links foundries with a large panel of standard EDA and FEA packages used by design teams relying on the Microsystems Technology Description in XML-MTD format.

  14. SNF AGING SYSTEM DESCRIPTION DOCUMENT

    SciTech Connect

    L.L. Swanson

    2005-04-06

    The purpose of this system description document (SDD) is to establish requirements that drive the design of the spent nuclear fuel (SNF) aging system and associated bases, which will allow the design effort to proceed. This SDD will be revised at strategic points as the design matures. This SDD identifies the requirements and describes the system design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This SDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This SDD is part of an iterative design process. It leads the design process with regard to the flow down of upper tier requirements onto the system. Knowledge of these requirements is essential in performing the design process. The SDD follows the design with regard to the description of the system. The description provided in the SDD reflects the current results of the design process. Throughout this SDD, the term aging cask applies to vertical site-specific casks and to horizontal aging modules. The term overpack refers to a vertical site-specific cask that contains a dual-purpose canister (DPC) or a disposable canister. Functional and operational requirements applicable to this system were obtained from "Project Functional and Operational Requirements" (F&OR) (Curry 2004 [DIRS 170557]). Other requirements that support the design process were taken from documents such as "Project Design Criteria Document" (PDC) (BSC 2004 [DIRS 171599]), "Site Fire Hazards Analyses" (BSC 2005 [DIRS 172174]), and "Nuclear Safety Design Bases for License Application" (BSC 2005 [DIRS 171512]). The documents address requirements in the "Project Requirements Document" (PRD) (Canori and Leitner 2003 [DIRS 166275]). This SDD includes several appendices. Appendix A is a Glossary; Appendix B is a list of key system charts, diagrams, drawings, lists and additional supporting information; and Appendix C is a list of

  15. ELECTRICAL POWER SYSTEM DESCRIPTION DOCUMENT

    SciTech Connect

    M. Maniyar

    2004-06-22

    The purpose of this revision of the System Description Document (SDD) is to establish requirements that drive the design of the electrical power system and their bases to allow the design effort to proceed to License Application. This SDD is a living document that will be revised at strategic points as the design matures over time. This SDD identifies the requirements and describes the system design as they exist at this time, with emphasis on those attributes of the design provided to meet the requirements. This SDD has been developed to be an engineering tool for design control. Accordingly, the primary audience is design engineers. This type of SDD leads and follows the design process. It leads the design process with regard to the flow down of upper tier requirements onto the system. Knowledge of these requirements is essential to performing the design process. This SDD follows the design with regard to the description of the system. The description provided in the SDD is a reflection of the results of the design process to date. Functional and operational requirements applicable to this system are obtained from "Project Functional and Operational Requirements" (F&OR) (Siddoway, 2003). Other requirements to support the design process have been taken from higher level requirements documents such as "Project Design Criteria Document" (PDC) (Doraswamy 2004), the fire hazards analyses, and the preclosure safety analysis. The above-mentioned documents address "Project Requirements Document" (PRD) (Canori and Leitner 2003) requirements. This SDD includes several appendices with supporting information. Appendix B lists key system charts, diagrams, drawings, and lists; and Appendix C is a list of system procedures.

  16. Orbiter active thermal control system description

    NASA Technical Reports Server (NTRS)

    Laubach, G. E.

    1975-01-01

    A brief description of the Orbiter Active Thermal Control System (ATCS) including (1) major functional requirements of heat load, temperature control and heat sink utilization, (2) the overall system arrangement, and (3) detailed description of the elements of the ATCS.

  17. IUE/IRA system description

    NASA Technical Reports Server (NTRS)

    Jennings, J.

    1977-01-01

    The IUE/IRA rate sensor system designed to meet the requirements of the International Ultraviolet Explorer spacecraft mission is described. The system consists of the sensor unit containing six rate sensor modules and the electronic control unit containing the rate sensor support electronics and the command/control circuitry. The inertial reference assembly formed by the combined units will provide spacecraft rate information for use in the stabilization and control system. The system is described in terms of functional description, operation redundancy performance, mechanical interface, and electrical interface. Test data obtained from the flight unit are summarized.

  18. HADL: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

    Specification of architectures is an important prerequisite for evaluation of architectures. With the increase in the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been on the increase. In this paper, we describe HADL, the HUMS Architectural Description Language, which we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology for HUMS architectures.

  19. Descriptive Model of Generic WAMS

    SciTech Connect

    Hauer, John F.; DeSteese, John G.

    2007-06-01

    The Department of Energy’s (DOE) Transmission Reliability Program is supporting the research, deployment, and demonstration of various wide area measurement system (WAMS) technologies to enhance the reliability of the Nation’s electrical power grid. Pacific Northwest National Laboratory (PNNL) was tasked by the DOE National SCADA Test Bed Program to conduct a study of WAMS security. This report represents achievement of the milestone to develop a generic WAMS model description that will provide a basis for the security analysis planned in the next phase of this study.

  20. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
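
    The record's approach uses full Bayesian evidence; as a lightweight stand-in that captures the same fit-versus-complexity trade-off, the sketch below compares a single-mode Maxwell fit with a power-law fit via the Bayesian information criterion (all relaxation data synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def bic(y, yhat, n_params):
    """Bayesian information criterion under iid Gaussian residuals:
    BIC = n ln(RSS/n) + k ln(n). Lower is better; k ln(n) penalizes
    extra parameters, a crude stand-in for full Bayesian evidence."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

maxwell = lambda t, g, tau: g * np.exp(-t / tau)
powerlaw = lambda t, a, b: a * t ** (-b)

# Hypothetical stress-relaxation data G(t)
t = np.linspace(0.1, 5.0, 40)
g_obs = 1000.0 * np.exp(-t / 1.2) + np.random.default_rng(4).normal(0, 5, 40)
for name, f, p0 in [("Maxwell", maxwell, (900.0, 1.0)),
                    ("power law", powerlaw, (500.0, 0.5))]:
    p, _ = curve_fit(f, t, g_obs, p0=p0, maxfev=10000)
    print(name, bic(g_obs, f(t, *p), len(p)))
```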

  1. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
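
    The plate-theory result itself is not given in this record; for orientation, the sketch below evaluates the well-established beam-theory spring constant k = Ewt³/(4L³) that the paper's model refines with 3-D and Poisson corrections. The cantilever dimensions are hypothetical:

```python
def beam_spring_constant(E, w, t, L):
    """Beam-theory spring constant of a rectangular cantilever loaded
    at its free end: k = E w t^3 / (4 L^3), SI units."""
    return E * w * t ** 3 / (4.0 * L ** 3)

# Hypothetical silicon cantilever: E = 169 GPa, 30 um wide, 2 um thick, 200 um long
print(beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6), "N/m")  # ~1.3 N/m
```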

  2. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers

    PubMed Central

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-01-01

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769

  3. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    PubMed

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. The identification by HPLC/TOF-MS is accomplished with the accurate mass (and the subsequently generated empirical formula) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectral information provided by the fragment ions, a segmented program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in complex sample analyses since they allow a high degree of specificity to be achieved, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg kg⁻¹ concentration range, with correlation coefficients >0.996. The recoveries at the tested concentrations of 1.0-100 mg kg⁻¹ are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg kg⁻¹, which is far below the required maximum residue level (MRL) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
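
    The 3 ppm figure refers to the standard mass-accuracy metric; a minimal sketch of that calculation, using the [M+H]+ ion of benzoic acid with a hypothetical measured m/z:

```python
def mass_error_ppm(measured, theoretical):
    """Mass accuracy in parts per million: (measured - theoretical)
    relative to the theoretical m/z, scaled by 1e6."""
    return (measured - theoretical) / theoretical * 1e6

# [M+H]+ of benzoic acid (C7H6O2): theoretical monoisotopic m/z 123.0441
print(mass_error_ppm(123.0444, 123.0441))   # ~2.4 ppm, within the 3 ppm spec
```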

  4. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    PubMed

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful and very simple but accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid.

  5. Pathways to Provenance: "DACS" and Creator Descriptions

    ERIC Educational Resources Information Center

    Weimer, Larry

    2007-01-01

    "Describing Archives: A Content Standard" breaks important ground for American archivists in its distinction between creator descriptions and archival material descriptions. Implementations of creator descriptions, many using Encoded Archival Context (EAC), are found internationally. "DACS"'s optional approach of describing creators in authority…

  6. 36 CFR 1120.26 - Deficient descriptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Deficient descriptions. 1120.26 Section 1120.26 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE BOARD PUBLIC AVAILABILITY OF INFORMATION Information Available Upon Request § 1120.26 Deficient descriptions. (a) If the description of...

  7. 40 CFR 233.11 - Program description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... administration and evaluation of the program; (d) A description of the funding and manpower which will be... the State's compliance evaluation and enforcement programs, including a description of how the State... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Program description. 233.11...

  8. 40 CFR 123.22 - Program description.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... employees. The State need not submit complete job descriptions for every employee carrying out the State... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Program description. 123.22 Section... PROGRAM REQUIREMENTS State Program Submissions § 123.22 Program description. Any State that seeks...

  9. 40 CFR 145.23 - Program description.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... job descriptions for every employee carrying out the State program. (2) An itemization of the... responsibility requirements of §§ 144.51 and 144.52, and 40 CFR part 146; (7) A description of and schedule for... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Program description. 145.23...

  10. 40 CFR 145.23 - Program description.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... job descriptions for every employee carrying out the State program. (2) An itemization of the... responsibility requirements of §§ 144.51 and 144.52, and 40 CFR part 146; (7) A description of and schedule for... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Program description. 145.23...

  11. 40 CFR 271.6 - Program description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 27 2011-07-01 2011-07-01 false Program description. 271.6 Section 271... Program description. Any State that seeks to administer a program under this subpart shall submit a description of the program it proposes to administer in lieu of the Federal program under State law or...

  12. 40 CFR 271.6 - Program description.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 28 2012-07-01 2012-07-01 false Program description. 271.6 Section 271... Program description. Any State that seeks to administer a program under this subpart shall submit a description of the program it proposes to administer in lieu of the Federal program under State law or...

  13. 40 CFR 271.6 - Program description.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 28 2013-07-01 2013-07-01 false Program description. 271.6 Section 271... Program description. Any State that seeks to administer a program under this subpart shall submit a description of the program it proposes to administer in lieu of the Federal program under State law or...

  14. 40 CFR 271.6 - Program description.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 27 2014-07-01 2014-07-01 false Program description. 271.6 Section 271... Program description. Any State that seeks to administer a program under this subpart shall submit a description of the program it proposes to administer in lieu of the Federal program under State law or...

  15. 40 CFR 123.22 - Program description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... employees. The State need not submit complete job descriptions for every employee carrying out the State... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Program description. 123.22 Section... PROGRAM REQUIREMENTS State Program Submissions § 123.22 Program description. Any State that seeks...

  16. 40 CFR 123.22 - Program description.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... employees. The State need not submit complete job descriptions for every employee carrying out the State... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Program description. 123.22 Section... PROGRAM REQUIREMENTS State Program Submissions § 123.22 Program description. Any State that seeks...

  17. 40 CFR 123.22 - Program description.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... employees. The State need not submit complete job descriptions for every employee carrying out the State... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Program description. 123.22 Section 123... REQUIREMENTS State Program Submissions § 123.22 Program description. Any State that seeks to administer...

  18. 40 CFR 123.22 - Program description.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employees. The State need not submit complete job descriptions for every employee carrying out the State... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Program description. 123.22 Section... PROGRAM REQUIREMENTS State Program Submissions § 123.22 Program description. Any State that seeks...

  19. 40 CFR 271.6 - Program description.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Program description. 271.6 Section 271... Program description. Any State that seeks to administer a program under this subpart shall submit a description of the program it proposes to administer in lieu of the Federal program under State law or...

  20. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding of the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g., 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U L-III edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th M-IV and M-V absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  1. Quantitative fluorescence tomography using a trimodality system: in vivo validation

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin

    2010-07-01

    A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT/DOT/XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diameter fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm.

  2. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Stratospheric Models and Measurements Workshop (M&M II) is the continuation of the effort previously started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulations of the set of measurements should become the prerequisite for the acceptance of these models as having a reliable prediction for future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were charged to design a number of experiments that would use observations to test whether models use the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part of this section gives brief descriptions of each model as provided by the individual modeling groups.

  3. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  4. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of the evaluation methods of qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  5. Quantitative environmental risk analysis

    SciTech Connect

    Klovning, J.; Nilsen, E.F.

    1995-12-31

    According to regulations relating to the implementation and use of risk analysis in petroleum activities issued by the Norwegian Petroleum Directorate, it is mandatory for an operator on the Norwegian Continental Shelf to establish acceptance criteria for environmental risk in its activities and to carry out environmental risk analysis. This paper presents a "new" method for environmental risk analysis developed by the company. The objective has been to assist the company in meeting rules and regulations and in assessing and describing environmental risk in a systematic manner. In the environmental risk analysis, the most sensitive biological resource in the affected area is used to assess the environmental damage. The analytical method is based on the methodology for quantitative risk analysis related to loss of life. In addition, it incorporates the effect of seasonal fluctuations in the environmental risk evaluations. The paper describes the function of the main analytical sequences, exemplified through an analysis of environmental risk related to exploration drilling in an environmentally sensitive area on the Norwegian Continental Shelf.

  6. Quantitative DNA Fiber Mapping

    SciTech Connect

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single-copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound at least at one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface, resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When nonisotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, and their relative distances can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.
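
    With the ~2.3 kb/µm stretching factor quoted in this record, converting a measured inter-probe distance into kilobase pairs is a one-line calculation; a minimal sketch:

```python
KB_PER_UM = 2.3   # homogeneous stretching factor quoted in the abstract

def fiber_distance_to_kb(distance_um):
    """Convert a measured distance between probe signals on a stretched
    DNA fiber into kilobase pairs."""
    return distance_um * KB_PER_UM

print(fiber_distance_to_kb(15.0))   # 15 um between signals -> ~34.5 kb
```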

  7. Quantitation of naturalistic behaviors.

    PubMed

    Evans, H L

    1988-10-01

    Naturalistic behaviors are behaviors that organisms exhibit 'in nature'. Eating, sleeping and sexual behaviors are examples. Since naturalistic behaviors are observed to occur without any apparent training or learning, some people mistakenly believe that all naturalistic behaviors are unlearned, and are thus different from laboratory behaviors. We maintain that naturalistic behaviors can be studied profitably in the toxicological laboratory, using quantitative techniques from behavioral neuroscience. Understanding of toxicity and underlying mechanisms is enhanced when naturalistic behaviors are thought of as responses to stimuli. Stimuli that influence naturalistic behaviors may arise inside the organisms (e.g., physiological signals of hunger) or outside the organisms (e.g., the smell of food or the start of the nocturnal lighting cycle). A practical, noninvasive, automated system can be used to improve upon the cage-side observation currently used to evaluate naturalistic behaviors in toxicity screening. Effects of alkyltins and other neurotoxicants upon eating, drinking, rearing, and the daily cycle of rest-activity will be shown. The rodent's pattern of nocturnal activity has proven to be particularly sensitive to neurotoxicants, and thus deserves additional attention in developing neurobehavioral toxicology.

  8. Quantitative Electron Nanodiffraction.

    SciTech Connect

    Spence, John

    2015-01-30

    This final report summarizes progress under this award for the final reporting period 2002-2013 in our development of quantitative electron nanodiffraction for materials problems, especially devoted to atomistic processes in semiconductors and electronic oxides such as the new artificial oxide multilayers, where our microdiffraction is complemented with energy-loss spectroscopy (ELNES) and aberration-corrected STEM imaging (9). The method has also been used to map out the chemical bonds in the important GaN semiconductor (1) used for solid-state lighting, and to understand the effects of stacking-sequence variations and interfaces in digital oxide superlattices (8). Other projects include the development of a laser-beam Zernike phase plate for cryo-electron microscopy (5) (based on the Kapitza-Dirac effect), work on reconstruction of molecular images using the scattering from many identical molecules lying in random orientations (4), a review article on space-group determination for the International Tables for Crystallography (10), the observation of energy-loss spectra with millivolt energy resolution and sub-nanometer spatial resolution from individual point defects in an alkali halide, a review article for the centenary of X-ray diffraction (17), and the development of a new method of electron-beam lithography (12). We briefly summarize here the work on GaN, on oxide superlattice ELNES, and on lithography by STEM.

  9. Statistical genetics and evolution of quantitative traits

    NASA Astrophysics Data System (ADS)

    Neher, Richard A.; Shraiman, Boris I.

    2011-10-01

    The distribution and heritability of many traits depends on numerous loci in the genome. In general, the astronomical number of possible genotypes makes systems with large numbers of loci difficult to describe. Multilocus evolution, however, greatly simplifies in the limit of weak selection and frequent recombination. In this limit, populations rapidly reach quasilinkage equilibrium (QLE), in which the dynamics of the full genotype distribution, including correlations between alleles at different loci, can be parametrized by the allele frequencies. This review provides a simplified exposition of the concept and mathematics of QLE, which is central to the statistical description of genotypes in sexual populations. Key results of quantitative genetics such as the generalized Fisher’s “fundamental theorem,” along with Wright’s adaptive landscape, are shown to emerge within QLE from the dynamics of the genotype distribution. This is followed by a discussion of the circumstances under which QLE is applicable, and of what the breakdown of QLE implies for the population structure and the dynamics of selection. Understanding the fundamental aspects of multilocus evolution obtained through simplified models may be helpful in providing conceptual and computational tools to address the challenges arising in the studies of complex quantitative phenotypes of practical interest.
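
    For reference, the textbook continuous-time form of Fisher's "fundamental theorem" cited above (a standard result, not specific to this review):

```latex
% \bar{m} is the mean Malthusian fitness of the population and
% \sigma_A^2(m) the additive genetic variance in fitness, so mean
% fitness is non-decreasing under selection alone:
\begin{equation}
  \frac{d\bar{m}}{dt} \;=\; \sigma_A^2(m) \;\ge\; 0
\end{equation}
```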

  10. Quantitative thermal diffusivity measurements of composites

    NASA Technical Reports Server (NTRS)

    Heath, D. M.; Winfree, W. P.; Heyman, J. S.; Miller, W. E.; Welch, C. S.

    1986-01-01

    A remote radiometric technique for making quantitative thermal diffusivity measurements is described. The technique was designed to make assessments of the structural integrity of large composite parts, such as wings, and can be performed at field sites. In the measurement technique, a CO2 laser beam is scanned using two orthogonal servo-controlled deflecting mirrors. An infrared imager, whose scanning mirrors oscillate in the vertical and the horizontal directions, is used as the detector. The analysis technique used to extract the diffusivity from these images is based on a thin infinite plate assumption, which requires waiting a given period of time for the temperature to equilibrate throughout the thickness of the sample. The technique is shown to be accurate to within two percent for values of the order of those for composite diffusivities, and to be insensitive to convection losses.
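
    The paper's thin-plate analysis is not given in this record; one standard way to extract an in-plane diffusivity from a thermal-image sequence uses the 2-D result that a Gaussian hot spot's per-axis variance grows as σ²(t) = σ₀² + 2αt. A sketch with synthetic numbers:

```python
import numpy as np

def diffusivity_from_spread(times_s, sigma2_m2):
    """Estimate in-plane thermal diffusivity from the linear growth of
    the per-axis variance of a Gaussian hot spot in a thin plate:
    sigma^2(t) = sigma0^2 + 2 * alpha * t, so alpha = slope / 2."""
    slope, _ = np.polyfit(times_s, sigma2_m2, 1)
    return slope / 2.0

# Hypothetical spot variances (m^2) from a thermal image sequence
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
s2 = 1.0e-6 + 2 * 4.0e-7 * t     # synthetic data for alpha = 4e-7 m^2/s
print(diffusivity_from_spread(t, s2))   # ~4e-7, a composite-like value
```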

  11. Expanding the Horizons of Quantitative Remote Sensing

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.

    2011-12-01

    Remote sensing of the Earth has made significant progress since its inception in the 1970's. The Landsat, ASTER, MODIS multi-spectral imagers have provided a global, long-term record of the surface at visible through infrared wavelengths, and meter-scale color images can be acquired of regions of interest. However, these systems, and many of the algorithms to analyze them, have advanced surprising little over the past three decades. Very little hyperspectral data are readily available or widely used, and software analysis tools are typically complex or 'black box'. As a result it is often difficult to make quantitative assessments of surface character - for example the accurate mapping of the composition and abundance of surface components. Ironically, planetary observations often have higher spectral resolution, a broader spectral range, and global coverage, with the result that sophisticated tools are routinely applied to these data to make quantitative mineralogy maps. These analyses are driven by the reality that, except for a tiny area explored by rovers, remote sensing provides the only means to determine surface properties. Improved terrestrial hyperspectral imaging systems have long been proposed, and will make great advances. However, these systems remain in the future, and the question exists - what advancements can be made to extract quantitative information from existing data? A case study, inspired by the 1987 work of Sultan et al, was performed to combine all available visible, near-, and thermal-IR multi-spectral data with selected hyperspectral information and limited field verification. Hyperspectral data were obtained from lab observations of collected samples, and the highest spatial resolution images available were used to help interpret the lower-resolution regional imagery. The hyperspectral data were spectrally deconvolved, giving quantitative mineral abundances accurate to 5-10%. These spectra were degraded to the multi-spectral resolution

  12. Quantitation of multistage carcinogenesis in rat liver.

    PubMed

    Pitot, H C; Dragan, Y P; Teeguarden, J; Hsia, S; Campbell, H

    1996-01-01

    A well characterized model of multistage carcinogenesis is that of hepatocarcinogenesis in the rat. The histopathology as well as the cell and molecular biology of the stages of initiation, promotion, and progression have been elucidated to varying degrees in this system. Putatively single initiated hepatocytes are identified by their expression of the ubiquitous marker of hepatocarcinogenesis, glutathione-S-transferase pi (GSTP). Approximately 0.5-1.0 × 10⁶ GSTP-positive "initiated" hepatocytes developed within 14 days after initiation with a subcarcinogenic dose of diethylnitrosamine (DEN). Approximately 1% of these cells develop clonally into altered hepatic foci (AHF) in animals administered promoting agents, such as phenobarbital, chronically for 4-8 mo. Hepatocytes within AHF during the stage of promotion exhibit normal diploid karyotypes but various phenotypes depending on the chemical nature of the promoting agent. Continued administration of the promoting agent results in the infrequent development of hepatocellular carcinomas; however, administration of a complete carcinogen or a progressor agent during the stage of promotion results in substantial numbers of hepatic neoplasms. In order to quantitate the development of the stage of progression more accurately, markers selective for this stage have been sought. Transforming growth factor-alpha (TGF-alpha) appears to be such a marker of progression. About 500 TGF-alpha-positive lesions develop spontaneously following initiation and continued promotion, usually within GSTP-positive AHF, but administration of a single dose of a progressor agent such as ethylnitrosourea may increase this number 3-fold or more. Some agents such as gamma radiation and hydroxyurea, when administered as single or a few closely spaced multiple doses, result in no increased number of TGF-alpha-positive lesions but a markedly enhanced increase in their growth rate. By monitoring gene expression using quantitative stereology, the stages of

  13. Quantitative Spectroscopy of Deneb

    NASA Astrophysics Data System (ADS)

    Schiller, Florian; Przybilla, N.

    We use the visually brightest A-type supergiant Deneb (A2 Ia) as a benchmark for testing a spectroscopic analysis technique developed for quantitative studies of BA-type supergiants. Our NLTE spectrum synthesis technique allows us to derive stellar parameters and elemental abundances with unprecedented accuracy. The study is based on a high-resolution and high-S/N spectrum obtained with the Echelle spectrograph FOCES on the Calar Alto 2.2 m telescope. Practically all inconsistencies reported in earlier studies are resolved. A self-consistent view of Deneb is thus obtained, allowing us to discuss its evolutionary state in detail by comparison with the most recent generation of evolution models for massive stars. The basic atmospheric parameters Teff = 8525 ± 75 K and log g = 1.10 ± 0.05 dex (cgs) and the distance imply the following fundamental parameters for Deneb: Mspec = 17 ± 3 M⊙, L = 1.77 ± 0.29 × 10⁵ L⊙ and R = 192 ± 16 R⊙. The derived He and CNO abundances indicate mixing with nuclear-processed matter. The high N/C ratio of 4.64 ± 1.39 and N/O ratio of 0.88 ± 0.07 (mass fractions) could in principle be explained by evolutionary models with initially very rapid rotation. A mass of ~22 M⊙ is implied for the progenitor on the zero-age main sequence, i.e., it was a late O-type star. Significant mass loss has occurred, probably enhanced by pronounced centrifugal forces. The observational constraints favour a scenario for the evolution of Deneb where the effects of rotational mixing may be amplified by an interaction with a magnetic field. Analogous analyses of such highly luminous BA-type supergiants will allow for precision studies of different galaxies in the Local Group and beyond.
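
    The quoted fundamental parameters can be cross-checked against the Stefan-Boltzmann relation L = 4πR²σT_eff⁴; a quick consistency check using the record's values (assuming the IAU nominal solar effective temperature):

```python
import math

# L = 4 pi R^2 sigma Teff^4  =>  R/Rsun = sqrt(L/Lsun) * (Tsun/Teff)^2
TEFF_SUN = 5772.0          # K, IAU nominal solar value (assumed)
teff = 8525.0              # K, from the abstract
lum = 1.77e5               # Lsun, from the abstract
r_rsun = math.sqrt(lum) * (TEFF_SUN / teff) ** 2
print(f"R ~ {r_rsun:.0f} Rsun")   # ~193 Rsun, matching the quoted 192 +/- 16
```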

  14. Quantitative Luminescence Imaging System

    SciTech Connect

    Batishko, C.R.; Stahl, K.A.; Fecht, B.A.

    1992-12-31

    The goal of the MEASUREMENT OF CHEMILUMINESCENCE project is to develop and deliver a suite of imaging radiometric instruments for measuring spatial distributions of chemiluminescence. Envisioned deliverables include instruments working at the microscopic, macroscopic, and life-sized scales. Both laboratory and field portable instruments are envisioned. The project also includes development of phantoms as enclosures for the diazoluminomelanin (DALM) chemiluminescent chemistry. A suite of either phantoms in a variety of typical poses, or phantoms that could be adjusted to a variety of poses, is envisioned. These are to include small mammals (rats), mid-sized mammals (monkeys), and human body parts. A complete human phantom that can be posed is a long-term goal of the development. Taken together, the chemistry and instrumentation provide a means for imaging rf dosimetry based on chemiluminescence induced by the heat resulting from rf energy absorption. The first delivered instrument, the Quantitative Luminescence Imaging System (QLIS), resulted in a patent, and an R&D Magazine 1991 R&D 100 award, recognizing it as one of the 100 most significant technological developments of 1991. The current status of the project is that three systems have been delivered, several related studies have been conducted, two preliminary human hand phantoms have been delivered, system upgrades have been implemented, and calibrations have been maintained. Current development includes sensitivity improvements to the microscope-based system; extension of the large-scale (potentially life-sized targets) system to field portable applications; extension of the 2-D large-scale system to 3-D measurement; imminent delivery of a more refined human hand phantom and a rat phantom; rf, thermal and imaging subsystem integration; and continued calibration and upgrade support.

  15. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements allow users to choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the
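
    The pipelining problem the abstract describes is easy to reproduce. Below is a minimal mpi4py sketch (not the paper's algorithm) contrasting naive back-to-back broadcast timing with timing serialized by an acknowledgement step; the gather-based ack and all sizes are illustrative assumptions.

```python
# Sketch: naive back-to-back broadcast timing vs. ack-serialized timing.
# The ack prevents successive broadcasts from pipelining in the network;
# it is a crude stand-in for the measurable overhead the paper designs.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
N_REPS = 100
payload = bytearray(1 << 16)  # 64 KiB message (assumed size)

# Naive: broadcasts may overlap in the network, under-reporting latency.
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(N_REPS):
    comm.Bcast(payload, root=0)
naive = (MPI.Wtime() - t0) / N_REPS

# Serialized: root waits for an ack from every rank before the next
# broadcast, so per-broadcast timings no longer overlap.
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(N_REPS):
    comm.Bcast(payload, root=0)
    comm.gather(rank, root=0)  # crude ack; itself adds overhead to subtract
serialized = (MPI.Wtime() - t0) / N_REPS

if rank == 0:
    print(f"naive {naive*1e6:.1f} us/bcast, serialized {serialized*1e6:.1f} us/bcast")
```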

  16. An accurate simulation model for single-photon avalanche diodes including important statistical effects

    NASA Astrophysics Data System (ADS)

    Qiuyang, He; Yue, Xu; Feifei, Zhao

    2013-10-01

    An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is not only able to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger mode, but can also emulate the second breakdown and forward-bias behaviors. In particular, it considers important statistical effects, such as dark-counting and after-pulsing phenomena. The developed model is implemented in the Verilog-A description language and can be run directly in commercial simulators such as Cadence Spectre. The Spectre simulation results show very good agreement with experimental results reported in the open literature. The model offers high simulation accuracy and a very fast simulation rate.
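
    The two statistical effects the model captures can be illustrated with a small Monte Carlo, shown below. This is not the paper's Verilog-A model; all rates and probabilities are invented example values.

```python
# Illustrative Monte Carlo of Poissonian dark counts plus after-pulsing.
import numpy as np

rng = np.random.default_rng(0)
T = 1.0            # observation window [s]
dark_rate = 200.0  # dark-count rate [counts/s] (assumed)
p_ap = 0.05        # after-pulse probability per avalanche (assumed)
tau_ap = 1e-6      # mean after-pulse delay [s] (assumed)

# Primary dark counts: homogeneous Poisson process over the window.
n_dark = rng.poisson(dark_rate * T)
events = list(rng.uniform(0.0, T, n_dark))

# Each avalanche may trigger an after-pulse after an exponential delay;
# after-pulses can themselves after-pulse, so process a work queue.
queue = list(events)
while queue:
    t = queue.pop()
    if rng.random() < p_ap:
        t_ap = t + rng.exponential(tau_ap)
        if t_ap < T:
            events.append(t_ap)
            queue.append(t_ap)

print(f"{n_dark} primary dark counts, {len(events) - n_dark} after-pulses")
```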

  17. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information on the flow fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled, time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the main-path flow is solved using TURBO, a density-based code capable of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal, allowing data exchange between them for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  18. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
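
    The core idea, supplementing point charges with higher atomic moments when evaluating the potential, can be sketched in a few lines. The geometry, charges, and atomic dipoles below are illustrative stand-ins (not CAMM values from the paper), and only the first correction term (atomic dipoles) is shown.

```python
# Potential from point charges alone vs. charges plus atomic dipoles
# (atomic units). Values are invented for a CO-like diatomic.
import numpy as np

pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.13]])   # C, O along z (assumed)
q   = np.array([0.15, -0.15])                          # assumed charges
mu  = np.array([[0.0, 0.0, -0.3], [0.0, 0.0, -0.1]])   # assumed atomic dipoles

def potential(r, use_dipoles=True):
    """Electrostatic potential at r from charges (+ atomic dipole terms)."""
    v = 0.0
    for i in range(len(q)):
        d = r - pos[i]
        dist = np.linalg.norm(d)
        v += q[i] / dist                 # monopole (charge) term
        if use_dipoles:
            v += mu[i] @ d / dist**3     # atomic dipole correction
    return v

r = np.array([0.0, 0.0, 6.0])
print("charges only:          ", potential(r, use_dipoles=False))
print("charges + atomic dipoles:", potential(r))
```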

  19. ROM Plus(®): accurate point-of-care detection of ruptured fetal membranes.

    PubMed

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316

  20. A Quantitative Description of Suicide Inhibition of Dichloroacetic Acid in Rats and Mice

    SciTech Connect

    Keys, Deborah A.; Schultz, Irv R.; Mahle, Deirdre A.; Fisher, Jeffrey W.

    2004-09-16

    Dichloroacetic acid (DCA), a minor metabolite of trichloroethylene (TCE) and a water disinfection byproduct, remains an important risk assessment issue because of its carcinogenic potency. DCA has been shown to inhibit its own metabolism by irreversibly inactivating glutathione transferase zeta (GSTzeta). To better predict internal dosimetry of DCA, a physiologically based pharmacokinetic (PBPK) model of DCA was developed. Suicide inhibition was described dynamically by varying the rate of maximal GSTzeta-mediated metabolism of DCA (Vmax) over time. Resynthesis (zero-order) and degradation (first-order) of metabolic activity were described. Published iv pharmacokinetic studies in naive rats were used to estimate an initial Vmax value, with Km set to an in vitro determined value. Degradation and resynthesis rates were set to values estimated from a published immunoreactive GSTzeta protein time course. The first-order inhibition rate, kd, was estimated from this same time course. A secondary, linear, non-GSTzeta-mediated metabolic pathway is proposed to fit DCA time courses following treatment with DCA in drinking water. The PBPK model predictions were validated by comparing predicted DCA concentrations to measured concentrations in published studies of rats pretreated with DCA following iv exposure to 0.05 to 20 mg/kg DCA. The same model structure was parameterized to simulate DCA time courses following iv exposure in naive and pretreated mice. Blood and liver concentrations during and after exposure to DCA in drinking water were predicted. Comparisons of PBPK model predictions to measured values were favorable, lending support for the further development of this model for application to DCA or TCE human health risk assessment.
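
    The suicide-inhibition mechanism (zero-order resynthesis, first-order degradation, activity destroyed in proportion to metabolic flux) reduces to a small ODE system. The sketch below is a one-compartment reduction, not the full PBPK model, and all parameter values are illustrative assumptions.

```python
# One-compartment sketch of DCA suicide inhibition of GSTzeta:
# Vmax(t) is resynthesized (zero-order), degraded (first-order), and
# inactivated in proportion to the Michaelis-Menten metabolic flux.
import numpy as np
from scipy.integrate import solve_ivp

Km     = 0.35   # Michaelis constant [mg/L] (assumed)
k_syn  = 0.02   # zero-order resynthesis of activity (assumed)
k_deg  = 0.01   # first-order degradation [1/h] (assumed)
k_d    = 0.5    # suicide-inhibition rate constant (assumed)
V_dist = 1.0    # distribution volume [L] (assumed)

def rhs(t, y):
    C, Vmax = y                               # DCA conc., metabolic capacity
    flux = Vmax * C / (Km + C)                # Michaelis-Menten metabolism
    dC = -flux / V_dist
    dVmax = k_syn - k_deg * Vmax - k_d * flux # activity lost with each turnover
    return [dC, dVmax]

sol = solve_ivp(rhs, (0.0, 24.0), [5.0, 1.0])   # 5 mg/L dose, unit capacity
print("C(24 h) =", sol.y[0, -1], "  Vmax(24 h) =", sol.y[1, -1])
```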

  1. Topographic and quantitative description of rat dorsal column fibres arising from the lumbar dorsal roots.

    PubMed

    Smith, K J; Bennett, B J

    1987-08-01

    The number and topographic distribution of the profiles of degenerating primary afferent fibres were determined within the rat dorsal column 3-4 weeks after division of the lumbar and S2 dorsal roots. The degenerating fibres were identified in toluidine blue-stained 1 micron transverse sections taken at different spinal levels, and their positions were marked with the aid of a drawing tube. Fibres entered the dorsal column at its lateral margin and sent projections rostrally and caudally. Fibres ascending the column were displaced medially in an orderly progression as the fibres of more rostral roots entered the cord. Most ascending fibres were lost from the dorsal columns within 2-3 segments of their site of entry, with only 15%, on average, reaching cervical levels. The descending fibres maintained a less organised topographic distribution, and typically only 3% of fibres entering the dorsal column descended two segments from their site of entry.

  2. Quantitative description of thermodynamic and kinetic properties of the platelet factor 4/heparin bonds

    NASA Astrophysics Data System (ADS)

    Nguyen, Thi-Huong; Greinacher, Andreas; Delcea, Mihaela

    2015-05-01

    Heparin is the most important antithrombotic drug in hospitals. It binds to the endogenous tetrameric protein platelet factor 4 (PF4) forming PF4/heparin complexes which may cause a severe immune-mediated adverse drug reaction, so-called heparin-induced thrombocytopenia (HIT). Although new heparin drugs have been synthesized to reduce such a risk, detailed bond dynamics of the PF4/heparin complexes have not been clearly understood. In this study, single molecule force spectroscopy (SMFS) is utilized to characterize the interaction of PF4 with heparins of defined length (5-, 6-, 8-, 12-, and 16-mers). Analysis of the force-distance curves shows that PF4/heparin binding strength rises with increasing heparin length. In addition, two binding pathways in the PF4/short heparins (<=8-mers) and three binding pathways in the PF4/long heparins (>=8-mers) are identified. We provide a model for the PF4/heparin complexes in which short heparins bind to one PF4 tetramer, while long heparins bind to two PF4 tetramers. We propose that the interaction between long heparins and PF4s is not only due to charge differences as generally assumed, but also due to hydrophobic interaction between two PF4s which are brought close to each other by long heparin. This complicated interaction makes PF4/heparin complexes more stable than other ligand-receptor interactions. Our results also reveal that the boundary between antigenic and non-antigenic heparins lies between 8- and 12-mers. These observations are particularly important to understand processes in which PF4-heparin interactions are involved and to develop new heparin-derived drugs. Electronic supplementary information (ESI) available: S1 - Control experiments for tip and substrate functionalization. S2 - AFM images of the gold surface, the PEG-coated gold surface and the PF4-coated gold surface. S3 - Selected probability distributions of the rupture distances from F-D curve measurements for PEG-NH2/glass, HO05, HO08 and HO16. S4 - EIA measurements for anti-PF4/heparin antibody binding to PF4/heparin complexes. S5 - Bond distances of PF4/heparin complexes at three different loading rate regimes. See DOI: 10.1039/c5nr02132d
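
    Rupture-force data taken at several loading rates, as in this study, are commonly interpreted with the Bell-Evans model. The relation below is that standard textbook model, not a formula from the paper: x_β is the distance to the transition state, k_off the zero-force dissociation rate, and r the loading rate.

```latex
% Most probable rupture force versus loading rate (Bell-Evans):
F^{*}(r) \;=\; \frac{k_{\mathrm{B}}T}{x_{\beta}}\,
\ln\!\left(\frac{r\,x_{\beta}}{k_{\mathrm{off}}\,k_{\mathrm{B}}T}\right)
```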

  3. Quantitative and Descriptive Comparison of Four Acoustic Analysis Systems: Vowel Measurements

    ERIC Educational Resources Information Center

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2014-01-01

    Purpose: This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Method: Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9…

  4. Quantitative description of thermodynamic and kinetic properties of the platelet factor 4/heparin bonds.

    PubMed

    Nguyen, Thi-Huong; Greinacher, Andreas; Delcea, Mihaela

    2015-06-14

    Heparin is the most important antithrombotic drug in hospitals. It binds to the endogenous tetrameric protein platelet factor 4 (PF4) forming PF4/heparin complexes which may cause a severe immune-mediated adverse drug reaction, so-called heparin-induced thrombocytopenia (HIT). Although new heparin drugs have been synthesized to reduce such a risk, detailed bond dynamics of the PF4/heparin complexes have not been clearly understood. In this study, single molecule force spectroscopy (SMFS) is utilized to characterize the interaction of PF4 with heparins of defined length (5-, 6-, 8-, 12-, and 16-mers). Analysis of the force-distance curves shows that PF4/heparin binding strength rises with increasing heparin length. In addition, two binding pathways in the PF4/short heparins (≤8-mers) and three binding pathways in the PF4/long heparins (≥8-mers) are identified. We provide a model for the PF4/heparin complexes in which short heparins bind to one PF4 tetramer, while long heparins bind to two PF4 tetramers. We propose that the interaction between long heparins and PF4s is not only due to charge differences as generally assumed, but also due to hydrophobic interaction between two PF4s which are brought close to each other by long heparin. This complicated interaction makes PF4/heparin complexes more stable than other ligand-receptor interactions. Our results also reveal that the boundary between antigenic and non-antigenic heparins lies between 8- and 12-mers. These observations are particularly important to understand processes in which PF4-heparin interactions are involved and to develop new heparin-derived drugs.

  5. Do children prefer mentalistic descriptions?

    PubMed

    Dore, Rebecca A; Lillard, Angeline S

    2014-01-01

    Against a long tradition of childhood realism (Piaget, 1929), A. S. Lillard and J. H. Flavell (1990) found that 3-year-olds prefer to characterize people by their mental states (beliefs, desires, emotions) than by their visible behaviors. In this exploratory study, we extend this finding to a new cohort of 3-year-olds, examine how these preferences change from 3-4 years, and explore relationships with theory of mind and parental mind-mindedness. The results showed a developmental change and a possible cohort difference: at 3 years, children in the sample preferred behavioral descriptions, although by 4 years of age, they preferred mentalistic ones. Interestingly, mentalistic preferences were unrelated to theory of mind or parental mind-mindedness, concurrently or over time. Perspective-taking skills at 3 years, however, predicted an increase in mentalistic responses from 3 years to 4 years. Possible explanations for each finding are discussed. PMID:24796151

  6. Geometric moments for gait description

    NASA Astrophysics Data System (ADS)

    Toxqui-Quitl, C.; Morales-Batalla, V.; Padilla-Vivanco, A.; Camacho-Bello, C.

    2013-09-01

    The optical flow associated with a set of digital images of a moving individual is analyzed in order to extract a gait signature. For this, invariant Hu moments are obtained for image description. A Hu Moment History (HMH) is computed from K frames to describe the gait signature of individuals in a video. The gait descriptors are subsequences of the HMH of variable width. Each subsequence is generated by means of genetic algorithms and used for classification in a neural network. The database for algorithm evaluation is MoBo, and the gait classification results are above 90% for the cases of slow and fast walking and 100% for the cases of walking with a ball and inclined walking. An optical processor is also implemented in order to obtain the descriptors of the human gait.
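
    A minimal sketch of the Hu Moment History idea follows: compute the seven Hu invariants per frame and stack them over K frames. It uses OpenCV's moment routines; the random frames stand in for real optical-flow imagery and are not the MoBo data.

```python
# Build a (K, 7) Hu Moment History from a sequence of frames.
import cv2
import numpy as np

def hu_moment_history(frames):
    """Return a (K, 7) array: one row of Hu invariant moments per frame."""
    history = []
    for frame in frames:
        m = cv2.moments(frame, binaryImage=False)
        hu = cv2.HuMoments(m).flatten()
        # Log scaling is customary: the invariants span many decades.
        hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
        history.append(hu)
    return np.array(history)

frames = [np.random.rand(64, 64).astype(np.float32) for _ in range(10)]
print(hu_moment_history(frames).shape)  # (10, 7)
```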

  7. DESCRIPTIVE ANALYSIS OF DIVALENT SALTS

    PubMed Central

    YANG, HEIDI HAI-LING; LAWLESS, HARRY T.

    2005-01-01

    Many divalent salts (e.g., calcium, iron, zinc) have important nutritional value and are used to fortify food or as dietary supplements. Sensory characterization of some divalent salts in aqueous solutions by untrained judges has been reported in the psychophysical literature, but formal sensory evaluation by trained panels is lacking. To provide this information, a trained descriptive panel evaluated the sensory characteristics of 10 divalent salts including ferrous sulfate, chloride and gluconate; calcium chloride, lactate and glycerophosphate; zinc sulfate and chloride; and magnesium sulfate and chloride. Among the compounds tested, iron compounds were highest in metallic taste; zinc compounds had higher astringency and a glutamate-like sensation; and bitterness was pronounced for magnesium and calcium salts. Bitterness was affected by the anion in ferrous and calcium salts. Results from the trained panelists were largely consistent with the psychophysical literature using untrained judges, but provided a more comprehensive set of oral sensory attributes. PMID:16614749

  8. Lagrangian description of warm plasmas

    NASA Technical Reports Server (NTRS)

    Kim, H.

    1970-01-01

    Efforts are described to extend the averaged Lagrangian method of describing small signal wave propagation and nonlinear wave interaction, developed by earlier workers for cold plasmas, to the more general conditions of warm collisionless plasmas, and to demonstrate particularly the effectiveness of the method in analyzing wave-wave interactions. The theory is developed for both the microscopic description and the hydrodynamic approximation to plasma behavior. First, a microscopic Lagrangian is formulated rigorously, and expanded in terms of perturbations about equilibrium. Two methods are then described for deriving a hydrodynamic Lagrangian. In the first of these, the Lagrangian is obtained by velocity integration of the exact microscopic Lagrangian. In the second, the expanded hydrodynamic Lagrangian is obtained directly from the expanded microscopic Lagrangian. As applications of the microscopic Lagrangian, the small-signal dispersion relations and the coupled mode equations are derived for all possible waves in a warm infinite, weakly inhomogeneous magnetoplasma, and their interactions are examined.
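
    A one-line sketch of the averaged-Lagrangian (Whitham) procedure the abstract describes, for a slowly varying wavetrain of amplitude a: expand the Lagrangian about equilibrium, average the quadratic term over the wave phase, and vary with respect to the amplitude. The generic form below is the standard construction, not a formula specific to this paper.

```latex
L = L_0 + L_1 + L_2 + \cdots, \qquad
\langle L_2 \rangle = D(\omega,\mathbf{k})\,a^2, \qquad
\frac{\partial \langle L_2 \rangle}{\partial a} = 0
\;\Longrightarrow\; D(\omega,\mathbf{k}) = 0
```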

  9. Tailored work hardening descriptions in simulation of sheet metal forming

    NASA Astrophysics Data System (ADS)

    Vegter, Henk; Mulder, Hans.; van Liempt, Peter; Heijne, Jan

    2013-12-01

    In previous decades, much attention has been given to accurate material descriptions, especially for simulations at the design stage of new models in the automotive industry. Improvements have led to shorter design times and better-tailored use of material, and have also contributed to the design and optimization of new materials. The current description of plastic material behaviour in simulation models of sheet metal forming comprises a hardening curve and a yield surface. In this paper the focus is on modelling of work hardening for advanced high-strength steels, considering the requirements of present applications. Nowadays, work hardening models need to include the effect of hard phases in a soft matrix and the effect of strain rate and temperature on work hardening. Most material tests used to characterize work hardening are only applicable at low strains, whereas many practical applications require hardening data at relatively high strains. Therefore, physically based hardening descriptions are needed that allow reliable extension to high strain values.
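
    The extrapolation problem the abstract raises can be made concrete: fit a hardening law on low-strain tensile data, then evaluate it at the higher strains forming simulations require. The Voce law below is a common illustrative choice, not necessarily the authors' model, and all parameter values are invented.

```python
# Fit a Voce hardening law on synthetic low-strain data, extrapolate high.
import numpy as np
from scipy.optimize import curve_fit

def voce(eps, s0, ds, ec):
    """Voce hardening: flow stress saturates at s0 + ds for large strain."""
    return s0 + ds * (1.0 - np.exp(-eps / ec))

# Synthetic 'tensile test' data, valid only up to ~0.2 true strain.
eps_data = np.linspace(0.0, 0.2, 30)
sig_data = voce(eps_data, 300.0, 400.0, 0.15) + np.random.normal(0, 2, 30)

popt, _ = curve_fit(voce, eps_data, sig_data, p0=[250.0, 300.0, 0.1])
print("flow stress extrapolated to strain 0.8:", voce(0.8, *popt), "MPa")
```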

  10. Workshop on quantitative dynamic stratigraphy

    SciTech Connect

    Cross, T.A.

    1988-04-01

    This document discusses the development of quantitative simulation models for the investigation of geologic systems. The selection of variables, model verification, evaluation, and future directions in quantitative dynamic stratigraphy (QDS) models are detailed. Interdisciplinary applications, integration, implementation, and transfer of QDS are also discussed. (FI)

  11. Fast and accurate generation of ab initio quality atomic charges using nonparametric statistical regression.

    PubMed

    Rai, Brajesh K; Bakken, Gregory A

    2013-07-15

    We introduce a class of partial atomic charge assignment methods that provides an ab initio quality description of the electrostatics of bioorganic molecules. The method uses a set of models that neither have a fixed functional form nor require a fixed set of parameters, and therefore are capable of capturing the complexities of the charge distribution in great detail. Random Forest regression is used to build separate charge models for the elements H, C, N, O, F, S, and Cl, using training data consisting of partial charges along with a description of their surrounding chemical environments; training set charges are generated by fitting to the b3lyp/6-31G* electrostatic potential (ESP) and are subsequently refined to improve consistency and transferability of the charge assignments. Using a set of 210 neutral, small organic molecules, the absolute hydration free energy calculated using these charges in conjunction with the Generalized Born solvation model shows a low mean unsigned error, close to 1 kcal/mol, relative to experimental data. Using another large and independent test set of chemically diverse organic molecules, the method is shown to accurately reproduce charge-dependent observables (ESP and dipole moment) from ab initio calculations. The method presented here automatically provides an estimate of potential errors in the charge assignment, enabling systematic improvement of these models using additional data. This work has implications not only for the future development of charge models but also for developing methods to describe many other chemical properties that require accurate representation of the electronic structure of the system.
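
    A hedged sketch of the scheme: one Random Forest regressor per element, mapping a descriptor of the atom's chemical environment to an ESP-fitted charge. The descriptors and targets below are random stand-ins, and the tree-spread uncertainty is one simple way to realize the per-atom error estimate the abstract mentions, not necessarily the authors' estimator.

```python
# Per-element Random Forest charge models on stand-in training data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
elements = ["H", "C", "N", "O", "F", "S", "Cl"]
models = {}

for elem in elements:
    X = rng.random((500, 32))        # environment descriptors (stand-in)
    y = rng.normal(0.0, 0.3, 500)    # ESP-fitted charges (stand-in)
    models[elem] = RandomForestRegressor(n_estimators=200,
                                         random_state=0).fit(X, y)

# Spread across individual trees as a simple per-atom uncertainty.
x_new = rng.random((1, 32))
preds = np.array([t.predict(x_new)[0] for t in models["C"].estimators_])
print(f"charge = {preds.mean():+.3f} +/- {preds.std():.3f} e")
```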

  12. Quantitative tomographic measurements of opaque multiphase flows

    SciTech Connect

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN; O'HERN,TIMOTHY J.; CECCIO,STEVEN L.

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.
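
    The conductivity-to-volume-fraction conversion the abstract says was adjusted is, in its unmodified form, typically a Maxwell-type mixing relation for a non-conducting dispersed phase in a conducting liquid. The function below implements that common baseline relation; the paper's modified formula may differ.

```python
# Maxwell-type relation: dispersed-phase volume fraction from mixture
# conductivity, assuming a non-conducting dispersed phase.
def gas_fraction_maxwell(sigma_mix, sigma_liquid):
    """Volume fraction from measured mixture and pure-liquid conductivity."""
    return 2.0 * (sigma_liquid - sigma_mix) / (2.0 * sigma_liquid + sigma_mix)

# Example: mixture conductivity 10% below the pure liquid's.
print(gas_fraction_maxwell(0.9, 1.0))  # ~0.069
```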

  13. Quantitative Species Measurements In Microgravity Combustion Flames

    NASA Technical Reports Server (NTRS)

    Chen, Shin-Juh; Pilgrim, Jeffrey S.; Silver, Joel A.; Piltch, Nancy D.

    2003-01-01

    The capability of models and theories to accurately predict and describe the behavior of low-gravity flames can only be verified by quantitative measurements. Although video imaging, simple temperature measurements, and velocimetry methods have provided useful information in many cases, there is still a need for quantitative species measurements. Over the past decade, we have been developing high-sensitivity optical absorption techniques to permit in situ, non-intrusive, absolute concentration measurements for both major and minor flame species using diode lasers. This work has helped to establish wavelength modulation spectroscopy (WMS) as an important method for species detection within the restrictions of microgravity-based measurements. More recently, in collaboration with Prof. Dahm at the University of Michigan, a new methodology combining computed flame libraries with a single experimental measurement has allowed us to determine the concentration profiles for all species in a flame. This method, termed ITAC (Iterative Temperature with Assumed Chemistry), was demonstrated for a simple laminar nonpremixed methane-air flame both at 1 g and at 0 g in a vortex ring flame. In this paper, we report additional normal-gravity and microgravity experiments which further confirm the usefulness of this approach. We also present the development of a new type of laser: an external cavity diode laser (ECDL) which has the unique capability of high-frequency modulation as well as a very wide tuning range. This will permit the detection of multiple species with one laser while using WMS detection.
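
    The WMS principle is easy to demonstrate numerically: modulate the laser wavelength at f while sweeping across an absorption line, then lock-in detect at 2f. The toy below uses an invented Lorentzian line and arbitrary rates; it illustrates the technique, not the authors' instrument.

```python
# Toy WMS simulation: sweep + sinusoidal modulation, 2f lock-in detection.
import numpy as np

fs, f_mod = 1.0e6, 1.0e4                  # sample rate, modulation freq. [Hz]
t = np.arange(0, 0.02, 1 / fs)
center = np.linspace(-3, 3, t.size)       # slow sweep (linewidth units)
nu = center + 0.8 * np.sin(2 * np.pi * f_mod * t)   # modulated detuning

absorb = 1e-3 / (1.0 + nu**2)             # weak Lorentzian absorbance
signal = 1.0 - absorb                     # transmitted intensity (small-a Beer)

# Lock-in at 2f: multiply by reference, low-pass with a moving average.
ref = np.cos(2 * np.pi * 2 * f_mod * t)
n = int(fs / f_mod)
wms_2f = np.convolve(signal * ref, np.ones(n) / n, mode="same")

print("peak |2f| signal:", np.abs(wms_2f).max())  # scales with concentration
```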

  14. In Vitro Metabolic Labeling of Intestinal Microbiota for Quantitative Metaproteomics.

    PubMed

    Zhang, Xu; Ning, Zhibin; Mayne, Janice; Deeke, Shelley A; Li, Jennifer; Starr, Amanda E; Chen, Rui; Singleton, Ruth; Butcher, James; Mack, David R; Stintzi, Alain; Figeys, Daniel

    2016-06-21

    Intestinal microbiota is emerging as one of the key environmental factors influencing or causing the development of numerous human diseases. Metaproteomics can provide invaluable information on the functional activities of intestinal microbiota and on host-microbe interactions as well. However, the application of metaproteomics in human microbiota studies is still largely limited, in part due to the lack of accurate quantitative intestinal metaproteomic methods. Most current metaproteomic microbiota studies are based on label-free quantification, which may suffer from variability during the separate sample processing and mass spectrometry runs. In this study, we describe a quantitative metaproteomic strategy, using in vitro stable isotopically ((15)N) labeled microbiota as a spike-in reference, to study the intestinal metaproteomes. We showed that the human microbiota were efficiently labeled (>95% (15)N enrichment) within 3 days under in vitro conditions, and accurate light-to-heavy protein/peptide ratio measurements were obtained using a high-resolution mass spectrometer and the quantitative proteomic software tool Census. We subsequently employed our approach to study the in vitro modulating effects of fructo-oligosaccharide and five different monosaccharides on the microbiota. Our methodology improves the accuracy of quantitative intestinal metaproteomics, which would promote the application of proteomics for functional studies of intestinal microbiota. PMID:27248155

  15. Trypsin-catalyzed oxygen-18 labeling for quantitative proteomics

    SciTech Connect

    Qian, Weijun; Petritis, Brianne O.; Nicora, Carrie D.; Smith, Richard D.

    2011-07-01

    Stable isotope labeling based on relative peptide/protein abundance measurements is commonly applied for quantitative proteomics. Recently, trypsin-catalyzed oxygen-18 labeling has grown in popularity due to its simplicity, cost-effectiveness, and its ability to universally label peptides with high sample recovery. In (18)O labeling, both C-terminal carboxyl oxygen atoms of tryptic peptides can be enzymatically exchanged with (18)O, thus providing the labeled peptide with a 4 Da mass shift from the (16)O-labeled sample. Peptide (18)O labeling is ideally suited for generating a labeled "universal" reference sample used for obtaining accurate and reproducible quantitative measurements across large numbers of samples in quantitative discovery proteomics.
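
    A back-of-envelope view of the quantification: the labeled peptide sits 4 Da above the light one, and the light-to-heavy intensity ratio gives relative abundance. The sketch below ignores isotopic-envelope overlap, which real pipelines must correct for; the peptide mass and areas are invented.

```python
# Where the heavy (18O2) peak lands, and the light/heavy abundance ratio.
light_mz = 736.42                     # [M+2H]2+ of a hypothetical peptide
charge = 2
heavy_mz = light_mz + 4.0 / charge    # 4 Da shift appears as 2 m/z units here

light_area, heavy_area = 1.8e6, 0.9e6   # made-up integrated peak areas
print(f"heavy peak at m/z {heavy_mz:.2f}; "
      f"light/heavy = {light_area / heavy_area:.2f}")
```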

  16. Quantitative 23Na magnetic resonance imaging of model foods.

    PubMed

    Veliyulin, Emil; Egelandsdal, Bjørg; Marica, Florin; Balcom, Bruce J

    2009-05-27

    Partial (23)Na MRI invisibility in muscle foods is often referred to as an inherent drawback of the MRI technique, impairing quantitative sodium analysis. Several model samples were designed to simulate muscle foods with a broad variation in protein, fat, moisture, and salt content. (23)Na spin-echo MRI and a recently developed (23)Na SPRITE MRI approach were compared for quantitative sodium imaging, demonstrating the possibility of accurate quantitative (23)Na MRI by the latter method. Good correlations with chemically determined standards were also obtained from bulk (23)Na free induction decay (FID) and CPMG relaxation experiments on the same sample set, indicating their potential use for rapid bulk NaCl measurements. Thus, the sodium MRI invisibility is a methodological problem that can easily be circumvented by using the SPRITE MRI technique. PMID:21314196

  17. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
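
    The two quantities the abstract connects can be sketched directly: the dynamic coefficient of friction from triaxial force data, and Preston's equation for the removal rate. Preston's equation is the standard relation the abstract names; every numeric value and the Preston coefficient below are illustrative assumptions.

```python
# Dynamic friction from triaxial forces, plus Preston's removal-rate law.
import numpy as np

# Hypothetical force samples during a polishing pass [N].
Fx = np.array([1.9, 2.1, 2.0])     # tangential, belt direction
Fy = np.array([0.2, 0.1, 0.15])    # tangential, cross direction
Fz = np.array([10.0, 10.2, 9.9])   # normal load

mu = np.sqrt(Fx**2 + Fy**2) / Fz   # dynamic coefficient of friction
print("mean mu:", mu.mean())

# Preston's equation: dz/dt = Kp * P * V (removal rate proportional to
# contact pressure and relative velocity; Kp absorbs friction/chemistry).
Kp = 4e-13   # Preston coefficient [m^2/N] (assumed)
P  = 1.0e5   # contact pressure [Pa] (assumed)
V  = 2.0     # belt surface speed [m/s] (assumed)
print("removal rate:", Kp * P * V, "m/s")
```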

  18. Quantitative blood speed imaging with intravascular ultrasound.

    PubMed

    Crowe, J R; O'Donnell, M

    2001-03-01

    Previously, we presented a method of real-time arterial color flow imaging using an intravascular ultrasound (IVUS) imaging system, where real-time RF A-scans were processed with an FIR (finite-impulse response) filter bank to estimate relative blood speed. Although qualitative flow measurements are clinically valuable, realizing the full potential of blood flow imaging requires quantitative flow speed and volume measurements in real time. Unfortunately, the rate of RF echo-to-echo decorrelation is not directly related to scatterer speed in a side-looking IVUS system because the elevational extent of the imaging slice varies with range. Consequently, flow imaging methods using any type of decorrelation processing to estimate blood speed without accounting for spatial variation of the radiation pattern will have estimation errors that prohibit accurate comparison of speed estimates from different depths. The FIR filter bank approach measures the rate of change of the ultrasound signal by estimating the slow-time spectrum of RF echoes. A filter bank of M bandpass filters is applied in parallel to estimate M components of the slow-time DFT (discrete Fourier transform). The relationship between the slow-time spectrum, aperture diffraction pattern, and scatterer speed is derived for a simplified target. Because the ultimate goal of this work is to make quantitative speed measurements, we present a method to map slow time spectral characteristics to a quantitative estimate. Results of the speed estimator are shown for a simulated circumferential catheter array insonifying blood moving uniformly past the array (i.e., plug flow) and blood moving with a parabolic profile (i.e., laminar flow). PMID:11370361
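
    The slow-time spectral idea can be sketched as follows: at each depth, take M echoes from successive firings ("slow time") and estimate their M DFT components, as the FIR filter bank does in real time; faster-decorrelating blood pushes energy away from the DC bin. The synthetic AR(1) echoes and the depth-dependent correlation below are invented for illustration.

```python
# Slow-time DFT of synthetic echo ensembles: decorrelation vs. depth.
import numpy as np

rng = np.random.default_rng(1)
M, n_depths = 16, 64               # echoes per estimate, depth samples

echoes = np.zeros((n_depths, M), dtype=complex)
for d in range(n_depths):
    corr = np.exp(-d / 20.0)       # here, decorrelation grows with depth
    x = rng.normal(size=M) + 1j * rng.normal(size=M)
    for m in range(1, M):          # AR(1) slow-time process
        x[m] = corr * x[m - 1] + np.sqrt(1 - corr**2) * x[m]
    echoes[d] = x

spec = np.abs(np.fft.fft(echoes, axis=1))**2      # M slow-time components
ac_fraction = 1.0 - spec[:, 0] / spec.sum(axis=1)  # energy off the DC bin
print("off-DC energy, shallow vs deep:", ac_fraction[0], ac_fraction[-1])
```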

  19. A gold nanoparticle-based semi-quantitative and quantitative ultrasensitive paper sensor for the detection of twenty mycotoxins.

    PubMed

    Kong, Dezhao; Liu, Liqiang; Song, Shanshan; Suryoprabowo, Steven; Li, Aike; Kuang, Hua; Wang, Libing; Xu, Chuanlai

    2016-03-01

    A semi-quantitative and quantitative multi-immunochromatographic (ICA) strip detection assay was developed for the simultaneous detection of twenty types of mycotoxins from five classes, including zearalenones (ZEAs), deoxynivalenols (DONs), T-2 toxins (T-2s), aflatoxins (AFs), and fumonisins (FBs), in cereal food samples. Sensitive and specific monoclonal antibodies were selected for this assay. The semi-quantitative results were obtained within 20 min by the naked eye, with visual limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.1-0.5, 2.5-250, 0.5-1, 0.25-1 and 2.5-10 μg kg(-1), and cut-off values of 0.25-1, 5-500, 1-10, 0.5-2.5 and 5-25 μg kg(-1), respectively. The quantitative results were obtained using a hand-held strip scan reader, with calculated limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.04-0.17, 0.06-49, 0.15-0.22, 0.056-0.49 and 0.53-1.05 μg kg(-1), respectively. The analytical results for spiked samples were in accordance with the actual spiked concentrations in the simultaneous detection analysis. This newly developed ICA strip assay is suitable for the on-site detection and rapid initial screening of mycotoxins in cereal samples, facilitating both semi-quantitative and quantitative determination.

  20. Legislative Ambiguity and the Accurate Identification of Seriously Emotionally Disturbed.

    ERIC Educational Resources Information Center

    Ostrander, Rick; And Others

    1988-01-01

    Surveyed school psychologists (N=127) practicing under three types of state criteria used in identifying children as seriously emotionally disturbed (SED) to determine the legal accuracy of their identifications of 12 behavioral descriptions of specific disorders. Found considerable differences in the perceptions of school psychology personnel.…