Science.gov

Sample records for accurate quantitative analyses

  1. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand the roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community, and by biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
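
    One abundance statistic designed to estimate a meaningful community parameter is reads per kilobase per genome equivalent (RPKG), proposed in the authors' related work. The sketch below (hypothetical numbers, not values from the paper) shows the normalization, which makes values comparable across sequencing depths and shifts in average genome size:

      # Hedged sketch of the RPKG abundance statistic.
      def rpkg(mapped_reads: int, gene_length_bp: int,
               total_bp: float, avg_genome_size_bp: float) -> float:
          """Reads per kilobase of gene per genome equivalent sequenced."""
          genome_equivalents = total_bp / avg_genome_size_bp
          return mapped_reads / ((gene_length_bp / 1000.0) * genome_equivalents)

      # 500 reads on a 1.5 kb gene, 1 Gb of sequence, 5 Mb average genome size:
      print(rpkg(500, 1500, 1e9, 5e6))  # -> ~1.67 per genome equivalent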

  2. Challenges in accurate quantitation of lysophosphatidic acids in human biofluids

    PubMed Central

    Onorato, Joelle M.; Shipkova, Petia; Minnich, Anne; Aubry, Anne-Françoise; Easter, John; Tymiak, Adrienne

    2014-01-01

    Lysophosphatidic acids (LPAs) are biologically active signaling molecules involved in the regulation of many cellular processes and have been implicated as potential mediators of fibroblast recruitment to the pulmonary airspace, pointing to possible involvement of LPA in the pathology of pulmonary fibrosis. LPAs have been measured in various biological matrices and many challenges involved with their analyses have been documented. However, little published information is available describing LPA levels in human bronchoalveolar lavage fluid (BALF). We therefore conducted detailed investigations into the effects of extensive sample handling and sample preparation conditions on LPA levels in human BALF. Further, targeted lipid profiling of human BALF and plasma identified the most abundant lysophospholipids likely to interfere with LPA measurements. We present the findings from these investigations, highlighting the importance of well-controlled sample handling for the accurate quantitation of LPA. Further, we show that chromatographic separation of individual LPA species from their corresponding lysophospholipid species is critical to avoid reporting artificially elevated levels. The optimized sample preparation and LC/MS/MS method was qualified using a stable isotope-labeled LPA as a surrogate calibrant and used to determine LPA levels in human BALF and plasma from a Phase 0 clinical study comparing idiopathic pulmonary fibrosis patients to healthy controls. PMID:24872406

  3. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprising known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known groundtruth for these samples.
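
    As one example of the preprocessing steps listed above, color crosstalk compensation amounts to a small linear inversion. The sketch below uses an illustrative 2x2 crosstalk matrix, not values from the study:

      import numpy as np

      # Illustrative crosstalk compensation for a two-dye microarray:
      # M[i][j] is the fraction of dye j's signal appearing in channel i.
      M = np.array([[1.00, 0.08],   # red channel picks up 8% of the green dye
                    [0.05, 1.00]])  # green channel picks up 5% of the red dye

      def compensate(measured_rg: np.ndarray) -> np.ndarray:
          """Recover true dye intensities by inverting the crosstalk matrix."""
          return np.linalg.solve(M, measured_rg)

      print(compensate(np.array([1200.0, 900.0])))  # corrected red/green pair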

  4. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. A standard brute-force exhaustive search makes accurate QTL scans involving more than two loci infeasible in practice, at least if permutation testing is used to determine significance. More elaborate global optimization approaches, for example DIRECT, have previously been applied to QTL search problems, and dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed; indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process, so heuristic termination criteria can be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results showing that for simultaneous mapping of three QTL using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTL, which could be used to quickly detect strong candidate eQTL networks. PMID:23919387
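
    The pruning idea can be stated compactly: if the (transformed) objective has a computable Lipschitz bound K, any region whose best attainable value cannot beat the incumbent can be discarded for good. The sketch below is a generic 1-D illustration of that principle, not the authors' PruneDIRECT code:

      def prune(intervals, f, K, incumbent):
          """Drop intervals that provably cannot contain a better minimum.

          If f is Lipschitz with constant K on [a, b], no point of the
          interval can lie below f(center) - K * (b - a) / 2, so intervals
          failing this test are excluded permanently -- no heuristic
          stopping rule is needed.
          """
          survivors = []
          for a, b in intervals:
              lower_bound = f(0.5 * (a + b)) - K * (b - a) / 2.0
              if lower_bound < incumbent:
                  survivors.append((a, b))
          return survivors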

  5. New ventures require accurate risk analyses and adjustments.

    PubMed

    Eastaugh, S R

    2000-01-01

    For new business ventures to succeed, healthcare executives need to conduct robust risk analyses and develop new approaches to balance risk and return. Risk analysis involves examination of objective risks and harder-to-quantify subjective risks. Mathematical principles applied to investment portfolios also can be applied to a portfolio of departments or strategic business units within an organization. The ideal business investment would have a high expected return and a low standard deviation. Nonetheless, both conservative and speculative strategies should be considered in determining an organization's optimal service line and helping the organization manage risk.

  6. Quantitative Analyses of Planetary Reflectance Spectra

    NASA Technical Reports Server (NTRS)

    Johnson, P. E.

    1985-01-01

    The development of a set of quantitative models to analyze planetary reflectance spectra as a function of microscopic and macroscopic mineral mixtures, particle size, and illumination geometry is considered. The approach has been to simplify more sophisticated algorithms to include the smallest number of parameters possible, consistent with their ability to produce useful results. This means that they should be able to model the data to within the accuracy obtainable by laboratory, telescopic, and space instrumentation (roughly 1%). The algorithms are ideally given in terms of parameters that are directly measurable (such as spectral reflectance or particle size).

  7. Quantitative DNA Analyses for Airborne Birch Pollen.

    PubMed

    Müller-Germann, Isabell; Vogel, Bernhard; Vogel, Heike; Pauling, Andreas; Fröhlich-Nowoisky, Janine; Pöschl, Ulrich; Després, Viviane R

    2015-01-01

    Birch trees produce large amounts of highly allergenic pollen grains that are distributed by wind and impact human health by causing seasonal hay fever, pollen-related asthma, and other allergic diseases. Traditionally, pollen forecasts are based on conventional microscopic counting techniques that are labor-intensive and limited in the reliable identification of species. Molecular biological techniques provide an alternative approach that is less labor-intensive and enables identification of any species by its genetic fingerprint. A particularly promising method is quantitative real-time polymerase chain reaction (qPCR), which can be used to determine the number of DNA copies and thus pollen grains in air filter samples. During the birch pollination season in 2010 in Mainz, Germany, we collected air filter samples of fine (<3 μm) and coarse air particulate matter. These were analyzed by qPCR using two different primer pairs: one for a single-copy gene (BP8) and the other for a multi-copy gene (ITS). The BP8 gene was better suited to obtaining reliable qPCR results, and the qPCR results for coarse particulate matter correlated well with the birch pollen forecasts of the regional air quality model COSMO-ART. As expected from the size of birch pollen grains (~23 μm), the concentration of DNA in fine particulate matter was lower than in the coarse particle fraction: the coarse-to-fine factor was 64 for the ITS region but only 51 for the single-copy gene BP8. The possible presence of so-called sub-pollen particles in the fine particle fraction is, however, interesting even at low concentrations, as these particles are known to be highly allergenic, to reach deep into the airways, and to often cause severe health problems. In conclusion, the results of this exploratory study open up the possibility of predicting and quantifying the pollen concentration in the atmosphere more precisely in the future. PMID:26492534

  8. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded their use in targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping instead of an ion beam, further refinement of the technique can be expected.

  9. Active contour approach for accurate quantitative airway analysis

    NASA Astrophysics Data System (ADS)

    Odry, Benjamin L.; Kiraly, Atilla P.; Slabaugh, Greg G.; Novak, Carol L.; Naidich, David P.; Lerallut, Jean-Francois

    2008-03-01

    Chronic airway disease causes structural changes in the lungs, including peribronchial thickening and airway dilatation. Multi-detector computed tomography (CT) yields detailed near-isotropic images of the lungs, and thus the potential to obtain quantitative measurements of lumen diameter and airway wall thickness. Such measurements would allow standardized assessment and would enable physicians to diagnose and locate airway abnormalities, adapt treatment, and monitor progress over time. However, due to the sheer number of airways per patient, systematic analysis is infeasible in routine clinical practice without automation. We have developed an automated, real-time method based on active contours to estimate both airway lumen and wall dimensions; the method requires no manual contour initialization, only a starting point on the targeted airway. While the lumen contour segmentation is purely region-based, the estimation of the outer diameter considers the inner wall segmentation as well as local intensity variation, in order to anticipate the presence of nearby arteries and exclude them. These properties make the method more robust than the Full-Width Half Maximum (FWHM) approach. Results are demonstrated on a phantom dataset with known dimensions and on a human dataset where the automated measurements are compared against two human operators. The average error on the phantom measurements was 0.10 mm for inner and 0.14 mm for outer diameters, showing sub-voxel accuracy. Similarly, the mean variation from the average manual measurement was 0.14 mm and 0.18 mm for inner and outer diameters, respectively.
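
    For context, the FWHM baseline mentioned above locates the wall edges on a 1-D intensity profile cast through the airway wall at the points where the signal drops to half of its local peak. A minimal sketch (illustrative, not the paper's implementation):

      import numpy as np

      def fwhm_edges(profile: np.ndarray) -> tuple:
          """Inner/outer wall indices where intensity falls to half-maximum."""
          peak = int(np.argmax(profile))               # wall center (bright)
          half = (profile[peak] + profile.min()) / 2.0
          inner = next(i for i in range(peak, -1, -1) if profile[i] <= half)
          outer = next(i for i in range(peak, len(profile)) if profile[i] <= half)
          return inner, outer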

  10. Quantitative analyses of the biomineralization of different hard tissues.

    PubMed

    Arnold, S; Plate, U; Wiesmann, H P; Stratmann, U; Kohl, H; Höhling, H J

    2001-06-01

    The primary crystallites of the different developing hard tissues have an apatite structure. However, they show crystal lattice distortions representing an intermediate state between amorphous and fully crystalline. We applied energy-filtering transmission electron microscopy in the selected area electron diffraction mode to analyse different stages of crystal formation in dentine, bone, enamel and inorganic apatite mineral, and obtained quantitative information on the degree of crystal lattice distortion using the paracrystal theory of Hosemann and Bagchi. We found that the early formed crystallites of the hard tissues analysed have a paracrystalline character comparable to biopolymers. With maturation, however, the lattice fluctuations of the crystallites in bone, enamel and dentine decrease toward a typical (para)crystalline character. The decrease in the organic proportion of the matrix likewise corresponds to the decrease in the lattice fluctuations of the crystallites in the different hard tissues during maturation.
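
    For reference, lattice distortion in the Hosemann-Bagchi paracrystal model is conventionally summarized by the relative fluctuation g of the lattice spacing d (notation ours, not taken from the abstract):

      g = \frac{\sqrt{\langle d^{2}\rangle - \langle d\rangle^{2}}}{\langle d\rangle}

    so the maturation described above corresponds to a decreasing g.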

  11. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads

    PubMed Central

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-01-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping; high sensitivity and low ambiguity are achieved by using short, non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets. PMID:22379138
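
    A toy illustration of the seed-and-vote idea described above (a generic sketch, not the FANSe implementation): short, non-overlapping seeds from a read vote for candidate start positions in a k-mer index of the reference, and the highest-voted locus (the "hotspot") would then be refined by full alignment:

      from collections import defaultdict

      K = 4  # seed length; illustrative only, real mappers use longer seeds

      def build_index(ref: str) -> dict:
          """Map every k-mer of the reference to its start positions."""
          index = defaultdict(list)
          for i in range(len(ref) - K + 1):
              index[ref[i:i + K]].append(i)
          return index

      def candidate_loci(read: str, index: dict) -> dict:
          """Vote for read start positions using non-overlapping seeds."""
          votes = defaultdict(int)
          for off in range(0, len(read) - K + 1, K):
              for pos in index.get(read[off:off + K], []):
                  votes[pos - off] += 1
          return dict(votes)

      ref = "ACGTACGTTTGACCAGT"
      print(candidate_loci("ACGTTTGA", build_index(ref)))  # {0: 1, 4: 2}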

  12. Accurate scoring of non-uniform sampling schemes for quantitative NMR

    PubMed Central

    Aoto, Phillip C.; Fenwick, R. Bryn; Kroon, Gerard J. A.; Wright, Peter E.

    2014-01-01

    Non-uniform sampling (NUS) in NMR spectroscopy is a recognized and powerful tool to minimize acquisition time. Recent advances in reconstruction methodologies are paving the way for the use of NUS in quantitative applications, where accurate measurement of peak intensities is crucial. The presence or absence of NUS artifacts in reconstructed spectra ultimately determines the success of NUS in quantitative NMR. The quality of reconstructed spectra from NUS acquired data is dependent upon the quality of the sampling scheme. Here we demonstrate that the best performing sampling schemes make up a very small percentage of the total randomly generated schemes. A scoring method is found to accurately predict the quantitative similarity between reconstructed NUS spectra and those of fully sampled spectra. We present an easy-to-use protocol to batch generate and rank optimal Poisson-gap NUS schedules for use with 2D NMR with minimized noise and accurate signal reproduction, without the need for the creation of synthetic spectra. PMID:25063954
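
    The Poisson-gap schedules being ranked can be generated in a few lines. The sketch below follows the commonly used sine-weighted construction of Hyberts et al. (our simplified rendering, not the paper's code); batch generation then amounts to varying the seed and scoring each schedule:

      import numpy as np

      def poisson_gap(n_total: int, n_sample: int, seed: int) -> np.ndarray:
          """One sine-weighted Poisson-gap schedule on the grid [0, n_total)."""
          rng = np.random.default_rng(seed)
          adj = n_total / n_sample - 1.0        # tunes the average gap size
          while True:
              pts, i = [], 0
              while i < n_total:
                  pts.append(i)
                  # small gaps early in the decay, large gaps late
                  lam = adj * np.sin((i + 0.5) / n_total * np.pi / 2.0)
                  i += 1 + rng.poisson(lam)
              if len(pts) == n_sample:
                  return np.asarray(pts)
              adj *= len(pts) / n_sample        # re-tune until count matches

      schedules = [poisson_gap(256, 64, s) for s in range(100)]  # then score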

  13. Fractal and Lacunarity Analyses: Quantitative Characterization of Hierarchical Surface Topographies.

    PubMed

    Ling, Edwin J Y; Servio, Phillip; Kietzig, Anne-Marie

    2016-02-01

    Biomimetic hierarchical surface structures that exhibit features having multiple length scales have been used in many technological and engineering applications. Their surface topographies are most commonly analyzed using scanning electron microscopy (SEM), which only allows for qualitative visual assessments. Here we introduce fractal and lacunarity analyses as a method of characterizing the SEM images of hierarchical surface structures in a quantitative manner. Taking femtosecond laser-irradiated metals as an example, our results illustrate that, while the fractal dimension is a poor descriptor of surface complexity, lacunarity analysis can successfully quantify the spatial texture of an SEM image; this, in turn, provides a convenient means of reporting changes in surface topography with respect to changes in processing parameters. Furthermore, lacunarity plots are shown to be sensitive to the different length scales present within a hierarchical structure due to the reversal of lacunarity trends at specific magnifications where new features become resolvable. Finally, we have established a consistent method of detecting pattern sizes in an image from the oscillation of lacunarity plots. Therefore, we promote the adoption of lacunarity analysis as a powerful tool for quantitative characterization of, but not limited to, multi-scale hierarchical surface topographies. PMID:26758776
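
    The gliding-box lacunarity underlying such plots has a compact definition, Lambda(r) = <M^2>/<M>^2, where M is the foreground "mass" in an r x r box slid over every position of the image (the standard Allain-Cloitre formulation; the sketch is generic, not the authors' code):

      import numpy as np

      def lacunarity(img: np.ndarray, r: int) -> float:
          """Gliding-box lacunarity of a binary image for box size r."""
          h, w = img.shape
          masses = np.array([
              img[y:y + r, x:x + r].sum()
              for y in range(h - r + 1)
              for x in range(w - r + 1)
          ], dtype=float)
          return float(np.mean(masses ** 2) / np.mean(masses) ** 2)

      # A lacunarity plot is this value over a range of box sizes r,
      # computed per SEM magnification.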

  14. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features of both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also showed the importance of the location of the histological section, demonstrating that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants.

  15. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  16. Quantitative analyses for elucidating mechanisms of cell fate commitment in the mouse blastocyst

    NASA Astrophysics Data System (ADS)

    Saiz, Néstor; Kang, Minjung; Puliafito, Alberto; Schrode, Nadine; Xenopoulos, Panagiotis; Lou, Xinghua; Di Talia, Stefano; Hadjantonakis, Anna-Katerina

    2015-03-01

    In recent years we have witnessed a shift from qualitative image analysis towards higher resolution, quantitative analyses of imaging data in developmental biology. This shift has been fueled by technological advances in both imaging and analysis software. We have recently developed a tool for accurate, semi-automated nuclear segmentation of imaging data from early mouse embryos and embryonic stem cells. We have applied this software to the study of the first lineage decisions that take place during mouse development and established analysis pipelines for both static and time-lapse imaging experiments. In this paper we summarize the conclusions from these studies to illustrate how quantitative, single-cell level analysis of imaging data can unveil biological processes that cannot be revealed by traditional qualitative studies.

  17. Quantitative real-time PCR for rapid and accurate titration of recombinant baculovirus particles.

    PubMed

    Hitchman, Richard B; Siaterli, Evangelia A; Nixon, Clare P; King, Linda A

    2007-03-01

    We describe the use of quantitative PCR (qPCR) to titer recombinant baculoviruses. Custom primers and a probe were designed against gp64 and used to calculate a standard curve of qPCR-derived titers from dilutions of a previously titrated baculovirus stock. Each dilution was titrated by both plaque assay and qPCR, producing a consistent and reproducible inverse relationship between C(T) and plaque-forming units per milliliter. No significant difference was observed between titers produced by qPCR and plaque assay for 12 recombinant viruses, confirming the validity of this technique as a rapid and accurate method of baculovirus titration.
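
    The standard-curve arithmetic behind such a titration is short: fit C(T) against log10 titer for the dilution series, then invert the fitted line for an unknown. The sketch below uses illustrative numbers, not values from the study:

      import numpy as np

      log_titer = np.array([8, 7, 6, 5, 4], dtype=float)  # log10 pfu/mL
      ct = np.array([14.2, 17.6, 21.0, 24.3, 27.7])       # measured Ct values

      slope, intercept = np.polyfit(log_titer, ct, 1)     # Ct = a*log10(T) + b

      def titer_from_ct(ct_unknown: float) -> float:
          """Read an unknown stock's titer back through the fitted line."""
          return 10.0 ** ((ct_unknown - intercept) / slope)

      print(f"{titer_from_ct(19.0):.2e} pfu/mL")          # ~3.8e6 pfu/mL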

  18. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries.
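
    A minimal version of the grouping step described above, reusing the two threshold names from the abstract (the merge rule here is a simplified stand-in for the authors' algorithm): localizations are merged into an existing cluster when they fall within dThresh of its mean position and within tThresh frames of its last appearance.

      import math

      def group_localizations(locs, d_thresh, t_thresh):
          """locs: (x, y, frame) tuples sorted by frame -> one entry/molecule."""
          clusters = []
          for x, y, frame in locs:
              for c in clusters:
                  if (math.hypot(x - c["x"], y - c["y"]) <= d_thresh
                          and frame - c["last"] <= t_thresh):
                      n = c["n"]                           # same molecule blinking:
                      c["x"] = (c["x"] * n + x) / (n + 1)  # update mean position
                      c["y"] = (c["y"] * n + y) / (n + 1)
                      c["n"], c["last"] = n + 1, frame
                      break
              else:
                  clusters.append({"x": x, "y": y, "n": 1, "last": frame})
          return clusters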

  19. Accurate quantitation of MHC-bound peptides by application of isotopically labeled peptide MHC complexes.

    PubMed

    Hassan, Chopie; Kester, Michel G D; Oudgenoeg, Gideon; de Ru, Arnoud H; Janssen, George M C; Drijfhout, Jan W; Spaapen, Robbert M; Jiménez, Connie R; Heemskerk, Mirjam H M; Falkenburg, J H Frederik; van Veelen, Peter A

    2014-09-23

    Knowledge of the accurate copy number of HLA class I presented ligands is important in fundamental and clinical immunology. Currently, the best copy number determinations are based on mass spectrometry, employing selected reaction monitoring (SRM) in combination with a known amount of isotopically labeled peptide. The major drawback of this approach is that the losses during sample pretreatment, i.e. the immunopurification and filtration steps, are not well defined and must therefore be estimated; in addition, such losses can vary for individual peptides. We therefore developed a new approach in which isotopically labeled peptide-MHC monomers (hpMHC) are prepared and added directly after cell lysis, i.e. before the usual sample processing. Using this approach, all losses during sample processing can be accounted for, allowing accurate determination of specific MHC class I-presented ligands. Our study pinpoints the immunopurification step as the origin of the rather extreme losses during sample pretreatment and offers a solution to account for them. This has important implications for accurate HLA-ligand quantitation. The strategy presented here can be used to obtain a reliable view of epitope copy number, and thus allows improvement of vaccine design and strategies for immunotherapy.
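
    The copy-number arithmetic enabled by the early spike is simple: because the labeled peptide-MHC experiences the same losses as the endogenous ligand, the measured light/heavy ratio converts directly to copies per cell. The sketch below uses hypothetical numbers:

      AVOGADRO = 6.022e23

      def copies_per_cell(light_heavy_ratio: float, spike_fmol: float,
                          n_cells: float) -> float:
          """Losses cancel in the ratio, so no recovery correction is needed."""
          moles_light = light_heavy_ratio * spike_fmol * 1e-15
          return moles_light * AVOGADRO / n_cells

      # e.g. L/H = 0.2 on a 10 fmol spike over 1e8 cells -> ~12 copies per cell
      print(copies_per_cell(0.2, 10.0, 1e8))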

  20. Quantitation and accurate mass analysis of pesticides in vegetables by LC/TOF-MS.

    PubMed

    Ferrer, Imma; Thurman, E Michael; Fernández-Alba, Amadeo R

    2005-05-01

    A quantitative method consisting of solvent extraction followed by liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) analysis was developed for the identification and quantitation of three chloronicotinyl pesticides (imidacloprid, acetamiprid, thiacloprid) commonly used on salad vegetables. Accurate mass measurements within 3 ppm error were obtained for all the pesticides studied in various vegetable matrixes (cucumber, tomato, lettuce, pepper), which allowed an unequivocal identification of the target pesticides. Calibration curves covering 2 orders of magnitude were linear over the concentration range studied, thus showing the quantitative ability of TOF-MS as a monitoring tool for pesticides in vegetables. Matrix effects were also evaluated using matrix-matched standards, showing no significant interferences between matrixes and clean extracts. Intraday reproducibility was 2-3% relative standard deviation (RSD) and interday values were 5% RSD. The precision (standard deviation) of the mass measurements was evaluated and was less than 0.23 mDa between days. Detection limits of the chloronicotinyl insecticides in salad vegetables ranged from 0.002 to 0.01 mg/kg. These concentrations are equal to or lower than the limits set by EU directives for controlled pesticides in vegetables, showing that LC/TOF-MS analysis is a powerful tool for identification of pesticides in vegetables. The robustness and applicability of the method were validated by the analysis of market vegetable samples; concentrations found in these samples were in the range of 0.02-0.17 mg/kg of vegetable. PMID:15859598
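
    The "3 ppm" criterion refers to the usual relative mass-accuracy measure, shown below with an example mass computed here from the molecular formula (illustrative, not a value quoted from the paper):

      def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
          """Mass accuracy in parts per million relative to the exact mass."""
          return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

      # e.g. imidacloprid [M+H]+ at theoretical m/z 256.0596 (from C9H10ClN5O2):
      print(ppm_error(256.0601, 256.0596))  # ~2.0 ppm, inside the 3 ppm window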

  21. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues and has been applied in ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging, used to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images, introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. On this basis, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented; the comparisons are performed both by numerical simulation and by in vivo measurements.
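
    Conceptually, the estimator reduces to a table lookup: a Monte-Carlo pre-computed probability table gives, for each true retardation value, the probability of each observed (retardation, SNR) pair, and the estimate maximizes that probability. The sketch below is our schematic reading of the abstract, not the authors' implementation:

      import numpy as np

      def map_estimate(pdf_table: np.ndarray, true_grid: np.ndarray,
                       meas_idx: int, snr_idx: int) -> float:
          """pdf_table[k, m, s] = P(measured bin m, SNR bin s | true value k)."""
          likelihood = pdf_table[:, meas_idx, snr_idx]
          return float(true_grid[np.argmax(likelihood)])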

  22. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
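
    The sensitivity to fibre length and intrinsic strength follows from the standard muscle-force model used in such studies (a generic formula with commonly assumed constants, not values from the paper): maximum force is physiological cross-sectional area (PCSA) times intrinsic strength, and PCSA is inversely proportional to fibre length.

      import math

      MUSCLE_DENSITY = 1.06e-3  # g/mm^3, a commonly assumed value

      def max_muscle_force(mass_g: float, fibre_len_mm: float,
                           strength_n_mm2: float,
                           pennation_deg: float = 0.0) -> float:
          """PCSA * intrinsic strength; PCSA from mass, fibre length, density."""
          pcsa = (mass_g * math.cos(math.radians(pennation_deg))
                  / (MUSCLE_DENSITY * fibre_len_mm))      # mm^2
          return pcsa * strength_n_mm2                    # N

      # Halving the assumed fibre length doubles the predicted force:
      print(max_muscle_force(2.0, 10.0, 0.3))  # ~56.6 N
      print(max_muscle_force(2.0, 5.0, 0.3))   # ~113.2 N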

  23. Quantitative analyses of extrudate swell for polymer nanocomposites

    NASA Astrophysics Data System (ADS)

    Wang, Kejian; Sun, Chongxiao

    2009-07-01

    A quantitative theory of extrudate swell for nanocomposites and pure polymers is significant both for optimizing processing and for understanding viscoelasticity. Based on Song's die swell theory for entangled polymers, a correlation of extrudate swell with material properties and capillary parameters was developed for polymer melts and their nanocomposites, compensating for the reservoir entry effect. It was found, for the first time, that the composite swell ratio can be expressed as the matrix swell ratio multiplied by a concentration shift factor. The factor is a function of the shear field, the filler content, the filler's internal structure and surface state, and the matrix properties. The quantitative model fitted the five kinds of nanocomposites studied well.
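
    In symbols (our notation, not the paper's), the central result reads:

      B_c(\dot{\gamma}, \phi) = a_c(\dot{\gamma}, \phi)\, B_m(\dot{\gamma})

    where B_c and B_m are the composite and matrix swell ratios at shear rate \dot{\gamma}, and the concentration shift factor a_c absorbs the dependence on filler content \phi, filler structure and surface state.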

  24. Quantitative and qualitative analyses of human salivary NEFA with gas chromatography and mass spectrometry

    PubMed Central

    Kulkarni, Bhushan V.; Wood, Karl V.; Mattes, Richard D.

    2012-01-01

    Salivary non-esterified fatty acids (NEFA) are proposed to play a role in oral health and oral fat detection, and they may hold diagnostic and prognostic potential. Yet, little is known about the array and concentrations of NEFA in saliva. The aim of the study was to conduct qualitative and quantitative analyses of salivary NEFA in healthy humans and to present a new, efficient protocol for such analyses. Resting saliva samples from fifteen participants were collected. The salivary lipids were extracted using a modified Folch extraction. The NEFA in the extracted lipids were selectively subjected to pentafluorobenzyl bromide (PFB) derivatization and qualitatively and quantitatively analyzed using gas chromatography-mass spectrometry (GC-MS). A total of 16 NEFA were identified in resting saliva. The four major NEFA were palmitic, linoleic, oleic, and stearic acids; their concentrations ranged from 2 to 9 μM. This is the first study to characterize individual human salivary NEFA and their respective concentrations. The method used in the study is sensitive, precise, and accurate. It is specific to fatty acids in non-esterified form and hence enables analysis of NEFA without their separation from other lipid classes, saving time and reagents and preventing loss of sample. These properties make it suitable for large-scale analysis of salivary NEFA. PMID:22934076

  25. Multimodal Quantitative Phase Imaging with Digital Holographic Microscopy Accurately Assesses Intestinal Inflammation and Epithelial Wound Healing.

    PubMed

    Lenz, Philipp; Brückner, Markus; Ketelhut, Steffi; Heidemann, Jan; Kemper, Björn; Bettenworth, Dominik

    2016-01-01

    The incidence of inflammatory bowel disease (IBD), i.e., Crohn's disease and ulcerative colitis, has significantly increased over the last decade. The etiology of IBD remains unknown and current therapeutic strategies are based on unspecific suppression of the immune system. The development of treatments that specifically target intestinal inflammation and epithelial wound healing could significantly improve the management of IBD; however, this requires accurate detection of inflammatory changes. Currently, potential drug candidates are usually evaluated using animal models in vivo or with cell culture based techniques in vitro. Histological examination usually requires the cells or tissues of interest to be stained, which may alter the sample characteristics; furthermore, the interpretation of findings can vary with investigator expertise. Digital holographic microscopy (DHM), based on the detection of optical path length delay, allows stain-free quantitative phase contrast imaging, so results can be directly correlated with absolute biophysical parameters. We demonstrate how measurement of changes in tissue density with DHM, based on refractive index measurement, can quantify inflammatory alterations, without staining, in different layers of colonic tissue specimens from mice and humans with colitis. Additionally, we demonstrate continuous multimodal label-free monitoring of epithelial wound healing in vitro, which DHM makes possible through simple automated determination of the wounded area and simultaneous determination of morphological parameters such as dry mass and layer thickness of migrating cells. In conclusion, DHM represents a valuable, novel and quantitative tool for the assessment of intestinal inflammation: it provides absolute values for parameters, simplifies quantification of epithelial wound healing in vitro, and therefore has high potential for translational diagnostic use. PMID:27685659
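
    The phrase "absolute biophysical parameters" refers to conversions like the standard phase-to-dry-mass relation of quantitative phase imaging (a generic formula with the commonly used refraction increment, not constants quoted from the paper):

      import math

      ALPHA = 0.19      # um^3/pg, typical specific refraction increment
      LAMBDA = 0.532    # um, illumination wavelength (illustrative)

      def dry_mass_pg(mean_phase_rad: float, area_um2: float) -> float:
          """Dry mass from mean phase shift over a cell area of given size."""
          return LAMBDA / (2 * math.pi * ALPHA) * mean_phase_rad * area_um2

      print(dry_mass_pg(1.2, 400.0))  # ~214 pg for a 400 um^2 cell patch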

  26. Cultivation and quantitative proteomic analyses of acidophilic microbial communities

    SciTech Connect

    Belnap, Christopher P.; Pan, Chongle; Verberkmoes, Nathan C.; Power, Mary E.; Samatova, Nagiza F.; Carver, Rudolf L.; Hettich, Robert L.; Banfield, Jillian F.

    2010-01-01

    Acid mine drainage (AMD), an extreme environment characterized by low pH and high metal concentrations, can support dense acidophilic microbial biofilm communities that rely on chemoautotrophic production based on iron oxidation. Field determined production rates indicate that, despite the extreme conditions, these communities are sufficiently well adapted to their habitats to achieve primary production rates comparable to those of microbial communities occurring in some non-extreme environments. To enable laboratory studies of growth, production and ecology of AMD microbial communities, a culturing system was designed to reproduce natural biofilms, including organisms recalcitrant to cultivation. A comprehensive metabolic labeling-based quantitative proteomic analysis was used to verify that natural and laboratory communities were comparable at the functional level. Results confirmed that the composition and core metabolic activities of laboratory-grown communities were similar to a natural community, including the presence of active, low abundance bacteria and archaea that have not yet been isolated. However, laboratory growth rates were slow compared with natural communities, and this correlated with increased abundance of stress response proteins for the dominant bacteria in laboratory communities. Modification of cultivation conditions reduced the abundance of stress response proteins and increased laboratory community growth rates. The research presented here represents the first description of the application of a metabolic labeling-based quantitative proteomic analysis at the community level and resulted in a model microbial community system ideal for testing physiological and ecological hypotheses.

  27. Quantitative analyses of the plant cytoskeleton reveal underlying organizational principles

    PubMed Central

    Breuer, David; Ivakov, Alexander; Sampathkumar, Arun; Hollandt, Florian; Persson, Staffan; Nikoloski, Zoran

    2014-01-01

    The actin and microtubule (MT) cytoskeletons are vital structures for cell growth and development across all species. While individual molecular mechanisms underpinning actin and MT dynamics have been intensively studied, principles that govern the cytoskeleton organization remain largely unexplored. Here, we captured biologically relevant characteristics of the plant cytoskeleton through a network-driven imaging-based approach allowing us to quantitatively assess dynamic features of the cytoskeleton. By introducing suitable null models, we demonstrate that the plant cytoskeletal networks exhibit properties required for efficient transport, namely, short average path lengths and high robustness. We further show that these advantageous features are maintained during temporal cytoskeletal rearrangements. Interestingly, man-made transportation networks exhibit similar properties, suggesting general laws of network organization supporting diverse transport processes. The proposed network-driven analysis can be readily used to identify organizational principles of cytoskeletons in other organisms. PMID:24920110
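
    The null-model comparison described above can be sketched with standard graph tooling. Here the cytoskeleton graph is assumed to have been extracted from images upstream, and degree-preserving random graphs serve as the null (a generic rendering, not the authors' pipeline):

      import networkx as nx

      def path_length_z_score(g: nx.Graph, n_null: int = 100) -> float:
          """Z-score of average shortest path length vs degree-matched nulls."""
          obs = nx.average_shortest_path_length(g)
          null = []
          for seed in range(n_null):
              r = nx.Graph(nx.configuration_model(
                  [d for _, d in g.degree()], seed=seed))
              r.remove_edges_from(nx.selfloop_edges(r))
              if nx.is_connected(r):
                  null.append(nx.average_shortest_path_length(r))
          mu = sum(null) / len(null)
          sd = (sum((x - mu) ** 2 for x in null) / len(null)) ** 0.5
          return (obs - mu) / sd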

  28. Understanding silicate hydration from quantitative analyses of hydrating tricalcium silicates.

    PubMed

    Pustovgar, Elizaveta; Sangodkar, Rahul P; Andreev, Andrey S; Palacios, Marta; Chmelka, Bradley F; Flatt, Robert J; d'Espinose de Lacaillerie, Jean-Baptiste

    2016-01-01

    Silicate hydration is prevalent in natural and technological processes, such as mineral weathering, glass alteration, zeolite syntheses and cement hydration. Tricalcium silicate (Ca3SiO5), the main constituent of Portland cement, is amongst the most reactive silicates in water. Despite its widespread industrial use, the reaction of Ca3SiO5 with water to form calcium-silicate-hydrates (C-S-H) still hosts many open questions. Here, we show that solid-state nuclear magnetic resonance measurements of (29)Si-enriched triclinic Ca3SiO5 enable the quantitative monitoring of the hydration process in terms of transient local molecular composition, extent of silicate hydration and polymerization. This provides insights into the relative influence of surface hydroxylation and hydrate precipitation on the hydration rate. When the rate drops, the amount of hydroxylated Ca3SiO5 decreases, demonstrating the partial passivation of the surface during the deceleration stage. Moreover, the relative quantities of monomers, dimers, pentamers and octamers in the C-S-H structure are measured. PMID:27009966
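
    A relation commonly used in this literature (not stated in the abstract) turns the measured Q^n silicate populations into a mean silicate chain length for C-S-H:

      \mathrm{MCL} = \frac{2\,(Q^{1} + Q^{2})}{Q^{1}}

    so, for instance, Q^1 = 40% and Q^2 = 20% of the hydrated silicon gives MCL = 3, an average consistent with a mixture of dimers and pentamers.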

  29. Analyses on Regional Cultivated Land Change Based on Quantitative Method

    NASA Astrophysics Data System (ADS)

    Cao, Yingui; Yuan, Chun; Zhou, Wei; Wang, Jing

    The Three Gorges Project is a major engineering project that has accelerated economic development in its reservoir area. In the course of this development, cultivated land has become a critical resource: much of it has been occupied and converted to construction land, while at the same time much has been flooded by the rising water level. This paper uses cultivated land areas and socio-economic indicators for the Three Gorges reservoir area in 1990-2004, applying statistical analyses and case studies to describe the process of cultivated land change, identify its driving forces, develop new methods to simulate and forecast future cultivated land areas, and thereby support cultivated land protection and sustainable development in the reservoir area. The results are as follows. First, over the past 15 years the cultivated land area decreased by 200,142 hm2, an average loss of 13,343 hm2 per year. The whole reservoir area divides into three zones (the upper reaches, the belly area, and the lower reaches), and the trends of cultivated land change in each zone are similar to that of the whole area. Second, the curve of cultivated land area against per capita GDP takes an inverted-U shape, and in some years the rate of cultivated land change and the rate of GDP change diverge, indicating that cultivated land change is decoupled from GDP change; cultivated land change is also strongly connected with urbanization and with the policy of returning farmland to forest. Finally, multiple regression is less precise than a BP neural network for simulating cultivated land area, so the BP neural network was used to forecast cultivated land areas in 2005, 2010 and 2015; the forecast results are reasonable.
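
    A minimal rendering of the regression-versus-network comparison (synthetic data; the paper's drivers are socio-economic indicators such as GDP and urbanization rate):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor

      # Synthetic stand-in: 15 years x 3 indicators -> cultivated land area.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(15, 3))
      y = 3.0 - 0.5 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(0, 0.05, 15)

      linear = LinearRegression().fit(X, y)
      bp_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                            random_state=0).fit(X, y)
      # Compare in-sample fit; a BP network can capture the nonlinearity:
      print(linear.score(X, y), bp_net.score(X, y))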

  18. Psychotherapist effects in meta-analyses: How accurate are treatment effects?

    PubMed

    Owen, Jesse; Drinane, Joanna M; Idigo, K Chinwe; Valentine, Jeffrey C

    2015-09-01

    Psychotherapists are known to vary in their effectiveness with their clients, in randomized clinical trials as well as in naturally occurring treatment settings. The fact that therapists matter has two consequences for psychotherapy studies. First, if therapists are not randomly assigned to treatment modalities (random assignment is rare), the estimation of treatment effects may be biased, as the modalities may have therapists of differing skill. Second, if the data are analyzed at the client level (which is virtually always the case), the standard errors of the effect sizes will be biased due to a violation of the assumption of independence. Thus, the conclusions of many meta-analyses may not reflect true estimates of treatment differences. We reexamined 20 treatment effects selected from 17 meta-analyses that found statistically significant differences between treatments for a variety of disorders, correcting the treatment effects according to the variability in outcomes known to be associated with psychotherapists. After adjusting the results using small estimates of therapist effects, ∼80% of the reported treatment effects would still be statistically significant; using larger estimates, however, only 20% of the treatment effects would remain statistically significant after controlling for therapist effects. Although some meta-analyses were consistent in their estimates of treatment differences, the degree of certainty in the results was considerably reduced once therapist effects were considered. Practice implications for understanding treatment effects, and therapist effects in particular, in meta-analyses and original studies are provided. PMID:26301423
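
    The client-level independence violation described here is often expressed through the design effect, which inflates a treatment effect's standard error as a function of therapist caseload and the intraclass correlation (ICC). The sketch below is a generic illustration of that correction with invented numbers, not the authors' exact adjustment procedure.

```python
# Generic design-effect correction of a treatment-effect standard error for
# therapist clustering; effect size, SE, caseload and ICCs are hypothetical.
import math

def adjusted_se(se, clients_per_therapist, icc):
    """Inflate a client-level SE by sqrt(DEFF), DEFF = 1 + (m - 1) * ICC."""
    return se * math.sqrt(1 + (clients_per_therapist - 1) * icc)

d, se = 0.40, 0.15           # hypothetical effect size and naive SE
for icc in (0.05, 0.20):     # small vs. large estimates of therapist effects
    se_adj = adjusted_se(se, clients_per_therapist=10, icc=icc)
    z = d / se_adj
    print(f"ICC={icc:.2f}: adjusted SE={se_adj:.3f}, z={z:.2f}, "
          f"still significant: {abs(z) > 1.96}")
```

    With the small ICC the hypothetical effect stays significant and with the large one it does not, mirroring the 80%-versus-20% pattern reported above.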

  19. Development of phantom for quantitative analyses of human dentin mineral density.

    PubMed

    Hayashi-Sakai, Sachiko; Kondo, Tatsuya; Kasuga, Yuto; Sakamoto, Makoto; Endo, Hideaki; Sakai, Jun

    2015-01-01

    The purpose of the present study was to develop a novel phantom that can be scanned in the same image as a sample and that is specialized for quantitative analyses of human dentin mineral density using the X-ray attenuation method. A further aim was to demonstrate the intracoronal dentin mineral density of mandibular incisors using this phantom. The phantom was prepared with a 15 mm hole in the center of an acrylic resin bar with an outside diameter of 25 mm, and 8 small holes (diameter, 3 mm) made at equal intervals around the center. Dipotassium hydrogen phosphate (K2HPO4) solutions were prepared at 0.4, 0.6, 0.8 and 1.0 g/cm3 and placed in these holes. The mean intracoronal dentin mineral density measured in the present study was 1.486 ± 0.016 g/cm3. As this result corresponds to previous reports, the new phantom was considered useful. The phantom enables the analysis of samples that are not readily accessible by conventional mechanical tests and may facilitate biomechanical investigations using X-ray images, suggesting that this is a simple, accurate and novel system for measuring mineralization. PMID:26484556
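
    The X-ray attenuation method reads mineral density off a calibration line fitted between the phantom's known K2HPO4 concentrations and their image gray values. The sketch below illustrates that conversion; the gray values are invented, since the abstract does not report them.

```python
# Phantom-based density calibration sketch: fit known K2HPO4 densities
# against measured gray values, then convert a dentin ROI's gray value.
# All gray values below are hypothetical.
import numpy as np

densities = np.array([0.4, 0.6, 0.8, 1.0])    # g/cm3, phantom reference holes
gray = np.array([52.0, 78.0, 103.0, 129.0])   # mean ROI pixel values (invented)

slope, intercept = np.polyfit(gray, densities, 1)   # linear calibration

dentin_gray = 190.0                           # hypothetical dentin ROI value
print(f"Estimated dentin mineral density: {slope * dentin_gray + intercept:.3f} g/cm3")
```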

  20. Qualitative and quantitative comparative analyses of 3D lidar landslide displacement field measurements

    NASA Astrophysics Data System (ADS)

    Haugen, Benjamin D.

    Landslide ground surface displacements vary at all spatial scales and are an essential component of kinematic and hazard analyses. Unfortunately, survey-based displacement measurements require personnel to enter unsafe terrain and have limited spatial resolution. And while recent advancements in LiDAR technology make it possible to remotely measure 3D landslide displacements at high spatial resolution, no single method is widely accepted. A series of qualitative metrics for comparing 3D landslide displacement field measurement methods was developed. The metrics were then applied to nine existing LiDAR techniques, and the top-ranking methods -- Iterative Closest Point (ICP) matching and 3D Particle Image Velocimetry (3DPIV) -- were quantitatively compared using synthetic displacement and control survey data from a slow-moving translational landslide in north-central Colorado. 3DPIV was shown to be the most accurate and reliable point cloud-based 3D landslide displacement field measurement method, and the viability of LiDAR-based techniques for measuring 3D motion on landslides was demonstrated.
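
    Of the two top-ranked methods, ICP is the simpler to outline: it alternates between matching each point to its nearest neighbor in the reference cloud and solving for the rigid rotation and translation that best align the matched pairs. The compact, single-resolution sketch below on synthetic data illustrates the idea; production LiDAR pipelines add filtering, outlier rejection and windowing.

```python
# Compact Iterative Closest Point (ICP) sketch for rigid point-cloud
# alignment on synthetic data; real landslide pipelines are more elaborate.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)    # nearest-neighbor correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, (500, 3))            # synthetic pre-event scan
moved = cloud + np.array([0.3, -0.1, 0.05])     # synthetic displaced scan
aligned = icp(cloud, moved)
print("Mean residual after ICP:", np.abs(aligned - moved).mean())
```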

  1. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathology images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). Quantified steatosis values are also important for the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that automatically detects and excludes regions with many fat droplets by using feature values of color, shape and the arrangement of cell nuclei. We implemented the method and confirmed that it accurately detects fat droplets and quantifies the fat droplet ratio in actual images. This investigation also clarifies which characteristics contribute most effectively to accurate detection.
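
    The paper combines color, shape and nuclear-arrangement features; as a much-simplified illustration of the shape ingredient alone, the sketch below thresholds bright (unstained) regions in a grayscale image and keeps only roughly circular blobs. Thresholds and sizes are invented, and scikit-image is assumed to be available.

```python
# Simplified shape-based fat droplet detection: keep bright, round blobs.
# Only one ingredient of the paper's method; parameters are hypothetical.
import numpy as np
from skimage import measure

def droplet_labels(gray, thresh=0.85, min_area=30, min_circularity=0.7):
    """Labels of bright, roughly circular regions in an image scaled to [0, 1]."""
    labels = measure.label(gray > thresh)
    keep = []
    for region in measure.regionprops(labels):
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if region.area >= min_area and circularity >= min_circularity:
            keep.append(region.label)
    return keep

# Synthetic example: one bright disk on a dark background
img = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2] = 1.0
print(f"Detected {len(droplet_labels(img))} droplet-like region(s)")
```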

  2. Recycling and Ambivalence: Quantitative and Qualitative Analyses of Household Recycling among Young Adults

    ERIC Educational Resources Information Center

    Ojala, Maria

    2008-01-01

    Theories about ambivalence, as well as quantitative and qualitative empirical approaches, are applied to obtain an understanding of recycling among young adults. A questionnaire was mailed to 422 Swedish young people. Regression analyses showed that a mix of negative emotions (worry) and positive emotions (hope and joy) about the environmental…

  3. Challenges in Higher Education Research: The Use of Quantitative Tools in Comparative Analyses

    ERIC Educational Resources Information Center

    Reale, Emanuela

    2014-01-01

    Although the value of the comparative perspective for the study of higher education is widely recognised, there is little consensus about specific methodological approaches. Quantitative tools are relevant to comparative analyses because they are expected to reduce complexity and to identify and grade similarities…

  4. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, Norbert; Schaffenroth, Veronika; Nieva, Maria-Fernanda

    2015-08-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields, which drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by the application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true: observed and model spectra can be brought into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for wide applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and ab initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be in focus in the era of the upcoming extremely large telescopes.

  5. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  6. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    PubMed

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  8. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping

    PubMed Central

    Lee, Han B.; Schwab, Tanya L.; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L.; Cervera, Roberto Lopez; McNulty, Melissa S.; Bostwick, Hannah S.; Clark, Karl J.

    2016-01-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98–100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  9. Rapid qualitative and quantitative analyses of proanthocyanidin oligomers and polymers by UPLC-MS/MS.

    PubMed

    Engström, Marica T; Pälijärvi, Maija; Fryganas, Christos; Grabber, John H; Mueller-Harvey, Irene; Salminen, Juha-Pekka

    2014-04-16

    This paper presents the development of a rapid method with ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method utilizes a range of cone voltages to achieve in-source depolymerization of both smaller oligomers and larger polymers. The formed depolymerization products are further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within the 10 min period of analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analyses of complex proanthocyanidin mixtures from plant extracts.

  10. Rapid and accurate analyses of silicon and phosphorus in plants using a portable X-ray fluorescence spectrometer.

    PubMed

    Reidinger, Stefan; Ramsey, Michael H; Hartley, Susan E

    2012-08-01

    The elemental analysis of plant material is a frequently employed tool across biological disciplines, yet accurate, convenient and economical methods for the determination of some important elements are currently lacking. For instance, digestion-based techniques are often hazardous and time-consuming and, particularly in the case of silicon (Si), can suffer from low accuracy due to incomplete solubilization and potential volatilization, whilst other methods may require large, expensive and specialised equipment. Here, we present a rapid, safe and accurate procedure for the simultaneous, nonconsumptive analysis of Si and phosphorus (P) in as little as 0.1 g of dried and ground plant material using a portable X-ray fluorescence spectrometer (P-XRF). We used certified reference materials from different plant species to test the analytical performance of P-XRF and show that the analysis suffers from very little bias and that the repeatability precision of the measurements is as good as or better than that of other methods. Using this technique we were able to process and analyse 200 ground samples a day, so P-XRF could provide a particularly valuable tool for plant biologists requiring the simultaneous nonconsumptive analysis of multiple elements, including those known to be difficult to measure such as Si, in large numbers of samples.

  11. [Quantitative analyses of coronary artery calcification by using clinical cardiovascular imaging].

    PubMed

    Ehara, Shoichi; Yoshiyama, Minoru

    2010-11-01

    Coronary artery calcification (CAC) is a common phenomenon, but its clinical relevance, for instance as a risk factor for plaque vulnerability, is still controversial. After the introduction of electron-beam computed tomography (EBCT), multislice computed tomography (MSCT), and intravascular ultrasound (IVUS), which enable quantitative assessment of CAC, the number of clinical studies concerning CAC has rapidly increased. In this review, we focus on quantitative analyses of CAC using clinical cardiovascular imaging and on the clinical significance of CAC. PMID:21037389

  12. Stable isotope-labeled collagen: a novel and versatile tool for quantitative collagen analyses using mass spectrometry.

    PubMed

    Taga, Yuki; Kusubata, Masashi; Ogawa-Goto, Kiyoko; Hattori, Shunji

    2014-08-01

    Collagens are the most abundant proteins in animals and are involved in many physiological/pathological events. Although various methods have been used to quantify collagen and its post-translational modifications (PTMs) over the years, it is still difficult to accurately quantify type-specific collagen and minor collagen PTMs. We report a novel quantitative method targeting collagen using stable isotope-labeled collagen named "SI-collagen", which was labeled with isotopically heavy lysine, arginine, and proline in fibroblast culture. We prepared highly labeled and purified SI-collagen for use as an internal standard in mass spectrometric analysis, particularly for a new approach using amino acid hydrolysis. Our method enabled accurate collagen analyses, including quantification of (1) type-specific collagen (types I and III in this paper), (2) total collagen, and (3) collagen PTMs by LC-MS with high sensitivity. SI-collagen is also applicable to other diverse analyses of collagen and can be a powerful tool for various studies, such as detailed investigation of collagen-related disorders.

  13. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
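
    A toy version of the frequency-domain fit makes the idea concrete: stack the real and imaginary residuals between the data and a complex Lorentzian model and minimize them jointly. The sketch below fits a single line with SciPy on synthetic data; the paper's method additionally handles many overlapping signals, baseline and delay parameters, and an approximate conjugate-gradient optimizer.

```python
# Toy complex least-squares fit of one Lorentzian line (amplitude, frequency,
# damping, phase) in the frequency domain; synthetic data, single signal only.
import numpy as np
from scipy.optimize import least_squares

def lorentzian(f, amp, f0, lam, phase):
    """Complex line shape of an exponentially damped oscillation."""
    return amp * np.exp(1j * phase) / (lam + 2j * np.pi * (f - f0))

def residuals(p, f, data):
    diff = lorentzian(f, *p) - data
    return np.concatenate([diff.real, diff.imag])   # joint real/imag fit

f = np.linspace(-50.0, 50.0, 2001)                  # Hz
rng = np.random.default_rng(2)
truth = (10.0, 5.0, 3.0, 0.4)                       # amp, f0 (Hz), lam, phase
data = lorentzian(f, *truth) + 0.01 * (rng.normal(size=f.size)
                                       + 1j * rng.normal(size=f.size))

fit = least_squares(residuals, x0=[5.0, 4.0, 1.0, 0.0], args=(f, data))
print("Estimated (amp, f0, lam, phase):", np.round(fit.x, 3))
```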

  14. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.

  15. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples

    PubMed Central

    Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users’ purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411

  16. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  17. A quantitatively accurate theory of stable crack growth in single phase ductile metal alloys under the influence of cyclic loading

    NASA Astrophysics Data System (ADS)

    Huffman, Peter Joel

    Although fatigue has been a well-studied phenomenon for the past century and a half, a quantitative link between fatigue crack growth rates and material properties has yet to be found. This work serves to establish that link for well-behaved, single-phase, ductile metals. The primary mechanisms of fatigue crack growth are identified in general terms, followed by a description of the dependence of the stress intensity factor range on those mechanisms. A method is presented for calculating the crack growth rate for an ideal, linear elastic, non-brittle material, which is assumed to be similar to the crack growth rate of a real material at very small crack growth rates. The threshold stress intensity factor is discussed as a consequence of "crack tip healing". Residual stresses are accounted for in the form of an approximated residual stress intensity factor. The results of these calculations are compared to data available in the literature. It is concluded that this work presents a new, quantitatively accurate way to consider crack growth under cyclic loading, and introduces a new way to consider fracture mechanics with respect to the relatively small cyclic loads normally associated with fatigue.

  18. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    PubMed

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html). PMID:17973343
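
    For reference, the model class and estimation criterion named here take the standard form below; the notation is generic mixed-model notation, not WOMBAT's own input conventions.

```latex
% Linear mixed model and REML criterion (generic notation)
\begin{align*}
  \mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},
  \qquad \mathbf{u} \sim N(\mathbf{0},\mathbf{G}), \quad
         \mathbf{e} \sim N(\mathbf{0},\mathbf{R}),\\
  \mathbf{V} &= \mathbf{Z}\mathbf{G}\mathbf{Z}' + \mathbf{R},\\
  -2\,\ell_{\mathrm{REML}} &= \mathrm{const} + \log\lvert\mathbf{V}\rvert
     + \log\lvert\mathbf{X}'\mathbf{V}^{-1}\mathbf{X}\rvert
     + (\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}})'\mathbf{V}^{-1}
       (\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}}),
\end{align*}
% where the covariance components in G and R are the parameters estimated by REML.
```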

  19. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait.

    PubMed

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, including the covariate in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the power of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the Maillard-type reaction between proline and reducing carbohydrates during thermal processing produces a roasted, popcorn-like aroma. Hence, for the first time, we included the amino acid proline, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation in rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (the major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous study has simultaneously assessed the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  20. Quantitative methods for analysing cumulative effects on fish migration success: a review.

    PubMed

    Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G

    2012-07-01

    It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival.

  1. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
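
    Assays of this kind typically call a deletion by comparing each target gene's Ct with a reference gene in the patient and in a normal control, via the 2^-ddCt formula. The sketch below is a generic illustration with invented Ct values; the authors' exact normalization scheme and scoring thresholds are not given in the abstract.

```python
# Generic 2^-ddCt relative copy-number call for a deletion qPCR assay
# (target gene such as PRKCZ or SKI vs. a reference gene). Ct values and
# the decision threshold are hypothetical.
def copy_number(ct_target_pat, ct_ref_pat, ct_target_ctrl, ct_ref_ctrl):
    ddct = (ct_target_pat - ct_ref_pat) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 * 2.0 ** (-ddct)        # scaled so a diploid control gives 2

cn = copy_number(ct_target_pat=27.1, ct_ref_pat=25.9,
                 ct_target_ctrl=26.0, ct_ref_ctrl=25.8)
print(f"Estimated copies: {cn:.2f} -> {'deleted' if cn < 1.5 else 'normal'}")
```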

  2. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) assay using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R(2)=0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5°C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618

  3. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341

  4. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases.

    PubMed

    Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the importance of epidemiological setting for the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020. PMID:26652272

  5. Accurate measurement of circulating mitochondrial DNA content from human blood samples using real-time quantitative PCR.

    PubMed

    Ajaz, Saima; Czajka, Anna; Malik, Afshan

    2015-01-01

    We describe a protocol to accurately measure the amount of human mitochondrial DNA (MtDNA) in peripheral blood samples; it can be modified to quantify MtDNA from other body fluids, human cells, and tissues. This protocol is based on the use of real-time quantitative PCR (qPCR) to quantify the amount of MtDNA relative to nuclear DNA (designated the Mt/N ratio). In the last decade, there have been increasing numbers of studies describing altered MtDNA or Mt/N in circulation in common nongenetic diseases where mitochondrial dysfunction may play a role (for review see Malik and Czajka, Mitochondrion 13:481-492, 2013). These studies are distinct from those looking at genetic mitochondrial disease and are attempting to identify acquired changes in circulating MtDNA content as an indicator of mitochondrial function. However, the methodology being used is not always specific and reproducible. As more than 95% of the human mitochondrial genome is duplicated in the human nuclear genome, it is important to avoid co-amplification of nuclear pseudogenes. Furthermore, template preparation protocols can also affect the results because of the size and structural differences between the mitochondrial and nuclear genomes. Here we describe how to (1) prepare DNA from blood samples; (2) pretreat the DNA to prevent dilution bias; (3) prepare dilution standards for absolute quantification using the unique primers human mitochondrial genome forward primer (hMitoF3) and human mitochondrial genome reverse primer (hMitoR3) for the mitochondrial genome, and human nuclear genome forward primer (hB2MF1) and human nuclear genome reverse primer (hB2MR1) for the human nuclear genome; (4) carry out qPCR for either relative or absolute quantification from test samples; (5) analyze qPCR data; and (6) calculate the sample size to adequately power studies. The protocol presented here is suitable for high-throughput use.
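
    Steps 3-5 amount to fitting a standard curve of Ct against log10(copies) for each amplicon and interpolating the test sample. A minimal sketch with invented Ct values follows (the primer names are those given above):

```python
# Minimal Mt/N quantification sketch: fit qPCR standard curves for the
# mitochondrial (hMitoF3/hMitoR3) and nuclear (hB2MF1/hB2MR1) amplicons,
# then interpolate a sample's Ct values. All Ct values are hypothetical.
import numpy as np

log_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])    # serial-dilution standards

def copies_from_ct(ct_standards, ct_sample):
    slope, intercept = np.polyfit(log_copies, ct_standards, 1)
    return 10 ** ((ct_sample - intercept) / slope)

ct_mito = np.array([30.1, 26.7, 23.4, 20.0, 16.6])  # mitochondrial standards
ct_nuc = np.array([31.0, 27.6, 24.2, 20.9, 17.5])   # nuclear standards

mt = copies_from_ct(ct_mito, ct_sample=21.5)
n = copies_from_ct(ct_nuc, ct_sample=28.9)
print(f"Mt/N ratio: {mt / n:.1f}")
```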

  6. Data for chicken semen proteome and label free quantitative analyses displaying sperm quality biomarkers.

    PubMed

    Labas, Valérie; Grasseau, Isabelle; Cahier, Karine; Gargaros, Audrey; Harichaux, Grégoire; Teixeira-Gomes, Ana-Paula; Alves, Sabine; Bourin, Marie; Gérard, Nadine; Blesbois, Elisabeth

    2014-12-01

    Understanding the biology of the avian male gamete is essential to improving the conservation of genetic resources and performance in farming. In this study, the semen proteome of the main domestic avian species (Gallus gallus) was investigated and the molecular phenotype related to sperm quality evaluated using a GeLC-MS/MS approach and label-free quantitative proteomics based on Spectral Counting (SC) and extracted ion chromatogram (XIC) methods. Here we describe in detail the peptide/protein inventory of chicken ejaculated spermatozoa (SPZ) and seminal plasma (SP). We also show differential analyses of chicken semen (SPZ and corresponding SP) from 11 males demonstrating different levels of fertilizing capacity and sperm motility. The interpretation and description of these data can be found in a research article published by Labas and colleagues in the Journal of Proteomics in 2014 [1]. This is a new resource for exploring the molecular mechanisms involved in fertilizing capacity and for revealing new sets of fertility biomarkers.
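
    Spectral counting compares protein abundances through the number of identified spectra, usually normalized by protein length; the normalized spectral abundance factor (NSAF) is one common formulation and serves here as a generic illustration of the label-free step, not as the paper's exact pipeline.

```python
# Generic NSAF (normalized spectral abundance factor) computation, a common
# spectral-counting approach; protein names, counts and lengths are invented.
def nsaf(spectral_counts, lengths):
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: value / total for p, value in saf.items()}

counts = {"protA": 120, "protB": 45, "protC": 300}    # identified spectra
lengths = {"protA": 450, "protB": 210, "protC": 980}  # residues

for protein, value in nsaf(counts, lengths).items():
    print(f"{protein}: NSAF = {value:.3f}")
```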

  7. Quantitative analyses of glass via laser-induced breakdown spectroscopy in argon

    NASA Astrophysics Data System (ADS)

    Gerhard, C.; Hermann, J.; Mercadier, L.; Loewenthal, L.; Axente, E.; Luculescu, C. R.; Sarnet, T.; Sentis, M.; Viöl, W.

    2014-11-01

    We demonstrate that elemental analysis of glass with a measurement precision of about 10% can be performed via calibration-free laser-induced breakdown spectroscopy. To this end, plasma emission spectra recorded during ultraviolet laser ablation of different glasses are compared to the spectral radiance computed for a plasma in local thermodynamic equilibrium. Using an iterative calculation algorithm, we deduce the relative elemental fractions and the plasma properties from the best agreement between measured and computed spectra. The measurement method is validated in two ways. First, the LIBS measurements are performed on fused silica composed of more than 99.9% SiO2. Second, the oxygen fractions measured for heavy flint and barite crown glasses are compared to the values expected from the glass-composing oxides. The measured compositions are furthermore compared with those obtained by X-ray photoelectron spectroscopy and energy-dispersive X-ray spectroscopy. It is shown that accurate LIBS analyses require recording spectra with short enough delays between the laser pulse and the detector gate, such that the electron density is larger than 10(17) cm(-3). The results show that laser-induced breakdown spectroscopy based on accurate plasma modeling is suitable for elemental analysis of complex materials such as glasses, with an analytical performance comparable to or even better than that obtained with standard techniques.

  8. Quantitative proteomic analyses of the response of acidophilic microbial communities to different pH conditions

    SciTech Connect

    Belnap, Christopher P.; Pan, Chongle; Denef, Vincent; Samatova, Nagiza F.; Hettich, Robert L.; Banfield, Jillian F.

    2011-01-01

    Extensive genomic characterization of multi-species acid mine drainage microbial consortia combined with laboratory cultivation has enabled the application of quantitative proteomic analyses at the community level. In this study, quantitative proteomic comparisons were used to functionally characterize laboratory-cultivated acidophilic communities sustained in pH 1.45 or 0.85 conditions. The distributions of all proteins identified for individual organisms indicated biases for either high or low pH, and suggests pH-specific niche partitioning for low abundance bacteria and archaea. Although the proteome of the dominant bacterium, Leptospirillum group II, was largely unaffected by pH treatments, analysis of functional categories indicated proteins involved in amino acid and nucleotide metabolism, as well as cell membrane/envelope biogenesis were overrepresented at high pH. Comparison of specific protein abundances indicates higher pH conditions favor Leptospirillum group III, whereas low pH conditions promote the growth of certain archaea. Thus, quantitative proteomic comparisons revealed distinct differences in community composition and metabolic function of individual organisms during different pH treatments. Proteomic analysis revealed other aspects of community function. Different numbers of phage proteins were identified across biological replicates, indicating stochastic spatial heterogeneity of phage outbreaks. Additionally, proteomic data were used to identify a previously unknown genotypic variant of Leptospirillum group II, an indication of selection for a specific Leptospirillum group II population in laboratory communities. Our results confirm the importance of pH and related geochemical factors in fine-tuning acidophilic microbial community structure and function at the species and strain level, and demonstrate the broad utility of proteomics in laboratory community studies.

  9. Accurate determination of human serum transferrin isoforms: Exploring metal-specific isotope dilution analysis as a quantitative proteomic tool.

    PubMed

    Busto, M Estela del Castillo; Montes-Bayón, Maria; Sanz-Medel, Alfredo

    2006-12-15

    Carbohydrate-deficient transferrin (CDT) measurements are considered a reliable marker for chronic alcohol consumption, and their use is becoming extensive in forensic medicine. However, CDT is not a single molecular entity but refers to a group of sialic acid-deficient transferrin isoforms, from mono- to trisialotransferrin. Thus, the development of methods to accurately and precisely analyze individual transferrin isoforms in biological fluids such as serum is of increasing importance. The present work illustrates the use of ICPMS isotope dilution analysis for the quantification of transferrin isoforms once saturated with iron and separated by anion exchange chromatography (Mono Q 5/50) using a mobile phase consisting of a gradient of ammonium acetate (0-250 mM) in 25 mM Tris-acetic acid (pH 6.5). Species-specific and species-unspecific spikes have been explored. In the first part of the study, postcolumn addition of a solution of 200 ng mL(-1) isotopically enriched iron (57Fe, 95%) in 25 mM sodium citrate/citric acid (pH 4) permitted the quantification of individual sialoforms of transferrin (from S2 to S5) in human serum samples of healthy individuals as well as alcoholic patients. Second, the species-specific spike method was performed by synthesizing an isotopically enriched standard of saturated transferrin (saturated with 57Fe). The characterization of the spike was performed by postcolumn reverse isotope dilution analysis (that is, by postcolumn addition of a solution of 200 ng mL(-1) natural iron in sodium citrate/citric acid at pH 4). The stability of the transferrin spike was also tested over one week, with negligible species transformation. Finally, the enriched transferrin was used to quantify the individual isoforms in the same serum samples, giving results comparable to those of postcolumn isotope dilution and to those previously published in the literature, demonstrating the suitability of both strategies for quantitative transferrin
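
    In the species-unspecific mode, the measured 57Fe/56Fe ratio in the blend of column effluent and spike is converted to the amount of sample-derived iron by the usual single isotope-dilution equation, written here in generic textbook notation (A denotes isotope abundances, R_m the measured blend ratio), not the paper's exact symbols.

```latex
% Single isotope-dilution equation for a 57Fe-enriched spike blended with
% iron of natural isotopic composition (generic notation).
\[
  n_{\mathrm{sample}} \;=\; n_{\mathrm{spike}}\,
    \frac{A^{57}_{\mathrm{spike}} - R_m\,A^{56}_{\mathrm{spike}}}
         {R_m\,A^{56}_{\mathrm{natural}} - A^{57}_{\mathrm{natural}}},
  \qquad
  R_m = \left.\frac{n({}^{57}\mathrm{Fe})}{n({}^{56}\mathrm{Fe})}\right|_{\mathrm{blend}}
\]
```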

  10. Genome-wide Linkage Analyses of Quantitative and Categorical Autism Subphenotypes

    PubMed Central

    Liu, Xiao-Qing; Paterson, Andrew D.; Szatmari, Peter

    2008-01-01

    Background: The search for susceptibility genes in autism and autism spectrum disorders (ASD) has been hindered by the possible small effects of individual genes and by genetic (locus) heterogeneity. To overcome these obstacles, one method is to use autism-related subphenotypes instead of the categorical diagnosis of autism since they may be more directly related to the underlying susceptibility loci. Another strategy is to analyze subsets of families that meet certain clinical criteria to reduce genetic heterogeneity.
    Methods: In this study, using 976 multiplex families from the Autism Genome Project consortium, we performed genome-wide linkage analyses on two quantitative subphenotypes, the total scores of the reciprocal social interaction domain and the restricted, repetitive, and stereotyped patterns of behavior domain from the Autism Diagnostic Interview-Revised. We also selected subsets of ASD families based on four binary subphenotypes: delayed onset of first words, delayed onset of first phrases, verbal status, and IQ ≥ 70.
    Results: When the ASD families with IQ ≥ 70 were used, a logarithm of odds (LOD) score of 4.01 was obtained on chromosome 15q13.3-q14, which was previously linked to schizophrenia. We also obtained a LOD score of 3.40 on chromosome 11p15.4-p15.3 using the ASD families with delayed onset of first phrases. No significant evidence for linkage was obtained for the two quantitative traits.
    Conclusions: This study demonstrates that selection of informative subphenotypes to define a homogeneous set of ASD families could be very important in detecting the susceptibility loci in autism. PMID:18632090

  11. Data for chicken semen proteome and label free quantitative analyses displaying sperm quality biomarkers

    PubMed Central

    Labas, Valérie; Grasseau, Isabelle; Cahier, Karine; Gargaros, Audrey; Harichaux, Grégoire; Teixeira-Gomes, Ana-Paula; Alves, Sabine; Bourin, Marie; Gérard, Nadine; Blesbois, Elisabeth

    2014-01-01

    Understanding the biology of the avian male gamete is essential to improving the conservation of genetic resources and performance in farming. In this study, the semen proteome of the main domestic avian species (Gallus gallus) was investigated and the molecular phenotype related to sperm quality evaluated using a GeLC–MS/MS approach and label-free quantitative proteomics based on Spectral Counting (SC) and extracted ion chromatogram (XIC) methods. Here we describe in detail the peptide/protein inventory of chicken ejaculated spermatozoa (SPZ) and seminal plasma (SP). We also show differential analyses of chicken semen (SPZ and corresponding SP) from 11 males demonstrating different levels of fertilizing capacity and sperm motility. The interpretation and description of these data can be found in a research article published by Labas and colleagues in the Journal of Proteomics in 2014 [1]. This is a new resource for exploring the molecular mechanisms involved in fertilizing capacity and for revealing new sets of fertility biomarkers. PMID:26217683

  12. Improving Short Term Instability for Quantitative Analyses with Portable Electronic Noses

    PubMed Central

    Macías, Miguel Macías; Agudo, J. Enrique; Manso, Antonio García; Orellana, Carlos Javier García; Velasco, Horacio Manuel González; Caballero, Ramón Gallardo

    2014-01-01

    One of the main problems when working with electronic noses is the lack of reproducibility or repeatability of the sensor response; if this problem is not properly addressed, electronic noses can be useless, especially for quantitative analyses. Moreover, irreproducibility is increased with portable and low-cost electronic noses, where laboratory equipment like gas zero generators cannot be used. In this work, we study the reproducibility of two portable electronic noses, the PEN3 (commercial) and the CAPINose (a proprietary design), using synthetic wine samples. We show that in both cases short-term instability in the sensors' response to the same sample under the same conditions represents a major problem, and we propose an internal normalization technique that, in both cases, reduces the variability of the sensors' response. Finally, we show that the proposed normalization seems to be more effective in the CAPINose case, reducing, for example, the variability associated with the TGS2602 sensor from 12.19% to 2.2%. PMID:24932869
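
    Internal normalization rescales each measurement by a quantity computed from that same measurement, cancelling multiplicative drift that affects all sensors together. One common choice, shown below as a generic sketch with invented readings (not necessarily the exact transform used in the paper), divides each sensor's response by the sum across sensors.

```python
# Generic internal normalization for e-nose data: divide each sensor response
# by the within-measurement sum, removing global multiplicative drift.
# The readings are invented.
import numpy as np

def internal_normalize(responses):
    """responses: (n_measurements, n_sensors) array of raw sensor values."""
    return responses / responses.sum(axis=1, keepdims=True)

# Two repeats of the same sample differing by a 12% global drift factor
raw = np.array([[1.00, 2.00, 4.00],
                [1.12, 2.24, 4.48]])
norm = internal_normalize(raw)
print("Raw CV per sensor (%):       ", np.round(raw.std(0) / raw.mean(0) * 100, 1))
print("Normalized CV per sensor (%):", np.round(norm.std(0) / norm.mean(0) * 100, 1))
```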

  13. Deconvoluting complex tissues for expression quantitative trait locus-based analyses

    PubMed Central

    Seo, Ji-Heui; Li, Qiyuan; Fatima, Aquila; Eklund, Aron; Szallasi, Zoltan; Polyak, Kornelia; Richardson, Andrea L.; Freedman, Matthew L.

    2013-01-01

    Breast cancer genome-wide association studies have pinpointed dozens of variants associated with breast cancer pathogenesis. The majority of risk variants, however, are located outside of known protein-coding regions. Therefore, identifying which genes the risk variants are acting through presents an important challenge. Variants that are associated with mRNA transcript levels are referred to as expression quantitative trait loci (eQTLs). Many studies have demonstrated that eQTL-based strategies provide a direct way to connect a trait-associated locus with its candidate target gene. Performing eQTL-based analyses in human samples is complicated because of the heterogeneous nature of human tissue. We addressed this issue by devising a method to computationally infer the fraction of cell types in normal human breast tissues. We then applied this method to 13 known breast cancer risk loci, which we hypothesized were eQTLs. For each risk locus, we took all known transcripts within a 2 Mb interval and performed an eQTL analysis in 100 reduction mammoplasty cases. A total of 18 significant associations were discovered (eight in the epithelial compartment and 10 in the stromal compartment). This study highlights the ability to perform large-scale eQTL studies in heterogeneous tissues. PMID:23650637

  14. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

    SciTech Connect

    CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.

    1999-10-01

    The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
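
    A minimal sketch of the multi-window idea on synthetic data, assuming inverse-variance weighting as the statistically efficient pooling step (the abstract does not specify the exact statistic):

    import numpy as np

    rng = np.random.default_rng(0)
    true_conc = np.array([2.0, 5.0, 1.0])   # three elements, arbitrary units
    n_el = true_conc.size

    estimates, variances = [], []
    for _ in range(46):  # one classical least squares fit per spectral window
        K = rng.uniform(0.1, 1.0, size=(30, n_el))   # pure-component spectra
        y = K @ true_conc + rng.normal(0, 0.05, 30)  # measured window spectrum
        c, res, *_ = np.linalg.lstsq(K, y, rcond=None)
        sigma2 = res[0] / (30 - n_el)                # residual variance
        estimates.append(c)
        variances.append(sigma2 * np.diag(np.linalg.inv(K.T @ K)))

    E, V = np.array(estimates), np.array(variances)
    # Pool the per-window results: inverse-variance weighted mean per element.
    pooled = (E / V).sum(axis=0) / (1.0 / V).sum(axis=0)
    print("Pooled concentrations:", pooled)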

  15. Comparison between a high-resolution single-stage Orbitrap and a triple quadrupole mass spectrometer for quantitative analyses of drugs.

    PubMed

    Henry, Hugues; Sobhi, Hamid Reza; Scheibner, Olaf; Bromirski, Maciej; Nimkar, Subodh B; Rochat, Bertrand

    2012-03-15

    The capabilities of a high-resolution (HR), accurate-mass spectrometer (Exactive-MS) operating in full-scan MS mode were investigated for the quantitative LC/MS analysis of drugs in patients' plasma samples. A mass resolution of 50,000 (FWHM) at m/z 200 and a mass extraction window of 5 ppm around the theoretical m/z of each analyte were used to construct chromatograms for quantitation. The quantitative performance of the Exactive-MS was compared with that of a triple quadrupole mass spectrometer (TQ-MS), TSQ Quantum Discovery or Quantum Ultra, operating in the conventional selected reaction monitoring (SRM) mode. The study covered 17 therapeutic drugs: 8 antifungal agents (anidulafungin, caspofungin, fluconazole, itraconazole, hydroxyitraconazole, posaconazole, voriconazole and voriconazole-N-oxide), 4 immunosuppressants (ciclosporine, everolimus, sirolimus and tacrolimus) and 5 protein kinase inhibitors (dasatinib, imatinib, nilotinib, sorafenib and sunitinib). The quantitative results obtained with HR-MS acquisition show detection specificity, assay precision, accuracy, linearity and sensitivity comparable to SRM acquisition. Importantly, HR-MS offers several benefits over TQ-MS technology: no need for SRM optimization, time savings when transferring an analysis from one MS to another, more complete information on what is in the samples, and easier troubleshooting. Our work demonstrates that U/HPLC coupled to Exactive HR-MS delivers results comparable to TQ-MS in routine quantitative drug analyses. Considering the advantages of HR-MS, these results suggest that, in the near future, there should be a shift in how routine quantitative analyses of small molecules, particularly therapeutic drugs, are performed.
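
    A minimal sketch of the extraction-window arithmetic used to build quantitation chromatograms from full-scan HR-MS data; the example m/z is an approximate value for protonated imatinib, used purely for illustration:

    def xic_window(theoretical_mz: float, ppm: float = 5.0) -> tuple[float, float]:
        """Bounds of a +/- ppm mass window around a theoretical m/z."""
        delta = theoretical_mz * ppm / 1e6
        return theoretical_mz - delta, theoretical_mz + delta

    # Approximate [M+H]+ of imatinib (illustrative value).
    lo, hi = xic_window(494.266)
    print(f"Integrate full-scan intensities with {lo:.4f} <= m/z <= {hi:.4f}")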

  16. Analytical method for the accurate determination of trichothecenes in grains using LC-MS/MS: a comparison between MRM transition and MS3 quantitation.

    PubMed

    Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn

    2012-07-01

    The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentrations of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying the MRM transition quantitation strategy; under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are

  17. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.

  18. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042
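
    A minimal sketch of the TAF idea for the hour-of-day dimension only (the study also separates factors by day type and vehicle class); the counts below are synthetic:

    import numpy as np

    rng = np.random.default_rng(1)
    hours = np.arange(24)
    diurnal = 1.0 + 0.8 * np.sin((hours - 5) / 24 * 2 * np.pi)
    counts = rng.poisson(1000 * diurnal, size=(365, 24)).astype(float)  # veh/h

    # Hour-of-day TAF: expected share of activity in each hour, relative to
    # the annual mean hourly volume.
    taf_hour = counts.mean(axis=0) / counts.mean()

    # Apportion an annual-average hourly volume to hour-specific estimates.
    aadt_hourly = 1200.0   # hypothetical annual average volume for a road link
    hourly_estimate = aadt_hourly * taf_hour
    print(hourly_estimate.round(1))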

  19. A quantitative method to analyse an open-ended questionnaire: A case study about the Boltzmann Factor

    NASA Astrophysics Data System (ADS)

    Rosario Battaglia, Onofrio; Di Paola, Benedetto

    2016-05-01

    This paper describes a quantitative method to analyse an open-ended questionnaire. Student responses to a specially designed written questionnaire are quantitatively analysed by non-hierarchical clustering, namely the k-means method. Through this we can characterise the behaviour of students with respect to their expertise in formulating explanations for phenomena or processes and/or in using a given model in different contexts. The physics topic is the Boltzmann factor, which allows the students to have a unifying view of different phenomena in different contexts.
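
    A minimal sketch of the clustering step, assuming student responses have been coded as binary indicator vectors (the coding scheme below is hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans

    # Each row is a student; each column flags one coded feature of the
    # written response (e.g., invokes the Boltzmann factor, transfers the
    # model to a new context, gives a purely descriptive answer, ...).
    X = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1],
        [1, 1, 1, 0],
        [0, 0, 1, 1],
    ])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("Cluster label per student:", km.labels_)
    print("Cluster centres:", km.cluster_centers_.round(2))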

  20. Study Quality in SLA: An Assessment of Designs, Analyses, and Reporting Practices in Quantitative L2 Research

    ERIC Educational Resources Information Center

    Plonsky, Luke

    2013-01-01

    This study assesses research and reporting practices in quantitative second language (L2) research. A sample of 606 primary studies, published from 1990 to 2010 in "Language Learning and Studies in Second Language Acquisition," was collected and coded for designs, statistical analyses, reporting practices, and outcomes (i.e., effect…

  1. Quantitative Content Analysis Procedures to Analyse Students' Reflective Essays: A Methodological Review of Psychometric and Edumetric Aspects

    ERIC Educational Resources Information Center

    Poldner, E.; Simons, P. R. J.; Wijngaards, G.; van der Schaaf, M. F.

    2012-01-01

    Reflective essays are a common way to develop higher education students' reflection ability. Researchers frequently analyse reflective essays based on quantitative content analysis procedures (QCA). However, the quality criteria that should be met in QCA are not straightforward. This article aims to: (1) develop a framework of quality requirements…

  2. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses.

    PubMed

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted by the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, the LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425

  3. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses

    PubMed Central

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted by the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, the LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425
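
    The exact definitions of LQLS and LQTS are given in the cited paper; one plausible reading uses the correlation of the two fingerprints for qualitative similarity and a projection (overall content) ratio for quantitative similarity. A sketch under that assumption:

    import numpy as np

    def linear_similarities(sample, reference):
        """Assumed forms: LQLS as the Pearson correlation of the fingerprints,
        LQTS as the projection of the sample onto the reference."""
        lqls = np.corrcoef(sample, reference)[0, 1]
        lqts = sample @ reference / (reference @ reference)
        return lqls, lqts

    reference = np.array([10.0, 55.0, 30.0, 5.0])   # peak areas of the RFP
    sample = 0.8 * reference + np.array([0.2, -1.0, 0.5, 0.1])

    lqls, lqts = linear_similarities(sample, reference)
    print(f"LQLS = {lqls:.3f}, LQTS = {lqts:.3f}")   # shape matches; content ~80%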

  4. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    PubMed

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose. PMID:27410113

  5. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    PubMed

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose.
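
    A greatly simplified, real-valued illustration of the underlying idea, isolating a slowly varying modulation on coarse wavelet scales and dividing it out (the published wavelet prism method operates on the complex CARS line shape and is considerably more involved):

    import numpy as np
    import pywt

    x = np.linspace(0.0, 1.0, 1024)
    line = np.exp(-((x - 0.5) / 0.01) ** 2)      # narrow resonant feature
    error = 1.0 + 0.3 * np.sin(2 * np.pi * x)    # slowly varying modulation error
    measured = (1.0 + line) * error              # distorted normalized spectrum

    # Keep only the coarse (approximation) scale of a wavelet decomposition
    # to estimate the slow modulation, then divide it out.
    coeffs = pywt.wavedec(measured, "db6", level=6)
    coeffs[1:] = [np.zeros_like(d) for d in coeffs[1:]]
    baseline = pywt.waverec(coeffs, "db6")[: measured.size]

    corrected = measured / baseline              # ~ 1 + line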

  6. THE NEGLECTED SIDE OF THE COIN: QUANTITATIVE BENEFIT-RISK ANALYSES IN MEDICAL IMAGING

    PubMed Central

    Zanzonico, Pat B.

    2016-01-01

    While it is implicitly recognized that the benefits of diagnostic imaging far outweigh any theoretical radiogenic risks, quantitative estimates of the benefits are rarely, if ever, juxtaposed with quantitative estimates of risk. This alone - expression of benefit in purely qualitative terms versus expression of risk in quantitative, and therefore seemingly more certain, terms - may well contribute to a skewed sense of the relative benefits and risks of diagnostic imaging among healthcare providers as well as patients. The current paper, therefore, briefly compares the benefits of diagnostic imaging in several cases, based on actual mortality or morbidity data if ionizing radiation were not employed, with theoretical estimates of radiogenic cancer mortality based on the “linear no-threshold” (LNT) dose-response model. PMID:26808890

  7. The Neglected Side of the Coin: Quantitative Benefit-risk Analyses in Medical Imaging.

    PubMed

    Zanzonico, Pat B

    2016-03-01

    While it is implicitly recognized that the benefits of diagnostic imaging far outweigh any theoretical radiogenic risks, quantitative estimates of the benefits are rarely, if ever, juxtaposed with quantitative estimates of risk. This alone - expression of benefit in purely qualitative terms versus expression of risk in quantitative, and therefore seemingly more certain, terms - may well contribute to a skewed sense of the relative benefits and risks of diagnostic imaging among healthcare providers as well as patients. The current paper, therefore, briefly compares the benefits of diagnostic imaging in several cases, based on actual mortality or morbidity data if ionizing radiation were not employed, with theoretical estimates of radiogenic cancer mortality based on the "linear no-threshold" (LNT) dose-response model. PMID:26808890

  8. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  9. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  10. Evaluation of the National Science Foundation's Local Course Improvement Program, Volume II: Quantitative Analyses.

    ERIC Educational Resources Information Center

    Kulik, James A.; And Others

    This report is the second of three volumes describing the results of the evaluation of the National Science Foundation (NSF) Local Course Improvement (LOCI) program. This volume describes the quantitative results of the program. Evaluation of the LOCI program involved answering questions in the areas of the need for science course improvement as…

  11. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR

    PubMed Central

    Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  12. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    PubMed

    Li, XueYan; Cheng, JinYun; Zhang, Jing; Teixeira da Silva, Jaime A; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446
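
    A minimal sketch of one widely used stability criterion for such studies, a geNorm-style stability value M (the mean pairwise variation of log-ratios with all other candidates); the expression values below are synthetic:

    import numpy as np

    def genorm_m(expr):
        """expr: samples x genes matrix of relative quantities (linear scale).
        Returns the geNorm-style stability value M per gene (lower = stabler)."""
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m = np.empty(n_genes)
        for j in range(n_genes):
            ratios = log_expr[:, [j]] - log_expr      # log-ratios vs all genes
            sd = ratios.std(axis=0, ddof=1)
            m[j] = np.delete(sd, j).mean()            # mean SD over other genes
        return m

    rng = np.random.default_rng(3)
    expr = rng.lognormal(mean=0.0, sigma=[0.10, 0.12, 0.50, 0.11], size=(29, 4))
    print("M values:", genorm_m(expr).round(3))       # third gene least stable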

  13. Evaluation of Faecalibacterium 16S rDNA genetic markers for accurate identification of swine faecal waste by quantitative PCR.

    PubMed

    Duan, Chuanren; Cui, Yamin; Zhao, Yi; Zhai, Jun; Zhang, Baoyun; Zhang, Kun; Sun, Da; Chen, Hang

    2016-10-01

    A genetic marker within the 16S rRNA gene of Faecalibacterium was identified for use in a quantitative PCR (qPCR) assay to detect swine faecal contamination in water. A total of 146,038 bacterial sequences were obtained using 454 pyrosequencing. By comparative bioinformatics analysis of Faecalibacterium sequences with those of numerous swine and other animal species, swine-specific Faecalibacterium 16S rRNA gene sequences were identified, and polymerase chain reaction (PCR) primer sets were designed and tested against faecal DNA samples from swine and non-swine sources. Two PCR primer sets, PFB-1 and PFB-2, showed the highest specificity to swine faecal waste and had no cross-reaction with other animal samples. PFB-1 and PFB-2 amplified 16S rRNA gene sequences from 50 samples of swine with positive ratios of 86 and 90%, respectively. We compared swine-specific Faecalibacterium qPCR assays for the purpose of quantifying the newly identified markers. The quantification limits (LOQs) of the PFB-1 and PFB-2 markers in environmental water were 6.5 and 2.9 copies per 100 ml, respectively. Of the swine-associated assays tested, PFB-2 was more sensitive in detecting swine faecal waste and quantifying the microbial load. Furthermore, the microbial abundance and diversity of the microbiomes of swine and other animal faeces were estimated using operational taxonomic units (OTUs). Species specificity was demonstrated for the microbial populations present in various animal faeces. PMID:27353369

  14. Sample Preparation Approaches for iTRAQ Labeling and Quantitative Proteomic Analyses in Systems Biology.

    PubMed

    Spanos, Christos; Moore, J Bernadette

    2016-01-01

    Among a variety of global quantification strategies utilized in mass spectrometry (MS)-based proteomics, isobaric tags for relative and absolute quantitation (iTRAQ) are an attractive option for examining the relative amounts of proteins in different samples. The inherent complexity of mammalian proteomes and the diversity of protein physicochemical properties mean that complete proteome coverage is still unlikely from a single analytical method. Numerous options exist for reducing protein sample complexity and resolving digested peptides prior to MS analysis. Indeed, the reliability and efficiency of protein identification and quantitation from an iTRAQ workflow strongly depend on sample preparation upstream of MS. Here we describe our methods for: (1) total protein extraction from immortalized cells; (2) subcellular fractionation of murine tissue; (3) protein sample desalting, digestion, and iTRAQ labeling; (4) peptide separation by strong cation-exchange high-performance liquid chromatography; and (5) peptide separation by isoelectric focusing.

  15. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  16. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  17. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and
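
    A sketch of the standard thermodynamic relations connecting computed reaction free energies to the observables discussed above; all numbers below are placeholders rather than values from the paper, and the free energies are assumed to already include solvation and proton terms:

    # Constants (kcal-based, 298 K).
    F = 23.061        # Faraday constant, kcal mol^-1 V^-1
    RT_LN10 = 1.364   # RT ln(10), kcal mol^-1
    E_SHE = 4.28      # absolute SHE potential in V (one common literature choice)

    dG_red = -105.0   # illustrative free energy of A + e- -> A-, kcal/mol
    dG_deprot = 12.0  # illustrative free energy of HA -> A- + H+, kcal/mol

    E_vs_she = -dG_red / F - E_SHE   # reduction potential referenced to the SHE
    pKa = dG_deprot / RT_LN10        # deprotonation free energy -> pKa

    print(f"E = {E_vs_she:.2f} V vs SHE, pKa = {pKa:.1f}")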

  18. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    PubMed

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene was different depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in the three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples in a single set did not produce the selection of any stably expressing primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit.

  19. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    PubMed

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene was different depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in the three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples in a single set did not produce the selection of any stably expressing primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit. PMID:26897117

  20. Quantitative ultrastructural analyses of rabbit osteoclasts. The effect of in vivo treatment with sodium salicylate.

    PubMed

    Kusakabe, A; Francis, M J

    1984-03-01

    Rabbit tibial osteoclasts were examined by electron microscopy and quantitative data on their ultrastructural morphology collected by methods described by Holtrop. Osteoclasts from the tibiae of two groups of rabbits were compared: one fed a commercial diet and the other fed the same diet containing 2% sodium salicylate (v/w). No changes in the total number of tibial osteoclasts were detected but average osteoclast size, numbers of nuclei per osteoclast and ruffled border and clear zone areas decreased (p less than 0.05), as did the proportion of osteoclasts directly attached to bone. These results suggest that osteoclast activity is inhibited by in vivo salicylate therapy.

  1. Quantitative Proteomic Analyses of Molecular Mechanisms Associated with Cytoplasmic Incompatibility in Drosophila melanogaster Induced by Wolbachia.

    PubMed

    Yuan, Lin-Ling; Chen, Xiulan; Zong, Qiong; Zhao, Ting; Wang, Jia-Lin; Zheng, Ya; Zhang, Ming; Wang, Zailong; Brownlie, Jeremy C; Yang, Fuquan; Wang, Yu-Feng

    2015-09-01

    To investigate the molecular mechanisms of cytoplasmic incompatibility (CI) induced by Wolbachia bacteria in Drosophila melanogaster, we applied an isobaric tags for relative and absolute quantitation (iTRAQ)-based quantitative proteomic assay to identify differentially expressed proteins extracted from spermathecae and seminal receptacles (SSR) of uninfected females mated with either 1-day-old Wolbachia-uninfected (1T) or infected males (1W) or 5-day-old infected males (5W). In total, 1317 proteins were quantified; 83 proteins were identified as having at least a 1.5-fold change in expression when 1W was compared with 1T. Differentially expressed proteins were related to metabolism, immunity, and reproduction. Wolbachia changed the expression of seminal fluid proteins (Sfps). Wolbachia may disrupt the abundance of proteins in SSR by affecting ubiquitin-proteasome-mediated proteolysis. Knocking down two Sfp genes (CG9334 and CG2668) in Wolbachia-free males resulted in significantly lower embryonic hatch rates with a phenotype of chromatin bridges. Wolbachia-infected females may rescue the hatch rates. This suggests that the changed expression of some Sfps may be one of the mechanisms of CI induced by Wolbachia. This study provides a panel of candidate proteins that may be involved in the interaction between Wolbachia and their insect hosts and, through future functional studies, may help to elucidate the underlying mechanisms of Wolbachia-induced CI.

  2. Quantitative analyses of spectral measurement error based on Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin

    2015-03-01

    The spectral measurement error is controlled by the resolution and the sensitivity of the spectroscopic instrument and by the instability of the involved environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, an MC simulation was carried out at a wavelength of 1310 nm, simulating a 2% Intralipid solution. The MC simulation was performed with 10^10 photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.
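
    A toy illustration of propagating positioning uncertainty into signal error by Monte Carlo sampling; the one-line signal model is a crude stand-in for the photon-transport simulation used in the study, and all parameter values are invented:

    import numpy as np

    rng = np.random.default_rng(4)

    # Crude diffuse-signal model: detected intensity falls off with
    # source-detector distance r (arbitrary attenuation constant).
    signal = lambda r: np.exp(-2.0 * r) / r**2

    r_ref = 2.0        # intended reference position, mm
    sigma_pos = 0.01   # assumed positioning uncertainty of the device, mm

    r_actual = rng.normal(r_ref, sigma_pos, size=100_000)
    rel_error = (signal(r_actual) - signal(r_ref)) / signal(r_ref)
    print(f"Relative signal error: mean = {rel_error.mean():.2e}, "
          f"SD = {rel_error.std():.2e}")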

  3. Interfacial undercooling in solidification of colloidal suspensions: analyses with quantitative measurements

    PubMed Central

    You, Jiaxue; Wang, Lilin; Wang, Zhijun; Li, Junjie; Wang, Jincheng; Lin, Xin; Huang, Weidong

    2016-01-01

    Interfacial undercooling in the complex solidification of colloidal suspensions is of significance and remains a puzzling problem. Two types of interfacial undercooling are thought to be involved in the freezing of colloidal suspensions, i.e., solute constitutional supercooling (SCS) caused by additives in the solvent and particulate constitutional supercooling (PCS) caused by particles. However, quantitative identification of the interfacial undercooling in the solidification of colloidal suspensions is still absent; thus, the question of which type of undercooling is dominant in this complex system remains unanswered. Here, we quantitatively measured the static and dynamic interface undercoolings of SCS and PCS in ideal and practical colloidal systems. We show that the interfacial undercooling primarily comes from SCS caused by the additives in the solvent, while PCS is minor. This finding implies that the thermodynamic effect of particles from the PCS is not the fundamental physical mechanism for pattern formation of cellular growth and lamellar structure in the solidification of colloidal suspensions, a general case of the ice-templating method. Instead, the patterns in the ice-templating method can be controlled effectively by adjusting the additives. PMID:27329394

  4. Quantitative analyses of tartaric acid based on terahertz time domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Cao, Binghua; Fan, Mengbao

    2010-10-01

    Terahertz waves occupy the region of the electromagnetic spectrum between microwaves and infrared light. Quantitative analysis based on terahertz spectroscopy is very important for the application of terahertz techniques, but how to realize it is still under study. L-tartaric acid is widely used as an acidulant in beverages and other foods, such as soft drinks, wine, candy, bread and some colloidal sweetmeats. In this paper, terahertz time-domain spectroscopy is applied to quantify tartaric acid. Two methods are employed to process the terahertz spectra of samples with different contents of tartaric acid. The first is linear regression combined with correlation analysis; the second is partial least squares (PLS), in which the absorption spectra in the 0.8-1.4 THz region are used to quantify the tartaric acid. To compare the performance of these two approaches, the relative error of the two methods is analyzed. For this experiment, the first method performs better than the second. However, the first method is suitable for the quantitative analysis of materials that have obvious terahertz absorption peaks, whereas for materials without obvious terahertz absorption peaks the second is more appropriate.
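
    A minimal sketch of the PLS calibration step on synthetic absorption spectra (band position, noise level and sample counts are invented for illustration):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    freqs = np.linspace(0.8, 1.4, 120)         # THz axis
    conc = rng.uniform(0.5, 5.0, size=24)      # tartaric acid contents, a.u.

    # Synthetic spectra: concentration-scaled absorption band plus noise.
    band = np.exp(-((freqs - 1.08) / 0.05) ** 2)
    X = conc[:, None] * band[None, :] + rng.normal(0, 0.02, (24, freqs.size))

    pls = PLSRegression(n_components=3).fit(X[:18], conc[:18])
    pred = pls.predict(X[18:]).ravel()
    print("Relative errors:", (np.abs(pred - conc[18:]) / conc[18:]).round(3))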

  5. Interfacial undercooling in solidification of colloidal suspensions: analyses with quantitative measurements.

    PubMed

    You, Jiaxue; Wang, Lilin; Wang, Zhijun; Li, Junjie; Wang, Jincheng; Lin, Xin; Huang, Weidong

    2016-01-01

    Interfacial undercooling in the complex solidification of colloidal suspensions is of significance and remains a puzzling problem. Two types of interfacial undercooling are thought to be involved in the freezing of colloidal suspensions, i.e., solute constitutional supercooling (SCS) caused by additives in the solvent and particulate constitutional supercooling (PCS) caused by particles. However, quantitative identification of the interfacial undercooling in the solidification of colloidal suspensions is still absent; thus, the question of which type of undercooling is dominant in this complex system remains unanswered. Here, we quantitatively measured the static and dynamic interface undercoolings of SCS and PCS in ideal and practical colloidal systems. We show that the interfacial undercooling primarily comes from SCS caused by the additives in the solvent, while PCS is minor. This finding implies that the thermodynamic effect of particles from the PCS is not the fundamental physical mechanism for pattern formation of cellular growth and lamellar structure in the solidification of colloidal suspensions, a general case of the ice-templating method. Instead, the patterns in the ice-templating method can be controlled effectively by adjusting the additives. PMID:27329394

  6. Quantitative characterization of metastatic disease in the spine. Part II. Histogram-based analyses

    SciTech Connect

    Whyne, Cari; Hardisty, Michael; Wu, Florence; Skrinskas, Tomas; Clemons, Mark; Gordon, Lyle; Basran, Parminder S.

    2007-08-15

    Radiological imaging is essential to the appropriate management of patients with bone metastasis; however, there have been no widely accepted guidelines as to the optimal method for quantifying the potential impact of skeletal lesions or for evaluating response to treatment. The current inability to rapidly quantify the response of bone metastases excludes patients with cancer and bone disease from participating in clinical trials of many new treatments, as these studies frequently require patients with so-called measurable disease. Computed tomography (CT) can provide excellent skeletal detail with high sensitivity for the diagnosis of bone metastases. The purpose of this study was to establish an objective method to quantitatively characterize disease in the bony spine using CT-based segmentations. It was hypothesized that histogram analysis of CT vertebral density distributions would enable standardized segmentation of tumor tissue and consequently allow quantification of disease in the metastatic spine. Thirty-two healthy vertebral CT scans were first studied to establish a baseline characterization. The histograms of the trabecular centrums were found to be Gaussian distributions (average root-mean-square difference = 30 voxel counts), as expected for a uniform material. Intrapatient vertebral-level similarity was also observed, as the means were not significantly different (p > 0.8). Thus, a patient-specific healthy vertebral body histogram is able to characterize healthy trabecular bone throughout that individual's thoracolumbar spine. Eleven metastatically involved vertebrae were analyzed to determine the characteristics of the lytic and blastic bone voxels relative to the healthy bone. Lytic and blastic tumors were segmented as connected areas with voxel intensities between specified thresholds. The tested thresholds were μ − 1.0σ, μ − 1.5σ, and μ − 2.0σ for lytic and μ + 2.0σ, μ + 3.0σ, and μ + 3.5σ for blastic tissue, where
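
    A minimal sketch of the histogram-threshold segmentation on synthetic voxel intensities, using one of the tested threshold pairs:

    import numpy as np

    rng = np.random.default_rng(6)
    # Toy voxel intensities for one vertebral centrum: healthy trabecular
    # bone is ~Gaussian; a low-density (lytic) lesion is mixed in.
    healthy = rng.normal(220.0, 60.0, size=9000)
    lesion = rng.normal(40.0, 25.0, size=1000)
    voxels = np.concatenate([healthy, lesion])

    # Patient-specific baseline from healthy trabecular bone.
    mu, sigma = healthy.mean(), healthy.std(ddof=1)

    lytic_mask = voxels < mu - 1.5 * sigma     # one of the tested lytic cutoffs
    blastic_mask = voxels > mu + 3.0 * sigma   # one of the tested blastic cutoffs
    print(f"Lytic fraction: {lytic_mask.mean():.1%}, "
          f"blastic fraction: {blastic_mask.mean():.1%}")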

  7. In-field measurements of PCDF emissions from coal combustion and their quantitative analyses

    SciTech Connect

    Pehlivan, M.; Beduk, D.; Pehlivan, E.

    2008-07-01

    In this study, a series of polychlorinated dibenzofurans (PCDFs) emitted to the surrounding soil as a result of the combustion of coal and wood in industrial steam boilers and household stoves were identified. Levels of polychlorinated dibenzofurans (PCDFs) in soil samples were measured at different sites in proximity to the municipal solid waste incinerator (MSWI) to determine baseline contamination and the contributory role of incinerator emissions. PCDF contaminants were concentrated from soil samples and isolated from other materials by chromatographic methods. PCDF isomers were identified separately by column chromatography, utilizing columns packed with materials such as Kieselgel/44 vol% H2SO4, Macro Alumina B Super 1, Mix. Column, Bio Beads S-X3 Gel Chromatography, and Min Alumina B Super 1 + Kieselgel/AgNO3, and their quantitative determinations were performed by GC/MS (gas chromatography/mass spectrometry).

  8. A Quantitative Microtiter Assay for Sialylated Glycoform Analyses Using Lectin Complexes.

    PubMed

    Srinivasan, Karunya; Roy, Sucharita; Washburn, Nathaniel; Sipsey, Sandra F; Meccariello, Robin; Meador, James W; Ling, Leona E; Manning, Anthony M; Kaundinya, Ganesh V

    2015-07-01

    Fidelity of glycan structures is a key requirement for biotherapeutics, with carbohydrates playing an important role for therapeutic efficacy. Comprehensive glycan profiling techniques such as liquid chromatography (LC) and mass spectrometry (MS), while providing detailed description of glycan structures, require glycan cleavage, labeling, and paradigms to deconvolute the considerable data sets they generate. On the other hand, lectins as probes on microarrays have recently been used in orthogonal approaches for in situ glycoprofiling but require analyte labeling to take advantage of the capabilities of automated microarray readers and data analysis they afford. Herein, we describe a lectin-based microtiter assay (lectin-enzyme-linked immunosorbent assay [ELISA]) to quantify terminal glycan moieties, applicable to in vitro and in-cell glycan-engineered Fc proteins as well as intact IgGs from intravenous immunoglobulin (IVIG), a blood product containing pooled polyvalent IgG antibodies extracted from plasma from healthy human donors. We corroborate our findings with industry-standard LC-MS profiling. This "customizable" ELISA juxtaposes readouts from multiple lectins, focusing on a subset of glycoforms, and provides the ability to discern single- versus dual-arm glycosylation while defining levels of epitopes at sensitivities comparable to MS. Extendable to other biologics, this ELISA can be used stand-alone or complementary to MS for quantitative glycan analysis. PMID:25851037

  9. A Quantitative Microtiter Assay for Sialylated Glycoform Analyses Using Lectin Complexes

    PubMed Central

    Srinivasan, Karunya; Washburn, Nathaniel; Sipsey, Sandra F.; Meccariello, Robin; Meador, James W.; Ling, Leona E.; Manning, Anthony M.; Kaundinya, Ganesh V.

    2015-01-01

    Fidelity of glycan structures is a key requirement for biotherapeutics, with carbohydrates playing an important role for therapeutic efficacy. Comprehensive glycan profiling techniques such as liquid chromatography (LC) and mass spectrometry (MS), while providing detailed description of glycan structures, require glycan cleavage, labeling, and paradigms to deconvolute the considerable data sets they generate. On the other hand, lectins as probes on microarrays have recently been used in orthogonal approaches for in situ glycoprofiling but require analyte labeling to take advantage of the capabilities of automated microarray readers and data analysis they afford. Herein, we describe a lectin-based microtiter assay (lectin–enzyme-linked immunosorbent assay [ELISA]) to quantify terminal glycan moieties, applicable to in vitro and in-cell glycan-engineered Fc proteins as well as intact IgGs from intravenous immunoglobulin (IVIG), a blood product containing pooled polyvalent IgG antibodies extracted from plasma from healthy human donors. We corroborate our findings with industry-standard LC-MS profiling. This “customizable” ELISA juxtaposes readouts from multiple lectins, focusing on a subset of glycoforms, and provides the ability to discern single- versus dual-arm glycosylation while defining levels of epitopes at sensitivities comparable to MS. Extendable to other biologics, this ELISA can be used stand-alone or complementary to MS for quantitative glycan analysis. PMID:25851037

  10. Structured multiplicity and confirmatory statistical analyses in pharmacodynamic studies using the quantitative electroencephalogram.

    PubMed

    Ferber, Georg; Staner, Luc; Boeijinga, Peter

    2011-09-30

    Pharmacodynamic (PD) clinical studies are characterised by a high degree of multiplicity. This multiplicity is the result of the design of these studies, which typically investigate effects on a number of biomarkers at various doses and multiple time points. Measurements are taken at many or all points of a "hyper-grid" that can be understood as the cross-product of a number of dimensions, each of which has typically 3-30 discrete values. This exploratory design helps in understanding the phenomena under investigation, but has made a confirmatory statistical analysis of these studies difficult, so that such an analysis is often missing in this type of study. In this contribution we show that the cross-product structure of PD studies allows several well-known techniques for addressing multiplicity to be combined in an effective way, so that a confirmatory analysis of these studies becomes feasible without unrealistic loss of power. We demonstrate the application of this technique in two studies that use the quantitative EEG (qEEG) as a biomarker for drug activity at the GABA-A receptor. QEEG studies suffer particularly from the curse of multiplicity since, in addition to the common dimensions like dose and time, the qEEG is measured at many locations over the scalp and in a number of frequency bands, which inflates the multiplicity by a factor of about 250.
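
    The paper's structured procedures exploit the cross-product design; as a generic single-step baseline for comparison, a sketch of a Holm correction applied across the whole hyper-grid:

    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(7)
    # Hypothetical p-values on a dose x time x electrode x band grid
    # (3 x 6 x 10 x 5 = 900 tests).
    pvals = rng.uniform(size=(3, 6, 10, 5))
    pvals[1, 2] = 1e-6   # one genuinely active dose/time cell, all leads/bands

    reject, p_adj, *_ = multipletests(pvals.ravel(), alpha=0.05, method="holm")
    print(f"{reject.sum()} of {reject.size} hypotheses rejected after Holm")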

  11. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for the analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
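
    A hedged sketch of the general idea: a regression model trained on pairwise descriptors of matched pairs to predict the activity change of a transformation. The descriptors below are random placeholders (real ones would encode the transformation and its local chemical environment), and the random forest is used only as a stand-in learner.

        # Transformation-QSAR sketch on synthetic data (placeholder descriptors).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n_pairs, n_desc = 500, 32
        X = rng.normal(size=(n_pairs, n_desc))      # pairwise descriptors per MMP
        # synthetic activity change (delta pIC50) with noise
        delta_pic50 = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n_pairs)

        X_tr, X_te, y_tr, y_te = train_test_split(X, delta_pic50, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out transformations:", round(model.score(X_te, y_te), 3))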

  12. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire, for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant losses due to secondary effects are, under what conditions, and in which regions. Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  13. Health risks in wastewater irrigation: comparing estimates from quantitative microbial risk analyses and epidemiological studies.

    PubMed

    Mara, D D; Sleigh, P A; Blumenthal, U J; Carr, R M

    2007-03-01

    The combination of standard quantitative microbial risk analysis (QMRA) techniques and 10,000-trial Monte Carlo risk simulations was used to estimate the human health risks associated with the use of wastewater for unrestricted and restricted crop irrigation. A risk of rotavirus infection of 10⁻² per person per year (pppy) was used as the reference level of acceptable risk. Using the model scenario of involuntary soil ingestion for restricted irrigation, the risk of rotavirus infection is approximately 10⁻² pppy when the wastewater contains ≤10⁶ Escherichia coli per 100 ml and when local agricultural practices are highly mechanised. For labour-intensive agriculture the risk of rotavirus infection is approximately 10⁻² pppy when the wastewater contains ≤10⁵ E. coli per 100 ml; however, the wastewater quality should be ≤10⁴ E. coli per 100 ml when children under 15 are exposed. With the model scenario of lettuce consumption for unrestricted irrigation, the use of wastewaters containing ≤10⁴ E. coli per 100 ml results in a rotavirus infection risk of approximately 10⁻² pppy; however, based on epidemiological evidence from Mexico, the current WHO guideline level of ≤1,000 E. coli per 100 ml should be retained for root crops eaten raw. PMID:17402278
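
    A minimal 10,000-trial Monte Carlo sketch in the spirit of the lettuce-consumption scenario above. Every numerical input (wastewater quality range, rotavirus:E. coli ratio, water retained on lettuce, servings per year) is an illustrative assumption, and the beta-Poisson parameters are values commonly cited for rotavirus in the QMRA literature, not figures taken from this paper.

        import numpy as np

        rng = np.random.default_rng(42)
        trials = 10_000

        ecoli_per_100ml = 10 ** rng.uniform(3, 4, trials)    # assumed wastewater quality
        virus_ratio     = 10 ** rng.uniform(-5, -4, trials)  # assumed rotavirus:E. coli ratio
        ml_on_lettuce   = rng.uniform(5, 15, trials)         # assumed water retained per serving
        servings        = 150                                # assumed servings per year

        dose = ecoli_per_100ml * virus_ratio * ml_on_lettuce / 100.0

        # beta-Poisson dose-response (rotavirus parameters often cited in QMRA)
        alpha, n50 = 0.253, 6.17
        p_single = 1.0 - (1.0 + dose * (2 ** (1 / alpha) - 1) / n50) ** (-alpha)
        p_annual = 1.0 - (1.0 - p_single) ** servings

        print("median annual infection risk: %.2e" % np.median(p_annual))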

  14. Use of quantitative shape-activity relationships to model the photoinduced toxicity of polycyclic aromatic hydrocarbons: Electron density shape features accurately predict toxicity

    SciTech Connect

    Mezey, P.G.; Zimpel, Z.; Warburton, P.; Walker, P.D.; Irvine, D.G.; Huang, X.D.; Dixon, D.G.; Greenberg, B.M.

    1998-07-01

    The quantitative shape-activity relationship (QShAR) methodology, based on accurate three-dimensional electron densities and detailed shape analysis methods, has been applied to a Lemna gibba photoinduced toxicity data set of 16 polycyclic aromatic hydrocarbon (PAH) molecules. In the first phase of the studies, a shape fragment QShAR database of PAHs was developed. The results provide a very good match to toxicity based on a combination of the local shape features of single rings in comparison to the central ring of anthracene and a more global shape feature involving larger molecular fragments. The local shape feature appears as a descriptor of the susceptibility of PAHs to photomodification and the global shape feature is probably related to photosensitization activity.

  15. Laboratory Assay of Brood Care for Quantitative Analyses of Individual Differences in Honey Bee (Apis mellifera) Affiliative Behavior

    PubMed Central

    Shpigler, Hagai Y.; Robinson, Gene E.

    2015-01-01

    Care of offspring is a form of affiliative behavior that is fundamental to studies of animal social behavior. Insects do not figure prominently in this topic because Drosophila melanogaster and other traditional models show little if any paternal or maternal care. However, the eusocial honey bee exhibits cooperative brood care with larvae receiving intense and continuous care from their adult sisters, but this behavior has not been well studied because a robust quantitative assay does not exist. We present a new laboratory assay that enables quantification of group or individual honey bee brood “nursing behavior” toward a queen larva. In addition to validating the assay, we used it to examine the influence of the age of the larva and the genetic background of the adult bees on nursing performance. This new assay also can be used in the future for mechanistic analyses of eusociality and comparative analyses of affiliative behavior with other animals. PMID:26569402

  16. Comparative Genomics Analyses Reveal Extensive Chromosome Colinearity and Novel Quantitative Trait Loci in Eucalyptus

    PubMed Central

    Weng, Qijie; Li, Mei; Yu, Xiaoli; Guo, Yong; Wang, Yu; Zhang, Xiaohong; Gan, Siming

    2015-01-01

    Dense genetic maps, along with quantitative trait loci (QTLs) detected on such maps, are powerful tools for genomics and molecular breeding studies. In the important woody genus Eucalyptus, the recent release of the E. grandis genome sequence allows for sequence-based genomic comparison and searching for positional candidate genes within QTL regions. Here, dense genetic maps were constructed for E. urophylla and E. tereticornis using genomic simple sequence repeats (SSR), expressed sequence tag (EST) derived SSR, EST-derived cleaved amplified polymorphic sequence (EST-CAPS), and diversity arrays technology (DArT) markers. The E. urophylla and E. tereticornis maps comprised 700 and 585 markers across 11 linkage groups, totaling 1,208.2 and 1,241.4 cM in length, respectively. Extensive synteny and colinearity were observed as compared to three earlier DArT-based eucalypt maps (two maps with E. grandis × E. urophylla and one map of E. globulus) and with the E. grandis genome sequence. Fifty-three QTLs for growth (10–56 months of age) and wood density (56 months) were identified in 22 discrete regions on both maps, in which only one colocalization was found between growth and wood density. Novel QTLs were revealed as compared with those previously detected on DArT-based maps for similar ages in Eucalyptus. Eleven to 585 positional candidate genes were obtained for a 56-month-old QTL by aligning the QTL confidence interval with the E. grandis genome. These results will assist in comparative genomics studies, targeted gene characterization, and marker-assisted selection in Eucalyptus and the related taxa. PMID:26695430

  17. Comparative analyses of genomic locations and race specificities of loci for quantitative resistance to Pyricularia grisea in rice and barley.

    PubMed

    Chen, Huilan; Wang, Shiping; Xing, Yongzhong; Xu, Caiguo; Hayes, Patrick M; Zhang, Qifa

    2003-03-01

    Comparative genomic analyses have revealed extensive colinearity in gene orders in distantly related taxa in mammals and grasses, which opened new horizons for evolutionary study. The objective of our study was to assess syntenic relationships of quantitative trait loci (QTL) for disease resistance in cereals by using a model system in which rice and barley were used as the hosts and the blast fungus Pyricularia grisea Sacc. as the pathogen. In total, 12 QTL against three isolates were identified in rice; two had effects on all three isolates, and the other 10 had effects on only one or two of the three isolates. Twelve QTL for blast resistance were identified in barley; one had an effect on all three isolates, and the other 11 had effects on only one or two of the three isolates. The observed isolate specificity led to a hypothesis about the durability of quantitative resistance commonly observed in many plant host-pathogen systems. Four pairs of the QTL showed corresponding map positions between rice and barley; two of the four QTL pairs had completely conserved isolate specificity, and the other two QTL pairs had partially conserved isolate specificity. Such corresponding locations and conserved specificity suggested a common origin and conserved functionality of the genes underlying the QTL for quantitative resistance and may have utility in gene discovery, understanding the function of the genomes, and identifying the evolutionary forces that structured the organization of the grass genomes.

  18. Quantitative Proteomic and Genetic Analyses of the Schizophrenia Susceptibility Factor Dysbindin Identify Novel Roles of the BLOC-1 Complex

    PubMed Central

    Gokhale, Avanti; Larimore, Jennifer; Werner, Erica; So, Lomon; De Luca, Andres Moreno; Lese-Martin, Christa; Lupashin, Vladimir V.; Smith, Yoland; Faundez, Victor

    2012-01-01

    The Biogenesis of Lysosome-Related Organelles Complex 1 (BLOC-1) is a protein complex containing the schizophrenia susceptibility factor dysbindin, which is encoded by the gene DTNBP1. However, mechanisms engaged by dysbindin defining schizophrenia susceptibility pathways have not been quantitatively elucidated. Here, we discovered prevalent and novel cellular roles of the BLOC-1 complex in neuronal cells by performing large-scale Stable Isotope Labeling by Amino acids in Cell culture (SILAC) quantitative proteomics combined with genetic analyses in dysbindin-null mice (Mus musculus) and the genomes of schizophrenia patients. We identified 24 proteins that associate with the BLOC-1 complex, many of which were altered in content/distribution in cells or tissues deficient in BLOC-1. New findings include BLOC-1 interactions with the COG complex, a Golgi apparatus tether, and the antioxidant enzymes peroxiredoxins 1-2. Importantly, loci encoding eight of the 24 proteins are affected by genomic copy number variation in schizophrenia patients. Thus, our quantitative proteomic studies expand the functional repertoire of the BLOC-1 complex and provide insight into putative molecular pathways of schizophrenia susceptibility. PMID:22423091

  19. Stability Test and Quantitative and Qualitative Analyses of the Amino Acids in Pharmacopuncture Extracted from Scolopendra subspinipes mutilans

    PubMed Central

    Cho, GyeYoon; Han, KyuChul; Yoon, JinYoung

    2015-01-01

    Objectives: Scolopendra subspinipes mutilans (S. subspinipes mutilans) is known as a traditional medicine and includes various amino acids, peptides and proteins. The amino acids in the pharmacopuncture extracted from S. subspinipes mutilans by using derivatization methods were analyzed quantitatively and qualitatively by using high performance liquid chromatography (HPLC) over a 12 month period to confirm its stability. Methods: Amino acids of pharmacopuncture extracted from S. subspinipes mutilans were derivatized by using O-phthaldialdehyde (OPA) and 9-fluorenylmethoxycarbonyl chloride (FMOC) reagents and were analyzed using HPLC. The amino acids were detected by using a diode array detector (DAD) and a fluorescence detector (FLD) to compare a mixed amino acid standard (STD) to the pharmacopuncture from centipedes. The stability tests on the pharmacopuncture from centipedes were done using HPLC under three conditions: a room temperature test chamber, an acceleration test chamber, and a cold test chamber. Results: The pharmacopuncture from centipedes was prepared by using the method of the Korean Pharmacopuncture Institute (KPI) and through quantitative analyses was shown to contain 9 amino acids of the 16 amino acids in the mixed amino acid STD. The amounts of the amino acids in the pharmacopuncture from centipedes were 34.37 ppm of aspartate, 123.72 ppm of arginine, 170.63 ppm of alanine, 59.55 ppm of leucine and 57 ppm of lysine. The relative standard deviation (RSD %) results for the pharmacopuncture from centipedes had a maximum value of 14.95% and a minimum value of 1.795% across the room temperature, acceleration, and cold test chamber stability tests. Conclusion: Stability tests on and quantitative and qualitative analyses of the amino acids in the pharmacopuncture extracted from centipedes by using derivatization methods were performed by using HPLC. Through research, we hope to determine the relationship between time and the
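
    For reference, the relative standard deviation reported above is a simple percentage, RSD % = 100 × s / mean, over repeated measurements; a minimal sketch with hypothetical replicate values follows.

        import numpy as np

        arginine_ppm = np.array([123.7, 121.9, 125.4, 119.8, 124.1])  # hypothetical replicates
        rsd = 100 * arginine_ppm.std(ddof=1) / arginine_ppm.mean()    # sample SD / mean
        print(f"RSD = {rsd:.2f} %")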

  20. Quantitation and Identification of Intact Major Milk Proteins for High-Throughput LC-ESI-Q-TOF MS Analyses

    PubMed Central

    Vincent, Delphine; Elkins, Aaron; Condina, Mark R.; Ezernieks, Vilnis; Rochfort, Simone

    2016-01-01

    Cow’s milk is an important source of proteins in human nutrition. On average, cow’s milk contains 3.5% protein. The most abundant proteins in bovine milk are caseins and some of the whey proteins, namely beta-lactoglobulin, alpha-lactalbumin, and serum albumin. A number of allelic variants and post-translationally modified forms of these proteins have been identified. Their occurrence varies with breed, individuality, stage of lactation, and health and nutritional status of the animal. It is therefore essential to have reliable methods of detection and quantitation of these proteins. Traditionally, major milk proteins are quantified using liquid chromatography (LC) with ultraviolet (UV) detection. However, as these protein variants co-elute to some degree, another dimension of separation is beneficial to accurately measure their amounts. Mass spectrometry (MS) offers such a tool. In this study, we tested several RP-HPLC and MS parameters to optimise the analysis of intact bovine proteins from milk. From our tests, we developed an optimum method that includes a 20-28-40% phase B gradient with 0.02% TFA in both mobile phases, at 0.2 mL/min flow rate, using 75°C for the C8 column temperature, scanning every 3 sec over a 600–3000 m/z window. The optimisations were performed using external standards commercially purchased for which ionisation efficiency, linearity of calibration, LOD, LOQ, sensitivity, selectivity, precision, reproducibility, and mass accuracy were demonstrated. From the MS analysis, we can use extracted ion chromatograms (EICs) of specific ion series of known proteins and integrate peaks at a defined retention time (RT) window for quantitation purposes. This optimum quantitative method was successfully applied to two bulk milk samples from different breeds, Holstein-Friesian and Jersey, to assess differences in protein variant levels. PMID:27749892
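
    A minimal sketch of the EIC-based quantitation step: integrate intensity over a defined RT window. The chromatogram, peak position, and window below are synthetic stand-ins, not values from the study.

        import numpy as np

        rt = np.linspace(0.0, 30.0, 3000)                       # retention time, min
        eic = 1e5 * np.exp(-0.5 * ((rt - 18.2) / 0.15) ** 2)    # synthetic EIC peak

        lo, hi = 17.8, 18.6                                     # assumed RT window
        m = (rt >= lo) & (rt <= hi)
        x, y = rt[m], eic[m]
        peak_area = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)) # trapezoidal rule
        print(f"peak area in {lo}-{hi} min window: {peak_area:.3e}")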

  1. Cloning, characterisation, and comparative quantitative expression analyses of receptor for advanced glycation end products (RAGE) transcript forms.

    PubMed

    Sterenczak, Katharina A; Willenbrock, Saskia; Barann, Matthias; Klemke, Markus; Soller, Jan T; Eberle, Nina; Nolte, Ingo; Bullerdiek, Jörn; Murua Escobar, Hugo

    2009-04-01

    RAGE is a member of the immunoglobulin superfamily of cell surface molecules playing key roles in pathophysiological processes, e.g. immune/inflammatory disorders, Alzheimer's disease, diabetic arteriosclerosis and tumourigenesis. In humans, 19 naturally occurring RAGE splicing variants resulting in either N-terminally or C-terminally truncated proteins were identified and have lately been discussed as mechanisms for receptor regulation. Accordingly, deregulation of sRAGE levels has been associated with several diseases, e.g. Alzheimer's disease, Type 1 diabetes, and rheumatoid arthritis. Administration of recombinant sRAGE to animal models of cancer blocked tumour growth successfully. In spite of its obvious relationship to cancer and metastasis, data focusing on sRAGE deregulation in tumours are rare. In this study we screened a set of tumours, healthy tissues and various cancer cell lines for RAGE splicing variants and analysed their structure. Additionally, we analysed the ratio of the mainly found transcript variants using quantitative Real-Time PCR. In total we characterised 24 previously not described canine and 4 human RAGE splicing variants, analysed their structure, classified their characteristics, and derived their respective protein forms. Interestingly, the healthy and the neoplastic tissue samples showed mainly RAGE transcripts coding for the complete receptor and transcripts showing insertions of intron 1. PMID:19061941

  2. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    PubMed

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or only show moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, not to mention easy-to-use detection that eliminates labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and a lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene with specific multi-methylated sites, a well-studied tumor suppressor gene, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, a LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined by the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, a satisfactory level for DNA methylation assessment. Moreover, the developed cascade shows high usability as a simple, sensitive, and low-cost tool. Therefore, as a universal platform for sensing systems for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics.

  3. Simultaneous measurement in mass and mass/mass mode for accurate qualitative and quantitative screening analysis of pharmaceuticals in river water.

    PubMed

    Martínez Bueno, M J; Ulaszewska, Maria M; Gomez, M J; Hernando, M D; Fernández-Alba, A R

    2012-09-21

    A new approach for the analysis of pharmaceuticals (target and non-target) in water by LC-QTOF-MS is described in this work. The study has been designed to assess the performance of the simultaneous quantitative screening of target compounds, and the qualitative analysis of non-target analytes, in just one run. The features of accurate mass full scan mass spectrometry together with high MS/MS spectral acquisition rates - by means of information dependent acquisition (IDA) - have demonstrated their potential application in this work. Applying this analytical strategy, an identification procedure is presented based on library searching for compounds which were not included a priori in the analytical method as target compounds, thus allowing their characterization by data processing of accurate mass measurements in MS and MS/MS mode. The non-target compounds identified in river water samples were ketorolac, trazodone, fluconazole, metformin and venlafaxine. Simultaneously, this strategy allowed for the identification of other compounds which were not included in the library by screening the highest intensity peaks detected in the samples and by analysis of the full scan TOF-MS, isotope pattern and MS/MS spectra - the example of loratadine (histaminergic) is described. The group of drugs of abuse selected as target compounds for evaluation included analgesics, opioids and psychostimulants. Satisfactory results regarding sensitivity and linearity of the developed method were obtained. Limits of detection for the selected target compounds were from 0.003 to 0.01 μg/L and 0.01 to 0.5 μg/L, in MS and MS/MS mode, respectively - by direct sample injection of 100 μL.

  4. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
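
    A minimal sketch of the Mt/N calculation such qPCR measurements feed into, assuming roughly 100% amplification efficiency for both amplicons; the Ct values and the factor of two for the diploid nuclear target are illustrative assumptions, not the paper's data.

        # Relative MtDNA copy number per cell from qPCR threshold cycles:
        # Mt/N = 2^(Ct_nuclear - Ct_mito), assuming ~100% efficiency.
        ct_mito, ct_nuclear = 14.8, 24.3   # unique mitochondrial / single-copy nuclear targets
        mt_per_cell = 2 * 2 ** (ct_nuclear - ct_mito)  # x2: two nuclear copies per diploid cell
        print(f"~{mt_per_cell:.0f} mtDNA copies per cell")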

  5. Accurate, quantitative assays for the hydrolysis of soluble type I, II, and III /sup 3/H-acetylated collagens by bacterial and tissue collagenases

    SciTech Connect

    Mallya, S.K.; Mookhtiar, K.A.; Van Wart, H.E.

    1986-11-01

    Accurate and quantitative assays for the hydrolysis of soluble ³H-acetylated rat tendon type I, bovine cartilage type II, and human amnion type III collagens by both bacterial and tissue collagenases have been developed. The assays are carried out at any temperature in the 1-30°C range in a single reaction tube and the progress of the reaction is monitored by withdrawing aliquots as a function of time, quenching with 1,10-phenanthroline, and quantitation of the concentration of hydrolysis fragments. The latter is achieved by selective denaturation of these fragments by incubation under conditions described in the previous paper of this issue. The assays give percentages of hydrolysis of all three collagen types by neutrophil collagenase that agree well with the results of gel electrophoresis experiments. The initial rates of hydrolysis of all three collagens are proportional to the concentration of both neutrophil and Clostridial collagenases over a 10-fold range of enzyme concentrations. All three assays can be carried out at collagen concentrations that range from 0.06 to 2 mg/ml and give linear double reciprocal plots for both tissue and bacterial collagenases that can be used to evaluate the kinetic parameters K_m and k_cat or V_max. The assay developed for the hydrolysis of rat type I collagen by neutrophil collagenase is shown to be more sensitive by at least one order of magnitude than comparable assays that use rat type I collagen fibrils or gels as substrate.
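
    A hedged sketch of the double-reciprocal (Lineweaver-Burk) analysis such plots enable: fit 1/v = (K_m/V_max)(1/[S]) + 1/V_max by least squares to recover K_m and V_max. The substrate concentrations and rates below are synthetic, not the study's measurements.

        import numpy as np

        S = np.array([0.06, 0.12, 0.25, 0.5, 1.0, 2.0])     # collagen, mg/ml
        Km_true, Vmax_true = 0.8, 12.0
        rng = np.random.default_rng(3)
        v = Vmax_true * S / (Km_true + S) * rng.normal(1.0, 0.02, S.size)  # noisy rates

        slope, intercept = np.polyfit(1 / S, 1 / v, 1)      # linear fit in reciprocal space
        Vmax, Km = 1 / intercept, slope / intercept         # intercept = 1/Vmax, slope = Km/Vmax
        print(f"Km ~ {Km:.2f} mg/ml, Vmax ~ {Vmax:.1f}")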

  6. Rapid Quantitative Analyses of Elements on Herb Medicine and Food Powder Using TEA CO2 Laser-Induced Plasma

    NASA Astrophysics Data System (ADS)

    Khumaeni, Ali; Ramli, Muliadi; Idris, Nasrullah; Lee, Yong Inn; Kurniawan, Koo Hendrik; Lie, Tjung Jie; Deguchi, Yoji; Niki, Hideaki; Kagawa, Kiichiro

    2009-03-01

    A novel technique for rapid quantitative analyses of elements in herb medicine and food powder has successfully been developed. In this technique, the powder samples were placed in a small hole (2 mm in diameter and 3 mm in depth) and covered by a metal mesh. The Transversely Excited Atmospheric (TEA) CO2 laser (1500 mJ, 200 ns) was focused on the powder sample surfaces, passing through the metal mesh, at atmospheric pressure in nitrogen surrounding gas. It is hypothesized that the small hole functions to confine the powder particles and suppresses the blowing-off, while the metal mesh works as the source of electrons to initiate the strong gas breakdown plasma. The confined powder particles are subsequently ablated by the laser irradiation and the ablated particles move into the strong gas breakdown plasma region to be atomized and excited. Using this method, a quantitative analysis of a milk powder sample containing different concentrations of Ca was successfully demonstrated, resulting in a good linear calibration curve with high precision.
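
    A minimal sketch of building and using such a linear calibration curve; the standard concentrations, line intensities, and unknown reading below are hypothetical.

        import numpy as np

        ca_ppm    = np.array([50, 100, 200, 400, 800])       # standards
        intensity = np.array([210, 395, 820, 1605, 3240])    # emission intensity, arb. units

        slope, intercept = np.polyfit(ca_ppm, intensity, 1)  # least-squares line
        r = np.corrcoef(ca_ppm, intensity)[0, 1]
        unknown = (2400 - intercept) / slope                 # invert for an unknown sample
        print(f"r^2 = {r**2:.4f}; unknown sample ~ {unknown:.0f} ppm Ca")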

  7. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    PubMed Central

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires testing for hot spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases and all had a mutated/wild type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific with greater accuracy, positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective and can easily be adapted to detect hot spot mutations in other oncogenes. PMID:22558339

  8. Mitochondrial DNA as a non-invasive biomarker: accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias.

    PubMed

    Malik, Afshan N; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a "dilution bias" when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.

  9. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been known for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be the most sensitive and accurate method, capable of detection and quantitation of trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/μL (equivalently, ng/mL), and the calibration curve shows good linearity (r² = 0.9974). PMID:23106487

  10. Allele specific locked nucleic acid quantitative PCR (ASLNAqPCR): an accurate and cost-effective assay to diagnose and quantify KRAS and BRAF mutation.

    PubMed

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires testing for hot spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases and all had a mutated/wild type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific with greater accuracy, positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective and can easily be adapted to detect hot spot mutations in other oncogenes.

  11. Validation of Reference Genes for Transcriptional Analyses in Pleurotus ostreatus by Using Reverse Transcription-Quantitative PCR.

    PubMed

    Castanera, Raúl; López-Varas, Leticia; Pisabarro, Antonio G; Ramírez, Lucía

    2015-06-15

    Recently, the lignin-degrading basidiomycete Pleurotus ostreatus has become a widely used model organism for fungal genomic and transcriptomic analyses. The increasing interest in this species has led to an increasing number of studies analyzing the transcriptional regulation of multigene families that encode extracellular enzymes. Reverse transcription (RT) followed by real-time PCR is the most suitable technique for analyzing the expression of gene sets under multiple culture conditions. In this work, we tested the suitability of 13 candidate genes for their use as reference genes in P. ostreatus time course cultures for enzyme production. We applied three different statistical algorithms and obtained a combination of stable reference genes for optimal normalization of RT-quantitative PCR assays. This reference index can be used for future transcriptomic analyses and validation of transcriptome sequencing or microarray data. Moreover, we analyzed the expression patterns of a laccase and a manganese peroxidase (lacc10 and mnp3, respectively) in lignocellulose and glucose-based media using submerged, semisolid, and solid-state fermentation. By testing different normalization strategies, we demonstrate that the use of nonvalidated reference genes as internal controls leads to biased results and misinterpretations of the biological responses underlying expression changes. PMID:25862220
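
    A hedged sketch of a geNorm-style stability measure M (the mean standard deviation of pairwise log2 expression ratios across samples; a lower M indicates a more stable reference gene), computed on a synthetic expression matrix rather than the study's data.

        import numpy as np

        rng = np.random.default_rng(7)
        expr = rng.lognormal(mean=5, sigma=0.2, size=(12, 6))  # 12 samples x 6 candidate genes

        log_expr = np.log2(expr)
        n_genes = log_expr.shape[1]
        # M_j = mean over k != j of SD of log2(x_j / x_k) across samples
        M = np.array([
            np.mean([np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                     for k in range(n_genes) if k != j])
            for j in range(n_genes)
        ])
        print("stability M per candidate gene:", np.round(M, 3))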

  12. Comparing the accuracy of quantitative versus qualitative analyses of interim PET to prognosticate Hodgkin lymphoma: a systematic review protocol of diagnostic test accuracy

    PubMed Central

    Procházka, Vít; Klugar, Miloslav; Bachanova, Veronika; Klugarová, Jitka; Tučková, Dagmar; Papajík, Tomáš

    2016-01-01

    Introduction Hodgkin lymphoma is an effectively treated malignancy, yet 20% of patients relapse or are refractory to front-line treatments with potentially fatal outcomes. Early detection of poor treatment responders is crucial for appropriate application of tailored treatment strategies. Tumour metabolic imaging of Hodgkin lymphoma using visual (qualitative) 18-fluorodeoxyglucose positron emission tomography (FDG-PET) is a gold standard for staging and final outcome assessment, but results gathered during the interim period are less accurate. Analysis of continuous metabolic–morphological data (quantitative FDG-PET) may enhance the robustness of interim disease monitoring, and help to improve treatment decision-making processes. The objective of this review is to compare the diagnostic test accuracy of quantitative versus qualitative interim FDG-PET in the prognostication of patients with Hodgkin lymphoma. Methods The literature on this topic will be reviewed in a 3-step strategy that follows methods described by the Joanna Briggs Institute (JBI). First, MEDLINE and EMBASE databases will be searched. Second, listed databases for published literature (MEDLINE, Tripdatabase, Pedro, EMBASE, the Cochrane Central Register of Controlled Trials and WoS) and unpublished literature (Open Grey, Current Controlled Trials, MedNar, ClinicalTrials.gov, Cos Conference Papers Index and International Clinical Trials Registry Platform of the WHO) will be queried. Third, 2 independent reviewers will analyse titles, abstracts and full texts, perform a hand search of relevant studies, and then perform critical appraisal and data extraction from selected studies using the DATARI tool (JBI). If possible, a statistical meta-analysis will be performed on pooled sensitivity and specificity data gathered from the selected studies. Statistical heterogeneity will be assessed. Funnel plots, Begg's rank correlations and Egger's regression tests will be used to detect and/or correct publication bias.

  13. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  14. Specific catalysis of asparaginyl deamidation by carboxylic acids: kinetic, thermodynamic, and quantitative structure-property relationship analyses.

    PubMed

    Connolly, Brian D; Tran, Benjamin; Moore, Jamie M R; Sharma, Vikas K; Kosky, Andrew

    2014-04-01

    Asparaginyl (Asn) deamidation could lead to altered potency, safety, and/or pharmacokinetics of therapeutic protein drugs. In this study, we investigated the effects of several different carboxylic acids on Asn deamidation rates using an IgG1 monoclonal antibody (mAb1*) and a model hexapeptide (peptide1) with the sequence YGKNGG. Thermodynamic analyses of the kinetics data revealed that higher deamidation rates are associated with predominantly more negative ΔS and, to a lesser extent, more positive ΔH. The observed differences in deamidation rates were attributed to the unique ability of each type of carboxylic acid to stabilize the energetically unfavorable transition-state conformations required for imide formation. Quantitative structure-property relationship (QSPR) analysis using kinetic data demonstrated that molecular descriptors encoding for the geometric spatial distribution of atomic properties on various carboxylic acids are effective determinants for the deamidation reaction. Specifically, the number of O-O and O-H atom pairs on carboxyl and hydroxyl groups with interatomic distances of 4-5 Å on a carboxylic acid buffer appears to determine the rate of deamidation. Collectively, the results from structural and thermodynamic analyses indicate that carboxylic acids presumably form multiple hydrogen bonds and charge-charge interactions with the relevant deamidation site and provide alignment between the reactive atoms on the side chain and backbone. We propose that carboxylic acids catalyze deamidation by stabilizing a specific, energetically unfavorable transition-state conformation of l-asparaginyl intermediate II that readily facilitates bond formation between the γ-carbonyl carbon and the deprotonated backbone nitrogen for cyclic imide formation.
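
    A hedged sketch of how ΔH and ΔS can be recovered from rate constants measured at several temperatures via the Eyring relation ln(k/T) = -ΔH/(RT) + ln(kB/h) + ΔS/R; the rate constants below are synthetic, not the study's data.

        import numpy as np

        R, kB, h = 8.314, 1.380649e-23, 6.62607015e-34        # SI constants
        T = np.array([298.15, 303.15, 310.15, 318.15])        # K
        k = np.array([1.2e-7, 2.1e-7, 4.8e-7, 1.1e-6])        # s^-1, hypothetical

        slope, intercept = np.polyfit(1 / T, np.log(k / T), 1)
        dH = -slope * R                                       # J/mol, from the slope
        dS = (intercept - np.log(kB / h)) * R                 # J/(mol K), from the intercept
        print(f"dH ~ {dH/1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K)")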

  15. Empirical Bayes factor analyses of quantitative trait loci for gestation length in Iberian × Meishan F2 sows.

    PubMed

    Casellas, J; Varona, L; Muñoz, G; Ramírez, O; Barragán, C; Tomás, A; Martínez-Giner, M; Ovilo, C; Sánchez, A; Noguera, J L; Rodríguez, M C

    2008-02-01

    The aim of this study was to investigate chromosomal regions affecting gestation length in sows. An experimental F2 cross between Iberian and Meishan pig breeds was used for this purpose and we genotyped 119 markers covering the 18 porcine autosomal chromosomes. Within this context, we have developed a new empirical Bayes factor (BF) approach to compare between nested models, with and without the quantitative trait loci (QTL) effect, after including the location of the QTL as an unknown parameter in the model. This empirical BF can be easily calculated from the output of a Markov chain Monte Carlo sampling by averaging conditional densities at the null QTL effects. Linkage analyses were performed on each chromosome using an animal model to account for infinitesimal genetic effects. Initially, three QTL were detected on chromosomes 6, 8 and 11 although, after correcting for multiple testing, only the additive QTL located at 110 cM on chromosome 8 remained. For this QTL, the allelic substitution effect of the Iberian allele increased gestation length by 0.521 days, with a 95% highest posterior density region ranging between 0.121 and 0.972 days. Although future studies are necessary to confirm whether the detected QTL is relevant and segregating in commercial pig populations, a hot-spot in the genetic regulation of gestation length in pigs seems to be located on chromosome 8.
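
    A hedged Savage-Dickey-style sketch of estimating a Bayes factor from MCMC output by evaluating the posterior density of the QTL effect at zero; the simulated draws and the N(0,1) prior are illustrative assumptions, not the authors' exact implementation.

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        rng = np.random.default_rng(11)
        posterior_draws = rng.normal(0.52, 0.21, 5000)   # stand-in for MCMC samples

        prior_sd = 1.0                                   # assumed N(0,1) prior on the effect
        post_at_0 = gaussian_kde(posterior_draws)(0.0)[0]  # posterior density at the null
        prior_at_0 = norm.pdf(0.0, scale=prior_sd)

        bf_null = post_at_0 / prior_at_0                 # BF01 (favoring no-QTL model)
        print(f"BF in favour of the QTL (BF10) ~ {1 / bf_null:.1f}")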

  16. Genetic relationship between lodging and lodging components in barley (Hordeum vulgare) based on unconditional and conditional quantitative trait locus analyses.

    PubMed

    Chen, W Y; Liu, Z M; Deng, G B; Pan, Z F; Liang, J J; Zeng, X Q; Tashi, N M; Long, H; Yu, M Q

    2014-03-17

    Lodging (LD) is a major constraint limiting the yield and forage quality of barley. Detailed analyses of LD component (LDC) traits were conducted using 246 F2 plants generated from a cross between cultivars ZQ320 and 1277. Genetic relationships between LD and LDC were evaluated by unconditional and conditional quantitative trait locus (QTL) mapping with 117 simple sequence repeat markers. Ultimately, 53 unconditional QTL related to LD were identified on seven barley chromosomes. Up to 15 QTL accounted for over 10% of the phenotypic variation, and up to 20 QTL for culm strength were detected. Six QTL with pleiotropic effects showing significant negative correlations with LD were found between markers Bmag353 and GBM1482 on chromosome 4H. These alleles and alleles of QTL for wall thickness, culm strength, plant height, and plant weight originated from ZQ320. Conditional mapping identified 96 additional QTL for LD. Conditional QTL analysis demonstrated that plant height, plant height center of gravity, and length of the sixth internode had the greatest contribution to LD, whereas culm strength and length of the fourth internode, and culm strength of the second internode, were the key factors for LD resistance. Therefore, lodging resistance in barley can be improved based on selection of alleles affecting culm strength, wall thickness, plant height, and plant weight. The conditional QTL mapping method can be used to evaluate possible genetic relationships between LD and LDC while efficiently and precisely determining counteracting QTL, which will help in understanding the genetic basis of LD in barley.

  17. Quantitative solid-state 13C nuclear magnetic resonance spectrometric analyses of wood xylem: effect of increasing carbohydrate content

    USGS Publications Warehouse

    Bates, A.L.; Hatcher, P.G.

    1992-01-01

    Isolated lignin with a low carbohydrate content was spiked with increasing amounts of alpha-cellulose, and then analysed by solid-state 13C nuclear magnetic resonance (NMR) using cross-polarization with magic angle spinning (CPMAS) and dipolar dephasing methods in order to assess the quantitative reliability of CPMAS measurement of carbohydrate content and to determine how increasingly intense resonances for carbohydrate carbons affect calculations of the degree of lignin's aromatic ring substitution and methoxyl carbon content. Comparisons were made of the carbohydrate content calculated by NMR with carbohydrate concentrations obtained by phenol-sulfuric acid assay and by the calculation from the known amounts of cellulose added. The NMR methods used in this study yield overestimates for carbohydrate carbons due to resonance area overlap from the aliphatic side chain carbons of lignin. When corrections are made for these overlapping resonance areas, the NMR results agree very well with results obtained by other methods. Neither the calculated methoxyl carbon content nor the degree of aromatic ring substitution in lignin, both calculated from dipolar dephasing spectra, change with cellulose content. Likewise, lignin methoxyl content does not correlate with cellulose abundance when measured by integration of CPMAS spectra. © 1992.

  18. Value of Quantitative and Qualitative Analyses of Circulating Cell-Free DNA as Diagnostic Tools for Hepatocellular Carcinoma

    PubMed Central

    Liao, Wenjun; Mao, Yilei; Ge, Penglei; Yang, Huayu; Xu, Haifeng; Lu, Xin; Sang, Xinting; Zhong, Shouxian

    2015-01-01

    Qualitative and quantitative analyses of circulating cell-free DNA (cfDNA) are potential methods for the detection of hepatocellular carcinoma (HCC). Many studies have evaluated these approaches, but the results have been variable. This meta-analysis is the first to synthesize these published results and evaluate the use of circulating cfDNA values for HCC diagnosis. All articles that met our inclusion criteria were assessed using QUADAS guidelines after the literature research. We also investigated 3 subgroups in this meta-analysis: qualitative analysis of abnormal concentrations of circulating cfDNA; qualitative analysis of single-gene methylation alterations; and multiple analyses combined with alpha-fetoprotein (AFP). Statistical analyses were performed using the software Stata 12.0. We synthesized these published results and calculated accuracy measures (pooled sensitivity and specificity, positive/negative likelihood ratios [PLRs/NLRs], diagnostic odds ratios [DORs], and corresponding 95% confidence intervals [95% CIs]). Data were pooled using bivariate generalized linear mixed model. Furthermore, summary receiver operating characteristic curves and area under the curve (AUC) were used to summarize overall test performance. Heterogeneity and publication bias were also examined. A total of 2424 subjects included 1280 HCC patients in 22 studies were recruited in this meta-analysis. Pooled sensitivity and specificity, PLR, NLR, DOR, AUC, and CIs of quantitative analysis were 0.741 (95% CI: 0.610–0.840), 0.851 (95% CI: 0.718–0.927), 4.970 (95% CI: 2.694–9.169), 0.304 (95% CI: 0.205–0.451), 16.347 (95% CI: 8.250–32.388), and 0.86 (95% CI: 0.83–0.89), respectively. For qualitative analysis, the values were 0.538 (95% CI: 0.401–0.669), 0.944 (95% CI: 0.889–0.972), 9.545 (95% CI: 5.298–17.196), 0.490 (95% CI: 0.372–0.646), 19.491 (95% CI: 10.458–36.329), and 0.87 (95% CI: 0.84–0.90), respectively. After combining with AFP assay, the
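
    For reference, a minimal sketch of the per-study accuracy measures pooled above, computed from a hypothetical 2x2 table of cfDNA test results against HCC status; the counts are invented for illustration.

        tp, fp, fn, tn = 37, 9, 13, 82   # hypothetical true/false positives and negatives

        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        plr  = sens / (1 - spec)          # positive likelihood ratio
        nlr  = (1 - sens) / spec          # negative likelihood ratio
        dor  = plr / nlr                  # diagnostic odds ratio
        print(f"sens={sens:.3f} spec={spec:.3f} PLR={plr:.2f} NLR={nlr:.3f} DOR={dor:.1f}")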

  19. Quantitative Proteomic Analyses of Human Cytomegalovirus-Induced Restructuring of Endoplasmic Reticulum-Mitochondrial Contacts at Late Times of Infection*

    PubMed Central

    Zhang, Aiping; Williamson, Chad D.; Wong, Daniel S.; Bullough, Matthew D.; Brown, Kristy J.; Hathout, Yetrib; Colberg-Poley, Anamaris M.

    2011-01-01

    Endoplasmic reticulum-mitochondrial contacts, known as mitochondria-associated membranes, regulate important cellular functions including calcium signaling, bioenergetics, and apoptosis. Human cytomegalovirus is a medically important herpesvirus whose growth increases energy demand and depends upon continued cell survival. To gain insight into how human cytomegalovirus infection affects endoplasmic reticulum-mitochondrial contacts, we undertook quantitative proteomics of mitochondria-associated membranes using a differential stable isotope labeling by amino acids in cell culture strategy and liquid chromatography-tandem MS analysis. This is the first reported quantitative proteomic analysis of a suborganelle during permissive human cytomegalovirus infection. Human fibroblasts were uninfected or human cytomegalovirus-infected for 72 h. Heavy mitochondria-associated membranes were isolated from paired unlabeled, uninfected cells and stable isotope labeling by amino acids in cell culture-labeled, infected cells and analyzed by liquid chromatography-tandem MS analysis. The results were verified by a reverse labeling experiment. Human cytomegalovirus infection dramatically altered endoplasmic reticulum-mitochondrial contacts by late times. Notable is the increased abundance of several fundamental networks in the mitochondria-associated membrane fraction of human cytomegalovirus-infected fibroblasts. Chaperones, including HSP60 and BiP, which is required for human cytomegalovirus assembly, were prominently increased at endoplasmic reticulum-mitochondrial contacts after infection. Minimal translational and translocation machineries were also associated with endoplasmic reticulum-mitochondrial contacts and increased after human cytomegalovirus infection as were glucose regulated protein 75 and the voltage dependent anion channel, which can form an endoplasmic reticulum-mitochondrial calcium signaling complex. Surprisingly, mitochondrial metabolic enzymes and cytosolic

  20. Genome-Wide Identification and Validation of Reference Genes in Infected Tomato Leaves for Quantitative RT-PCR Analyses

    PubMed Central

    Müller, Oliver A.; Grau, Jan; Thieme, Sabine; Prochaska, Heike; Adlung, Norman; Sorgatz, Anika; Bonas, Ulla

    2015-01-01

    The Gram-negative bacterium Xanthomonas campestris pv. vesicatoria (Xcv) causes bacterial spot disease of pepper and tomato by direct translocation of type III effector proteins into the plant cell cytosol. Once in the plant cell the effectors interfere with host cell processes and manipulate the plant transcriptome. Quantitative RT-PCR (qRT-PCR) is usually the method of choice to analyze transcriptional changes of selected plant genes. Reliable results depend, however, on measuring stably expressed reference genes that serve as internal normalization controls. We identified the most stably expressed tomato genes based on microarray analyses of Xcv-infected tomato leaves and evaluated the reliability of 11 genes for qRT-PCR studies in comparison to four traditionally employed reference genes. Three different statistical algorithms, geNorm, NormFinder and BestKeeper, concordantly determined the superiority of the newly identified reference genes. The most suitable reference genes encode proteins with homology to PHD finger family proteins and the U6 snRNA-associated protein LSm7. In addition, we identified pepper orthologs and validated several genes as reliable normalization controls for qRT-PCR analysis of Xcv-infected pepper plants. The newly identified reference genes will be beneficial for future qRT-PCR studies of the Xcv-tomato and Xcv-pepper pathosystems, as well as for the identification of suitable normalization controls for qRT-PCR studies of other plant-pathogen interactions, especially, if related plant species are used in combination with bacterial pathogens. PMID:26313760
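
    A hedged sketch of the downstream use of validated reference genes: ΔΔCt relative quantification normalized to the mean Ct of two stable references (on the log scale this equals the geometric mean of their linear-scale quantities). All Ct values below are hypothetical.

        import numpy as np

        # Ct values ordered as [target, reference 1, reference 2]
        infected = np.array([22.1, 18.4, 20.0])
        mock     = np.array([25.3, 18.5, 19.9])

        def delta_ct(ct):
            # mean Ct of the references = log2 of the geometric mean quantity
            return ct[0] - np.mean(ct[1:])

        fold_change = 2 ** (delta_ct(mock) - delta_ct(infected))
        print(f"target up-regulated ~{fold_change:.1f}-fold upon infection")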

  1. A comparative study of quantitative microsegregation analyses performed during the solidification of the Ni-base superalloy CMSX-10

    SciTech Connect

    Seo, Seong-Moon; Jeong, Hi-Won; Ahn, Young-Keun; Yun, Dae Won; Lee, Je-Hyun; Yoo, Young-Soo

    2014-03-01

    Quantitative microsegregation analyses were systematically carried out during the solidification of the Ni-base superalloy CMSX-10 to clarify the methodological effect on the quantification of microsegregation and to fully understand the solidification microstructure. Three experimental techniques, namely, mushy zone quenching (MZQ), planar directional solidification followed by quenching (PDSQ), and random sampling (RS), were implemented for the analysis of microsegregation tendency and the magnitude of solute elements by electron probe microanalysis. The microprobe data and the calculation results of the diffusion field ahead of the solid/liquid (S/L) interface of PDSQ samples revealed that the liquid composition at the S/L interface is significantly influenced by quenching. By applying the PDSQ technique, it was also found that the partition coefficients of all solute elements do not change appreciably during the solidification of primary γ. All three techniques could reasonably predict the segregation behavior of most solute elements. Nevertheless, the RS approach has a tendency to overestimate the magnitude of segregation for most solute elements when compared to the MZQ and PDSQ techniques. Moreover, the segregation direction of Cr and Mo predicted by the RS approach was found to be opposite from the results obtained by the MZQ and PDSQ techniques. This conflicting segregation behavior of Cr and Mo was discussed intensively. It was shown that the formation of Cr-rich areas near the γ/γ′ eutectic in various Ni-base superalloys, including the CMSX-10 alloy, could be successfully explained by the results of microprobe analysis performed on a sample quenched during the planar directional solidification of γ/γ′ eutectic. - Highlights: • Methodological effect on the quantification of microsegregation was clarified. • The liquid composition at the S/L interface was influenced by quenching. • The segregation direction of Cr varied depending on the

  2. Methods for differential and quantitative analyses of brain neurosteroid levels by LC/MS/MS with ESI-enhancing and isotope-coded derivatization.

    PubMed

    Higashi, Tatsuya; Aiba, Naoto; Tanaka, Tomoya; Yoshizawa, Kazumi; Ogawa, Shoujiro

    2016-01-01

    The analysis of changes in the brain neurosteroid (NS) levels due to various stimuli can contribute to the elucidation of their physiological roles, and the discovery and development of new antipsychotic agents targeting neurosteroidogenesis. We developed methods for the differential and quantitative analyses of the brain levels of allopregnanolone (AP) and its precursor, pregnenolone (PREG), using liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) combined with derivatization using 2-hydrazino-1-methylpyridine (HMP) and its isotope-coded analogue, ²H₃-HMP (d-HMP). For the differential analysis, the brain sample of an untreated rat was derivatized with HMP, while the brain sample of a treated (stressed or drug-administered) rat was derivatized with d-HMP. The two derivatives were mixed and then subjected to LC/ESI-MS/MS. The stress- and drug (clozapine and fluoxetine)-evoked increases in the brain AP and PREG levels were accurately analyzed by the developed method. It was also possible to determine the absolute concentrations of the brain steroids when a deuterium-coded moiety was introduced to the standard steroids of known amounts by the derivatization and the resulting derivatives were used as internal standards. The HMP-derivatization enabled the highly sensitive detection and the use of d-HMP significantly improved the assay precision [the intra- (n=5) and inter-assay (n=5) relative standard deviations did not exceed 13.7%] and accuracy (analytical recovery ranged from 98.7 to 106.7%).
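
    A minimal sketch of the differential read-out this design affords: after mixing the HMP-derivatized control and d-HMP-derivatized treated samples, the heavy/light peak-area ratio gives the relative change per steroid. The peak areas below are hypothetical.

        # heavy (d-HMP, treated) / light (HMP, untreated) peak-area ratios
        areas = {"AP":   {"HMP": 1.8e5, "d-HMP": 4.3e5},
                 "PREG": {"HMP": 2.6e5, "d-HMP": 5.1e5}}

        for steroid, a in areas.items():
            ratio = a["d-HMP"] / a["HMP"]     # treated relative to untreated
            print(f"{steroid}: stimulus-evoked change = {ratio:.2f}-fold")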

  3. Diachronous fault array growth within continental rift basins: Quantitative analyses from the East Shetland Basin, northern North Sea

    NASA Astrophysics Data System (ADS)

    Claringbould, Johan; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Odinsen, Tore

    2016-04-01

    The evolution of rift basins has been the subject of many studies; however, these studies have mainly been restricted to investigating the geometry of rift-related fault arrays. The relative timing of development of the individual faults that make up the fault array is not yet well constrained. First-order tectono-stratigraphic models for rifts predict that normal faults develop broadly synchronously throughout the basin during a temporally distinct 'syn-rift' episode. However, largely due to the mechanical interaction between adjacent structures, distinctly diachronous activity is known to occur at the scale of individual fault segments and systems. Our limited understanding of how individual segments and systems contribute to array-scale strain largely reflects the limited dimension and resolution of the data available and the methods applied. Here we utilize a regionally extensive subsurface dataset comprising multiple 3D seismic MegaSurveys (10,000 km²), long (>75 km) 2D seismic profiles, and exploration wells, to investigate the evolution of the fault array in the East Shetland Basin, North Viking Graben, northern North Sea. Previous studies propose this basin formed in response to multiphase rifting during two temporally distinct extensional phases in the Permian-Triassic and Middle-to-Late Jurassic, separated by a period of tectonic quiescence and thermal subsidence in the Early Jurassic. We document the timing of growth of individual structures within the rift-related fault array across the East Shetland Basin, constraining the progressive migration of strain from pre-Triassic to Late Jurassic. The methods used include (i) qualitative isochron map analysis, (ii) quantitative calculations of syn-kinematic deposit thickness differences across faults and expansion indices, and (iii) along-fault throw-depth and backstripped displacement-length analyses. In contrast to established models, we demonstrate that the initiation, growth, and cessation of individual fault segments and systems were markedly diachronous across the basin.
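
    One of the quantitative measures listed above, the expansion index, is simple arithmetic: the ratio of the hanging-wall to footwall thickness of a syn-kinematic unit across a fault. A hedged sketch with invented thicknesses (no relation to the East Shetland Basin data):

```python
# Expansion index (EI) per stratigraphic unit: EI > 1 suggests the unit
# thickened into the hanging wall while the fault was active. Invented values.
hanging_wall = {"Triassic": 820.0, "Lower Jurassic": 410.0, "Upper Jurassic": 650.0}
footwall     = {"Triassic": 610.0, "Lower Jurassic": 400.0, "Upper Jurassic": 330.0}

for unit in hanging_wall:
    ei = hanging_wall[unit] / footwall[unit]
    diff = hanging_wall[unit] - footwall[unit]
    verdict = "growth strata (fault active)" if ei > 1.1 else "little or no growth"
    print(f"{unit}: EI = {ei:.2f}, thickness difference = {diff:.0f} m -> {verdict}")
```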

  4. Quantitative evaluation of grain shapes by utilizing elliptic Fourier and principal component analyses: Implications for sedimentary environment discrimination

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Fujiwara, H.; Ohta, T.

    2013-12-01

    Fourier analysis has allowed new advancements in determining the shape of sand grains. However, full quantification of grain shapes has not yet been accomplished, because Fourier expansion produces numerous descriptors, making it difficult to give a comprehensive interpretation to the results of Fourier analysis. In order to overcome this difficulty, this study focuses on the combined application of elliptic Fourier and principal component analyses (EF-PCA). The EF-PCA method reduces the number of extracted Fourier variables and enables a visual inspection of the results of Fourier analysis. Thus, this approach facilitates the understanding of the sedimentological significance of the results obtained using Fourier expansion. Quartz grains of 0.250-0.355 mm collected from glacial, foreshore, fluvial and aeolian environments were scanned with a digital microscope at 200× magnification. The elliptic Fourier coefficients of the grain outlines were then analyzed using the program package SHAPE (Iwata and Ukai, 2002). In order to examine the degree of roundness and surface smoothness of grains, principal component analysis was then performed on both unstandardized and standardized data matrices obtained by elliptic Fourier analysis. The EF-PCA based on the unstandardized data matrix extracted descriptors describing the overall form and shape of grains, because the unstandardized data matrix enhances the contribution of large-amplitude, low-frequency trigonometric functions. The shape descriptors extracted by this method can be interpreted as an elongation index (REF1) and multiple bump indices (REF2, REF3, and REF2 + REF3). These descriptors indicate that aeolian, foreshore, and fluvial sediments contain grains with shapes similar to circles, ellipses, and cylinders, respectively. Meanwhile, the EF-PCA based on the standardized data matrix enhanced the contribution of low-amplitude, high-frequency trigonometric functions, meaning that it captured fine-scale features such as the surface smoothness of grains.
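
    The EF-PCA workflow can be sketched compactly. The stand-in below uses plain complex Fourier descriptors of a closed outline rather than the elliptic Fourier coefficients of the SHAPE package, followed by PCA via the singular value decomposition; the outlines are synthetic, so nothing here reproduces the authors' grain data.

```python
import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Size-normalized magnitudes of the first harmonics of the outline."""
    coeffs = np.fft.fft(x + 1j * y) / len(x)
    mags = np.abs(coeffs[1:n_harmonics + 1])   # drop the DC (position) term
    return mags / mags[0]                      # normalize by the 1st harmonic

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
outlines = []
for _ in range(50):                            # synthetic "grains"
    a, b = rng.uniform(0.8, 1.2), rng.uniform(0.5, 1.0)
    r = 1 + 0.05 * rng.normal() * np.sin(5 * t)  # random small bumps
    outlines.append(fourier_descriptors(a * r * np.cos(t), b * r * np.sin(t)))

X = np.array(outlines)
X -= X.mean(axis=0)                            # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print("variance explained by PC1, PC2:", (S**2 / np.sum(S**2))[:2])
```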

  5. 75 FR 29537 - Draft Transportation Conformity Guidance for Quantitative Hot-spot Analyses in PM2.5

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-26

    ... (58 FR 62188) and has subsequently published several amendments. II. Background on the Draft Guidance.... In its March 10, 2006 final rule (71 FR 12468), EPA stated that quantitative PM 2.5 and PM 10 hot... its March 2006 final rule (71 FR 12502), this draft guidance was developed in coordination with...

  6. Reduced Number of Pigmented Neurons in the Substantia Nigra of Dystonia Patients? Findings from Extensive Neuropathologic, Immunohistochemistry, and Quantitative Analyses

    PubMed Central

    Iacono, Diego; Geraci-Erck, Maria; Peng, Hui; Rabin, Marcie L.; Kurlan, Roger

    2015-01-01

    Background Dystonias (Dys) represent the third most common movement disorder after essential tremor (ET) and Parkinson's disease (PD). While some pathogenetic mechanisms and genetic causes of Dys have been identified, little is known about their neuropathologic features. Previous neuropathologic studies have reported generically defined neuronal loss in various cerebral regions of Dys brains, mostly in the basal ganglia (BG), and specifically in the substantia nigra (SN). Enlarged pigmented neurons in the SN of Dys patients with and without specific genetic mutations (e.g., GAG deletions in DYT1 dystonia) have also been described. Whether or not Dys brains are associated with decreased numbers or other morphometric changes of specific neuronal types is unknown and has never been addressed with quantitative methodologies. Methods Quantitative immunohistochemistry protocols were used to estimate neuronal counts and volumes of nigral pigmented neurons in 13 SN of Dys patients and 13 SN of age-matched control subjects (C). Results We observed a significant reduction (∼20%) of pigmented neurons in the SN of Dys compared to C (p<0.01). Neither significant volumetric changes nor evident neurodegenerative signs were observed in the remaining pool of nigral pigmented neurons in Dys brains. These novel quantitative findings were confirmed after exclusion of possible co-occurring SN pathologies including Lewy pathology, tau-neurofibrillary tangles, β-amyloid deposits, ubiquitin (ubiq), and phosphorylated-TAR DNA-binding protein 43 (pTDP43)-positive inclusions. Discussion A reduced number of nigral pigmented neurons in the absence of evident neurodegenerative signs in Dys brains could indicate previously unconsidered pathogenetic mechanisms of Dys such as neurodevelopmental defects in the SN. PMID:26069855

  7. Quantitative and qualitative analyses of under-balcony acoustics with real and simulated arrays of multiple sources

    NASA Astrophysics Data System (ADS)

    Kwon, Youngmin

    The objective of this study was to quantitatively and qualitatively identify the acoustics of the under-balcony areas in music performance halls under realistic conditions close to an orchestral performance, in consideration of multiple instrumental sources and their diverse sound propagation patterns. The study executed monaural and binaural impulse response measurements with an array of sixteen directional sources (loudspeakers) for acoustical assessments. Actual measurements in a performance hall as well as computer simulations were conducted for the quantitative assessments. Psycho-acoustical listening tests were conducted for the qualitative assessments using the music signals binaurally recorded in the hall with the same source array. The results obtained from the multiple directional source tests were analyzed by comparing them to those obtained from tests performed with a single omni-directional source. These two sets of results obtained in the under-balcony area were also compared to those obtained in the main orchestra area. The quantitative results showed that the use of a single source conforming to conventional measurement protocol appears adequate for measuring room acoustical parameters such as EDTmid, RTmid, C80(500-2k), IACCE3 and IACCL3. These quantitative measures, however, did not always agree with the results of the qualitative assessments. The primary reason is that, in many other respects of acoustical analysis, the acoustical phenomena observed in the multiple source measurements were not similar to those observed in the single source measurements. Remarkable differences were observed in time-domain impulse responses, frequency content, spectral distribution, directional distribution of the early reflections, and in sound energy density over time. Therefore, the room acoustical parameters alone should not be taken as the acoustical representative characterizing a performance hall or a specific area such as the under-balcony area.
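
    Decay-based parameters such as EDT and RT are conventionally derived from a measured impulse response by Schroeder backward integration and a linear fit to the decay curve. A minimal sketch on a synthetic exponentially decaying impulse response (not the hall data used in the study):

```python
import numpy as np

fs = 48000
t = np.arange(0, 2.0, 1 / fs)
rt_true = 1.4        # seconds; decay rate of the synthetic impulse response
ir = np.random.default_rng(1).normal(size=t.size) * 10 ** (-3 * t / rt_true)

edc = np.cumsum(ir[::-1] ** 2)[::-1]            # Schroeder energy decay curve
edc_db = 10 * np.log10(edc / edc[0])

def decay_time(db_hi, db_lo):
    """Fit the EDC between two levels; extrapolate the slope to -60 dB."""
    idx = np.where((edc_db <= db_hi) & (edc_db >= db_lo))[0]
    slope, _ = np.polyfit(t[idx], edc_db[idx], 1)   # dB per second
    return -60.0 / slope

print(f"EDT ~ {decay_time(0, -10):.2f} s")      # fit over 0 to -10 dB
print(f"T30 ~ {decay_time(-5, -35):.2f} s")     # fit over -5 to -35 dB
```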

  8. AquaLite, a bioluminescent label for immunoassay and nucleic acid detection: quantitative analyses at the attomol level

    NASA Astrophysics Data System (ADS)

    Smith, David F.; Stults, Nancy L.

    1996-04-01

    AquaLite® is a direct, bioluminescent label capable of detecting attomol levels of analyte in clinical immunoassays and in assays for the quantitative measurement of nucleic acids. Bioluminescent immunoassays (BIAs) require no radioisotopes and avoid complex fluorescent measurements and many of the variables of indirect enzyme immunoassays (EIAs). AquaLite, a recombinant form of the photoprotein aequorin from a bioluminescent jellyfish, is coupled directly to antibodies to prepare bioluminescent conjugates for assay development. When the AquaLite-antibody complex is exposed to a solution containing calcium ions, a flash of blue light (λmax = 469 nm) is generated. The light signal is measured in commercially available luminometers that simultaneously inject a calcium solution and detect subattomol photoprotein levels in either test tubes or microtiter plates. Immunometric or 'sandwich' type assays are available for the quantitative measurement of human endocrine hormones and nucleic acids. The AquaLite TSH assay can detect 1 attomol of thyroid stimulating hormone (TSH) in 0.2 mL of human serum and is a useful clinical tool for diagnosing hyperthyroid patients. AquaLite-based nucleic acid detection permits quantifying attomol levels of specific nucleic acid markers and represents a possible solution to the difficult problem of quantifying the targets of nucleic acid amplification methods.
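
    Assays of this kind are normally read off a calibration curve; a four-parameter logistic (4PL) fit is the common choice for immunometric data. The sketch below is a generic 4PL calibration with invented standards and luminometer counts, not AquaLite's published characteristics:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, slope):
    return bottom + (top - bottom) / (1 + (x / ec50) ** slope)

std_amol = np.array([1, 5, 25, 125, 625, 3125], dtype=float)  # TSH standards
rlu = np.array([1.2e3, 5.8e3, 2.7e4, 1.1e5, 3.0e5, 4.4e5])    # flash counts

p, _ = curve_fit(four_pl, std_amol, rlu, p0=[1e3, 5e5, 100.0, -1.0], maxfev=10000)

def amount_from_signal(y):
    """Invert the fitted 4PL curve to read an unknown off the calibration."""
    bottom, top, ec50, slope = p
    return ec50 * ((top - bottom) / (y - bottom) - 1) ** (1 / slope)

print(f"unknown at 5.0e4 RLU ~ {amount_from_signal(5.0e4):.1f} attomol")
```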

  9. Validation of real-time PCR analyses for line-specific quantitation of genetically modified maize and soybean using new reference molecules.

    PubMed

    Shindo, Yoichiro; Kuribara, Hideo; Matsuoka, Takeshi; Futo, Satoshi; Sawada, Chihiro; Shono, Jinji; Akiyama, Hiroshi; Goda, Yukihiro; Toyoda, Masatake; Hino, Akihiro

    2002-01-01

    Novel analytical methods based on real-time quantitative polymerase chain reactions using new reference molecules were validated in interlaboratory studies for the quantitation of genetically modified (GM) maize and soy. More than 13 laboratories from Japan, Korea, and the United States participated in the studies. The interlaboratory studies included 2 separate stages: (1) measurement tests of coefficient values, i.e., the ratio of recombinant DNA (r-DNA) sequence to endogenous DNA sequence in the seeds of GM maize and GM soy; and (2) blind tests with 6 pairs of maize and soy samples, including different levels of GM maize or GM soy. Test results showed that the methods are applicable to the specific quantitation of the 5 lines of GM maize and one line of GM soy. After statistical treatment to remove outliers, the repeatability and reproducibility of these methods at a level of 5.0% were <13.7 and 15.9%, respectively. The quantitation limits of the methods were 0.50% for Bt11, T25, and MON810, and 0.10% for GA21, Event176, and Roundup Ready soy. The results of the blind tests showed that the numerical information obtained from these methods will contribute to practical analyses for labeling systems of GM crops. PMID:12374412

  10. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
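
    The design-matrix question can be made concrete for a single case: one column coding baseline trend, one coding the immediate level change at intervention, and one coding the change in trend afterwards. A minimal sketch with invented data, using ordinary least squares in place of the multilevel models discussed in the article:

```python
import numpy as np
import statsmodels.api as sm

sessions = np.arange(1, 21)             # 20 measurement occasions
phase = (sessions > 10).astype(int)     # 0 = baseline, 1 = treatment
rng = np.random.default_rng(2)
y = 3 + 0.1 * sessions + 4 * phase + rng.normal(0, 1, sessions.size)

X = np.column_stack([
    sessions - 1,                       # baseline trend, centered at session 1
    phase,                              # immediate level change
    phase * (sessions - 11),            # change in trend after intervention
])
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)  # [intercept, baseline slope, level shift, slope shift]
```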

  11. Quantitative profiling of bile acids in biofluids and tissues based on accurate mass high resolution LC-FT-MS: compound class targeting in a metabolomics workflow.

    PubMed

    Bobeldijk, Ivana; Hekman, Maarten; de Vries-van der Weij, Jitske; Coulier, Leon; Ramaker, Raymond; Kleemann, Robert; Kooistra, Teake; Rubingh, Carina; Freidig, Andreas; Verheij, Elwin

    2008-08-15

    We report a sensitive, generic method for quantitative profiling of bile acids and other endogenous metabolites in small quantities of various biological fluids and tissues. The method is based on straightforward sample preparation, separation by reversed-phase high-performance liquid chromatography-mass spectrometry (HPLC-MS), and electrospray ionisation in the negative mode (ESI-). Detection is performed in full scan using the linear ion trap Fourier transform mass spectrometer (LTQ-FTMS), generating data for many (endogenous) metabolites, not only bile acids. A validation of the method in urine, plasma and liver was performed for 17 bile acids, including their taurine, sulfate and glycine conjugates. The method is linear in the 0.01-1 μM range. The accuracy in human plasma ranges from 74 to 113%, in human urine from 77 to 104%, and in mouse liver from 79 to 140%. The precision ranges from 2 to 20% for pooled samples, even in studies with large numbers of samples (n>250). The method was successfully applied to a multi-compartmental APOE*3-Leiden mouse study, the main goal of which was to analyze the effect of increasing dietary cholesterol concentrations on hepatic cholesterol homeostasis and bile acid synthesis. Serum and liver samples from different treatment groups were profiled with the new method. Statistically significant differences between the diet groups were observed regarding total as well as individual bile acid concentrations.
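
    The accuracy and precision figures quoted above come from standard validation arithmetic: recovery of a spiked nominal concentration and the relative standard deviation of replicates. A small sketch with invented QC numbers:

```python
import numpy as np

nominal = 0.50                                       # microM, spiked bile acid
measured = np.array([0.47, 0.52, 0.49, 0.55, 0.50])  # invented QC replicates

recovery = 100 * measured.mean() / nominal           # accuracy, %
cv = 100 * measured.std(ddof=1) / measured.mean()    # precision, %RSD
print(f"recovery = {recovery:.1f}%, CV = {cv:.1f}%")
```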

  12. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA of all samples were extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
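
    The stability check described is a paired comparison per sample between time points. A sketch of the test with simulated log10 DNA loads (the study's actual loads are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
day0 = rng.normal(5.0, 0.6, 70)            # log10 copies for 70 swab samples
day28 = day0 + rng.normal(0.02, 0.10, 70)  # slight, non-significant drift

t, p = stats.ttest_rel(day0, day28)
print(f"paired t = {t:.2f}, P = {p:.3f} -> "
      f"{'stable at 4°C' if p > 0.05 else 'significant change'}")
```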

  14. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction

    PubMed Central

    Lu, Y.; Rong, C.Z.; Zhao, J.Y.; Lao, X.J.; Xie, L.; Li, S.; Qin, X.

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA of all samples were extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005

  15. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria × ananassa Duch.) fruits is mainly associated with their sensory characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance for genetic engineering approaches aimed at improving strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable for normalizing expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey of the stability of reference genes in strawberry cultivars and under osmotic stresses, and it provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts.
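
    Of the four algorithms RefFinder integrates, the comparative delta-Ct method is the easiest to sketch: for each candidate gene, take the standard deviation of its Ct difference against every other candidate across samples, and rank by the mean SD (lower = more stable). The Ct values below are invented:

```python
import numpy as np

genes = ["DBP", "HISTH4", "GAPDH", "18S"]
rng = np.random.default_rng(4)
ct = rng.normal(22.0, 1.0, size=(12, len(genes)))  # 12 samples x 4 candidates
ct[:, 2] += rng.normal(0, 1.5, 12)                 # make one gene unstable

for i, gene in enumerate(genes):
    sds = [np.std(ct[:, i] - ct[:, j], ddof=1)
           for j in range(len(genes)) if j != i]
    print(f"{gene}: mean SD of delta-Ct = {np.mean(sds):.2f}")
```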

  16. A gel-free MS-based quantitative proteomic approach accurately measures cytochrome P450 protein concentrations in human liver microsomes.

    PubMed

    Wang, Michael Zhuo; Wu, Judy Qiju; Dennison, Jennifer B; Bridges, Arlene S; Hall, Stephen D; Kornbluth, Sally; Tidwell, Richard R; Smith, Philip C; Voyksner, Robert D; Paine, Mary F; Hall, James Edwin

    2008-10-01

    The human cytochrome P450 (P450) superfamily consists of membrane-bound proteins that metabolize a myriad of xenobiotics and endogenous compounds. Quantification of P450 expression in various tissues under normal and induced conditions has an important role in drug safety and efficacy. Conventional immunoquantification methods have poor dynamic range, low throughput, and a limited number of specific antibodies. Recent advances in MS-based quantitative proteomics enable absolute protein quantification in a complex biological mixture. We have developed a gel-free MS-based protein quantification strategy to quantify CYP3A enzymes in human liver microsomes (HLM). Recombinant protein-derived proteotypic peptides and synthetic stable isotope-labeled proteotypic peptides were used as calibration standards and internal standards, respectively. The lower limit of quantification was approximately 20 fmol P450. In two separate panels of HLM examined (n = 11 and n = 22), CYP3A, CYP3A4 and CYP3A5 concentrations were determined reproducibly, and the concentrations correlated with immunoquantified levels (r² ≥ 0.87) and marker activities (r² ≥ 0.88), including testosterone 6β-hydroxylation (CYP3A), midazolam 1'-hydroxylation (CYP3A), itraconazole 6-hydroxylation (CYP3A4) and CYP3A5-mediated vincristine M1 formation (CYP3A5). Taken together, our MS-based method provides a specific, sensitive and reliable means of P450 protein quantification and should facilitate P450 characterization during drug development, especially when specific substrates and/or antibodies are unavailable.
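
    Quantification against a stable isotope-labeled internal standard reduces to a linear calibration of analyte/IS peak-area ratios. A hedged sketch with invented areas (not the paper's calibration data):

```python
import numpy as np

std_fmol = np.array([20, 50, 100, 200, 500], dtype=float)  # peptide standards
area_ratio = np.array([0.21, 0.49, 1.02, 1.98, 5.10])      # analyte/SIL-IS

slope, intercept = np.polyfit(std_fmol, area_ratio, 1)

unknown_ratio = 1.35     # ratio measured in an HLM digest (invented)
fmol = (unknown_ratio - intercept) / slope
print(f"proteotypic peptide ~ {fmol:.0f} fmol on column")
```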

  17. Identification of Phosphorylated Cyclin-Dependent Kinase 1 Associated with Colorectal Cancer Survival Using Label-Free Quantitative Analyses

    PubMed Central

    Tyan, Yu-Chang; Hsiao, Eric S. L.; Chu, Po-Chen; Lee, Chung-Ta; Lee, Jenq-Chang; Chen, Yi-Ming Arthur; Liao, Pao-Chi

    2016-01-01

    Colorectal cancer is the most common form of cancer in the world, and the five-year survival rate is estimated to be almost 90% in the early stages. Therefore, the identification of potential biomarkers to assess the prognosis of early stage colorectal cancer patients is critical for further clinical treatment. Dysregulated tyrosine phosphorylation, a significant regulator of signaling in cellular pathways, has been found in several diseases. In this study, a label-free quantitative phosphoproteomic strategy was used to characterize the tyrosine phosphoproteome of colorectal cell lines with different progression abilities (SW480 and SW620). We identified a total of 280 phosphotyrosine (pTyr) peptides comprising 287 pTyr sites from 261 proteins. Label-free quantitative analysis revealed differential levels of a total of 103 pTyr peptides between SW480 and SW620 cells. We showed that the cyclin-dependent kinase 1 (CDK1) pTyr15 level in SW480 cells was 3.3-fold greater than in SW620 cells, and these data corresponded with the label-free mass spectrometry-based proteomic quantification analysis. A high CDK1 pTyr15 level was associated with prolonged disease-free survival for stage II colorectal cancer patients (n = 79). Taken together, our results suggest that the CDK1 pTyr15 protein is a potential indicator of the progression of colorectal cancer. PMID:27383761

  18. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to long-term survivable systems for deep-space missions, the major challenge is, under stringent power and mass constraints, to achieve ultra-high reliability of a system comprising COTS products and standards that were not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, even though these standard features were not originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a 'stack-tree topology' that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
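
    As a back-of-the-envelope illustration of why bus redundancy matters (simple exponential failure laws and perfect fault coverage assumed; this is not the reliability model used in the paper):

```python
import math

lam = 1e-6           # bus failures per hour (invented)
t = 10 * 365 * 24    # a ten-year deep-space mission, in hours

r_single = math.exp(-lam * t)        # one non-redundant bus
r_dual = 1 - (1 - r_single) ** 2     # two independent buses in parallel
print(f"single bus: R = {r_single:.4f}, dual bus: R = {r_dual:.6f}")
```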

  19. LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells.

    PubMed

    Wu, Tzu-Ching; Belteton, Samuel A; Pack, Jessica; Szymanski, Daniel B; Umulis, David M

    2016-08-01

    Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363
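
    The convex-hull idea at the heart of LobeFinder can be caricatured in a few lines: lobes bulge out to the hull, so the indentations between them show up as runs of boundary points lying well inside it. A toy version on a synthetic outline (the actual LobeFinder algorithm and thresholds differ):

```python
import numpy as np
from scipy.spatial import ConvexHull

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1 + 0.25 * np.sin(6 * t)                  # synthetic six-lobed "cell"
pts = np.column_stack([r * np.cos(t), r * np.sin(t)])

hull = ConvexHull(pts)
# signed depth of each point below the hull (facets satisfy n.x + b <= 0)
depth = -np.max(pts @ hull.equations[:, :2].T + hull.equations[:, 2], axis=1)

inside = depth > 0.05                              # points sunk below the hull
n_indent = int(np.sum(inside[1:] & ~inside[:-1]))  # count runs of such points
print(f"indentations (~ lobes): {n_indent}")
```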

  20. Comprehensive and quantitative proteomic analyses of zebrafish plasma reveals conserved protein profiles between genders and between zebrafish and human

    PubMed Central

    Li, Caixia; Tan, Xing Fei; Lim, Teck Kwang; Lin, Qingsong; Gong, Zhiyuan

    2016-01-01

    Omic approaches have been increasingly used in the zebrafish model for a holistic understanding of molecular events and mechanisms of tissue functions. However, plasma is rarely used for omic profiling because of the technical challenges in collecting sufficient blood. In this study, we employed two mass spectrometric (MS) approaches for a comprehensive characterization of the zebrafish plasma proteome, i.e. conventional shotgun liquid chromatography-tandem mass spectrometry (LC-MS/MS) for an overview study and quantitative SWATH (Sequential Window Acquisition of all THeoretical fragment-ion spectra) for comparison between genders. A total of 959 proteins were identified in the shotgun profiling, with estimated concentrations spanning almost five orders of magnitude. Other than the presence of a few highly abundant female egg yolk precursor proteins (vitellogenins), the proteomic profiles of male and female plasma were very similar in both number and abundance, and there were essentially no other highly gender-biased proteins. The types of plasma proteins based on IPA (Ingenuity Pathway Analysis) classification and their tissue sources of production were also very similar. Furthermore, the zebrafish plasma proteome shares significant similarities with the human plasma proteome, in particular among the top abundant proteins, including apolipoproteins and complements. Thus, the current study provides a valuable dataset for future evaluation of plasma proteins in zebrafish. PMID:27071722

  1. Prediction of neural differentiation fate of rat mesenchymal stem cells by quantitative morphological analyses using image processing techniques.

    PubMed

    Kazemimoghadam, Mahdieh; Janmaleki, Mohsen; Fouani, Mohamad Hassan; Abbasi, Sara

    2015-02-01

    Differentiation of bone marrow mesenchymal stem cells (BMSCs) into neural cells has received significant attention in recent years. However, there is still no practical method to evaluate the differentiation process non-invasively. Cellular quality evaluation is still limited to conventional techniques based on extracting genes or proteins from the cells. These techniques are invasive, costly, and time consuming, and must be performed by relevant experts in equipped laboratories. Moreover, they cannot anticipate the future status of the cells. Recently, cell morphology has been introduced as a feasible way of monitoring cell behavior because of its relationship with cell proliferation, function and differentiation. In this study, rat BMSCs were induced to differentiate into neurons. Subsequently, phase contrast images of cells taken at certain intervals were subjected to a series of image processing steps, and cell morphology features were calculated. In order to validate the viability of applying image-based approaches for estimating the quality of the differentiation process, neural-specific markers were measured experimentally throughout the induction. The strong correlation between quantitative imaging metrics and experimental outcomes revealed the capability of the proposed approach as an auxiliary method of assessing cell behavior during differentiation.
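
    A sketch of the kind of morphology features such a pipeline computes once cells are segmented from the phase-contrast images; scikit-image's regionprops supplies area, perimeter and eccentricity, from which a circularity index follows. The binary mask here is synthetic:

```python
import numpy as np
from skimage.measure import label, regionprops

yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) ** 2 / 60 ** 2 + (yy - 100) ** 2 / 25 ** 2) <= 1  # one "cell"

for region in regionprops(label(mask.astype(int))):
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"area = {region.area} px, eccentricity = {region.eccentricity:.2f}, "
          f"circularity = {circularity:.2f}")
```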

  2. LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells.

    PubMed

    Wu, Tzu-Ching; Belteton, Samuel A; Pack, Jessica; Szymanski, Daniel B; Umulis, David M

    2016-08-01

    Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process.

  3. LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells

    PubMed Central

    Wu, Tzu-Ching; Belteton, Samuel A.; Szymanski, Daniel B.; Umulis, David M.

    2016-01-01

    Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363

  4. Comprehensive and quantitative proteomic analyses of zebrafish plasma reveals conserved protein profiles between genders and between zebrafish and human.

    PubMed

    Li, Caixia; Tan, Xing Fei; Lim, Teck Kwang; Lin, Qingsong; Gong, Zhiyuan

    2016-01-01

    Omic approaches have been increasingly used in the zebrafish model for a holistic understanding of molecular events and mechanisms of tissue functions. However, plasma is rarely used for omic profiling because of the technical challenges in collecting sufficient blood. In this study, we employed two mass spectrometric (MS) approaches for a comprehensive characterization of the zebrafish plasma proteome, i.e. conventional shotgun liquid chromatography-tandem mass spectrometry (LC-MS/MS) for an overview study and quantitative SWATH (Sequential Window Acquisition of all THeoretical fragment-ion spectra) for comparison between genders. A total of 959 proteins were identified in the shotgun profiling, with estimated concentrations spanning almost five orders of magnitude. Other than the presence of a few highly abundant female egg yolk precursor proteins (vitellogenins), the proteomic profiles of male and female plasma were very similar in both number and abundance, and there were essentially no other highly gender-biased proteins. The types of plasma proteins based on IPA (Ingenuity Pathway Analysis) classification and their tissue sources of production were also very similar. Furthermore, the zebrafish plasma proteome shares significant similarities with the human plasma proteome, in particular among the top abundant proteins, including apolipoproteins and complements. Thus, the current study provides a valuable dataset for future evaluation of plasma proteins in zebrafish. PMID:27071722

  5. International collaborative study of the endogenous reference gene LAT52 used for qualitative and quantitative analyses of genetically modified tomato.

    PubMed

    Yang, Litao; Zhang, Haibo; Guo, Jinchao; Pan, Liangwen; Zhang, Dabing

    2008-05-28

    One tomato (Lycopersicon esculentum) gene, LAT52, was shown to be a suitable endogenous reference gene for genetically modified (GM) tomato detection in a previous study. Herein are reported the results of a collaborative ring trial for international validation of the LAT52 gene as an endogenous reference gene and of its analytical systems; 14 GMO detection laboratories from 8 countries were invited, and results were finally received from 13. These data confirmed the species specificity of the LAT52 gene by testing 10 plant genomic DNAs, as well as its low allelic variation and stable single copy number among 12 different tomato cultivars. Furthermore, the limit of detection of the LAT52 qualitative PCR was shown to be 0.1%, which corresponds to 11 copies of haploid tomato genomic DNA, and the limit of quantification of the quantitative PCR system was about 10 copies of haploid tomato genomic DNA with acceptable PCR efficiency and linearity. Additionally, the bias between the test and true values of 8 blind samples ranged from 1.94 to 10.64%. All of these validated results indicate that the LAT52 gene is suitable for use as an endogenous reference gene for the identification and quantification of GM tomato and its derivatives.
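
    Line-specific GM quantitation of this kind generally reads copy numbers of the event-specific and endogenous targets off their standard curves and takes the ratio. A generic sketch with invented curve parameters and Ct values (not the validated LAT52 system itself):

```python
def copies_from_ct(ct, intercept, slope):
    """Standard curve: Ct = intercept + slope * log10(copies)."""
    return 10 ** ((ct - intercept) / slope)

# invented Ct values and curve parameters
rdna = copies_from_ct(31.2, intercept=40.0, slope=-3.32)   # event-specific
endo = copies_from_ct(26.5, intercept=39.5, slope=-3.32)   # endogenous gene

cv_pure = 1.0   # r-DNA/endogenous ratio measured in 100% GM material
print(f"GM content ~ {100 * (rdna / endo) / cv_pure:.2f}%")
```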

  6. Quantitative analysis of coral communities of Sanganeb Atoll (central Red Sea). I. The community structure of outer and inner reefs under different hydrodynamic exposure

    NASA Astrophysics Data System (ADS)

    Mergner, H.; Schuhmacher, H.

    1985-12-01

    The Sanganeb-Atoll off Port Sudan is an elongate annular reef which rests on a probably raised block in the fracture zone along the Red Sea graben. Its gross morphology was most likely formed by subaerial erosion during low sea-level conditions. Features of its topography and hydrography are described. The prevailing wind waves are from the NE; hence, the outer and inner reef slopes are exposed to different hydrodynamic conditions. The sessile benthos was analysed using the quadrat method. Four test quadrats (5×5 m each) were selected on the outer and inner slopes at a depth of 10 m along a SSW-NNE transect across the atoll. Cnidaria were by far the most dominant group; coralline algae, Porifera, Bryozoa and Ascidia, however, accounted for just under 3% of the living cover. Light and temperature intensities did not differ significantly at the sites studied; water movement, however, decreased in the following order: TQ IV (outer NE side of the reef ring) was exposed to strong swell and surf; TQ II (inner side of the SW ring) was met by a strong long-reef current; TQ I was situated on the outer lee of the SW atoll ring and TQ III in the inner lee of the NE side. This hydrodynamic gradient correlates with the composition of the coral communities: from predominantly branching Scleractinia (staghorn-like and other Acropora species and Pocillopora) in TQ IV, through a Lobophyllia-, Porites- and Xenia-dominated community in TQ II, and a mixed community with an increasing percentage of xeniid and alcyoniid soft corals in TQ I, to a community in TQ III which is dominated by the soft corals Sinularia and Dendronephthya. The cnidarian cover ranged between 42.4 and 56.6%, whereby the two exposed test quadrats had a higher living coverage than the protected ones. In total, 2649 colonies comprising 124 species of stony, soft and hydrocorals were recorded by an elaborate method of accurate in-situ mapping. The 90 scleractinian species found include 3 species new to the Red Sea and 11 hitherto undescribed ones.

  7. Quantitative and molecular analyses of mutation in a pSV2gpt transformed CHO cell line

    SciTech Connect

    Stankowski, L.F. Jr.; Tindall, K.R.; Hsie, A.W.

    1983-01-01

    Following DNA-mediated gene transfer we have isolated a cell line useful for studying gene mutation at the molecular level. This line, AS52, derived from a hypoxanthine-guanine phosphoribosyl transferase (HGPRT)-deficient Chinese hamster ovary (CHO) cell line, carries a single copy of the E. coli xanthine-guanine phosphoribosyl transferase (XGPRT) gene (gpt) and exhibits a spontaneous mutant frequency of 20 thioguanine-resistant (TGr) mutants per 10⁶ clonable cells. As with HGPRT⁻ mutants, XGPRT⁻ mutants can be selected in 6-thioguanine. AS52 (XGPRT⁺) and wild-type CHO (HGPRT⁺) cells exhibit almost identical cytotoxic responses to various agents. We observed significant differences in mutation induction by UV light and ethyl methanesulfonate (EMS). Ratios of XGPRT⁻ to HGPRT⁻ mutants induced per unit dose (J/m² for UV light and μg/mL for EMS) are 1.4 and 0.70, respectively. Preliminary Southern blot hybridization analyses have been performed on 30 XGPRT⁻ AS52 mutants. A majority of spontaneous mutants have deletions ranging in size from 1 to 4 kilobases (9/19) up to complete loss of gpt sequences (4/19); the remainder have no detectable (5/19) or only minor (1/19) alterations. 5/5 UV-induced and 5/6 EMS-induced mutants do not show a detectable change. Similar analyses are underway for mutations induced by X-irradiation and ICR-191 treatment.

  8. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes are sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, suggesting the importance of validating the expression stability of reference genes in the samples of interest. The ranking of stability was basically similar between geNorm and NormFinder, suggesting the usefulness of these programs based on different algorithms. The ΔCt method suggested somewhat different results in some groups, such as flower organ or fruit skin, though the overall results were in good correlation with geNorm or NormFinder. The expression of two cold-inducible genes, PpCBF2 and PpCBF4, was quantified using the three most and the three least stable reference genes suggested by geNorm. Although the normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of gene expression stability by the ΔCt method, followed by evaluation of a limited number of superior candidates by geNorm or NormFinder, will be a practical way of identifying suitable reference genes.
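
    The recommended normalization reduces to dividing the target's relative quantity by the geometric mean of the reference genes' relative quantities. A minimal sketch assuming a doubling efficiency of 2 and invented Ct values:

```python
import numpy as np

ct_target = np.array([24.1, 22.8])        # e.g. PpCBF2: control, cold-treated
ct_refs = np.array([[20.2, 21.5, 19.8],   # three reference genes, control
                    [20.4, 21.6, 19.9]])  # same genes, cold-treated

rq_target = 2.0 ** (-ct_target)                               # relative quantity
norm_factor = np.prod(2.0 ** (-ct_refs), axis=1) ** (1 / 3)   # geometric mean
normalized = rq_target / norm_factor
print(f"fold change (treated/control) = {normalized[1] / normalized[0]:.2f}")
```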

  9. Quantitative determination of the oxidation state of iron in biotite using x-ray photoelectron spectroscopy: II. In situ analyses

    SciTech Connect

    Raeburn, S.P. |; Ilton, E.S.; Veblen, D.R.

    1997-11-01

    X-ray photoelectron spectroscopy (XPS) was used to determine Fe(III)/ΣFe in individual biotite crystals in thin sections of ten metapelites and one syenite. The in situ XPS analyses of Fe(III)/ΣFe in biotite crystals in the metapelites were compared with published Fe(III)/ΣFe values determined by Mössbauer spectroscopy (MS) for mineral separates from the same hand samples. The difference between Fe(III)/ΣFe determined by the two techniques was greatest for samples with the lowest Fe(III)/ΣFe (by MS). For eight metamorphic biotites with Fe(III)/ΣFe = 9-27%, comparison of the two techniques yielded a linear correlation of r = 0.94 and a statistically acceptable fit of [Fe(III)/ΣFe]XPS = [Fe(III)/ΣFe]MS. The difference between Fe(III)/ΣFe determined by the two techniques was greater for two samples with Fe(III)/ΣFe ≤ 6% (by MS). For biotite in the syenite sample, the Fe(III)/ΣFe values determined by in situ XPS and by bulk wet chemistry/electron probe microanalysis were similar. This contribution demonstrates that XPS can be used to analyze bulk Fe(III)/ΣFe in minerals in thin sections when appropriate precautions are taken to avoid oxidation of the near-surface during preparation of samples. 25 refs., 3 figs., 4 tabs.

  10. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. First, high-performance size exclusion chromatography (HPSEC) was utilized to separate the natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; the average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
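
    The universal-dn/dc trick works because the RID response is proportional to mass concentration times dn/dc, so one instrument calibration serves every polysaccharide. A schematic sketch with invented constants:

```python
# RID area ~ k_inst * concentration * dn/dc, so concentration follows once
# the instrument constant is calibrated. All numbers are invented.
k_inst = 2.5e-4     # RID area per (mg/mL * mL/g); from instrument calibration
dn_dc = 0.146       # mL/g, a near-universal value for polysaccharides in water

peak_area = 1.10e-5                 # integrated RID response of one fraction
conc = peak_area / (k_inst * dn_dc)
print(f"fraction concentration ~ {conc:.2f} mg/mL")
```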

  12. Quantitative analyses of the composition and abundance of ammonia-oxidizing archaea and ammonia-oxidizing bacteria in eight full-scale biological wastewater treatment plants.

    PubMed

    Gao, Jing-Feng; Luo, Xin; Wu, Gui-Xia; Li, Ting; Peng, Yong-Zhen

    2013-06-01

    This study investigated the diversity and abundance of AOA and AOB amoA genes in eight full-scale wastewater treatment plants (WWTPs). Although the process principles and system operations of the eight WWTPs differed, quantitative real-time PCR measurements showed that AOB amoA genes outnumbered AOA amoA genes, with the ratio varying from 2.56 to 2.41×10³, and ammonia may be partially oxidized by AOA. Phylogenetic analyses based on cloning and sequencing showed that the Nitrososphaera cluster was the most dominant AOA group and might be distributed worldwide, whereas the Nitrosopumilus cluster was rare. Statistical analysis indicated that there might be versatile AOA ecotypes and that some AOA might not be obligately autotrophic. The Nitrosomonas europaea cluster and Nitrosomonas oligotropha cluster were the two most dominant AOB groups, and AOB species showed higher diversity than AOA species.

  13. Till formation under a soft-bedded palaeo-ice stream of the Scandinavian Ice Sheet, constrained using qualitative and quantitative microstructural analyses

    NASA Astrophysics Data System (ADS)

    Narloch, Włodzimierz; Piotrowski, Jan A.; Wysota, Wojciech; Tylmann, Karol

    2015-08-01

    This study combines micro- and macroscale studies, laboratory experiments and quantitative analyses to decipher processes of till formation under a palaeo-ice stream and the nature of subglacial sediment deformation. Till micromorphology (grain lineations, grain stacks, turbate structures, crushed grains, intraclasts and domains), grain-size and till fabric data are used to investigate a basal till generated by the Vistula Ice Stream of the Scandinavian Ice Sheet during the last glaciation in north-central Poland. A comparison of microstructures from the in situ basal till and laboratory-sheared till experiments shows statistical relationships between the number of grain lineations and grain stacks, and between the number of grain lineations and turbate structures. Microstructures in the in situ till document both brittle and ductile styles of deformation, possibly due to fluctuating basal water pressures beneath the ice stream. No systematic vertical or lateral trends are detected in the parameters investigated in the in situ till, which suggests a subglacial mosaic of relatively stable and unstable areas. This situation can be explained by an unscaled space-transgressive model of subglacial till formation whereby, at any given point in time, different processes operated in different places under the ice sheet, possibly related to the distance from the ice margin and the water pressure at the ice-bed interface. A new quantitative measure reflecting the relationship between the number of grain lineations and grain stacks may be helpful in discriminating between pervasive and non-pervasive deformation and in constraining the degree of stress heterogeneity within a deformed bed. Independent strain magnitude estimations revealed by a quantitative analysis of micro- and macro-particle data show low cumulative strain in the ice-stream till, on the order of 10-10².

  14. Combined metabonomic and quantitative real-time PCR analyses reveal systems metabolic changes of Fusarium graminearum induced by Tri5 gene deletion.

    PubMed

    Chen, Fangfang; Zhang, Jingtao; Song, Xiushi; Yang, Jian; Li, Heping; Tang, Huiru; Liao, Yu-Cai

    2011-05-01

    Fusarium graminearum (FG) is a serious plant pathogen causing huge losses in global production of wheat and other cereals. The Tri5-gene-encoded trichodiene synthase is the first key enzyme in the biosynthesis of trichothecene mycotoxins in FG. To further our understanding of FG metabolism, which is essential for developing novel strategies for controlling FG, we conducted a comprehensive investigation of the metabolic changes caused by Tri5 deletion by comparing the metabolic differences between the wild-type FG5035 and an FG strain, Tri5(-), with Tri5 deleted. NMR methods identified more than 50 assigned fungal metabolites. Combined metabonomic and quantitative RT-PCR (qRT-PCR) analyses revealed that Tri5 deletion caused significant and comprehensive metabolic changes in FG beyond mycotoxin biosynthesis. These changes involved both carbon and nitrogen metabolism, including alterations in the GABA shunt, the TCA cycle, the shikimate pathway, and the metabolism of lipids, amino acids, inositol, choline, pyrimidine, and purine. The hexose transporter was also affected. These findings show that Tri5 gene deletion induces widespread changes in FG primary metabolism and demonstrate that the combination of NMR-based metabonomics and qRT-PCR analyses is a useful way to understand the systems metabolic changes resulting from a single specific gene knockout in a eukaryotic genome, and thus Tri5 gene functions. PMID:21413710

  15. Quantitative analysis

    PubMed Central

    Nevin, John A.

    1984-01-01

    Quantitative analysis permits the isolation of invariant relations in the study of behavior. The parameters of these relations can serve as higher-order dependent variables in more extensive analyses. These points are illustrated by reference to quantitative descriptions of performance maintained by concurrent schedules, multiple schedules, and signal-detection procedures. Such quantitative descriptions of empirical data may be derived from mathematical theories, which in turn can lead to novel empirical analyses so long as their terms refer to behavioral and environmental events. Thus, quantitative analysis is an integral aspect of the experimental analysis of behavior. PMID:16812400
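
    One classic example of such an invariant relation is the generalized matching law for concurrent schedules, log(B1/B2) = a·log(R1/R2) + log b, whose parameters a (sensitivity) and b (bias) are exactly the kind of higher-order dependent variables meant here. A sketch fitting it to invented response and reinforcement ratios:

```python
import numpy as np

r_ratio = np.array([0.25, 0.5, 1.0, 2.0, 4.0])      # reinforcement ratios R1/R2
b_ratio = np.array([0.33, 0.58, 1.05, 1.90, 3.20])  # response ratios B1/B2

a, log_b = np.polyfit(np.log10(r_ratio), np.log10(b_ratio), 1)
print(f"sensitivity a = {a:.2f}, bias log b = {log_b:.2f}")
```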

  16. Quantitative x-ray diffraction analyses of samples used for sorption studies by the Isotope and Nuclear Chemistry Division, Los Alamos National Laboratory

    SciTech Connect

    Chipera, S.J.; Bish, D.L.

    1989-09-01

    Yucca Mountain, Nevada, is currently being investigated to determine its suitability to host our nation's first geologic high-level nuclear waste repository. As part of an effort to determine how radionuclides will interact with rocks at Yucca Mountain, the Isotope and Nuclear Chemistry (INC) Division of Los Alamos National Laboratory has conducted numerous batch sorption experiments using core samples from Yucca Mountain. To better understand the interaction between the rocks and radionuclides, we have analyzed the samples used by INC with quantitative x-ray diffraction methods. Our analytical methods accurately determine the presence or absence of major phases, but we have not identified phases present below ~1 wt %. These results should aid in understanding and predicting the potential interactions between radionuclides and the rocks at Yucca Mountain, although the mineralogic complexity of the samples and the lack of information on trace phases suggest that pure mineral studies may be necessary for a more complete understanding. 12 refs., 1 fig., 1 tab.

  17. LC-MS/MS method development and validation for quantitative analyses of 2-aminothiazoline-4-carboxylic acid--a new cyanide exposure marker in post mortem blood.

    PubMed

    Giebułtowicz, Joanna; Rużycka, Monika; Fudalej, Marcin; Krajewski, Paweł; Wroczyński, Piotr

    2016-04-01

    2-Aminothiazoline-4-carboxylic acid (ATCA) is a hydrogen cyanide metabolite that has been found to be a reliable biomarker of cyanide poisoning because of its long-term stability in biological material. Several methods of ATCA determination exist; however, they are restricted to extraction on mixed-mode cation exchange sorbents. To date, there has been no reliable method of ATCA determination in whole blood, the most frequently used material in forensic analysis. This novel method for ATCA determination in post mortem specimens includes protein precipitation, derivatization of interfering compounds, and their subsequent extraction with ethyl acetate. ATCA was quantitatively analyzed via high performance liquid chromatography-tandem mass spectrometry with positive electrospray ionization detection using a hydrophilic interaction liquid chromatography column. The method satisfied all validation criteria and was tested on real samples with satisfactory results. This analytical approach has therefore been proven to be a tool for measuring endogenous levels of ATCA in post mortem specimens. To conclude, a novel, accurate and sensitive method of ATCA determination in post mortem blood was developed, opening new possibilities in the field of forensic science.

  18. Haplotype and quantitative transcript analyses of Portuguese breast/ovarian cancer families with the BRCA1 R71G founder mutation of Galician origin.

    PubMed

    Santos, Catarina; Peixoto, Ana; Rocha, Patrícia; Vega, Ana; Soares, Maria José; Cerveira, Nuno; Bizarro, Susana; Pinheiro, Manuela; Pereira, Deolinda; Rodrigues, Helena; Castro, Fernando; Henrique, Rui; Teixeira, Manuel R

    2009-01-01

    We investigated the functional effect of the missense variant c.211A>G (R71G) localized at position -2 of the exon 5 donor splice site in the BRCA1 gene and evaluated whether Portuguese and Galician families with this mutation share a common ancestry. Three unrelated Portuguese breast/ovarian cancer families carrying this variant were studied through qualitative and quantitative transcript analyses. We also evaluated the presence of loss of heterozygosity and the histopathologic characteristics of the carcinomas in those families. Informative families (two from Portugal and one from Galicia) were genotyped for polymorphic microsatellite markers flanking BRCA1 to reconstruct haplotypes. Qualitative RNA analysis revealed the presence of two alternative transcripts both in carriers of the BRCA1 R71G variant and in controls. Semi-quantitative fragment analysis and real-time RT-PCR showed a significant increase of the transcript with an out-of-frame deletion of the last 22nt of exon 5 (BRCA1-Delta22ntex5) and a decrease of the full-length transcript (BRCA1-ex5FL) in patients carrying the R71G mutation as compared to controls, whereas no significant differences were found for the transcript with in-frame skipping of exon 5 (BRCA1-Deltaex5). One haplotype was found to segregate in the two informative Portuguese families and in the Galician family. We demonstrate that disruption of alternative transcript ratios is the mechanism causing hereditary breast/ovarian cancer associated with the BRCA1 R71G mutation. Furthermore, our findings indicate a common ancestry of the Portuguese and Galician families sharing this mutation. PMID:19123044

  19. Qualitative and quantitative drug residue analyses: Florfenicol in white-tailed deer (Odocoileus virginianus) and supermarket meat by liquid chromatography tandem-mass spectrometry.

    PubMed

    Anderson, Shanoy C; Subbiah, Seenivasan; Gentles, Angella; Austin, Galen; Stonum, Paul; Brooks, Tiffanie A; Brooks, Chance; Smith, Ernest E

    2016-10-15

    A method for confirmation and detection of florfenicol amine residues in white-tailed deer tissues was developed and validated in our laboratory. Tissue samples were extracted with ethyl acetate and cleaned up on sorbent (Chem-Elut) cartridges. Liquid chromatography (LC) separation was achieved on a Zorbax Eclipse Plus C18 column with gradient elution using a mobile phase composed of ammonium acetate in water and methanol at a flow rate of 300 μL/min. Qualitative and quantitative analyses were carried out using liquid chromatography with heated electrospray ionization (HESI) and atmospheric pressure chemical ionization (APCI) tandem mass spectrometry in multiple reaction monitoring (MRM) mode. The limits of detection (LODs) for the HESI and APCI probes were 1.8 ng/g and 1.4 ng/g, respectively; the limits of quantitation (LOQs) were 5.8 ng/g and 3.4 ng/g, respectively. Mean recovery values ranged from 79% to 111% for APCI and 30% to 60% for HESI. The validated method was used to determine florfenicol tissue residue concentrations in white-tailed deer 10 days after exposure. Florfenicol tissue residue concentrations ranged from 0.4 to 0.6 μg/g in liver and 0.02 to 0.05 μg/g in muscle, with traces in blood samples. The concentrations found in the tested edible tissues were lower than the maximum residue limit (MRL) values established by the Food and Drug Administration (FDA) for bovine tissues. In summary, the optimized procedures exploiting the sensitivity of the HESI and APCI probes for determining florfenicol in white-tailed deer tissue are the central outcome of this study, and the method was applied to the evaluation of drug residue levels in supermarket samples as a proof of principle. PMID:27529828
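
    A residue screen of this kind reduces to comparing measured concentrations against tissue-specific MRLs. The sketch below illustrates the arithmetic only: the MRL values are placeholders (the applicable FDA tolerances should be consulted), while the measured concentrations are the upper-end 10-day depletion values reported above.

    ```python
    # Placeholder MRLs in ug/g; check the applicable FDA tolerances before real use.
    MRL_UG_PER_G = {"liver": 3.7, "muscle": 0.3}

    # Upper-end 10-day residue concentrations reported above (ug/g).
    measured = {"liver": 0.6, "muscle": 0.05}

    for tissue, conc in measured.items():
        verdict = "below" if conc <= MRL_UG_PER_G[tissue] else "above"
        print(f"{tissue}: {conc} ug/g is {verdict} the MRL")
    ```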

  1. Quantitative in vivo Analyses Reveal Calcium-dependent Phosphorylation Sites and Identify a Novel Component of the Toxoplasma Invasion Motor Complex

    PubMed Central

    Nebl, Thomas; Prieto, Judith Helena; Kapp, Eugene; Smith, Brian J.; Williams, Melanie J.; Yates, John R.; Cowman, Alan F.; Tonkin, Christopher J.

    2011-01-01

    Apicomplexan parasites depend on the invasion of host cells for survival and proliferation. Calcium-dependent signaling pathways appear to be essential for micronemal release and gliding motility, yet the targets of activated kinases remain largely unknown. We have characterized calcium-dependent phosphorylation events during Toxoplasma host cell invasion. Stimulation of live tachyzoites with Ca2+-mobilizing drugs leads to phosphorylation of numerous parasite proteins, as shown by differential 2-DE display of 32P-labeled protein extracts. Multi-dimensional Protein Identification Technology (MudPIT) identified ∼546 phosphorylation sites on over 300 Toxoplasma proteins, including 10 sites on the actomyosin invasion motor. Using stable isotope labeling by amino acids in cell culture (SILAC)-based quantitative LC-MS/MS analyses, we monitored changes in the abundance and phosphorylation of the invasion motor complex and defined Ca2+-dependent phosphorylation patterns on three of its components - GAP45, MLC1 and MyoA. Furthermore, calcium-dependent phosphorylation of six residues across GAP45, MLC1 and MyoA is correlated with invasion motor activity. By analyzing proteins that appear to associate more strongly with the invasion motor upon calcium stimulation we have also identified a novel 15-kDa Calmodulin-like protein that likely represents the MyoA Essential Light Chain of the Toxoplasma invasion motor. This suggests that invasion motor activity could be regulated not only by phosphorylation but also by the direct binding of calcium ions to this new component. PMID:21980283

  2. Quantitative Analyses of Postmortem Heat Shock Protein mRNA Profiles in the Occipital Lobes of Human Cerebral Cortices: Implications in Cause of Death

    PubMed Central

    Chung, Ukhee; Seo, Joong-Seok; Kim, Yu-Hoon; Son, Gi Hoon; Hwang, Juck-Joon

    2012-01-01

    Quantitative RNA analyses of autopsy materials to diagnose the cause and mechanism of death are challenging tasks in the field of forensic molecular pathology. Alterations in mRNA profiles can be induced by cellular stress responses during supravital reactions as well as by lethal insults at the time of death. Here, we demonstrate that several gene transcripts encoding heat shock proteins (HSPs), a gene family primarily responsible for cellular stress responses, can be differentially expressed in the occipital region of postmortem human cerebral cortices with regard to the cause of death. HSPA2 mRNA levels were higher in subjects who died due to mechanical asphyxiation (ASP), compared with those who died by traumatic injury (TI). By contrast, HSPA7 and A13 gene transcripts were much higher in the TI group than in the ASP and sudden cardiac death (SCD) groups. More importantly, relative abundances between such HSP mRNA species exhibit a stronger correlation to, and thus provide more discriminative information on, the death process than does routine normalization to a housekeeping gene. Therefore, the present study proposes alterations in HSP mRNA composition in the occipital lobe as potential forensic biological markers, which may implicate the cause and process of death. PMID:23135635
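
    The contrast between the two normalizations discussed above can be made concrete with the usual 2^-ΔCt transform of qPCR cycle thresholds. The sketch below is illustrative only: the Ct values are hypothetical and ~100% PCR efficiency is assumed.

    ```python
    def rel_abundance(ct_target: float, ct_reference: float) -> float:
        """Relative abundance of target vs reference, assuming ~100% PCR efficiency."""
        return 2.0 ** (ct_reference - ct_target)

    ct = {"HSPA2": 24.1, "HSPA7": 27.6, "GAPDH": 19.8}   # hypothetical Ct values

    # (a) routine normalization to a housekeeping gene:
    hspa2_vs_gapdh = rel_abundance(ct["HSPA2"], ct["GAPDH"])
    # (b) ratio between two HSP species, the more discriminative measure here:
    hspa2_vs_hspa7 = rel_abundance(ct["HSPA2"], ct["HSPA7"])
    print(hspa2_vs_gapdh, hspa2_vs_hspa7)
    ```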

  3. Lichens biomonitoring as feasible methodology to assess air pollution in natural ecosystems: combined study of quantitative PAHs analyses and lichen biodiversity in the Pyrenees Mountains.

    PubMed

    Blasco, María; Domeño, Celia; Nerín, Cristina

    2008-06-01

    The air quality in the Aragón valley, in the central Pyrenees, has been assessed by evaluation of lichen biodiversity and mapped through the Index of Air Purity (IAP), based on observations of the presence and abundance of eight lichen species with different sensitivities to air pollution. The IAP values obtained have been compared with quantitative analytical measurements of 16 PAHs in the lichen Evernia prunastri, because this species was associated with a wide range of traffic exposure and levels of urbanization. Analyses of PAHs were carried out by the DSASE method followed by an SPE clean-up step and GC-MS analysis. The concentration of total PAHs found in lichen samples from the Aragón valley ranged from 692 to 6420 ng g⁻¹, and the PAH profile showed a predominance of compounds with three aromatic rings. The influence of road traffic in the area is evident: lichens collected at traffic-exposed sites showed above-median total PAH concentrations (>1092 ng g⁻¹), combustion-PAH percentages (>50%), and equivalent toxicities (>169). The combination of both methods suggests IAP as a general method for evaluating PAH-related air pollution, because it correlates with the content of combustion PAHs, and poor lichen biodiversity can be partly explained by the air pollution caused by specific PAHs.
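
    One common formulation of the IAP computes, over the species observed at a site, IAP = (1/10) Σ Q_i f_i, with Q_i an ecological (toxitolerance) score for species i and f_i its frequency-coverage score. The sketch below applies that formula to placeholder survey data; the species scores are not taken from the study.

    ```python
    # Placeholder survey: each entry is one lichen species observed at a site,
    # with its ecological score Q and frequency-coverage score f.
    surveys = [
        {"species": "Evernia prunastri", "Q": 15, "f": 3},
        {"species": "Xanthoria parietina", "Q": 8, "f": 5},
    ]

    iap = sum(s["Q"] * s["f"] for s in surveys) / 10.0
    print(f"IAP = {iap:.1f}")
    ```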

  4. NEW TARGET AND CONTROL ASSAYS FOR QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS OF ENTEROCOCCI IN WATER

    EPA Science Inventory

    Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...

  5. Quantitative pharmacological analyses of the interaction between flumazenil and midazolam in monkeys discriminating midazolam: Determination of the functional half-life of flumazenil.

    PubMed

    Zanettini, Claudio; France, Charles P; Gerak, Lisa R

    2014-01-15

    The duration of action of a drug is commonly estimated using plasma concentration, which is not always practical to obtain and is not always an accurate estimate of functional half-life. For example, flumazenil is used clinically to reverse the effects of benzodiazepines like midazolam; however, its elimination can be altered by other drugs, including some benzodiazepines, thereby altering its half-life. This study used Schild analyses to characterize antagonism of midazolam by flumazenil and determine the functional half-life of flumazenil. Four monkeys discriminated 0.178 mg/kg midazolam while responding under a fixed-ratio 10 schedule of stimulus-shock termination; flumazenil was given at various times before determination of a midazolam dose-effect curve. There was a time-related decrease in the magnitude of shift of the midazolam dose-effect curve as the interval between flumazenil and midazolam increased. The potency of flumazenil, estimated by apparent pA2 values (95% CI), was 7.30 (7.12, 7.49), 7.17 (7.03, 7.31), 6.91 (6.72, 7.10) and 6.80 (6.67, 6.92) at 15, 30, 60 and 120 min after flumazenil administration, respectively. The functional half-life of flumazenil, derived from these potency estimates, was 57 ± 13 min. Thus, increasing the interval between flumazenil and midazolam causes orderly decreases in flumazenil potency; however, across a broad range of conditions, the qualitative nature of the interaction does not change, as indicated by slopes of Schild plots at all time points that are not different from unity. Differences in potency of flumazenil are therefore due to elimination of flumazenil and not to pharmacodynamic changes over time.
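
    The functional half-life can be recovered from the reported pA2 values by assuming first-order elimination, under which apparent pA2 (a log10 potency) declines linearly with time at slope -k/ln(10). The sketch below fits that line to the four values above and returns roughly 65 min, consistent with the reported 57 ± 13 min; it is an illustrative reconstruction, not the authors' exact procedure.

    ```python
    import numpy as np

    # Apparent pA2 values reported at each flumazenil pretreatment time (min).
    t = np.array([15.0, 30.0, 60.0, 120.0])
    pA2 = np.array([7.30, 7.17, 6.91, 6.80])

    # With first-order elimination, C(t) = C0*exp(-k*t), so apparent pA2
    # declines linearly in time with slope -k/ln(10).
    slope, intercept = np.polyfit(t, pA2, 1)
    k = -slope * np.log(10.0)                 # elimination rate constant, 1/min
    print(f"functional half-life ~ {np.log(2.0) / k:.0f} min")
    ```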

  6. Quantitative Genetic Analyses of Male Color Pattern and Female Mate Choice in a Pair of Cichlid Fishes of Lake Malawi, East Africa.

    PubMed

    Ding, Baoqing; Daugherty, Daniel W; Husemann, Martin; Chen, Ming; Howe, Aimee E; Danley, Patrick D

    2014-01-01

    The traits involved in sexual selection, such as male secondary sexual characteristics and female mate choice, often co-evolve which can promote population differentiation. However, the genetic architecture of these phenotypes can influence their evolvability and thereby affect the divergence of species. The extraordinary diversity of East African cichlid fishes is often attributed to strong sexual selection and thus this system provides an excellent model to test predictions regarding the genetic architecture of sexually selected traits that contribute to reproductive isolation. In particular, theory predicts that rapid speciation is facilitated when male sexual traits and female mating preferences are controlled by a limited number of linked genes. However, few studies have examined the genetic basis of male secondary sexual traits and female mating preferences in cichlids and none have investigated the genetic architecture of both jointly. In this study, we artificially hybridized a pair of behaviorally isolated cichlid fishes from Lake Malawi and quantified both melanistic color pattern and female mate choice. We investigated the genetic architecture of both phenotypes using quantitative genetic analyses. Our results suggest that 1) many non-additively acting genetic factors influence melanistic color patterns, 2) female mate choice may be controlled by a minimum of 1-2 non-additive genetic factors, and 3) F2 female mate choice is not influenced by male courting effort. Furthermore, a joint analysis of color pattern and female mate choice indicates that the genes underlying these two traits are unlikely to be physically linked. These results suggest that reproductive isolation may evolve rapidly owing to the few genetic factors underlying female mate choice. Hence, female mate choice likely played an important role in the unparalleled speciation of East African cichlid fish. PMID:25494046

  7. Quantitative Genetic Analyses of Male Color Pattern and Female Mate Choice in a Pair of Cichlid Fishes of Lake Malawi, East Africa

    PubMed Central

    Ding, Baoqing; Daugherty, Daniel W.; Husemann, Martin; Chen, Ming; Howe, Aimee E.; Danley, Patrick D.

    2014-01-01

    The traits involved in sexual selection, such as male secondary sexual characteristics and female mate choice, often co-evolve which can promote population differentiation. However, the genetic architecture of these phenotypes can influence their evolvability and thereby affect the divergence of species. The extraordinary diversity of East African cichlid fishes is often attributed to strong sexual selection and thus this system provides an excellent model to test predictions regarding the genetic architecture of sexually selected traits that contribute to reproductive isolation. In particular, theory predicts that rapid speciation is facilitated when male sexual traits and female mating preferences are controlled by a limited number of linked genes. However, few studies have examined the genetic basis of male secondary sexual traits and female mating preferences in cichlids and none have investigated the genetic architecture of both jointly. In this study, we artificially hybridized a pair of behaviorally isolated cichlid fishes from Lake Malawi and quantified both melanistic color pattern and female mate choice. We investigated the genetic architecture of both phenotypes using quantitative genetic analyses. Our results suggest that 1) many non-additively acting genetic factors influence melanistic color patterns, 2) female mate choice may be controlled by a minimum of 1–2 non-additive genetic factors, and 3) F2 female mate choice is not influenced by male courting effort. Furthermore, a joint analysis of color pattern and female mate choice indicates that the genes underlying these two traits are unlikely to be physically linked. These results suggest that reproductive isolation may evolve rapidly owing to the few genetic factors underlying female mate choice. Hence, female mate choice likely played an important role in the unparalleled speciation of East African cichlid fish. PMID:25494046

  8. Quantitative Analyses about Market- and Prevalence-Based Needs for Adapted Physical Education Teachers in the Public Schools in the United States

    ERIC Educational Resources Information Center

    Zhang, Jiabei

    2011-01-01

    The purpose of this study was to analyze quantitative needs for more adapted physical education (APE) teachers based on both market- and prevalence-based models. The market-based need for more APE teachers was examined based on APE teacher positions funded, while the prevalence-based need for additional APE teachers was analyzed based on students…

  9. Evaluation of a real-time quantitative PCR method with propidium monoazide treatment for analyses of viable fecal indicator bacteria in wastewater samples

    EPA Science Inventory

    The U.S. EPA is currently evaluating rapid, real-time quantitative PCR (qPCR) methods for determining recreational water quality based on measurements of fecal indicator bacteria DNA sequences. In order to potentially use qPCR for other Clean Water Act needs, such as updating cri...

  10. Fluorine-18-labeled boronophenylalanine positron emission tomography for oral cancers: Qualitative and quantitative analyses of malignant tumors and normal structures in oral and maxillofacial regions

    PubMed Central

    ARIYOSHI, YASUNORI; SHIMAHARA, MASASHI; KIMURA, YOSHIHIRO; ITO, YUICHI; SHIMAHARA, TAKESHI; MIYATAKE, SHIN-ICHI; KAWABATA, SHINJI

    2011-01-01

    The present study aimed to demonstrate the features of fluorine-18-labeled boronophenylalanine positron emission tomography (18F-BPA-PET) in revealing oral cancer, as well as normal structures in the oral and maxillofacial regions. We analyzed 18F-BPA-PET findings from 8 patients with histologically confirmed recurrent and/or advanced oral cancer scheduled for boron neutron capture therapy. The capacity of 18F-BPA-PET to delineate tumors and normal structures was assessed qualitatively and quantitatively. Tumors were easily identified as high-uptake areas in all cases. Although the eyes, which appeared as low-uptake areas, and the tongue musculature were readily identified, major vessels were not noted in any of the cases. Areas corresponding to the surface of the dorsum of the tongue down to the middle pharynx appeared as high-uptake areas in all of the cases. Quantitatively, the tumor was the highest-uptake area in 6 of the 8 cases, while the dorsum of the tongue had the highest uptake in the remaining 2 cases. 18F-BPA-PET is useful in demonstrating the presence of a tumor; it is, however, crucial to note the presence of a high-uptake area corresponding to the dorsum of the tongue when diagnosing a tumor using this technique. PMID:22866098

  11. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    Gate-insulator surface treatments are often applied to improve the performance of organic thin-film transistors (TFTs). However, the origin of the resulting mobility increase has not been well understood, because the factors limiting mobility have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of crystal-growth domains, crystallite size, HOMO-band-edge fluctuation, and the carrier transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. With these surface treatments, the size of crystal-growth domains increases but both crystallite size and HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility from these surface treatments is due only to the increase in the size of crystal-growth domains, or the decrease in the number of energy barriers at domain boundaries in the TFT channel.

  12. Quantitative analyses of the effect of silk fibroin/nano-hydroxyapatite composites on osteogenic differentiation of MG-63 human osteosarcoma cells.

    PubMed

    Lin, Linxue; Hao, Runsong; Xiong, Wei; Zhong, Jian

    2015-05-01

    Silk fibroin (SF)/nano-hydroxyapatite (n-HA) composites are potential biomaterials for bone defect repair. To date, biological evaluations of SF/n-HA composites have concentrated primarily on biocompatibility at the cell level (e.g., cell viability and proliferation) and at the tissue level (e.g., material absorption and new bone formation). In this work, SF/n-HA composites were fabricated using a simplified coprecipitation method and deposited onto Ti alloy substrates. The cell adhesion ability of the SF/n-HA composites was observed by SEM, and their cell proliferation ability was determined by MTT assay. The ALP activity, BGP content, and Col I content of MG-63 human osteosarcoma cells on the SF/n-HA composites were quantitatively analyzed, with HA nanocrystals as controls. These experiments showed that SF/n-HA composites had better cell adhesion and osteogenic differentiation abilities than n-HA materials. This work provides quantitative data on the effect of SF/n-HA composites on cell osteogenic differentiation.

  13. Genetic and Molecular Analyses of Natural Variation Indicate CBF2 as a Candidate Gene for Underlying a Freezing Tolerance Quantitative Trait Locus in Arabidopsis

    PubMed Central

    Alonso-Blanco, Carlos; Gomez-Mena, Concepción; Llorente, Francisco; Koornneef, Maarten; Salinas, Julio; Martínez-Zapater, José M.

    2005-01-01

    Natural variation for freezing tolerance is a major component of adaptation and geographic distribution of plant species. However, little is known about the genes and molecular mechanisms that determine its naturally occurring diversity. We have analyzed the intraspecific freezing tolerance variation existent between two geographically distant accessions of Arabidopsis (Arabidopsis thaliana), Cape Verde Islands (Cvi) and Landsberg erecta (Ler). They differed in their freezing tolerance before and after cold acclimation, as well as in the cold acclimation response in relation to photoperiod conditions. Using a quantitative genetic approach, we found that freezing tolerance differences after cold acclimation were determined by seven quantitative trait loci (QTL), named FREEZING TOLERANCE QTL 1 (FTQ1) to FTQ7. FTQ4 was the QTL with the largest effect detected in two photoperiod conditions, while five other FTQ loci behaved as photoperiod dependent. FTQ4 colocated with the tandem repeated genes C-REPEAT BINDING FACTOR 1 (CBF1), CBF2, and CBF3, which encode transcriptional activators involved in the cold acclimation response. The low freezing tolerance of FTQ4-Cvi alleles was associated with a deletion of the promoter region of Cvi CBF2, and with low RNA expression of CBF2 and of several CBF target genes. Genetic complementation of FTQ4-Cvi plants with a CBF2-Ler transgene suggests that such CBF2 allelic variation is the cause of CBF2 misexpression and the molecular basis of FTQ4. PMID:16244146

  14. Detailed Chemical Composition of Condensed Tannins via Quantitative (31)P NMR and HSQC Analyses: Acacia catechu, Schinopsis balansae, and Acacia mearnsii.

    PubMed

    Crestini, Claudia; Lange, Heiko; Bianchetti, Giulia

    2016-09-23

    The chemical composition of Acacia catechu, Schinopsis balansae, and Acacia mearnsii proanthocyanidins has been determined using a novel analytical approach that rests on the concerted use of quantitative (31)P NMR and two-dimensional heteronuclear NMR spectroscopy. This approach has offered significant detailed information regarding the structure and purity of these complex and often elusive proanthocyanidins. More specifically, rings A, B, and C of their flavan-3-ol units show well-defined and resolved absorbance regions in both the quantitative (31)P NMR and HSQC spectra. By integrating each of these regions in the (31)P NMR spectra, it is possible to identify the oxygenation patterns of the flavan-3-ol units. At the same time it is possible to acquire a fingerprint of the proanthocyanidin sample and evaluate its purity via the HSQC information. This analytical approach is suitable for both the purified natural product proanthocyanidins and their commercial analogues. Overall, this effort demonstrates the power of the concerted use of these two NMR techniques for the structural elucidation of natural products containing labile hydroxy protons and a carbon framework that can be traced out via HSQC. PMID:27551744

  15. Applications of Quaternary stratigraphic, soil-geomorphic, and quantitative geomorphic analyses to the evaluation of tectonic activity and landscape evolution in the Upper Coastal Plain, South Carolina

    SciTech Connect

    Hanson, K.L.; Bullard, T.F.; de Wit, M.W.; Stieve, A.L.

    1993-07-01

    Geomorphic analyses combined with mapping of fluvial terraces and upland geomorphic surfaces provide new approaches and data for evaluating the Quaternary activity of post-Cretaceous faults that are recognized in subsurface data at the Savannah River Site in the Upper Coastal Plain of southwestern South Carolina. Analyses of longitudinal stream and terrace profiles, regional slope maps, and drainage basin morphometry indicate long-term uplift and southeast tilt of the site region. Preliminary results of drainage basin characterization suggest an apparent rejuvenation of drainages along the trace of the Pen Branch fault (a Tertiary reactivated reverse fault that initiated as a basin-margin normal fault along the northern boundary of the Triassic Dunbarton Basin). This apparent rejuvenation of drainages may be the result of nontectonic geomorphic processes or local tectonic uplift and tilting within a framework of regional uplift.

  16. Rigour in quantitative research.

    PubMed

    Claydon, Leica Sarah

    2015-07-22

    This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy, and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  17. 3D numerical analyses for the quantitative risk assessment of subsidence and water flood due to the partial collapse of an abandoned gypsum mine.

    NASA Astrophysics Data System (ADS)

    Castellanza, R.; Orlandi, G. M.; di Prisco, C.; Frigerio, G.; Flessati, L.; Fernandez Merodo, J. A.; Agliardi, F.; Grisi, S.; Crosta, G. B.

    2015-09-01

    Since its abandonment in the 1970s, the room-and-pillar mine located in S. Lazzaro di Savena (Bologna, Italy), developed on three levels, has been progressively affected by degradation processes due to water infiltration. The mine lies underneath a residential area, causing significant concern to the local municipality. On the basis of in situ surveys and laboratory and in situ geomechanical tests, several critical scenarios were adopted in the analyses to simulate the progressive collapse of pillars and roofs in the most critical sectors of the mine. A first set of numerical analyses using 3D geotechnical FEM codes was performed to predict the extent of the subsidence area and its interaction with buildings. Second, 3D CFD analyses were used to evaluate the amount of water that could be ejected from the mine and flood the downstream village. The predicted extent of the subsidence area, together with the predicted amount of ejected water, was used to design possible remedial measures.

  18. Quantitative studies and sensory analyses on the influence of cultivar, spatial tissue distribution, and industrial processing on the bitter off-taste of carrots (Daucus carota L.) and carrot products.

    PubMed

    Czepa, Andreas; Hofmann, Thomas

    2004-07-14

    Although various reports pointed to 6-methoxymellein (1) as a key contributor to the bitter taste of carrots, activity-guided fractionation experiments recently gave evidence that it is not this isocoumarin but bisacetylenic oxylipins that contribute mainly to the off-taste. Among these, (Z)-heptadeca-1,9-dien-4,6-diyn-3-ol (2), (Z)-3-acetoxy-heptadeca-1,9-dien-4,6-diyn-8-ol (3), and (Z)-heptadeca-1,9-dien-4,6-diyn-3,8-diol (falcarindiol, 4) have been identified. In the present study, an analytical procedure was developed enabling accurate quantitation of 1-4 in carrots and carrot products. To achieve this, (E)-heptadeca-1,9-dien-4,6-diyn-3,8-diol was synthesized as a suitable internal standard for the quantitative analysis of the bisacetylenes. On the basis of taste activity values, calculated as the ratio of the concentration of a compound to its human sensory threshold, a close relationship between the concentration of 4 and the intensity of the bitter off-taste in carrots, carrot puree, and carrot juice was demonstrated, showing that compound 4 might offer a new analytical measure for an objective evaluation of the quality of carrot products. Quantitative analysis of the intermediate products in industrial carrot processing revealed that removing the peel as well as green parts decreased the concentrations in the final carrot puree by more than 50%.
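
    The taste activity value (TAV) used above is simply the ratio of a compound's concentration to its human sensory threshold. A minimal sketch, with placeholder numbers rather than the paper's data:

    ```python
    # TAV = concentration / human sensory threshold; TAV > 1 suggests the
    # compound contributes to the perceived taste.
    def tav(concentration: float, threshold: float) -> float:
        """Both arguments in the same units, e.g. mg/kg."""
        return concentration / threshold

    print(tav(40.0, 20.0))   # hypothetical bitter compound: TAV = 2.0
    ```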

  19. Evaluation of Drosophila metabolic labeling strategies for in vivo quantitative proteomic analyses with applications to early pupa formation and amino acid starvation.

    PubMed

    Chang, Ying-Che; Tang, Hong-Wen; Liang, Suh-Yuen; Pu, Tsung-Hsien; Meng, Tzu-Ching; Khoo, Kay-Hooi; Chen, Guang-Chao

    2013-05-01

    Although stable isotope labeling by amino acids in cell culture (SILAC)-based quantitative proteomics was first developed as a cell culture technique, stable isotope-labeled amino acids have since been successfully introduced in vivo into select multicellular model organisms by manipulating their diets. An earlier study demonstrated that heavy-lysine-labeled Drosophila melanogaster can be derived by feeding exclusively on a heavy-lysine-labeled yeast diet. In this work, we further evaluated the use of heavy lysine and/or arginine for metabolic labeling of fruit flies, with the aim of determining the quantification accuracy and versatility of each strategy. In vivo conversion of heavy lysine and/or heavy arginine to several nonessential amino acids was observed in labeled flies, leading to distorted isotope patterns and underestimated heavy-to-light ratios. These quantification defects can nonetheless be rectified at the protein level using a normalization function. The only caveat is that such a normalization strategy may not be suitable for every biological application, particularly when modified peptides need to be quantified individually at the peptide level. In such cases, we showed that peptide ratios calculated from the summed intensities of all isotope peaks are less affected by the heavy amino acid conversion and are therefore less sequence-dependent and more reliable. Applying either the single Lys8 or double Lys6/Arg10 metabolic labeling strategy to flies, we quantitatively mapped the proteomic changes during the onset of metamorphosis and upon amino acid deprivation. The expression of a number of proteins regulated by the steroid hormone 20-hydroxyecdysone changed significantly during the larva-pupa transition, while several subunits of the V-ATPase complex and components regulating actomyosin were up-regulated under starvation-induced autophagy conditions.
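
    The summed-isotope-peak ratio mentioned above can be sketched as follows. The isotope-cluster intensities are hypothetical, and real pipelines operate on extracted ion chromatograms rather than a single spectrum.

    ```python
    import numpy as np

    # Hypothetical isotope-cluster intensities for one peptide pair
    # (monoisotopic peak first, then successive isotope peaks).
    light = np.array([1.00e6, 0.52e6, 0.18e6, 0.05e6])
    heavy = np.array([0.90e6, 0.50e6, 0.21e6, 0.07e6])

    # Summing all isotope peaks before taking the ratio dampens distortions
    # of individual peak heights caused by heavy amino acid conversion.
    print("summed ratio:", heavy.sum() / light.sum())
    print("monoisotopic-only ratio:", heavy[0] / light[0])
    ```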

  20. Quantitative research.

    PubMed

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  1. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.
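
    The core of AMT-tag identification is a tolerance match of each observed feature's accurate mass and normalized elution time (NET) against a database. A minimal sketch, with illustrative database entries and tolerances (not those of the study):

    ```python
    # Toy AMT tag database; masses and NETs are illustrative placeholders.
    AMT_DB = [
        {"lipid": "PC(34:1)", "mass": 759.5778, "net": 0.62},
        {"lipid": "PE(36:2)", "mass": 743.5465, "net": 0.58},
    ]

    def match_feature(mass, net, db=AMT_DB, ppm_tol=5.0, net_tol=0.02):
        """Return database lipids whose mass and NET both fall within tolerance."""
        hits = []
        for entry in db:
            ppm_err = abs(mass - entry["mass"]) / entry["mass"] * 1e6
            if ppm_err <= ppm_tol and abs(net - entry["net"]) <= net_tol:
                hits.append(entry["lipid"])
        return hits

    print(match_feature(759.5781, 0.615))   # -> ['PC(34:1)']
    ```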

  2. Quantitative In Vivo Fluorescence Cross-Correlation Analyses Highlight the Importance of Competitive Effects in the Regulation of Protein-Protein Interactions

    PubMed Central

    Sadaie, Wakako; Harada, Yoshie; Matsuda, Michiyuki

    2014-01-01

    Computer-assisted simulation is a promising approach for clarifying complicated signaling networks. However, this approach is currently limited by a deficiency of kinetic parameters determined in living cells. To overcome this problem, we applied fluorescence cross-correlation spectroscopy (FCCS) to measure dissociation constant (Kd) values of signaling molecule complexes in living cells (in vivo Kd). Among the pairs of fluorescent molecules tested, that of monomerized enhanced green fluorescent protein (mEGFP) and HaloTag-tetramethylrhodamine was most suitable for the measurement of in vivo Kd by FCCS. Using this pair, we determined 22 in vivo Kd values of signaling molecule complexes comprising the epidermal growth factor receptor (EGFR)–Ras–extracellular signal-regulated kinase (ERK) mitogen-activated protein (MAP) kinase pathway. With these parameters, we developed a kinetic simulation model of the EGFR-Ras-ERK MAP kinase pathway and uncovered a potential role played by stoichiometry in Shc binding to EGFR during the peak activations of Ras, MEK, and ERK. Intriguingly, most of the in vivo Kd values determined in this study were higher than the in vitro Kd values reported previously, suggesting the significance of competitive binding inside cells. These in vivo Kd values will provide a sound basis for the quantitative understanding of signal transduction. PMID:24958104

  3. Regulation of flavonol content and composition in (Syrah×Pinot Noir) mature grapes: integration of transcriptional profiling and metabolic quantitative trait locus analyses

    PubMed Central

    Malacarne, Giulia; Costantini, Laura; Coller, Emanuela; Battilana, Juri; Velasco, Riccardo; Vrhovsek, Urska; Grando, Maria Stella; Moser, Claudio

    2015-01-01

    Flavonols are a ubiquitous class of flavonoids that accumulate preferentially in flowers and mature berries. Besides their photo-protective function, they play a fundamental role during winemaking, stabilizing the colour by co-pigmentation with anthocyanins and contributing to organoleptic characteristics. Although the general flavonol pathway has been genetically and biochemically elucidated, the genetic control of flavonol content and composition at harvest is still not clear. To this purpose, the grapes of 170 segregating F1 individuals from a 'Syrah' × 'Pinot Noir' population were evaluated at the mature stage for the content of six flavonol aglycons in four seasons. Metabolic data in combination with genetic data enabled the identification of 16 mQTLs (metabolic quantitative trait loci). For the first time, major genetic control of flavonol variation, in particular of tri-hydroxylated flavonols, by the linkage group 2 (LG 2)/MYBA region is demonstrated. Moreover, seven regions specifically associated with the fine control of flavonol biosynthesis are identified. Gene expression profiling of two groups of individuals significantly divergent in skin flavonol content identified a large set of differentially modulated transcripts. Among these, the transcripts coding for MYB and bZIP transcription factors, methyltransferases and glucosyltransferases specific for flavonols, and proteins and factors belonging to the UV-B signalling pathway and co-localizing with the QTL regions are proposed as candidate genes for the fine regulation of flavonol content and composition in mature grapes. PMID:26071529

  4. Quantitative in vivo fluorescence cross-correlation analyses highlight the importance of competitive effects in the regulation of protein-protein interactions.

    PubMed

    Sadaie, Wakako; Harada, Yoshie; Matsuda, Michiyuki; Aoki, Kazuhiro

    2014-09-01

    Computer-assisted simulation is a promising approach for clarifying complicated signaling networks. However, this approach is currently limited by a deficiency of kinetic parameters determined in living cells. To overcome this problem, we applied fluorescence cross-correlation spectroscopy (FCCS) to measure dissociation constant (Kd) values of signaling molecule complexes in living cells (in vivo Kd). Among the pairs of fluorescent molecules tested, that of monomerized enhanced green fluorescent protein (mEGFP) and HaloTag-tetramethylrhodamine was most suitable for the measurement of in vivo Kd by FCCS. Using this pair, we determined 22 in vivo Kd values of signaling molecule complexes comprising the epidermal growth factor receptor (EGFR)-Ras-extracellular signal-regulated kinase (ERK) mitogen-activated protein (MAP) kinase pathway. With these parameters, we developed a kinetic simulation model of the EGFR-Ras-ERK MAP kinase pathway and uncovered a potential role played by stoichiometry in Shc binding to EGFR during the peak activations of Ras, MEK, and ERK. Intriguingly, most of the in vivo Kd values determined in this study were higher than the in vitro Kd values reported previously, suggesting the significance of competitive binding inside cells. These in vivo Kd values will provide a sound basis for the quantitative understanding of signal transduction.
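
    The dissociation-constant arithmetic behind such FCCS measurements is simple mass action once the concentrations of each labelled species and of the double-labelled complex are known from the correlation amplitudes. A sketch with illustrative numbers:

    ```python
    def kd_from_fccs(c_green_total: float, c_red_total: float, c_complex: float) -> float:
        """Kd = [G_free][R_free]/[GR]; all concentrations in the same units (e.g. nM)."""
        g_free = c_green_total - c_complex
        r_free = c_red_total - c_complex
        return g_free * r_free / c_complex

    # Hypothetical focal-volume concentrations (nM) derived from FCCS amplitudes.
    print(kd_from_fccs(100.0, 80.0, 20.0))   # -> 240.0 nM
    ```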

  5. Quantitative analyses of oxidation states for cubic SrMnO3 and orthorhombic SrMnO2.5 with electron energy loss spectroscopy

    PubMed Central

    Kobayashi, S.; Tokuda, Y.; Mizoguchi, T.; Shibata, N.; Sato, Y.; Ikuhara, Y.; Yamamoto, T.

    2010-01-01

    The oxidation state of Mn in cubic SrMnO3 and orthorhombic SrMnO2.5 was investigated by electron energy loss (EEL) spectroscopy. Changes in the oxidation state of Mn produced spectral changes in the O-K edge as well as in the Mn-L2,3 edge EEL spectra. This study demonstrated that the oxidation state of Mn and the amount of oxygen vacancies in cubic SrMnO3 and orthorhombic SrMnO2.5 could be quantified by analyzing the features of the O-K edge spectrum and the Mn L3/L2 ratio in the Mn-L2,3 edge spectrum. Our quantitative analysis showed that the spectral changes in the Mn-L2,3 edge were mainly caused by the oxidation state of Mn, whereas those in the O-K edge could be sensitive both to the oxidation state of Mn and to lattice distortions. PMID:21245943
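
    A white-line ratio measurement of the kind described can be sketched as integrating each Mn L edge over a fixed window after subtracting a continuum estimate. The synthetic spectrum, windows, and constant continuum below are purely illustrative; real analyses use careful background models and calibration against reference compounds.

    ```python
    import numpy as np

    # Synthetic stand-in for a Mn-L2,3 EEL spectrum: two Gaussian white lines
    # (L3 near 641 eV, L2 near 652 eV) on a flat continuum.
    energy = np.linspace(630.0, 660.0, 300)            # eV
    spectrum = (np.exp(-((energy - 641.0) / 1.5) ** 2) * 10.0
                + np.exp(-((energy - 652.0) / 1.5) ** 2) * 4.0
                + 0.5)

    def window_sum(e_lo, e_hi, continuum=0.5):
        """Integrate the continuum-subtracted spectrum over [e_lo, e_hi]."""
        sel = (energy >= e_lo) & (energy <= e_hi)
        return np.trapz(spectrum[sel] - continuum, energy[sel])

    l3_over_l2 = window_sum(637.0, 645.0) / window_sum(648.0, 656.0)
    print(f"Mn L3/L2 ratio ~ {l3_over_l2:.2f}")   # compared against calibration data
    ```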

  6. Comparative quantitative study of Ki-67 antibody staining in 78 B and T cell malignant lymphoma (ML) using two image analyser systems.

    PubMed

    Caulet, S; Lesty, C; Raphael, M; Schoevaert, D; Brousset, P; Binet, J L; Diebold, J; Delsol, G

    1992-06-01

    Total Ki-67-stained area percentage was studied in 32 B and 46 T malignant lymphomas (ML) using two different image analyser systems (TAS, Leitz and SAMBA TM 2005, TITN, respectively). The total Ki-67 area percentage was highly correlated with the number of Ki-67-positive cellular profiles (B-ML, r = 0.93; T-ML, r = 0.88), indicating that area percentage is a reliable alternative to manual cell counting. Image analysis allows quicker measurements, appropriate to large and strictly lymphomatous regions. The cell image processor (SAMBA TM 2005, TITN) linked to a colour video camera was more suitable for immunohistochemical sections and allowed more automated and faster measurements than the texture analyser (TAS, Leitz) linked to a black-and-white camera. The alkaline phosphatase technique with fast red as chromogen was more suitable for the detection of the Ki-67-stained area by thresholding than the peroxidase technique with aminoethylcarbazol or diaminobenzidine as chromogens. Significant differences were found between low- and high-grade B and T ML according to the Kiel classification (mean values +/- SD of 7.7 +/- 3.8% and 16.6 +/- 6.2% in B-ML, and of 10.2 +/- 7.9% and 25.6 +/- 16.3% in T-ML, respectively). In follicular B-ML, considering follicular areas only, values were comparable to high-grade ML; angioimmunoblastic-lymphadenopathy-like (AILD-type) T-ML, belonging to low-grade ML, showed values similar to pleomorphic T-ML with medium and/or large cells, belonging to high-grade ML. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1409077
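
    Stained-area percentage by thresholding reduces to the fraction of pixels above an intensity cutoff within the analysed region. A toy sketch follows (synthetic image, arbitrary threshold); the original systems used calibrated colour thresholds on real sections.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((512, 512))   # stand-in for a stained-section image
    threshold = 0.9                  # illustrative staining-intensity threshold

    # Stained-area percentage = fraction of above-threshold pixels in the region.
    stained_fraction = (image > threshold).mean()
    print(f"Ki-67 stained area: {100 * stained_fraction:.1f}%")
    ```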

  7. The quantitative spectrum of inositol phosphate metabolites in avian erythrocytes, analysed by proton n.m.r. and h.p.l.c. with direct isomer detection.

    PubMed Central

    Radenberg, T; Scholz, P; Bergmann, G; Mayr, G W

    1989-01-01

    The spectrum of inositol phosphate isomers present in avian erythrocytes was investigated in qualitative and quantitative terms. Inositol phosphates were isolated in micromolar quantities from turkey blood by anion-exchange chromatography on Q-Sepharose and subjected to proton n.m.r. and h.p.l.c. analysis. We employed a h.p.l.c. technique with a novel, recently described complexometric post-column detection system, called 'metal-dye detection' [Mayr (1988) Biochem. J. 254, 585-591], which enabled us to identify non-radioactively labelled inositol phosphate isomers and to determine their masses. The results indicate that avian erythrocytes contain the same inositol phosphate isomers as mammalian cells. Denoted by the 'lowest-locant rule' [NC-IUB Recommendations (1988) Biochem. J. 258, 1-2] irrespective of true enantiomerism, these are Ins(1,4)P2, Ins(1,6)P2, Ins(1,3,4)P3, Ins(1,4,5)P3, Ins(1,3,4,5)P4, Ins(1,3,4,6)P4, Ins(1,4,5,6)P4, Ins(1,3,4,5,6)P5, and InsP6. Furthermore, we identified two inositol trisphosphate isomers hitherto not described for mammalian cells, namely Ins(1,5,6)P3 and Ins(2,4,5)P3. The possible position of these two isomers in inositol phosphate metabolism and implications resulting from absolute abundances of inositol phosphates are discussed. PMID:2604720

  8. TopCAT and PySESA: Open-source software tools for point cloud decimation, roughness analyses, and quantitative description of terrestrial surfaces

    NASA Astrophysics Data System (ADS)

    Hensleigh, J.; Buscombe, D.; Wheaton, J. M.; Brasington, J.; Welcker, C. W.; Anderson, K.

    2015-12-01

    The increasing use of high-resolution topography (HRT) constructed from point clouds obtained from technologies such as LiDAR, SoNAR, SAR, SfM and a variety of range-imaging techniques has created a demand for custom analytical tools and software for point cloud decimation (data thinning and gridding) and spatially explicit statistical analysis of terrestrial surfaces. We present a number of analytical and computational tools designed to quantify surface roughness and texture directly from point clouds in a variety of ways (using spatial- and frequency-domain statistics). TopCAT (Topographic Point Cloud Analysis Toolkit; Brasington et al., 2012) and PySESA (Python program for Spatially Explicit Spectral Analysis) both work by applying a small moving window to (x,y,z) data to calculate a suite of (spatial and spectral domain) statistics, which are then spatially referenced on a regular (x,y) grid at a user-defined resolution. Collectively, these tools facilitate quantitative description of surfaces and may allow, for example, fully automated texture characterization and segmentation, roughness and grain size calculation, and feature detection and classification, on very large point clouds with great computational efficiency. Using tools such as these, it may be possible to detect geomorphic change in surfaces which have undergone minimal elevation difference, for example deflation surfaces which have coarsened but undergone no net elevation change, or surfaces which have eroded and accreted, leaving behind a different textural surface expression than before. The functionalities of the two toolboxes are illustrated with example high-resolution bathymetric point cloud data collected with a multibeam echosounder and topographic data collected with LiDAR.
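
    The windowed approach common to such tools can be sketched as sliding a small window over gridded elevations and reporting one roughness statistic per window, here the standard deviation of plane-detrended heights. This is a simplified stand-in for what TopCAT/PySESA do, not their actual code.

    ```python
    import numpy as np

    def window_roughness(z: np.ndarray, w: int = 8) -> np.ndarray:
        """Per-window std of plane-detrended heights on a gridded elevation array."""
        ny, nx = z.shape
        out = np.full((ny // w, nx // w), np.nan)
        for i in range(ny // w):
            for j in range(nx // w):
                patch = z[i*w:(i+1)*w, j*w:(j+1)*w]
                yy, xx = np.mgrid[0:w, 0:w]
                A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(w * w)])
                coef, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
                residual = patch.ravel() - A @ coef   # detrend with best-fit plane
                out[i, j] = residual.std()
        return out

    z = np.random.default_rng(1).random((64, 64))   # stand-in elevation grid
    print(window_roughness(z).shape)                # -> (8, 8)
    ```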

  9. Coregistration of quantitative proton magnetic resonance spectroscopic imaging with neuropathological and neurophysiological analyses defines the extent of neuronal impairments in murine human immunodeficiency virus type-1 encephalitis.

    PubMed

    Nelson, J A; Dou, H; Ellison, B; Uberti, M; Xiong, H; Anderson, E; Mellon, M; Gelbard, H A; Boska, M; Gendelman, H E

    2005-05-15

    Relatively few immune-activated and virus-infected mononuclear phagocytes (MP; perivascular macrophages and microglia) may induce widespread neuronal dysfunction during human immunodeficiency virus type 1 (HIV-1)-associated dementia (HAD). Indeed, histopathological evidence of neuronal dropout often belies the extent of cognitive impairment. To define relationships between neuronal function and histopathology, proton magnetic resonance spectroscopic imaging (1H MRSI) and hippocampal long-term potentiation (LTP) were compared with neuronal and glial immunohistology in a murine model of HIV-1 encephalitis (HIVE). HIV-1(ADA)-infected human monocyte-derived macrophages (MDM) were stereotactically injected into the subcortex of severe combined immunodeficient (SCID) mice. Sham-operated and unmanipulated mice served as controls. Seven days after cell injection, brain histological analyses revealed a focal giant cell encephalitis, with reactive astrocytes, microgliosis, and neuronal dropout. Strikingly, significant reductions in N-acetyl aspartate concentration ([NAA]) and LTP levels in HIVE mice were found in both injected and contralateral hemispheres and in brain subregions, including the hippocampus, where neuropathology was limited or absent. The data support the importance of 1H MRSI as a tool for assessing neuronal function in HAD. The data also demonstrate that a highly focal encephalitis can produce global deficits in neuronal function and metabolism. PMID:15825192

  10. Integration of CO2 flux and remotely-sensed data for primary production and ecosystem respiration analyses in the Northern Great Plains: potential for quantitative spatial extrapolation

    USGS Publications Warehouse

    Gilmanov, Tagir G.; Tieszen, Larry L.; Wylie, Bruce K.; Flanagan, Larry B.; Frank, Albert B.; Haferkamp, Marshall R.; Meyers, Tilden P.; Morgan, Jack A.

    2005-01-01

    Aim: Extrapolation of tower CO2 fluxes will be greatly facilitated if robust relationships between flux components and remotely sensed factors are established. Long-term measurements at five Northern Great Plains locations were used to obtain relationships between CO2 fluxes and photosynthetically active radiation (Q), other on-site factors, and the Normalized Difference Vegetation Index (NDVI) from the SPOT VEGETATION data set. Location: CO2 flux data from the following stations and years were analysed: Lethbridge, Alberta 1998–2001; Fort Peck, MT 2000, 2002; Miles City, MT 2000–01; Mandan, ND 1999–2001; and Cheyenne, WY 1997–98. Results: Analyses based on light-response functions allowed partitioning of net CO2 flux (F) into gross primary productivity (Pg) and ecosystem respiration (Re). Weekly averages of daytime respiration, γday, estimated from light responses were closely correlated with weekly averages of measured night-time respiration, γnight (R2 0.64 to 0.95). Daytime respiration tended to be higher than night-time respiration, and regressions of γday on γnight for all sites were different from 1:1 relationships. Over 13 site-years, gross primary production varied from 459 to 2491 g CO2 m−2 year−1, ecosystem respiration from 996 to 1881 g CO2 m−2 year−1, and net ecosystem exchange from −537 (source) to +610 g CO2 m−2 year−1 (sink). Maximum daily ecological light-use efficiencies, εd,max = Pg/Q, were in the range 0.014 to 0.032 mol CO2 (mol incident quanta)−1. Main conclusions: Ten-day average Pg was significantly more highly correlated with NDVI than 10-day average daytime flux, Pd (R2 = 0.46 to 0.77 for Pg-NDVI and 0.05 to 0.58 for Pd-NDVI relationships). Ten-day average Re was also positively correlated with NDVI, with R2 values from 0.57 to 0.77. Patterns of the relationships of Pg and Re with NDVI and other factors indicate possibilities for establishing multivariate
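
    Such light-response partitioning is often done with a rectangular-hyperbola model, F(Q) = alpha*Q*Pmax/(alpha*Q + Pmax) - Re, fitted per time window so that Pg = F + Re. A sketch on synthetic data follows; the model choice and parameters are illustrative, not the study's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def light_response(Q, alpha, Pmax, Re):
        # Rectangular hyperbola: gross uptake saturating with PAR, minus respiration.
        return alpha * Q * Pmax / (alpha * Q + Pmax) - Re

    # Synthetic "tower" data: PAR (umol m-2 s-1) and noisy net flux F (positive = uptake).
    Q = np.linspace(0.0, 2000.0, 50)
    F = light_response(Q, 0.03, 30.0, 5.0) \
        + np.random.default_rng(2).normal(0.0, 0.5, Q.size)

    (alpha, Pmax, Re), _ = curve_fit(light_response, Q, F, p0=(0.02, 20.0, 3.0))
    Pg = F + Re   # partitioned gross primary productivity at each observation
    print(f"alpha={alpha:.3f}, Pmax={Pmax:.1f}, Re={Re:.1f}")
    ```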

  11. Quantitative Analyses of Retinal Vascular Area and Density After Different Methods to Reduce VEGF in a Rat Model of Retinopathy of Prematurity

    PubMed Central

    Wang, Haibo; Yang, Zhihong; Jiang, Yanchao; Flannery, John; Hammond, Scott; Kafri, Tal; Vemuri, Sai Karthik; Jones, Bryan; Hartnett, M. Elizabeth

    2014-01-01

    Purpose. Targeted inhibition of Müller cell (MC)–produced VEGF or broad inhibition of VEGF with an intravitreal anti-VEGF antibody reduces intravitreal neovascularization in a rat model of retinopathy of prematurity (ROP). In this study, we compared the effects of these two approaches on retinal vascular development and capillary density in the inner and deep plexi in the rat ROP model. Methods. In the rat model of ROP, pups received 1 μL of (1) subretinal lentivector-driven short hairpin RNA (shRNA) to knockdown MC-VEGFA (VEGFA.shRNA) or control luciferase shRNA, or (2) intravitreal anti-VEGF antibody (anti-VEGF) or control isotype goat immunoglobulin G (IgG). Analyses of lectin-stained flat mounts at postnatal day 18 (p18) included: vascular/total retinal areas (retinal vascular coverage) and pixels of fluorescence/total retinal area (capillary density) of the inner and deep plexi determined with the Syncroscan microscope, and angles between cleavage planes of mitotic vascular figures labeled with anti-phosphohistone H3 and vessel length. Results. Retinal vascular coverage and density increased in both plexi between p8 and p18 in room air (RA) pups. Compared with RA, p18 ROP pups had reduced vascular coverage and density of both plexi. Compared with respective controls, VEGFA.shRNA treatment significantly increased vascular density in the deep plexus, whereas anti-VEGF reduced vascular density in the inner and deep plexi. Vascular endothelial growth factor-A.shRNA caused more cleavage angles predicting vessel elongation and fewer mitotic figures, whereas anti-VEGF treatment led to patterns of pathologic angiogenesis. Conclusions. Targeted treatment with lentivector-driven VEGFA.shRNA permitted physiologic vascularization of the vascular plexi and restored normal orientation of dividing vascular cells, suggesting that regulation of VEGF signaling by targeted treatment may be beneficial. PMID:24425858

  12. Genetic analyses and quantitative trait loci detection, using a partial genome scan, for intramuscular fatty acid composition in Scottish Blackface sheep.

    PubMed

    Karamichou, E; Richardson, R I; Nute, G R; Gibson, K P; Bishop, S C

    2006-12-01

    Genetic parameters for LM fatty acid composition were estimated in Scottish Blackface sheep, previously divergently selected for carcass lean content (LEAN and FAT lines). Furthermore, QTL were identified for the same fatty acids. Fatty acid phenotypic measurements were made on 350 male lambs, at approximately 8 mo of age, and 300 of these lambs were genotyped across candidate regions on chromosomes 1, 2, 3, 5, 14, 18, 20, and 21. Fatty acid composition measurements included in total 17 fatty acids of 3 categories: saturated, monounsaturated, and polyunsaturated. Total i.m. fat content was estimated as the sum of the fatty acids. The FAT line had a greater i.m. fat content and more oleic acid, but less linoleic acid (18:2 n-6) and docosapentaenoic acid (22:5 n-3) than did the LEAN line. Saturated fatty acids were moderately heritable, ranging from 0.19 to 0.29, and total SFA were highly heritable (0.90). Monounsaturated fatty acids were moderately to highly heritable, with cis-vaccenic acid (18:1 n-7) being the most heritable (0.67), and total MUFA were highly heritable (0.73). Polyunsaturated fatty acids were also moderately to highly heritable; arachidonic acid (20:4 n-6) and CLA were the most heritable, with values of 0.60 and 0.48, respectively. The total PUFA were moderately heritable (0.40). The QTL analyses were performed using regression interval mapping techniques. In total, 21 chromosome-wide QTL were detected in 6 out of 8 chromosomal regions. The chromosome-wide, significant QTL affected 3 SFA, 5 MUFA, and 13 PUFA. The most significant result was a QTL affecting linolenic acid (18:3 n-3) on chromosome 2. This QTL segregated in 2 of the 9 families and explained 37.6% of the phenotypic variance. Also, 10 significant QTL were identified on chromosome 21, where 8 out of 10 QTL were segregating in the same families and detected at the same position. The results of this study demonstrate that altering carcass fatness will simultaneously change i.m. fat
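
    A minimal sketch of the kind of "regression interval mapping" named here: phenotypes are regressed on the expected dosage of a putative QTL allele at one position, and an F-statistic compares this to a mean-only model. All data and the single-position simplification are illustrative assumptions.

    ```python
    # Sketch: Haley-Knott-style single-position QTL test by regression.
    # Simulated data; not the study's pipeline.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 300
    prob_Q = rng.uniform(0, 1, n)           # P(lamb inherited allele Q | markers)
    x = 2 * prob_Q                          # expected QTL allele dosage
    y = 0.4 * x + rng.normal(0, 1, n)       # fatty acid phenotype with QTL effect

    X = np.column_stack([np.ones(n), x])
    beta, rss_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_null = np.sum((y - y.mean()) ** 2)
    f_stat = (rss_null - rss_full[0]) / (rss_full[0] / (n - 2))
    print(f"QTL effect {beta[1]:.2f}, F = {f_stat:.1f}")
    ```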

  13. Quantitative aspects of inductively coupled plasma mass spectrometry.

    PubMed

    Bulska, Ewa; Wagner, Barbara

    2016-10-28

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'. PMID:27644971
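
    As an illustration of the calibration approaches the review evaluates, the sketch below fits a simple external calibration line with internal-standard normalization; the element, counts, and concentrations are hypothetical.

    ```python
    # Sketch: external calibration with internal-standard correction for ICP-MS.
    # Counts and concentrations are hypothetical illustrations.
    import numpy as np

    # Calibration standards: known analyte concentrations (ug/L) and raw counts,
    # each normalized by the counts of an internal standard (e.g., In-115).
    conc_std = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
    counts_analyte = np.array([120, 1540, 7480, 14890, 74300])
    counts_internal = np.array([9900, 10100, 9950, 10050, 10000])
    ratio = counts_analyte / counts_internal

    # Ordinary least-squares line: ratio = slope * conc + intercept
    slope, intercept = np.polyfit(conc_std, ratio, 1)

    def quantify(sample_counts, sample_is_counts):
        """Back-calculate concentration of an unknown from its count ratio."""
        return ((sample_counts / sample_is_counts) - intercept) / slope

    print(f"unknown: {quantify(3700, 9980):.2f} ug/L")
    ```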

  14. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.

  15. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
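
    A minimal sketch of the described method's core idea: fit a skewed Gaussian to a scan of the pure single-component standard, superimpose it on the multi-band lane scan, and attribute the excess intensity to other components. Profile shapes and values are synthetic stand-ins for real scans.

    ```python
    # Sketch: skewed-Gaussian standard subtraction for overlapping gel bands.
    # Synthetic profiles; not the authors' actual scripts.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def skew_gauss(x, A, mu, sigma, alpha):
        """Skew-normal shaped band profile."""
        z = (x - mu) / sigma
        return A * np.exp(-0.5 * z**2) * (1 + erf(alpha * z / np.sqrt(2)))

    x = np.linspace(0, 100, 1000)
    primer_scan = skew_gauss(x, 1.0, 40, 3.0, 2.5)              # pure standard
    lane_scan = primer_scan + skew_gauss(x, 0.4, 55, 3.0, 2.5)  # primer + product

    # Fit the standard, then anchor it under the corresponding band in the lane.
    popt, _ = curve_fit(skew_gauss, x, primer_scan, p0=[1, 40, 3, 1])
    primer_component = skew_gauss(x, *popt)
    excess = np.clip(lane_scan - primer_component, 0, None)     # other bands

    dx = x[1] - x[0]
    area_primer = primer_component.sum() * dx
    area_product = excess.sum() * dx
    print(f"relative molarity (product/primer): {area_product/area_primer:.2f}")
    ```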

  17. Quantitative analyses of empirical fitness landscapes

    NASA Astrophysics Data System (ADS)

    Szendro, Ivan G.; Schenk, Martijn F.; Franke, Jasper; Krug, Joachim; de Visser, J. Arjan G. M.

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well.
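
    To make the rough Mt Fuji comparison concrete, here is a small sketch that generates an RMF landscape and counts local optima as a crude ruggedness measure; the locus count, additive coefficient, and noise scale are illustrative choices, not parameters from the review.

    ```python
    # Sketch: Rough Mount Fuji landscape over binary genotypes; ruggedness
    # summarized by the number of local fitness optima.
    import itertools
    import numpy as np

    L = 8                        # number of loci
    c = 1.0                      # strength of the additive (Fuji) component
    rng = np.random.default_rng(1)

    genotypes = list(itertools.product([0, 1], repeat=L))
    # Additive slope toward the all-zero reference plus i.i.d. noise.
    fitness = {g: -c * sum(g) + rng.normal(0, 1.0) for g in genotypes}

    def neighbors(g):
        for i in range(L):
            yield g[:i] + (1 - g[i],) + g[i + 1:]

    n_optima = sum(all(fitness[g] > fitness[nb] for nb in neighbors(g))
                   for g in genotypes)
    print(f"local optima: {n_optima} of {len(genotypes)} genotypes")
    ```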

  18. Quantitative analyses of observing and attending.

    PubMed

    Shahan, Timothy A; Podlesnik, Christopher A

    2008-06-01

    We review recent experiments examining whether simple models of the allocation and persistence of operant behavior are applicable to attending. In one series of experiments, observing responses of pigeons were used as an analog of attending. Maintenance of observing is often attributed to the conditioned reinforcing effects of a food-correlated stimulus (i.e., S+), so these experiments also may inform our understanding of conditioned reinforcement. Rates and allocations of observing were governed by rates of food or S+ delivery in a manner consistent with the matching law. Resistance to change of observing was well described by behavioral momentum theory only when rates of primary reinforcement in the context were considered. Rate and value of S+ deliveries did not affect resistance to change. Thus, persistence of attending to stimuli appears to be governed by primary reinforcement rates in the training context rather than conditioned reinforcing effects of the stimuli. An additional implication of these findings is that conditioned "reinforcers" may affect response rates through some mechanism other than response-strengthening. In a second series of experiments, we examined the applicability of the matching law to the allocation of attending to the elements of compound stimuli in a divided-attention task. The generalized matching law described performance well, and sensitivity to relative reinforcement varied with sample duration. The bias and sensitivity terms of the generalized matching law may provide measures of stimulus-driven and goal-driven control of divided attention. Further application of theories of operant behavior to performance on attention tasks may provide insights into what is referred to variously as endogenous, top-down, or goal-directed control of attention.
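
    A minimal sketch of fitting the generalized matching law mentioned above, by linear regression on log behavior and reinforcer ratios; the response and reinforcer counts are hypothetical.

    ```python
    # Sketch: generalized matching law,
    #   log(B1/B2) = a * log(R1/R2) + log(b),
    # where a is sensitivity and b is bias. Hypothetical data.
    import numpy as np

    B1 = np.array([120, 90, 60, 30, 15])   # responses on alternative 1
    B2 = np.array([20, 35, 55, 80, 110])   # responses on alternative 2
    R1 = np.array([50, 40, 25, 12, 6])     # reinforcers on alternative 1
    R2 = np.array([10, 18, 28, 40, 55])    # reinforcers on alternative 2

    x = np.log(R1 / R2)
    y = np.log(B1 / B2)
    sensitivity, log_bias = np.polyfit(x, y, 1)
    print(f"sensitivity (a) = {sensitivity:.2f}, bias (b) = {np.exp(log_bias):.2f}")
    ```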

  19. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
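
    For contrast with the paper's higher-order, median-based algorithms, the sketch below shows a standard monotone cubic interpolant (Fritsch-Carlson/PCHIP, via SciPy) next to an unconstrained cubic spline that can overshoot near extrema; it is a baseline illustration, not the paper's algorithm.

    ```python
    # Sketch: monotonicity-preserving cubic interpolation vs. an ordinary spline.
    import numpy as np
    from scipy.interpolate import PchipInterpolator, CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])     # monotone data with a flat tail

    pchip = PchipInterpolator(x, y)             # preserves monotonicity
    spline = CubicSpline(x, y)                  # may overshoot near y = 1

    xs = np.linspace(0, 4, 9)
    print("pchip :", np.round(pchip(xs), 3))    # stays within the data range
    print("spline:", np.round(spline(xs), 3))   # can exceed 1 between knots
    ```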

  20. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  1. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  2. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  3. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  4. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    PubMed

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, high performance liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. Identification by HPLC/TOF-MS is accomplished with the accurate mass (and the empirical formula subsequently generated) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate mass of their main fragment ions. To obtain sufficient sensitivity for quantitation (using the protonated or deprotonated molecule) and additional qualitative mass spectral information from the fragment ions, a segmented program of fragmentor voltages is designed in positive and negative ion modes, respectively. Accurate mass measurements are highly useful in complex sample analyses since they allow a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg.kg(-1) concentration range, with correlation coefficients >0.996. Recoveries at the tested concentrations of 1.0-100 mg.kg(-1) are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg.kg(-1), which are far below the required maximum residue levels (MRLs) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
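
    The quoted sub-3 ppm mass accuracy corresponds to a simple relative-error computation; a minimal sketch follows, with an illustrative preservative mass (the [M+H]+ of benzoic acid, taken here as ≈123.0441) rather than values from the study.

    ```python
    # Sketch: mass error in parts per million, as quoted for TOF-MS identification.
    def mass_error_ppm(measured_mz: float, theoretical_mz: float) -> float:
        """Relative mass error in ppm."""
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    # e.g. protonated benzoic acid [M+H]+, theoretical m/z ~123.0441 (assumed)
    print(f"{mass_error_ppm(123.0444, 123.0441):.1f} ppm")   # ~2.4 ppm
    ```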

  5. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable to systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  6. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  7. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  8. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  9. Bayesian network learning for natural hazard analyses

    NASA Astrophysics Data System (ADS)

    Vogel, K.; Riggelsen, C.; Korup, O.; Scherbaum, F.

    2014-09-01

    Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks. In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward, and requires specific tailoring and adaptation of existing algorithms. We offer suggestions on how to tackle frequently arising problems in this context and mainly concentrate on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach. Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
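
    A minimal sketch of one ingredient of data-driven Bayesian network learning: estimating a conditional probability table from data for a fixed toy structure (rain → flood damage), with Laplace smoothing. The variables, structure, and data are illustrative, not from the paper's case studies.

    ```python
    # Sketch: fitting P(damage | rain) by counting, for a two-node network.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    rain = rng.integers(0, 2, n)                         # 0 = light, 1 = heavy
    damage = (rain & (rng.random(n) < 0.7)).astype(int)  # damage likelier if heavy

    # Conditional probability table with add-one (Laplace) smoothing.
    cpt = np.empty((2, 2))
    for r in (0, 1):
        for d in (0, 1):
            cpt[r, d] = (np.sum((rain == r) & (damage == d)) + 1) / (np.sum(rain == r) + 2)

    print("P(damage=1 | rain=0) =", round(cpt[0, 1], 3))
    print("P(damage=1 | rain=1) =", round(cpt[1, 1], 3))
    ```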

  10. On study design in neuroimaging heritability analyses

    NASA Astrophysics Data System (ADS)

    Koran, Mary Ellen; Li, Bo; Jahanshad, Neda; Thornton-Wells, Tricia A.; Glahn, David C.; Thompson, Paul M.; Blangero, John; Nichols, Thomas E.; Kochunov, Peter; Landman, Bennett A.

    2014-03-01

    Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
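
    In the same spirit as the simulation study (which used SOLAR with Monte Carlo data generated in R), here is a compact sketch that simulates twin pairs under an additive model and recovers heritability with Falconer's estimate; the sample size and true h2 are illustrative, and this is not SOLAR's variance-components estimator.

    ```python
    # Sketch: Monte Carlo check of Falconer's h2 = 2 * (rMZ - rDZ).
    import numpy as np

    rng = np.random.default_rng(42)
    h2_true, n_pairs = 0.6, 500

    def simulate_twins(r_genetic):
        """Phenotypes of twin pairs sharing a fraction r_genetic of additive genes."""
        shared = rng.normal(size=n_pairs)
        g1 = shared
        g2 = r_genetic * shared + np.sqrt(1 - r_genetic**2) * rng.normal(size=n_pairs)
        e1, e2 = rng.normal(size=(2, n_pairs))
        s, e = np.sqrt(h2_true), np.sqrt(1 - h2_true)
        return s * g1 + e * e1, s * g2 + e * e2

    r_mz = np.corrcoef(*simulate_twins(1.0))[0, 1]   # MZ: full genetic sharing
    r_dz = np.corrcoef(*simulate_twins(0.5))[0, 1]   # DZ: half
    print(f"Falconer h2 estimate: {2 * (r_mz - r_dz):.2f} (true {h2_true})")
    ```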

  11. Quantitative analysis of carbonate facies and high-frequency T-R cycles in the Barremian of the central Fore-Balkan (Bulgaria)

    NASA Astrophysics Data System (ADS)

    Minkovska, Viara; Peybernès, Bernard; Cugny, Pierre; Nikolov, Todor

    2004-07-01

    Within the Emerici zone-Barremense zone biostratigraphic interval, the Barremian deposits of the central Fore-Balkan (Lovech-Veliko Tarnovo shelf) consist of a succession of several formations in which terrigenous argillaceous/sandy-dominated facies (Kormjansko Fm., Balgarene Fm.) alternate with carbonate-dominated ('Urgonian') facies (Krushevo Fm., Emen Fm.). Qualitative and, particularly, quantitative facies analysis of the carbonate successions observed along 13 detailed cross-sections and in one drill hole shows the stacking of about 40 fifth-order T-R cycles, induced by numerous eustatic jerks that contributed to the progressive establishment of this shelf. These high-frequency cycles of about 100,000 years must be regarded as valuable correlation tools for subsurface hydrocarbon exploration. To cite this article: V. Minkovska et al., C. R. Geoscience 336 (2004).

  12. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
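
    A toy sketch of why the two costing methods diverge: aggregate costing splits overhead evenly across treatments, while ABC assigns it by the resources actually consumed. All figures are hypothetical.

    ```python
    # Sketch: aggregate (average) costing vs. activity-based costing (ABC).
    overhead = 1000.0                        # total overhead for the period
    nurse_hours = {"treatment_A": 1.0, "treatment_B": 4.0}
    supplies = {"treatment_A": 20.0, "treatment_B": 35.0}

    total_hours = sum(nurse_hours.values())
    for t in nurse_hours:
        aggregate = supplies[t] + overhead / len(nurse_hours)        # even split
        abc = supplies[t] + overhead * nurse_hours[t] / total_hours  # by usage
        print(f"{t}: aggregate ${aggregate:.0f} vs ABC ${abc:.0f}")
    ```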

  13. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    PubMed

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful, very simple, and accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid.
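
    A brief sketch of the standard-curve arithmetic underlying plasmid-calibrated quantitative PCR of this kind; the Ct values are hypothetical, and the efficiency formula E = 10^(-1/slope) - 1 is the usual convention rather than anything specific to this assay.

    ```python
    # Sketch: copy-number quantitation from a plasmid dilution series.
    import numpy as np

    copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])    # plasmid copies per reaction
    ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])   # measured threshold cycles

    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    efficiency = 10 ** (-1 / slope) - 1

    def copies_from_ct(ct_obs):
        return 10 ** ((ct_obs - intercept) / slope)

    print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")
    print(f"unknown at Ct 27.5: {copies_from_ct(27.5):.0f} copies")
    ```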

  14. Quantitative Thinking.

    ERIC Educational Resources Information Center

    DuBridge, Lee A.

    An appeal for more research to determine how to educate children as effectively as possible is made. Mathematics teachers can readily examine the educational problems of today in their classrooms since learning progress in mathematics can easily be measured and evaluated. Since mathematics teachers have learned to think in quantitative terms and…

  15. QUANTITATIVE MORPHOLOGY

    EPA Science Inventory

    Abstract: In toxicology, the role of quantitative assessment of brain morphology can be understood in the context of two types of treatment-related alterations. One type of alteration is specifically associated with treatment and is not observed in control animals. Measurement ...

  16. Solution structure of the Z-DNA binding domain of PKR-like protein kinase from Carassius auratus and quantitative analyses of the intermediate complex during B-Z transition.

    PubMed

    Lee, Ae-Ree; Park, Chin-Ju; Cheong, Hae-Kap; Ryu, Kyoung-Seok; Park, Jin-Wan; Kwon, Mun-Young; Lee, Janghyun; Kim, Kyeong Kyu; Choi, Byong-Seok; Lee, Joon-Hwa

    2016-04-01

    Z-DNA binding proteins (ZBPs) play important roles in RNA editing, innate immune response and viral infection. Structural and biophysical studies show that ZBPs initially form an intermediate complex with B-DNA for B-Z conversion. However, a comprehensive understanding of the mechanism of Z-DNA binding and B-Z transition is still lacking, due to the absence of structural information on the intermediate complex. Here, we report the solution structure of the Zα domain of the ZBP-containing protein kinase from Carassius auratus (caZαPKZ). We quantitatively determined the binding affinity of caZαPKZ for both B-DNA and Z-DNA and characterized its B-Z transition activity, which is modulated by varying the salt concentration. Our results suggest that the intermediate complex formed by caZαPKZ and B-DNA can be used as a molecular ruler to measure the degree to which DNA transitions to the Z isoform. PMID:26792893

  17. Solution structure of the Z-DNA binding domain of PKR-like protein kinase from Carassius auratus and quantitative analyses of the intermediate complex during B–Z transition

    PubMed Central

    Lee, Ae-Ree; Park, Chin-Ju; Cheong, Hae-Kap; Ryu, Kyoung-Seok; Park, Jin-Wan; Kwon, Mun-Young; Lee, Janghyun; Kim, Kyeong Kyu; Choi, Byong-Seok; Lee, Joon-Hwa

    2016-01-01

    Z-DNA binding proteins (ZBPs) play important roles in RNA editing, innate immune response and viral infection. Structural and biophysical studies show that ZBPs initially form an intermediate complex with B-DNA for B–Z conversion. However, a comprehensive understanding of the mechanism of Z-DNA binding and B–Z transition is still lacking, due to the absence of structural information on the intermediate complex. Here, we report the solution structure of the Zα domain of the ZBP-containing protein kinase from Carassius auratus (caZαPKZ). We quantitatively determined the binding affinity of caZαPKZ for both B-DNA and Z-DNA and characterized its B–Z transition activity, which is modulated by varying the salt concentration. Our results suggest that the intermediate complex formed by caZαPKZ and B-DNA can be used as a molecular ruler to measure the degree to which DNA transitions to the Z isoform. PMID:26792893

  19. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  20. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  1. Multiphase Method for Analysing Online Discussions

    ERIC Educational Resources Information Center

    Häkkinen, P.

    2013-01-01

    Several studies have analysed and assessed online performance and discourse using quantitative and qualitative methods. Quantitative measures have typically included the analysis of participation rates and learning outcomes in terms of grades. Qualitative measures of postings, discussions and context features aim to give insights into the nature…

  2. Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses

    EPA Science Inventory

    Abstract: Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative...

  3. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  4. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  5. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  6. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
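
    A minimal sketch of the pseudo-free-energy term described here, ΔG_SHAPE(i) = m ln(reactivity + 1) + b; the slope and intercept values used (m = 2.6, b = -0.8 kcal/mol) are the commonly cited choices and should be treated as assumptions here.

    ```python
    # Sketch: per-nucleotide SHAPE pseudo-free energy added during free-energy
    # minimization. Parameter values are commonly cited defaults (assumed).
    import math

    def shape_pseudo_energy(reactivity: float, m: float = 2.6, b: float = -0.8) -> float:
        """Pseudo-free energy (kcal/mol) for one nucleotide's SHAPE reactivity."""
        return m * math.log(reactivity + 1.0) + b

    for r in (0.0, 0.5, 2.0):   # low reactivity -> likely paired; high -> unpaired
        print(f"reactivity {r:.1f}: {shape_pseudo_energy(r):+.2f} kcal/mol")
    ```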

  7. miRNA Expression Analyses in Prostate Cancer Clinical Tissues

    PubMed Central

    Bucay, Nathan; Shahryari, Varahram; Majid, Shahana; Yamamura, Soichiro; Mitsui, Yozo; Tabatabai, Z. Laura; Greene, Kirsten; Deng, Guoren; Dahiya, Rajvir; Tanaka, Yuichiro; Saini, Sharanjot

    2015-01-01

    A critical challenge in prostate cancer (PCa) clinical management is posed by the inadequacy of currently used biomarkers for disease screening, diagnosis, prognosis and treatment. In recent years, microRNAs (miRNAs) have emerged as promising alternate biomarkers for prostate cancer diagnosis and prognosis. However, the development of miRNAs as effective biomarkers for prostate cancer heavily relies on their accurate detection in clinical tissues. miRNA analyses in prostate cancer clinical specimens are often challenging owing to tumor heterogeneity, sampling errors, stromal contamination, etc. The goal of this article is to describe a simplified workflow for miRNA analyses in archived FFPE or fresh frozen prostate cancer clinical specimens using a combination of quantitative real-time PCR (RT-PCR) and in situ hybridization (ISH). Within this workflow, we optimize the existing methodologies for miRNA extraction from FFPE and frozen prostate tissues and expression analyses by Taqman-probe based miRNA RT-PCR. In addition, we describe an optimized method for ISH analyses for miRNA detection in prostate tissues using locked nucleic acid (LNA)-based probes. Our optimized miRNA ISH protocol can be applied to prostate cancer tissue slides or prostate cancer tissue microarrays (TMA). PMID:26382040
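
    The Taqman RT-PCR step in this workflow is typically read out as relative expression; a minimal 2^(-ΔΔCt) sketch follows, with hypothetical Ct values and an assumed reference small RNA (e.g., RNU48).

    ```python
    # Sketch: relative miRNA expression by the common 2^(-ddCt) method.
    # Ct values and the reference RNA are illustrative assumptions.
    def fold_change(ct_mirna_tumor, ct_ref_tumor, ct_mirna_normal, ct_ref_normal):
        """2^(-ddCt) fold change of a miRNA in tumor vs. normal tissue."""
        d_ct_tumor = ct_mirna_tumor - ct_ref_tumor
        d_ct_normal = ct_mirna_normal - ct_ref_normal
        return 2 ** -(d_ct_tumor - d_ct_normal)

    print(f"fold change: {fold_change(27.0, 22.0, 25.0, 22.1):.2f}")
    ```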

  8. miRNA Expression Analyses in Prostate Cancer Clinical Tissues.

    PubMed

    Bucay, Nathan; Shahryari, Varahram; Majid, Shahana; Yamamura, Soichiro; Mitsui, Yozo; Tabatabai, Z Laura; Greene, Kirsten; Deng, Guoren; Dahiya, Rajvir; Tanaka, Yuichiro; Saini, Sharanjot

    2015-01-01

    A critical challenge in prostate cancer (PCa) clinical management is posed by the inadequacy of currently used biomarkers for disease screening, diagnosis, prognosis and treatment. In recent years, microRNAs (miRNAs) have emerged as promising alternate biomarkers for prostate cancer diagnosis and prognosis. However, the development of miRNAs as effective biomarkers for prostate cancer heavily relies on their accurate detection in clinical tissues. miRNA analyses in prostate cancer clinical specimens are often challenging owing to tumor heterogeneity, sampling errors, stromal contamination, etc. The goal of this article is to describe a simplified workflow for miRNA analyses in archived FFPE or fresh frozen prostate cancer clinical specimens using a combination of quantitative real-time PCR (RT-PCR) and in situ hybridization (ISH). Within this workflow, we optimize the existing methodologies for miRNA extraction from FFPE and frozen prostate tissues and expression analyses by Taqman-probe based miRNA RT-PCR. In addition, we describe an optimized method for ISH analyses for miRNA detection in prostate tissues using locked nucleic acid (LNA)-based probes. Our optimized miRNA ISH protocol can be applied to prostate cancer tissue slides or prostate cancer tissue microarrays (TMA). PMID:26382040

  10. Compilation of Sandia coal char combustion data and kinetic analyses

    SciTech Connect

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.
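
    A minimal sketch of the kind of kinetic analysis such a compilation supports: fitting an Arrhenius expression k = A exp(-E/(RT)) to char oxidation rates by linearizing in 1/T. The rates and temperatures are synthetic illustrations, not data from the report.

    ```python
    # Sketch: Arrhenius fit of char oxidation rate constants.
    import numpy as np

    R = 8.314                                        # J mol-1 K-1
    T = np.array([1400.0, 1500.0, 1600.0, 1700.0])   # particle temperature, K
    noise = 1 + 0.05 * np.random.default_rng(5).normal(size=4)
    k = 2.0e3 * np.exp(-1.2e5 / (R * T)) * noise     # synthetic rate constants

    # Linearize: ln k = ln A - (E/R) * (1/T)
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    print(f"E = {-slope * R / 1e3:.0f} kJ/mol, A = {np.exp(intercept):.2e}")
    ```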

  11. MASIC: a software program for fast quantitation and flexible visualization of chromatographic profiles from detected LC-MS(/MS) features

    SciTech Connect

    Monroe, Matthew E.; Shaw, Jason L.; Daly, Don S.; Adkins, Joshua N.; Smith, Richard D.

    2008-06-01

    Quantitative analysis of liquid chromatography (LC)- mass spectrometry (MS) and tandem mass spectrometry (MS/MS) data is essential to many proteomics studies. We have developed MASIC to accurately measure peptide abundances and LC elution times in low-resolution LC-MS/MS analyses. This software program uses an efficient processing algorithm to quickly generate mass specific selected ion chromatograms from a dataset and provides an interactive browser that allows users to examine individual chromatograms in a variety of fashions. The improved elution time estimates afforded by MASIC increase the utility of LC-MS/MS data in the accurate mass and time (AMT) tag approach to proteomics.
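
    A toy sketch of the core operation, extracting a selected ion chromatogram for a target m/z from centroided scans; the data layout and tolerance are illustrative assumptions, not MASIC's internals.

    ```python
    # Sketch: selected ion chromatogram (SIC/XIC) extraction.
    import numpy as np

    # Toy dataset: (scan_number, m/z, intensity) triplets.
    peaks = np.array([
        (1, 500.30, 1.0e4), (1, 612.81, 2.0e3),
        (2, 500.29, 5.0e4), (2, 612.80, 2.1e3),
        (3, 500.31, 8.0e4), (3, 455.12, 9.0e2),
        (4, 500.30, 3.0e4),
    ])

    def extract_sic(peaks, target_mz, tol_mz=0.05):
        """Sum intensity of peaks within +/- tol_mz of target_mz, per scan."""
        scans, mzs, intens = peaks[:, 0], peaks[:, 1], peaks[:, 2]
        mask = np.abs(mzs - target_mz) <= tol_mz
        sic = {}
        for s, i in zip(scans[mask].astype(int), intens[mask]):
            sic[s] = sic.get(s, 0.0) + i
        return sic

    sic = extract_sic(peaks, 500.30)
    apex = max(sic, key=sic.get)
    print(f"SIC {sic}; apex scan {apex}")
    ```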

  12. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10(30) for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
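
    To make the superposition concrete, a toy sketch: each member of a (here, two-network) class yields a deterministic state-transition matrix, and T is their average. The 2-node class is an illustration, not the yeast cell-cycle model or the Strong Inhibition ensemble.

    ```python
    # Sketch: superposition matrix T for a tiny class of Boolean networks.
    import itertools
    import numpy as np

    n = 2
    states = list(itertools.product([0, 1], repeat=n))

    def transition_matrix(update):
        """0/1 matrix M with M[j, i] = 1 if state i maps to state j."""
        M = np.zeros((len(states), len(states)))
        for i, s in enumerate(states):
            j = states.index(update(s))
            M[j, i] = 1.0
        return M

    # A tiny "class": mutual copying vs. mutual negation.
    members = [
        lambda s: (s[1], s[0]),             # each node copies the other
        lambda s: (1 - s[1], 1 - s[0]),     # each node negates the other
    ]
    T = sum(transition_matrix(u) for u in members) / len(members)
    print(T)    # columns sum to 1: a stochastic matrix over the state space
    ```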

  13. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10(30) for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  14. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age
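
    A small sketch of the standard weighted-mean and MSWD computation used to combine single-zircon U-Pb dates into a depositional age estimate; the dates and uncertainties below are illustrative, not from the South China sections.

    ```python
    # Sketch: error-weighted mean age and MSWD for a set of zircon dates.
    import numpy as np

    dates = np.array([252.10, 252.05, 252.17, 252.08, 252.12])   # Ma
    sigma = np.array([0.06, 0.05, 0.07, 0.06, 0.05]) / 2         # 1-sigma, Ma

    w = 1.0 / sigma**2
    mean = np.sum(w * dates) / np.sum(w)
    mean_err = np.sqrt(1.0 / np.sum(w))
    mswd = np.sum(w * (dates - mean) ** 2) / (len(dates) - 1)
    print(f"weighted mean {mean:.3f} +/- {2*mean_err:.3f} Ma (2s), MSWD {mswd:.2f}")
    ```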

  15. Quantitative spectroscopy of hot stars

    NASA Technical Reports Server (NTRS)

    Kudritzki, R. P.; Hummer, D. G.

    1990-01-01

    A review of the quantitative spectroscopy (QS) of hot stars is presented, with particular attention given to the study of photospheres, optically thin winds, unified model atmospheres, and stars with optically thick winds. It is concluded that the results presented here demonstrate the reliability of QS as a unique source of accurate values of the global parameters (effective temperature, surface gravity, and elemental abundances) of hot stars.

  16. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
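
    The Flavors-based implementation is not reproduced here; the Python sketch below is a loose analogue of the idea the abstract describes: basic-event objects carry reliability data, gate objects cache intermediate results during tree reduction, and the top-event probability is evaluated assuming independent basic events. All component names and probabilities are invented.

        class BasicEvent:
            """Leaf of the fault tree; stores reliability data for one component."""
            def __init__(self, name, probability):
                self.name = name
                self.probability = probability

            def evaluate(self):
                return self.probability

        class Gate:
            """AND/OR gate; caches its intermediate result, mirroring how the
            tree objects described above store intermediate reduction results."""
            def __init__(self, name, kind, children):
                self.name, self.kind, self.children = name, kind, children
                self._cached = None

            def evaluate(self):
                if self._cached is None:
                    ps = [c.evaluate() for c in self.children]
                    if self.kind == "AND":        # all inputs must fail
                        p = 1.0
                        for q in ps:
                            p *= q
                    else:                          # OR: at least one input fails
                        p = 1.0
                        for q in ps:
                            p *= (1.0 - q)
                        p = 1.0 - p
                    self._cached = p
                return self._cached

        # toy system: top event occurs if the pump fails AND either valve fails
        pump = BasicEvent("pump", 1e-3)
        valve_a = BasicEvent("valve A", 5e-4)
        valve_b = BasicEvent("valve B", 5e-4)
        top = Gate("top", "AND", [pump, Gate("valves", "OR", [valve_a, valve_b])])
        print(top.evaluate())   # ~1e-6, assuming independent basic events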

  17. Knowledge management for efficient quantitative analyses during regulatory reviews.

    PubMed

    Krudys, Kevin; Li, Fang; Florian, Jeffry; Tornoe, Christoffer; Chen, Ying; Bhattaram, Atul; Jadhav, Pravin; Neal, Lauren; Wang, Yaning; Gobburu, Joga; Lee, Peter I D

    2011-11-01

    Knowledge management comprises the strategies and methods employed to generate and leverage knowledge within an organization. This report outlines the activities within the Division of Pharmacometrics at the US FDA to effectively manage knowledge with the ultimate goal of improving drug development and advancing public health. The infrastructure required for pharmacometric knowledge management includes provisions for data standards, queryable databases, libraries of modeling tools, archiving of analysis results and reporting templates for effective communication. Two examples of knowledge management systems developed within the Division of Pharmacometrics are used to illustrate these principles. The benefits of sound knowledge management include increased productivity, allowing reviewers to focus on research questions spanning new drug applications, such as improved trial design and biomarker development. The future of knowledge management depends on the collaboration between the FDA and industry to implement data and model standards to enhance sharing and dissemination of knowledge.

  18. Quantitative Analyses of Cryptochrome-mBMAL1 Interactions

    PubMed Central

    Czarna, Anna; Breitkreuz, Helena; Mahrenholz, Carsten C.; Arens, Julia; Strauss, Holger M.; Wolf, Eva

    2011-01-01

    The mammalian cryptochromes mCRY1 and mCRY2 act as transcriptional repressors within the 24-h transcription-translational feedback loop of the circadian clock. The C-terminal tail and a preceding predicted coiled coil (CC) of the mCRYs as well as the C-terminal region of the transcription factor mBMAL1 are involved in transcriptional feedback repression. Here we show by fluorescence polarization and isothermal titration calorimetry that purified mCRY1/2CCtail proteins form stable heterodimeric complexes with two C-terminal mBMAL1 fragments. The longer mBMAL1 fragment (BMAL490) includes Lys-537, which is rhythmically acetylated by mCLOCK in vivo. mCRY1 (but not mCRY2) has a lower affinity to BMAL490 than to the shorter mBMAL1 fragment (BMAL577) and a K537Q mutant version of BMAL490. Using peptide scan analysis we identify two mBMAL1 binding epitopes within the coiled coil and tail regions of mCRY1/2 and document the importance of positively charged mCRY1 residues for mBMAL1 binding. A synthetic mCRY coiled coil peptide binds equally well to the short and to the long (wild-type and K537Q mutant) mBMAL1 fragments. In contrast, a peptide including the mCRY1 tail epitope shows a lower affinity to BMAL490 compared with BMAL577 and BMAL490(K537Q). We propose that Lys-537mBMAL1 acetylation enhances mCRY1 binding by affecting electrostatic interactions predominantly with the mCRY1 tail. Our data reveal different molecular interactions of the mCRY1/2 tails with mBMAL1, which may contribute to the non-redundant clock functions of mCRY1 and mCRY2. Moreover, our study suggests the design of peptidic inhibitors targeting the interaction of the mCRY1 tail with mBMAL1. PMID:21521686

  19. Knowledge management for efficient quantitative analyses during regulatory reviews.

    PubMed

    Krudys, Kevin; Li, Fang; Florian, Jeffry; Tornoe, Christoffer; Chen, Ying; Bhattaram, Atul; Jadhav, Pravin; Neal, Lauren; Wang, Yaning; Gobburu, Joga; Lee, Peter I D

    2011-11-01

    Knowledge management comprises the strategies and methods employed to generate and leverage knowledge within an organization. This report outlines the activities within the Division of Pharmacometrics at the US FDA to effectively manage knowledge with the ultimate goal of improving drug development and advancing public health. The infrastructure required for pharmacometric knowledge management includes provisions for data standards, queryable databases, libraries of modeling tools, archiving of analysis results and reporting templates for effective communication. Two examples of knowledge management systems developed within the Division of Pharmacometrics are used to illustrate these principles. The benefits of sound knowledge management include increased productivity, allowing reviewers to focus on research questions spanning new drug applications, such as improved trial design and biomarker development. The future of knowledge management depends on the collaboration between the FDA and industry to implement data and model standards to enhance sharing and dissemination of knowledge. PMID:22111855

  20. Knowledge Discovery in Textual Documentation: Qualitative and Quantitative Analyses.

    ERIC Educational Resources Information Center

    Loh, Stanley; De Oliveira, Jose Palazzo M.; Gastal, Fabio Leite

    2001-01-01

    Presents an application of knowledge discovery in texts (KDT) concerning medical records of a psychiatric hospital. The approach helps physicians to extract knowledge about patients and diseases that may be used for epidemiological studies, for training professionals, and to support physicians to diagnose and evaluate diseases. (Author/AEF)

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. Precocious quantitative cognition in monkeys.

    PubMed

    Ferrigno, Stephen; Hughes, Kelly D; Cantlon, Jessica F

    2016-02-01

    Basic quantitative abilities are thought to have an innate basis in humans partly because the ability to discriminate quantities emerges early in child development. If humans and nonhuman primates share this developmentally primitive foundation of quantitative reasoning, then this ability should be present early in development across species and should emerge earlier in monkeys than in humans because monkeys mature faster than humans. We report that monkeys spontaneously make accurate quantity choices by 1 year of age in a task that human children begin to perform only at 2.5 to 3 years of age. Additionally, we report that the quantitative sensitivity of infant monkeys is equal to that of the adult animals in their group and that rates of learning do not differ between infant and adult animals. This novel evidence of precocious quantitative reasoning in infant monkeys suggests that human quantitative reasoning shares its early developing foundation with other primates. The data further suggest that early developing components of primate quantitative reasoning are constrained by maturational factors related to genetic development as opposed to learning experience alone. PMID:26187058

  7. Precocious quantitative cognition in monkeys.

    PubMed

    Ferrigno, Stephen; Hughes, Kelly D; Cantlon, Jessica F

    2016-02-01

    Basic quantitative abilities are thought to have an innate basis in humans partly because the ability to discriminate quantities emerges early in child development. If humans and nonhuman primates share this developmentally primitive foundation of quantitative reasoning, then this ability should be present early in development across species and should emerge earlier in monkeys than in humans because monkeys mature faster than humans. We report that monkeys spontaneously make accurate quantity choices by 1 year of age in a task that human children begin to perform only at 2.5 to 3 years of age. Additionally, we report that the quantitative sensitivity of infant monkeys is equal to that of the adult animals in their group and that rates of learning do not differ between infant and adult animals. This novel evidence of precocious quantitative reasoning in infant monkeys suggests that human quantitative reasoning shares its early developing foundation with other primates. The data further suggest that early developing components of primate quantitative reasoning are constrained by maturational factors related to genetic development as opposed to learning experience alone.

  8. Expanding the Horizons of Quantitative Remote Sensing

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.

    2011-12-01

    Remote sensing of the Earth has made significant progress since its inception in the 1970s. The Landsat, ASTER, and MODIS multi-spectral imagers have provided a global, long-term record of the surface at visible through infrared wavelengths, and meter-scale color images can be acquired of regions of interest. However, these systems, and many of the algorithms used to analyze them, have advanced surprisingly little over the past three decades. Very little hyperspectral data are readily available or widely used, and software analysis tools are typically complex or 'black box'. As a result it is often difficult to make quantitative assessments of surface character - for example, the accurate mapping of the composition and abundance of surface components. Ironically, planetary observations often have higher spectral resolution, a broader spectral range, and global coverage, with the result that sophisticated tools are routinely applied to these data to make quantitative mineralogy maps. These analyses are driven by the reality that, except for a tiny area explored by rovers, remote sensing provides the only means to determine surface properties. Improved terrestrial hyperspectral imaging systems have long been proposed, and will make great advances. However, these systems remain in the future, and the question exists: what advancements can be made to extract quantitative information from existing data? A case study, inspired by the 1987 work of Sultan et al., was performed to combine all available visible, near-, and thermal-IR multi-spectral data with selected hyperspectral information and limited field verification. Hyperspectral data were obtained from lab observations of collected samples, and the highest spatial resolution images available were used to help interpret the lower-resolution regional imagery. The hyperspectral data were spectrally deconvolved, giving quantitative mineral abundances accurate to 5-10%. These spectra were degraded to the multi-spectral resolution
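
    A minimal sketch of linear spectral unmixing in the spirit of the deconvolution described above, assuming a linear (Beer's-law-like) mixing model and solving with nonnegative least squares; the endmember spectra, mineral labels, and abundances are synthetic stand-ins, not data from the study.

        import numpy as np
        from scipy.optimize import nnls

        # hypothetical endmember (lab) spectra, one column per mineral,
        # sampled at the instrument's band centres
        bands = 10
        rng = np.random.default_rng(1)
        endmembers = np.abs(rng.normal(size=(bands, 3)))  # e.g. quartz, feldspar, clay (made up)

        # synthesize a mixed pixel: 50% / 30% / 20% abundances plus noise
        true_ab = np.array([0.5, 0.3, 0.2])
        pixel = endmembers @ true_ab + rng.normal(scale=0.01, size=bands)

        # nonnegative least squares keeps abundances physically meaningful
        ab, resid = nnls(endmembers, pixel)
        ab /= ab.sum()            # optional sum-to-one normalization
        print("estimated abundances:", np.round(ab, 3))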

  9. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed by a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvanometer scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. The sampling is managed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that reduction values of the aperture differ from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
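
    A small illustration of the FFT step described above: given a roughness profile sampled at the stated 2.5 μm spacing, the one-sided amplitude spectrum exposes the dominant low-frequency component. The profile here is synthetic; the record length and component amplitudes are made up.

        import numpy as np

        dx = 2.5e-6                        # sampling interval along the scan line (m)
        n = 1024
        x = np.arange(n) * dx

        # synthetic roughness profile: long-wavelength undulation plus fine texture
        rng = np.random.default_rng(2)
        profile = (5e-6 * np.sin(2 * np.pi * x / (n * dx / 4))   # low-frequency component
                   + 0.5e-6 * rng.standard_normal(n))            # high-frequency noise

        # one-sided amplitude spectrum of the detrended profile
        z = profile - profile.mean()
        amp = np.abs(np.fft.rfft(z)) / n
        freq = np.fft.rfftfreq(n, d=dx)    # spatial frequency (cycles per metre)

        dominant = freq[np.argmax(amp[1:]) + 1]  # skip the zero-frequency bin
        print(f"dominant spatial frequency: {dominant:.1f} cycles/m")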

  10. Accurate masses for dispersion-supported galaxies

    NASA Astrophysics Data System (ADS)

    Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.

    2010-08-01

    We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ_los^2⟩ r_1/2 ≈ 4 G^-1 ⟨σ_los^2⟩ R_e, where ⟨σ_los^2⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of mass approximately 3 × 10^9 M_solar, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≈ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≈ 3200 for ultra-faint dSphs and a shallower rise to Υ^I_1/2 ≈ 800 for galaxy cluster spheroids.
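
    A worked numerical example of the estimator, with G expressed in kpc (km/s)^2 per solar mass; the dispersion and half-light radius are illustrative values for a dwarf spheroidal, not data from the paper.

        # M_1/2 = 3 * <sigma_los^2> * r_1/2 / G, evaluated for an illustrative dwarf
        G = 4.301e-6          # gravitational constant in kpc (km/s)^2 / M_sun

        sigma_los = 10.0      # luminosity-weighted line-of-sight dispersion (km/s), made up
        r_half = 0.3          # 3D deprojected half-light radius (kpc), made up

        m_half = 3.0 * sigma_los**2 * r_half / G
        print(f"M(<r_1/2) ~ {m_half:.2e} M_sun")   # ~2.1e7 M_sun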

  11. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for the quantitative evaluation of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and proves that this technique is more accurate and faster than previous methods.
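
    The spline-moment method itself is not detailed in the abstract; as a generic stand-in, the sketch below shows a single brightness-constancy least-squares flow estimate over one window, which is the basic building block such optical flow methods refine. The test pattern and shift are synthetic.

        import numpy as np

        def flow_step(frame0, frame1):
            """Single least-squares optical flow estimate over one window,
            assuming brightness constancy and a uniform (u, v) displacement."""
            I0 = frame0.astype(float)
            Iy, Ix = np.gradient(I0)                 # spatial gradients
            It = frame1.astype(float) - I0           # temporal difference
            A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
            uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
            return uv                                 # (u, v) in pixels/frame

        # toy test: shift a smooth pattern one pixel to the right
        x, y = np.meshgrid(np.arange(64), np.arange(64))
        f0 = np.sin(0.2 * x) + np.cos(0.15 * y)
        f1 = np.roll(f0, 1, axis=1)
        print(flow_step(f0, f1))   # u should be close to +1, v close to 0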

  12. Quantitative analysis of periodontal pathogens by ELISA and real-time polymerase chain reaction.

    PubMed

    Hamlet, Stephen M

    2010-01-01

    The development of analytical methods enabling the accurate identification and enumeration of bacterial species colonizing the oral cavity has led to the identification of a small number of bacterial pathogens that are major factors in the etiology of periodontal disease. Further, these methods also underpin more recent epidemiological analyses of the impact of periodontal disease on general health. Given the complex milieu of over 700 species of microorganisms known to exist within the complex biofilms found in the oral cavity, the identification and enumeration of oral periodontopathogens has not been an easy task. In recent years however, some of the intrinsic limitations of the more traditional microbiological analyses previously used have been overcome with the advent of immunological and molecular analytical methods. Of the plethora of methodologies reported in the literature, the enzyme-linked immunosorbent assay (ELISA), which combines the specificity of antibody with the sensitivity of simple enzyme assays and the polymerase chain reaction (PCR), has been widely utilized in both laboratory and clinical applications. Although conventional PCR does not allow quantitation of the target organism, real-time PCR (rtPCR) has the ability to detect amplicons as they accumulate in "real time" allowing subsequent quantitation. These methods enable the accurate quantitation of as few as 10^2 (using rtPCR) to 10^4 (using ELISA) periodontopathogens in dental plaque samples.
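
    A minimal sketch of how rtPCR quantitation from a standard curve works: fit Ct against log10 copy number for serial dilutions, check the amplification efficiency implied by the slope, and invert the curve for an unknown sample. All Ct values below are invented.

        import numpy as np

        # standard curve: Ct measured for serial dilutions of known copy number
        log10_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
        ct_standards = np.array([14.1, 17.5, 20.8, 24.2, 27.6, 31.0])  # made-up values

        # Ct is linear in log10(copies): Ct = slope * log10(copies) + intercept
        slope, intercept = np.polyfit(log10_copies, ct_standards, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0   # ~1.0 means doubling every cycle

        # quantify an unknown plaque sample from its measured Ct
        ct_unknown = 26.3
        copies = 10 ** ((ct_unknown - intercept) / slope)
        print(f"efficiency ~ {efficiency:.2f}, estimated copies ~ {copies:.2e}")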

  13. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to cases of high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with the size of the grain, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results, supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
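
    A minimal sketch of shape description with Fourier descriptors, one plausible reading of the approach named above: the closed boundary is treated as a complex signal, and harmonic magnitudes normalized by the first harmonic give translation-, scale- and rotation-invariant descriptors. The grain outlines here are synthetic.

        import numpy as np

        def fourier_descriptors(boundary, n_keep=8):
            """Translation-, scale- and rotation-invariant shape descriptors of a
            closed boundary given as an (N, 2) array of x, y points."""
            z = boundary[:, 0] + 1j * boundary[:, 1]   # boundary as complex signal
            coeffs = np.fft.fft(z)
            coeffs[0] = 0.0                            # drop centroid (translation)
            mags = np.abs(coeffs)
            mags /= mags[1]                            # scale by the first harmonic
            return mags[2:2 + n_keep]                  # magnitudes ignore rotation

        # toy grains: a circle and a three-lobed (angular) outline
        t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        r = 1 + 0.3 * np.cos(3 * t)
        lobed = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
        print(np.round(fourier_descriptors(circle), 4))  # near zero: perfectly round
        print(np.round(fourier_descriptors(lobed), 4))   # nonzero: departure from roundness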

  14. Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells

    PubMed Central

    Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian

    2016-01-01

    Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265

  15. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today's models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to "future-proof" their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today's climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output targeting users and policy makers, as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation of the language often employed in communication of climate model output is given: a language which accurately states that models are "better", have "improved" and now "include" and "simulate" relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative and misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output

  16. Landslide inventories: The essential part of seismic landslide hazard analyses

    USGS Publications Warehouse

    Harp, E.L.; Keefer, D.K.; Sato, H.P.; Yagi, H.

    2011-01-01

    A detailed and accurate landslide inventory is an essential part of seismic landslide hazard analysis. An ideal inventory would cover the entire area affected by an earthquake and include all of the landslides that are possible to detect down to sizes of 1-5 m in length. The landslides must also be located accurately and mapped as polygons depicting their true shapes. Such mapped landslide distributions can then be used to perform seismic landslide hazard analysis and other quantitative analyses. Detailed inventory maps of landslides triggered by earthquakes began in the early 1960s with the use of aerial photography. In recent years, advances in technology have resulted in the accessibility of satellite imagery with sufficiently high resolution to identify and map all but the smallest of landslides triggered by a seismic event. With this ability to view any area of the globe, we can acquire imagery for any earthquake that triggers significant numbers of landslides. However, a common problem of incomplete coverage of the full distributions of landslides has emerged along with the advent of high resolution satellite imagery. © 2010.

  17. Assessing the reproducibility of discriminant function analyses.

    PubMed

    Andrew, Rose L; Albert, Arianne Y K; Renaut, Sebastien; Rennison, Diana J; Bock, Dan G; Vines, Tim

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the

  18. Assessing the reproducibility of discriminant function analyses

    PubMed Central

    Andrew, Rose L.; Albert, Arianne Y.K.; Renaut, Sebastien; Rennison, Diana J.; Bock, Dan G.

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the

  19. A new HPLC method for azithromycin quantitation.

    PubMed

    Zubata, Patricia; Ceresole, Rita; Rosasco, Maria Ana; Pizzorno, Maria Teresa

    2002-02-01

    A simple liquid chromatographic method was developed for the estimation of azithromycin as raw material and in pharmaceutical forms. The sample was chromatographed on a reversed-phase C18 column and eluents were monitored at a wavelength of 215 nm. The method was accurate, precise and sufficiently selective, and is applicable to quantitation, stability and dissolution tests.

  20. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

    When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density can be developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are presented.

  1. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra. PMID:26789115

  2. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra.

  3. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  4. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient handling of missing contour points in the problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with the recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of the vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.

  5. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  6. Measurement of lentiviral vector titre and copy number by cross-species duplex quantitative PCR.

    PubMed

    Christodoulou, I; Patsali, P; Stephanou, C; Antoniou, M; Kleanthous, M; Lederer, C W

    2016-01-01

    Lentiviruses are the vectors of choice for many preclinical studies and clinical applications of gene therapy. Accurate measurement of biological vector titre before treatment is a prerequisite for vector dosing, and the calculation of vector integration sites per cell after treatment is as critical to the characterisation of modified cell products as it is to long-term follow-up and the assessment of risk and therapeutic efficiency in patients. These analyses are typically based on quantitative real-time PCR (qPCR), but as yet compromise accuracy and comparability between laboratories and experimental systems, the former by using separate simplex reactions for the detection of endogene and lentiviral sequences and the latter by designing different PCR assays for analyses in human cells and animal disease models. In this study, we validate in human and murine cells a qPCR system for the single-tube assessment of lentiviral vector copy numbers that is suitable for analyses in at least 33 different mammalian species, including human and other primates, mouse, pig, cat and domestic ruminants. The established assay combines the accuracy of single-tube quantitation by duplex qPCR with the convenience of one-off assay optimisation for cross-species analyses and with the direct comparability of lentiviral transduction efficiencies in different species. PMID:26202078
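
    A minimal sketch of how copy number per cell can be derived from such a duplex assay, assuming separate standard curves for the vector amplicon and a diploid endogenous reference measured in the same tube; all slopes, intercepts, and Ct values below are invented.

        def quantity_from_ct(ct, slope, intercept):
            # invert the standard curve Ct = slope * log10(q) + intercept
            return 10 ** ((ct - intercept) / slope)

        # made-up standard-curve parameters for the two amplicons of the duplex
        vector_curve = dict(slope=-3.36, intercept=37.5)   # lentiviral sequence
        endo_curve = dict(slope=-3.40, intercept=38.1)     # endogenous reference gene

        # Ct values measured in the same tube (duplex reaction), made up
        ct_vector, ct_endo = 24.9, 25.6

        vector_copies = quantity_from_ct(ct_vector, **vector_curve)
        endo_copies = quantity_from_ct(ct_endo, **endo_curve)

        # endogenous reference is present at 2 copies per diploid genome
        vcn = 2.0 * vector_copies / endo_copies
        print(f"vector copy number per cell ~ {vcn:.2f}")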

  7. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions, including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
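
    A one-variable toy illustration of Newtonian nudging (the operational system assimilates gridded observations into a full energy-and-mass-balance model): the model snow water equivalent is relaxed toward an observation at its valid time by a relaxation weight. All numbers below are made up.

        def nudge(model_swe, obs_swe, weight):
            """Newtonian nudging: relax model snow water equivalent toward an
            observation; weight in (0, 1] controls how hard the model is pulled."""
            innovation = obs_swe - model_swe
            return model_swe + weight * innovation

        # toy hourly run: model drifts while one ground observation arrives at hour 12
        swe = 100.0                          # mm, initial model state (made up)
        melt_rate = 1.5                      # mm per hour, hypothetical forcing
        for hour in range(24):
            swe -= melt_rate                 # model physics step
            if hour == 12:                   # asynoptic observation ingested at its valid time
                swe = nudge(swe, obs_swe=90.0, weight=0.5)
            print(f"hour {hour:2d}: SWE = {swe:6.2f} mm")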

  8. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    PubMed

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market.

  9. Application and Quantitative Validation of Computer-Automated Three-Dimensional Counting of Cell Nuclei

    NASA Astrophysics Data System (ADS)

    Shain, William; Kayali, Soraya; Szarowski, Donald; Davis-Cox, Margaret; Ancin, Hakan; Bhattacharjya, Anoop K.; Roysam, Badrinath; Turner, James N.

    1999-03-01

    This study provides a quantitative validation of qualitative automated three-dimensional (3-D) analysis methods reported earlier. It demonstrates the applicability and quantitative accuracy of our method to detect, characterize, and count Feulgen-stained cell nuclei in two tissues (hippocampus and testes). A laser-scanned confocal light microscope was used to record 3-D images in which our algorithms automatically identified individual nuclei from the optical sections, given an estimate of minimum nuclear size. The hippocampal data sets were also manually counted independently by five trained observers using the STERECON 3-D image reconstruction system. The automated and manual counts were compared. A nucleus-by-nucleus comparison of the manual and automated counts verified that the automated analysis was accurate and reproducible, and permitted additional quantitative analyses not available from manual methods. The algorithms also identified subpopulations of nuclei within the hippocampal samples, and haploid and diploid nuclei in the testes. Our methods were shown to be repeatable, accurate, and more consistent than manual counting.

  10. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    ERIC Educational Resources Information Center

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has exponentially increased. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to accurately identify the "related to" statement of the nursing diagnosis for the patient in the case…

  11. Application of new least-squares methods for the quantitative infrared analysis of multicomponent samples

    SciTech Connect

    Haaland, D.M.; Easterling, R.G.

    1982-11-01

    Improvements have been made in previous least-squares regression analyses of infrared spectra for the quantitative estimation of concentrations of multicomponent mixtures. Spectral baselines are fitted by least-squares methods, and overlapping spectral features are accounted for in the fitting procedure. Selection of peaks above a threshold value reduces computation time and data storage requirements. Four weighted least-squares methods incorporating different baseline assumptions were investigated using FT-IR spectra of the three pure xylene isomers and their mixtures. By fitting only regions of the spectra that follow Beer's Law, accurate results can be obtained using three of the fitting methods even when baselines are not corrected to zero. Accurate results can also be obtained using one of the fits even in the presence of Beer's Law deviations. This is a consequence of pooling the weighted results for each spectral peak such that the greatest weighting is automatically given to those peaks that adhere to Beer's Law. It has been shown with the xylene spectra that semiquantitative results can be obtained even when all the major components are not known or when expected components are not present. This improvement over previous methods greatly expands the utility of quantitative least-squares analyses.
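
    A minimal sketch of the core idea described above: the baseline terms are fitted together with the component spectra in one least-squares step, rather than pre-subtracted. The "pure-component" spectra below stand in for the three xylene isomers and are synthetic, and the paper's weighting scheme is omitted.

        import numpy as np

        rng = np.random.default_rng(3)
        n_points = 200
        x = np.linspace(0, 1, n_points)

        # synthetic pure-component spectra (stand-ins for the three xylene isomers)
        pure = np.stack([np.exp(-((x - c) / 0.05) ** 2) for c in (0.3, 0.5, 0.7)], axis=1)

        # mixture spectrum obeying Beer's law, plus an uncorrected linear baseline
        true_conc = np.array([0.2, 0.5, 0.3])
        mixture = pure @ true_conc + (0.05 + 0.02 * x) + rng.normal(scale=0.002, size=n_points)

        # design matrix: component spectra plus baseline terms (offset and slope),
        # so the baseline is fitted by least squares rather than pre-subtracted
        A = np.column_stack([pure, np.ones(n_points), x])
        coef, *_ = np.linalg.lstsq(A, mixture, rcond=None)
        print("estimated concentrations:", np.round(coef[:3], 3))
        print("fitted baseline offset/slope:", np.round(coef[3:], 3))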

  12. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer, together with an air purge, allows accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has a high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  13. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the risk of the drifting problem, which increases for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  14. Accurate whole human genome sequencing using reversible terminator chemistry.

    PubMed

    Bentley, David R; Balasubramanian, Shankar; Swerdlow, Harold P; Smith, Geoffrey P; Milton, John; Brown, Clive G; Hall, Kevin P; Evers, Dirk J; Barnes, Colin L; Bignell, Helen R; Boutell, Jonathan M; Bryant, Jason; Carter, Richard J; Keira Cheetham, R; Cox, Anthony J; Ellis, Darren J; Flatbush, Michael R; Gormley, Niall A; Humphray, Sean J; Irving, Leslie J; Karbelashvili, Mirian S; Kirk, Scott M; Li, Heng; Liu, Xiaohai; Maisinger, Klaus S; Murray, Lisa J; Obradovic, Bojan; Ost, Tobias; Parkinson, Michael L; Pratt, Mark R; Rasolonjatovo, Isabelle M J; Reed, Mark T; Rigatti, Roberto; Rodighiero, Chiara; Ross, Mark T; Sabot, Andrea; Sankar, Subramanian V; Scally, Aylwyn; Schroth, Gary P; Smith, Mark E; Smith, Vincent P; Spiridou, Anastassia; Torrance, Peta E; Tzonev, Svilen S; Vermaas, Eric H; Walter, Klaudia; Wu, Xiaolin; Zhang, Lu; Alam, Mohammed D; Anastasi, Carole; Aniebo, Ify C; Bailey, David M D; Bancarz, Iain R; Banerjee, Saibal; Barbour, Selena G; Baybayan, Primo A; Benoit, Vincent A; Benson, Kevin F; Bevis, Claire; Black, Phillip J; Boodhun, Asha; Brennan, Joe S; Bridgham, John A; Brown, Rob C; Brown, Andrew A; Buermann, Dale H; Bundu, Abass A; Burrows, James C; Carter, Nigel P; Castillo, Nestor; Chiara E Catenazzi, Maria; Chang, Simon; Neil Cooley, R; Crake, Natasha R; Dada, Olubunmi O; Diakoumakos, Konstantinos D; Dominguez-Fernandez, Belen; Earnshaw, David J; Egbujor, Ugonna C; Elmore, David W; Etchin, Sergey S; Ewan, Mark R; Fedurco, Milan; Fraser, Louise J; Fuentes Fajardo, Karin V; Scott Furey, W; George, David; Gietzen, Kimberley J; Goddard, Colin P; Golda, George S; Granieri, Philip A; Green, David E; Gustafson, David L; Hansen, Nancy F; Harnish, Kevin; Haudenschild, Christian D; Heyer, Narinder I; Hims, Matthew M; Ho, Johnny T; Horgan, Adrian M; Hoschler, Katya; Hurwitz, Steve; Ivanov, Denis V; Johnson, Maria Q; James, Terena; Huw Jones, T A; Kang, Gyoung-Dong; Kerelska, Tzvetana H; Kersey, Alan D; Khrebtukova, Irina; Kindwall, Alex P; Kingsbury, Zoya; Kokko-Gonzales, Paula I; Kumar, Anil; Laurent, Marc A; Lawley, Cynthia T; Lee, Sarah E; Lee, Xavier; Liao, Arnold K; Loch, Jennifer A; Lok, Mitch; Luo, Shujun; Mammen, Radhika M; Martin, John W; McCauley, Patrick G; McNitt, Paul; Mehta, Parul; Moon, Keith W; Mullens, Joe W; Newington, Taksina; Ning, Zemin; Ling Ng, Bee; Novo, Sonia M; O'Neill, Michael J; Osborne, Mark A; Osnowski, Andrew; Ostadan, Omead; Paraschos, Lambros L; Pickering, Lea; Pike, Andrew C; Pike, Alger C; Chris Pinkard, D; Pliskin, Daniel P; Podhasky, Joe; Quijano, Victor J; Raczy, Come; Rae, Vicki H; Rawlings, Stephen R; Chiva Rodriguez, Ana; Roe, Phyllida M; Rogers, John; Rogert Bacigalupo, Maria C; Romanov, Nikolai; Romieu, Anthony; Roth, Rithy K; Rourke, Natalie J; Ruediger, Silke T; Rusman, Eli; Sanches-Kuiper, Raquel M; Schenker, Martin R; Seoane, Josefina M; Shaw, Richard J; Shiver, Mitch K; Short, Steven W; Sizto, Ning L; Sluis, Johannes P; Smith, Melanie A; Ernest Sohna Sohna, Jean; Spence, Eric J; Stevens, Kim; Sutton, Neil; Szajkowski, Lukasz; Tregidgo, Carolyn L; Turcatti, Gerardo; Vandevondele, Stephanie; Verhovsky, Yuli; Virk, Selene M; Wakelin, Suzanne; Walcott, Gregory C; Wang, Jingwen; Worsley, Graham J; Yan, Juying; Yau, Ling; Zuerlein, Mike; Rogers, Jane; Mullikin, James C; Hurles, Matthew E; McCooke, Nick J; West, John S; Oaks, Frank L; Lundberg, Peter L; Klenerman, David; Durbin, Richard; Smith, Anthony J

    2008-11-01

    DNA sequence information underpins genetic research, enabling discoveries of important biological or medical benefit. Sequencing projects have traditionally used long (400-800 base pair) reads, but the existence of reference sequences for the human and many other genomes makes it possible to develop new, fast approaches to re-sequencing, whereby shorter reads are compared to a reference to identify intraspecies genetic variation. Here we report an approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost. Single molecules of DNA are attached to a flat surface, amplified in situ and used as templates for synthetic sequencing with fluorescent reversible terminator deoxyribonucleotides. Images of the surface are analysed to generate high-quality sequence. We demonstrate application of this approach to human genome sequencing on flow-sorted X chromosomes and then scale the approach to determine the genome sequence of a male Yoruba from Ibadan, Nigeria. We build an accurate consensus sequence from >30x average depth of paired 35-base reads. We characterize four million single-nucleotide polymorphisms and four hundred thousand structural variants, many of which were previously unknown. Our approach is effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications.

  15. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  16. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  17. Linkage disequilibrium interval mapping of quantitative trait loci

    PubMed Central

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-01-01

    Background: For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Results: Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity-by-descent methods, whilst being valuable in a wider range of populations. Conclusion: Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates. PMID:16542433

  18. Determining suitable image resolutions for accurate supervised crop classification using remote sensing data

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Duveiller, Grégory

    2013-10-01

    Mapping the spatial distribution of crops has become a fundamental input for agricultural production monitoring using remote sensing. However, the multi-temporality that is often necessary to accurately identify crops and to monitor crop growth generally comes at the expense of coarser observation supports, and can lead to increasingly erroneous class allocations caused by mixed pixels. For a given application like crop classification, the spatial resolution requirement (e.g. in terms of a maximum tolerable pixel size) differs considerably over different landscapes. To analyse the spatial resolution requirements for accurate crop identification via image classification, this study builds upon and extends a conceptual framework established in a previous work [1]. This framework allows the spatial resolution requirements for crop monitoring to be defined quantitatively, based on simulating how agricultural landscapes, and more specifically the fields covered by a crop of interest, are seen by instruments with increasingly coarser resolving power. The concept of crop-specific pixel purity, defined as the degree of homogeneity of the signal encoded in a pixel with respect to the target crop type, is used to analyse how mixed the pixels can be (as they become coarser) without undermining their capacity to describe the desired surface properties. Here, the framework is steered towards answering the question: "What is the spatial resolution requirement for crop identification via supervised image classification, in particular minimum and coarsest acceptable pixel sizes, and how do these requirements change over different landscapes?" The framework is applied over four contrasting agro-ecological landscapes in Middle Asia. Inputs to the experiment were eight multi-temporal images from the RapidEye sensor; the simulated pixel sizes range from 6.5 m to 396.5 m. Constraining parameters for crop identification were defined by setting thresholds for classification.
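
    The purity computation at the heart of this framework is easy to sketch. The Python fragment below is a toy illustration, not the study's implementation: the random mask, aggregation factors, and 50% purity threshold are all assumptions chosen for demonstration.

      import numpy as np

      # Block-average a fine binary crop mask to mimic coarser pixels, then ask
      # how many coarse pixels still meet a purity threshold for the target crop.
      def pixel_purity(crop_mask, factor):
          h, w = crop_mask.shape
          h, w = h - h % factor, w - w % factor                 # trim to block multiple
          blocks = crop_mask[:h, :w].reshape(h // factor, factor, w // factor, factor)
          return blocks.mean(axis=(1, 3))                       # crop fraction per coarse pixel

      rng = np.random.default_rng(0)
      mask = (rng.random((600, 600)) < 0.3).astype(float)       # toy scene, 30% crop cover
      for factor in (1, 10, 60):                                # e.g. 6.5 m -> 65 m -> 390 m
          share_pure = (pixel_purity(mask, factor) >= 0.5).mean()
          print(factor, share_pure)                             # usable pixels shrink with size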

  19. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
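
    For orientation, a standard relation from SAR radiometry (a generic formulation, not necessarily the exact correction applied to the AIRSAR data) ties the backscattering coefficient to radar brightness through the DEM-derived local incidence angle:

      \[
        \sigma^0 \;=\; \beta^0 \,\sin\theta_{\mathrm{loc}},
      \]

    where β⁰ is the radar brightness (backscatter per unit slant-range area) and θ_loc is the local incidence angle; any error in θ_loc from unmodeled topography or antenna-pointing uncertainty therefore propagates directly into σ⁰.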

  20. Automatic and Accurate Shadow Detection Using Near-Infrared Information.

    PubMed

    Rüfenacht, Dominic; Fredembach, Clément; Süsstrunk, Sabine

    2014-08-01

    We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
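
    The dark-in-both-bands idea can be sketched in a few lines. The following Python fragment is a simplified illustration with assumed thresholds; the published method builds its candidate map and ratio refinement with more care.

      import numpy as np

      # Shadow candidates: pixels dark in the visible AND in the NIR; then refine
      # with the visible/NIR ratio, exploiting the distinct NIR spectra of
      # common light sources.
      def shadow_map(vis, nir, t_dark=0.2, t_ratio=1.0):
          vis = vis / vis.max()
          nir = nir / nir.max()
          candidates = (vis < t_dark) & (nir < t_dark)   # dark in both bands
          ratio = vis / (nir + 1e-6)                     # band-ratio refinement
          return candidates & (ratio < t_ratio)

      rng = np.random.default_rng(1)
      vis, nir = rng.random((2, 128, 128))               # stand-ins for real images
      print(shadow_map(vis, nir).mean())                 # fraction flagged as shadow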

  1. The SILAC Fly Allows for Accurate Protein Quantification in Vivo*

    PubMed Central

    Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias

    2010-01-01

    Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996

  2. SNS shielding analyses overview

    SciTech Connect

    Popova, Irina; Gallmeier, Franz; Iverson, Erik B; Lu, Wei; Remec, Igor

    2015-01-01

    This paper gives an overview of on-going shielding analyses for the Spallation Neutron Source. At present, most of the shielding work is concentrated on the beam lines and instrument enclosures, in preparation for commissioning, safe operation, and adequate radiation background in the future. Work is also on-going for the accelerator facility. This includes radiation-protection analyses for the placement of radiation monitors, designing shielding for additional facilities to test accelerator structures, redesigning some parts of the facility, and designing component-testing facilities for the main accelerator structure. Neutronics analyses are also required to support spent-structure management, including waste characterisation analyses, choice of a proper transport/storage package, and shielding enhancement for the package if required.

  3. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  4. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  5. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  6. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of involute gear measurements. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
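
    A minimal numerical sketch of the closure idea (toy values, not either institute's procedure): when the artifact is measured in N rotated positions, the instrument error stays attached to the angular index while the artifact error rotates with it, so averaging over rotations separates the two.

      import numpy as np

      N = 8
      rng = np.random.default_rng(2)
      instrument = rng.normal(0.0, 1.0, N)        # systematic device error per index
      artifact = rng.normal(0.0, 1.0, N)
      artifact -= artifact.mean()                 # pitch deviations average to zero

      # measurement with the artifact rotated by r index positions
      m = np.array([[instrument[j] + artifact[(j - r) % N] for j in range(N)]
                    for r in range(N)])

      instrument_est = m.mean(axis=0)             # artifact cancels across rotations
      artifact_est = m[0] - instrument_est        # what remains at rotation r = 0
      print(np.allclose(instrument_est, instrument), np.allclose(artifact_est, artifact))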

  7. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
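
    As a worked illustration of the temporal cue involved (standard acoustics, not data from the study), two nearly matched tones interfere to produce beats at the difference frequency:

      \[
        f_{\mathrm{beat}} \;=\; |f_1 - f_2|,
      \]

    so matching tones to within 0.5 Hz amounts to stretching the beat period T = 1/f_beat beyond 2 s, a cue that demands no fine spectral resolution from the implant.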

  8. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  9. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  10. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
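
    A cost model of this kind can be sketched very simply. The Python fragment below uses hypothetical per-cell and per-boundary costs; the paper's actual models are calibrated against the one-dimensional fluids code rather than assumed.

      # Predict one parallel step as the slowest processor's compute time plus a
      # communication/synchronization charge per partition boundary (1-D grid).
      def step_time(partition, t_cell=1e-6, t_comm=5e-5):
          compute = max(len(cells) for cells in partition) * t_cell
          boundaries = len(partition) - 1
          return compute + boundaries * t_comm

      cells = list(range(10_000))
      balanced = [cells[i::4] for i in range(4)]
      skewed = [cells[:7000], cells[7000:8000], cells[8000:9000], cells[9000:]]
      print(step_time(balanced), step_time(skewed))   # the balanced mapping wins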

  11. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. Accuracy here means that the composition of the sampled gas must be representative of the whole and related to flow. If so, measurement and sampling techniques are married, gas volumes are accurately accounted for, and adjustments to composition can be made.

  12. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made to model mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiment required to model them.

  13. Recent advances in quantitative neuroproteomics.

    PubMed

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2013-06-15

    The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders including schizophrenia and drug addiction, and to neurodegenerative diseases including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). The biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies are beginning to shed light on the neuroproteome in both health and disease.

  14. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  15. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  16. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  17. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
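
    For orientation, the textbook PFG result (not specific to this paper) is that coherent motion during the gradient pair adds a phase proportional to velocity:

      \[
        \phi \;=\; \gamma\, g\, \delta\, \Delta\, v,
      \]

    where γ is the gyromagnetic ratio, g and δ are the gradient amplitude and pulse duration, Δ is the pulse separation, and v is the mean displacement velocity. An asymmetric intra-voxel displacement distribution shifts the measured phase away from the value implied by the mean velocity, which is precisely the error mechanism identified above.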

  18. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  19. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  20. Sources of Technical Variability in Quantitative LC-MS Proteomics: Human Brain Tissue Sample Analysis.

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Orton, Daniel J.; Xie, Fang; Moore, Ronald J.; Ramirez Restrepo, Manuel; Engel, Anzhelika; Lieberman, Andrew P.; Albin, Roger L.; Camp, David G.; Smith, Richard D.; Myers, Amanda J.

    2013-05-03

    To design a robust quantitative proteomics study, an understanding of both the inherent heterogeneity of the biological samples being studied as well as the technical variability of the proteomics methods and platform is needed. Additionally, accurately identifying the technical steps associated with the largest variability would provide valuable information for the improvement and design of future processing pipelines. We present an experimental strategy that allows for a detailed examination of the variability of the quantitative LC-MS proteomics measurements. By replicating analyses at different stages of processing, various technical components can be estimated and their individual contribution to technical variability can be dissected. This design can be easily adapted to other quantitative proteomics pipelines. Herein, we applied this methodology to our label-free workflow for the processing of human brain tissue. For this application, the pipeline was divided into four critical components: tissue dissection and homogenization (extraction), protein denaturation followed by trypsin digestion and SPE clean-up (digestion), short-term run-to-run instrumental response fluctuation (instrumental variance), and long-term drift of the quantitative response of the LC-MS/MS platform over the 2-week period of continuous analysis (instrumental stability). From this analysis, we found the following contributions to variability: extraction (72%) >> instrumental variance (16%) > instrumental stability (8.4%) > digestion (3.1%). Furthermore, the stability of the platform and its suitability for discovery proteomics studies is demonstrated.
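
    The stage-by-stage dissection can be illustrated with the classic one-way random-effects estimator. The sketch below uses simulated numbers and only two levels (extraction over injection); the study itself nested replicates across all four pipeline stages.

      import numpy as np

      rng = np.random.default_rng(3)
      n_extract, n_inject = 6, 4
      extract_effect = rng.normal(0.0, 2.0, n_extract)   # extraction-level variability
      obs = extract_effect[:, None] + rng.normal(0.0, 0.5, (n_extract, n_inject))

      within = obs.var(axis=1, ddof=1).mean()                      # injection-level variance
      between = obs.mean(axis=1).var(ddof=1) - within / n_inject   # method-of-moments ANOVA
      print(f"extraction variance ~{between:.2f}, instrumental variance ~{within:.2f}")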

  1. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing

  2. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  3. Quantitative Spectroscopy of Deneb

    NASA Astrophysics Data System (ADS)

    Schiller, Florian; Przybilla, N.

    We use the visually brightest A-type supergiant Deneb (A2 Ia) as benchmark for testing a spectroscopic analysis technique developed for quantitative studies of BA-type supergiants. Our NLTE spectrum synthesis technique allows us to derive stellar parameters and elemental abundances with unprecedented accuracy. The study is based on a high-resolution and high-S/N spectrum obtained with the Echelle spectrograph FOCES on the Calar Alto 2.2 m telescope. Practically all inconsistencies reported in earlier studies are resolved. A self-consistent view of Deneb is thus obtained, allowing us to discuss its evolutionary state in detail by comparison with the most recent generation of evolution models for massive stars. The basic atmospheric parameters Teff = 8525 ± 75 K and log g = 1.10 ± 0.05 dex (cgs) and the distance imply the following fundamental parameters for Deneb: M_spec = 17 ± 3 M⊙, L = 1.77 ± 0.29 × 10^5 L⊙ and R = 192 ± 16 R⊙. The derived He and CNO abundances indicate mixing with nuclear-processed matter. The high N/C ratio of 4.64 ± 1.39 and a N/O ratio of 0.88 ± 0.07 (mass fractions) could in principle be explained by evolutionary models with initially very rapid rotation. A mass of ~22 M⊙ is implied for the progenitor on the zero-age main sequence, i.e. it was a late O-type star. Significant mass loss has occurred, probably enhanced by pronounced centrifugal forces. The observational constraints favour a scenario for the evolution of Deneb where the effects of rotational mixing may be amplified by an interaction with a magnetic field. Analogous analyses of such highly luminous BA-type supergiants will allow for precision studies of different galaxies in the Local Group and beyond.

  4. Reference gene selection for quantitative real-time PCR normalization in Quercus suber.

    PubMed

    Marum, Liliana; Miguel, Andreia; Ricardo, Cândido P; Miguel, Célia

    2012-01-01

    The use of reverse transcription quantitative PCR technology to assess gene expression levels requires an accurate normalization of data in order to avoid misinterpretation of experimental results and erroneous analyses. Despite being the focus of several transcriptomics projects, oaks, and particularly cork oak (Quercus suber), have not been investigated regarding the identification of reference genes suitable for the normalization of real-time quantitative PCR data. In this study, ten candidate reference genes (Act, CACs, EF-1α, GAPDH, His3, PsaH, Sand, PP2A, β-Tub and Ubq) were evaluated to determine the most stable internal reference for quantitative PCR normalization in cork oak. The transcript abundance of these genes was analysed in several tissues of cork oak, including leaves, reproduction cork, and periderm from branches at different developmental stages (1-, 2-, and 3-year old) or collected in different dates (active growth period versus dormancy). The three statistical methods (geNorm, NormFinder, and CV method) used in the evaluation of the most suitable combination of reference genes identified Act and CACs as the most stable candidates when all the samples were analysed together, while β-Tub and PsaH showed the lowest expression stability. However, when different tissues, developmental stages, and collection dates were analysed separately, the reference genes exhibited some variation in their expression levels. In this study, and for the first time, we have identified and validated reference genes in cork oak that can be used for quantification of target gene expression in different tissues and experimental conditions and will be useful as a starting point for gene expression studies in other oaks.
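
    Of the three stability measures, the CV method is the simplest to sketch. The Python fragment below uses a made-up expression matrix (gene names borrowed from the candidate list purely for flavor): candidates are ranked by the coefficient of variation of normalized expression across samples, smallest first.

      import numpy as np

      rng = np.random.default_rng(4)
      genes = ["Act", "CACs", "EF-1a", "GAPDH", "PsaH"]
      expr = rng.lognormal(mean=2.0, sigma=0.1, size=(len(genes), 24))  # genes x samples
      expr[4] *= rng.lognormal(0.0, 0.5, 24)                            # make PsaH unstable

      cv = expr.std(axis=1, ddof=1) / expr.mean(axis=1)                 # coefficient of variation
      for gene, c in sorted(zip(genes, cv), key=lambda pair: pair[1]):
          print(f"{gene}: CV = {c:.3f}")                                # most stable gene first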

  5. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  6. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
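
    The median-function simplification mentioned above is easy to demonstrate. The Python sketch below shows a generic monotonized-central-style slope constraint, not a transcription of the paper's scheme; it relies on the identity minmod(x, y) = median(0, x, y).

      def median3(a, b, c):
          # median of three numbers; note minmod(x, y) == median3(0.0, x, y)
          return max(min(a, b), min(max(a, b), c))

      def limited_slope(u_left, u_center, u_right):
          central = 0.5 * (u_right - u_left)      # second-order slope on smooth data
          fwd = u_right - u_center
          bwd = u_center - u_left
          m = median3(0.0, fwd, bwd)              # minmod of one-sided differences
          return median3(central, 0.0, 2.0 * m)   # clip central slope into [0, 2*minmod]

      print(limited_slope(0.0, 1.0, 2.0))   # smooth ramp: keeps the central slope 1.0
      print(limited_slope(0.0, 1.0, 1.0))   # plateau edge: slope limited to 0.0
      print(limited_slope(0.0, 1.0, 0.0))   # extremum: slope suppressed to 0.0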

  7. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
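
    The first- and second-order inertia corrections referred to here have simple closed forms (standard sensor dynamics, stated for orientation rather than taken from the paper):

      \[
        T_f(t) \;=\; T_m(t) + \tau\,\frac{\mathrm{d}T_m}{\mathrm{d}t}
        \qquad\text{(first order)},
      \]
      \[
        T_f(t) \;=\; T_m(t) + (\tau_1+\tau_2)\,\frac{\mathrm{d}T_m}{\mathrm{d}t}
        + \tau_1\tau_2\,\frac{\mathrm{d}^2 T_m}{\mathrm{d}t^2}
        \qquad\text{(second order)},
      \]

    where T_m is the indicated temperature, T_f the reconstructed fluid temperature, and τ the thermometer time constants. Differentiating noisy measurements limits this simple correction in practice, which is what motivates the inverse space-marching approach taken with the new cylindrical-housing design.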

  8. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
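
    A continuous wavelet transform is compact enough to sketch directly. The numpy-only fragment below assumes a Morlet mother wavelet with ω₀ = 6 and a synthetic chirp as the nonstationary test signal; it is a didactic sketch rather than the article's code.

      import numpy as np

      def morlet(t, omega0=6.0):
          # complex Morlet mother wavelet (admissibility correction omitted)
          return np.pi ** -0.25 * np.exp(1j * omega0 * t - t ** 2 / 2)

      def cwt(signal, scales):
          rows = []
          for s in scales:                          # scale measured in samples
              t = np.arange(-4 * s, 4 * s + 1)      # effective support of the dilation
              psi = morlet(t / s) / np.sqrt(s)      # L2-normalized dilated wavelet
              rows.append(np.convolve(signal, np.conj(psi[::-1]), mode="same"))
          return np.abs(np.array(rows))             # |coefficients|, scale x time

      n = 1024
      time = np.arange(n) / n
      chirp = np.sin(2 * np.pi * (5 + 40 * time) * time)   # frequency rises with time
      power = cwt(chirp, scales=np.geomspace(2, 100, 32))
      print(power.shape)                                   # (32, 1024) scalogram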

  9. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  10. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  11. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  12. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH_f^0 (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
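
    For orientation, a bond separation reaction conserves the number of each bond type between reactants and products, so systematic functional errors largely cancel. Propane is an illustrative case (not one singled out in the abstract):

      \[
        \mathrm{CH_3CH_2CH_3} + \mathrm{CH_4} \;\rightarrow\; 2\,\mathrm{CH_3CH_3},
      \]
      \[
        \Delta H_f^0(\mathrm{C_3H_8}) \;=\; 2\,\Delta H_f^0(\mathrm{C_2H_6})
        - \Delta H_f^0(\mathrm{CH_4}) - \Delta H_{\mathrm{rxn}},
      \]

    where ΔH_rxn is the bond separation energy computed with the density functional and the reference heats of formation of the small fragment molecules are experimental.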

  13. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  14. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  15. Information Omitted From Analyses.

    PubMed

    2015-08-01

    In the Original Article titled “Higher-Order Genetic and Environmental Structure of Prevalent Forms of Child and Adolescent Psychopathology” published in the February 2011 issue of JAMA Psychiatry (then Archives of General Psychiatry) (2011;68[2]:181-189), there were 2 errors. Although the article stated that the dimensions of psychopathology were measured using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder, all dimensional scores used in the reported analyses were actually based on parent reports of symptoms; youth reports were not used. In addition, whereas the article stated that each symptom dimension was residualized on age, sex, age-squared, and age by sex, the dimensions actually were only residualized on age, sex, and age-squared. All analyses were repeated using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder; these dimensional scores were residualized on age, age-squared, sex, sex by age, and sex by age-squared. The results of the new analyses were qualitatively the same as those reported in the article, with no substantial changes in conclusions. The only notable small difference was that major depression and generalized anxiety disorder dimensions had small but significant loadings on the internalizing factor in addition to their substantial loadings on the general factor in the analyses of both genetic and non-shared covariances in the selected models in the new analyses. Corrections were made to the

  16. Uncertainty in Quantitative Electron Probe Microanalysis

    PubMed Central

    Heinrich, Kurt F. J.

    2002-01-01

    Quantitative electron probe analysis is based on models of the physics of x-ray generation, empirically adjusted using analyses of specimens of known composition. Their accuracy can be estimated by applying them to a set of specimens of presumably well-known composition. PMID:27446746

  17. Quantitative DNA Methylation Profiling in Cancer.

    PubMed

    Ammerpohl, Ole; Haake, Andrea; Kolarova, Julia; Siebert, Reiner

    2016-01-01

    Epigenetic mechanisms including DNA methylation are fundamental for the regulation of gene expression. Epigenetic alterations can lead to the development and the evolution of malignant tumors as well as the emergence of phenotypically different cancer cells or metastasis from one single tumor cell. Here we describe bisulfite pyrosequencing, a technology to perform quantitative DNA methylation analyses, to detect aberrant DNA methylation in malignant tumors.

  18. Quantitative MRI Assessment of Leukoencephalopathy

    PubMed Central

    Reddick, Wilburn E.; Glass, John O.; Langston, James W.; Helton, Kathleen J.

    2008-01-01

    Quantitative MRI assessment of leukoencephalopathy is difficult because the MRI properties of leukoencephalopathy significantly overlap those of normal tissue. This report describes the use of an automated procedure for longitudinal measurement of tissue volume and relaxation times to quantify leukoencephalopathy. Images derived by using this procedure in patients undergoing therapy for acute lymphoblastic leukemia (ALL) are presented. Five examinations from each of five volunteers (25 examinations) were used to test the reproducibility of quantitated baseline and subsequent, normal-appearing images; the coefficients of variation were less than 2% for gray and white matter. Regions of leukoencephalopathy in patients were assessed by comparison with manual segmentation. Two radiologists manually segmented images from 15 randomly chosen MRI examinations that exhibited leukoencephalopathy. Kappa analyses showed that the two radiologists’ interpretations were concordant (κ = 0.70) and that each radiologist’s interpretations agreed with the results of the automated procedure (κ = 0.57 and 0.55).The clinical application of this method was illustrated by analysis of images from sequential MR examinations of two patients who developed leukoencephalopathy during treatment for ALL. The ultimate goal is to use these quantitative MR imaging measures to better understand therapy-induced neurotoxicity, which can be limited or even reversed with some combination of therapy adjustments and pharmacological and neurobehavioral interventions. PMID:11979570

  19. Quantitative measures for redox signaling.

    PubMed

    Pillay, Ché S; Eagling, Beatrice D; Driscoll, Scott R E; Rohwer, Johann M

    2016-07-01

    Redox signaling is now recognized as an important regulatory mechanism for a number of cellular processes including the antioxidant response, phosphokinase signal transduction and redox metabolism. While there has been considerable progress in identifying the cellular machinery involved in redox signaling, quantitative measures of redox signals have been lacking, limiting efforts aimed at understanding and comparing redox signaling under normoxic and pathogenic conditions. Here we have outlined some of the accepted principles for redox signaling, including the description of hydrogen peroxide as a signaling molecule and the role of kinetics in conferring specificity to these signaling events. Based on these principles, we then develop a working definition for redox signaling and review a number of quantitative methods that have been employed to describe signaling in other systems. Using computational modeling and published data, we show how time- and concentration-dependent analyses, in particular, could be used to quantitatively describe redox signaling and therefore provide important insights into the functional organization of redox networks. Finally, we consider some of the key challenges with implementing these methods. PMID:27151506

  20. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  1. Quantitative analysis of blood vessel geometry

    NASA Astrophysics Data System (ADS)

    Fuhrman, Michael G.; Abdul-Karim, Othman; Shah, Sujal; Gilbert, Steven G.; Van Bibber, Richard

    2001-07-01

    Re-narrowing or restenosis of a human coronary artery occurs within six months in one third of balloon angioplasty procedures. Accurate and repeatable quantitative analysis of vessel shape is important to characterize the progression and type of restenosis, and to evaluate the effects new therapies might have. A combination of complicated geometry and image variability, and the need for high resolution and large image size makes visual/manual analysis slow, difficult, and prone to error. The image processing and analysis described here was developed to automate feature extraction of the lumen, internal elastic lamina, neointima, external elastic lamina, and tunica adventitia and to enable an objective, quantitative definition of blood vessel geometry. The quantitative geometrical analysis enables the measurement of several features including perimeter, area, and other metrics of vessel damage. Automation of feature extraction creates a high throughput capability that enables analysis of serial sections for more accurate measurement of restenosis dimensions. Measurement results are input into a relational database where they can be statistically analyzed and compared across studies. As part of the integrated process, results are also imprinted on the images themselves to facilitate auditing of the results. The analysis is fast, repeatable and accurate while allowing the pathologist to control the measurement process.
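
    The perimeter and area metrics mentioned above reduce to simple polygon computations once a boundary has been extracted; a minimal sketch, using a synthetic circular contour in place of a traced lumen:

        # Perimeter and shoelace area of a closed contour (n x 2 array of points).
        import numpy as np

        def perimeter(pts):
            d = np.diff(np.vstack([pts, pts[:1]]), axis=0)   # close the loop
            return np.hypot(d[:, 0], d[:, 1]).sum()

        def area(pts):
            x, y = pts[:, 0], pts[:, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        lumen = np.column_stack([1.3 * np.cos(theta), 1.3 * np.sin(theta)])  # mm
        print(f"perimeter = {perimeter(lumen):.3f} mm, area = {area(lumen):.3f} mm^2")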

  2. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  3. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012), 10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  4. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
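
    The "standard interpretation methods" referred to above amount to applying Darcy's law to each phase at steady state; a sketch with hypothetical core and fluid values:

        # Effective relative permeability from a steady-state coreflood:
        # kr = q * mu * L / (k_abs * A * dP), SI units throughout.

        def effective_kr(q, mu, length, k_abs, area, dp):
            """q: phase flow rate [m^3/s]; mu: viscosity [Pa.s]; length [m];
            k_abs: absolute permeability [m^2]; area [m^2]; dp [Pa]."""
            return q * mu * length / (k_abs * area * dp)

        # Hypothetical numbers: ~100 mD core, 10 cm long, CO2-like viscosity.
        kr = effective_kr(q=1e-7, mu=5e-5, length=0.1, k_abs=1e-13, area=2e-3, dp=8e3)
        print(f"effective kr = {kr:.3f}")

    The flowrate dependency discussed above enters because the measured pressure drop reflects sub-core heterogeneity and outlet effects, so the same rock can yield different effective kr values at different injection rates.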

  5. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  6. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
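
    A sketch of the storage trade-off described above, comparing a stacked sample distribution n(z) built from full per-galaxy PDFs with single point estimates (here the PDF mean); the PDFs are synthetic Gaussians, not CFHTLS data.

        import numpy as np

        z = np.linspace(0.0, 2.0, 201)
        dz = z[1] - z[0]
        rng = np.random.default_rng(0)
        true_z = rng.uniform(0.2, 1.2, size=1000)
        sigma = 0.05 * (1 + true_z)                    # typical photo-z scatter
        pdfs = np.exp(-0.5 * ((z[None, :] - true_z[:, None]) / sigma[:, None]) ** 2)
        pdfs /= pdfs.sum(axis=1, keepdims=True) * dz   # normalize each PDF

        n_z_stacked = pdfs.mean(axis=0)                # needs all N x len(z) values
        point_estimates = (z[None, :] * pdfs).sum(axis=1) * dz   # one value per galaxy

        print("mean z from stack :", (z * n_z_stacked).sum() * dz)
        print("mean z from points:", point_estimates.mean())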

  7. Quantitative confocal microscopy: beyond a pretty picture.

    PubMed

    Jonkman, James; Brown, Claire M; Cole, Richard W

    2014-01-01

    Quantitative optical microscopy has become the norm, with the confocal laser-scanning microscope being the workhorse of many imaging laboratories. Generating quantitative data requires a greater emphasis on the accurate operation of the microscope itself, along with proper experimental design and adequate controls. The microscope, which is more accurately an imaging system, cannot be treated as a "black box" with the collected data viewed as infallible. There needs to be regularly scheduled performance testing that will ensure that quality data are being generated. This regular testing also allows for the tracking of metrics that can point to issues before they result in instrument malfunction and downtime. In turn, images must be collected in a manner that is quantitative with maximal signal to noise (which can be difficult depending on the application) without data clipping. Images must then be processed to correct for background intensities, fluorophore cross talk, and uneven field illumination. With advanced techniques such as spectral imaging, Förster resonance energy transfer, and fluorescence-lifetime imaging microscopy, experimental design needs to be carefully planned out and include all appropriate controls. Quantitative confocal imaging in all of these contexts and more will be explored within the chapter. PMID:24974025
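
    Two of the corrections listed above, background subtraction and flat-field correction for uneven illumination, reduce to a simple per-pixel operation; a sketch with synthetic calibration frames (in practice the dark and flat images come from dedicated calibration acquisitions):

        import numpy as np

        def correct(raw, dark, flat):
            """Dark-subtract, then divide by a unit-mean illumination profile."""
            gain = flat.astype(float) - dark
            gain /= gain.mean()
            return (raw.astype(float) - dark) / gain

        rng = np.random.default_rng(1)
        dark = np.full((64, 64), 100.0)                # detector offset frame
        profile = np.linspace(0.5, 1.5, 64)[None, :]   # uneven field illumination
        flat = dark + 1000.0 * profile
        raw = dark + 50.0 * profile + rng.normal(0, 2, (64, 64))
        print("corrected image mean:", correct(raw, dark, flat).mean().round(2))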

  8. Accurate, Fully-Automated NMR Spectral Profiling for Metabolomics

    PubMed Central

    Ravanbakhsh, Siamak; Liu, Philip; Bjordahl, Trent C.; Mandal, Rupasri; Grant, Jason R.; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S.

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person’s biofluids, which means such diseases can often be readily detected from a person’s “metabolic profile”—i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid’s Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person’s metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the “signatures” of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF), defined mixtures and realistic computer generated spectra, involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively—with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications

  9. Accurate, fully-automated NMR spectral profiling for metabolomics.

    PubMed

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C; Mandal, Rupasri; Grant, Jason R; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile"-i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF), defined mixtures and realistic computer generated spectra, involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively-with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR in

  10. Accurate response surface approximations for weight equations based on structural optimization

    NASA Astrophysics Data System (ADS)

    Papila, Melih

    Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help to transmit this information effectively to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low-order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers. The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher-order polynomials. In
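
    A minimal sketch of the fit-and-screen workflow described above: fit a low-order polynomial weight equation to noisy optimization results, flag outliers by standardized residuals, and refit. The data are synthetic, and real use would repair flagged runs (e.g. by rerunning with different convergence settings) rather than simply dropping them, as the dissertation argues.

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 30)                  # design variable
        w = 100 + 40 * x + 25 * x**2 + rng.normal(0, 1.0, x.size)
        w[7] += 15.0                                   # premature-convergence outlier

        def fit_rs(x, y, degree=2):
            coeffs = np.polyfit(x, y, degree)
            resid = y - np.polyval(coeffs, x)
            return coeffs, resid / resid.std()

        _, std_resid = fit_rs(x, w)
        keep = np.abs(std_resid) < 3.0                 # flag 3-sigma residuals
        coeffs_clean, _ = fit_rs(x[keep], w[keep])
        print("flagged points    :", np.flatnonzero(~keep))
        print("refit coefficients:", np.round(coeffs_clean, 2))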

  11. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
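
    The coordinate conversion in the claim is straightforward once the encoder resolution and arm geometry are fixed; a sketch with hypothetical values (the mark count, arm length, and flat probe path are all assumptions for illustration):

        import math

        MARKS_PER_REV = 36000        # encoder marks per revolution (assumed)
        ARM_LENGTH = 0.350           # probe arm length in meters (assumed)

        def probe_tip(counts, z_offset=0.0):
            """Cylindrical coordinates (r, theta, z) of the probe tip."""
            theta = 2.0 * math.pi * (counts % MARKS_PER_REV) / MARKS_PER_REV
            return ARM_LENGTH, theta, z_offset

        r, theta, z = probe_tip(9000)
        print(f"r = {r} m, theta = {math.degrees(theta):.3f} deg, z = {z} m")

    With 36000 marks the raw angular resolution is 0.01 degrees, about 61 micrometers of arc at a 350 mm radius, which is why interpolation between marks matters for a highly accurate machine.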

  12. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  15. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of the Fresnel diffraction systems are also described.

  16. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  17. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  20. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented so as to be part of a telescope control system.

  1. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.

  2. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
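
    A sketch of the layered radiative-transfer step described above: given per-layer opacities (from the absorption model) and temperatures (from the forecast profile), accumulate the zenith opacity and the downwelling sky brightness. The five-layer profile is invented, not NAM output.

        import numpy as np

        tau_layer = np.array([0.005, 0.004, 0.003, 0.002, 0.001])  # Np per layer
        T_layer = np.array([280.0, 260.0, 240.0, 220.0, 210.0])    # K, ground up

        # Each layer's emission is attenuated by the total opacity below it.
        tau_below = np.concatenate([[0.0], np.cumsum(tau_layer)[:-1]])
        T_sky = np.sum(T_layer * (1.0 - np.exp(-tau_layer)) * np.exp(-tau_below))

        print(f"zenith opacity = {tau_layer.sum():.4f} Np")
        print(f"sky brightness = {T_sky:.2f} K")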

  3. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  4. A male-specific quantitative trait locus on 1p21 controlling human stature

    PubMed Central

    Sammalisto, S; Hiekkalinna, T; Suviolahti, E; Sood, K; Metzidis, A; Pajukanta, P; Lilja, H; Soro-Paavonen, A; Taskinen, M; Tuomi, T; Almgren, P; Orho-Melander, M; Groop, L; Peltonen, L; Perola, M

    2005-01-01

    Background: Many genome-wide scans aimed at complex traits have been statistically underpowered due to small sample size. Combining data from several genome-wide screens with comparable quantitative phenotype data should improve statistical power for the localisation of genomic regions contributing to these traits. Objective: To perform a genome-wide screen for loci affecting adult stature by combined analysis of four previously performed genome-wide scans. Methods: We developed a web based computer tool, Cartographer, for combining genetic marker maps which positions genetic markers accurately using the July 2003 release of the human genome sequence and the deCODE genetic map. Using Cartographer, we combined the primary genotype data from four genome-wide scans and performed variance components (VC) linkage analyses for human stature on the pooled dataset of 1417 individuals from 277 families and performed VC analyses for males and females separately. Results: We found significant linkage to stature on 1p21 (multipoint LOD score 4.25) and suggestive linkages on 9p24 and 18q21 (multipoint LOD scores 2.57 and 2.39, respectively) in males-only analyses. We also found suggestive linkage to 4q35 and 22q13 (multipoint LOD scores 2.18 and 2.85, respectively) when we analysed both females and males and to 13q12 (multipoint LOD score 2.66) in females-only analyses. Conclusions: We strengthened the evidence for linkage to previously reported quantitative trait loci (QTL) for stature and also found significant evidence of a novel male-specific QTL on 1p21. Further investigation of several interesting candidate genes in this region will help towards characterisation of this first sex-specific locus affecting human stature. PMID:15827092

  5. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. Travelers prefer the best-condition route when given accurate information, but delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillations, and the gap from system equilibrium.
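
    A sketch of the boundedly rational choice rule described above: when the perceived travel-time difference is within the threshold BR, a traveler is indifferent and picks either route at random. Travel times and the threshold value are illustrative.

        import random

        BR = 2.0    # indifference threshold in minutes (assumed)

        def choose_route(t1, t2, br=BR):
            """Return 0 or 1, the index of the chosen route."""
            if abs(t1 - t2) <= br:
                return random.randrange(2)   # indifferent: split the demand
            return 0 if t1 < t2 else 1

        random.seed(3)
        picks = [choose_route(20.5, 21.8) for _ in range(10000)]
        print("share choosing route 0:", sum(p == 0 for p in picks) / len(picks))

    Without the threshold, every traveler would jump to route 0, and with delayed information such all-or-nothing switching is exactly what produces the oscillations noted above.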

  6. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  7. Quantitative optical phase microscopy.

    PubMed

    Barty, A; Nugent, K A; Paganin, D; Roberts, A

    1998-06-01

    We present a new method for the extraction of quantitative phase data from microscopic phase samples by use of partially coherent illumination and an ordinary transmission microscope. The technique produces quantitative images of the phase profile of the sample without phase unwrapping. The technique is able to recover phase even in the presence of amplitude modulation, making it significantly more powerful than existing methods of phase microscopy. We demonstrate the technique by providing quantitatively correct phase images of well-characterized test samples and show that the results obtained for more-complex samples correlate with structures observed with Nomarski differential interference contrast techniques.
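
    Propagation-based quantitative phase recovery of this kind is commonly grounded in the transport-of-intensity equation, which links the longitudinal intensity derivative to the transverse phase gradient (a textbook relation, stated here as background rather than as this paper's exact derivation):

        \nabla_{\perp} \cdot \left( I(x,y)\, \nabla_{\perp} \phi(x,y) \right)
            = -\frac{2\pi}{\lambda}\, \frac{\partial I(x,y)}{\partial z}

    Because the measured intensity I multiplies the phase gradient, the relation remains solvable even when the sample modulates amplitude as well as phase, consistent with the claim above that phase can be recovered in the presence of amplitude modulation.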

  8. Quantitative protein localization signatures reveal an association between spatial and functional divergences of proteins.

    PubMed

    Loo, Lit-Hsin; Laksameethanasan, Danai; Tung, Yi-Ling

    2014-03-01

    Protein subcellular localization is a major determinant of protein function. However, this important protein feature is often described in terms of discrete and qualitative categories of subcellular compartments, and therefore it has limited applications in quantitative protein function analyses. Here, we present Protein Localization Analysis and Search Tools (PLAST), an automated analysis framework for constructing and comparing quantitative signatures of protein subcellular localization patterns based on microscopy images. PLAST produces human-interpretable protein localization maps that quantitatively describe the similarities in the localization patterns of proteins and major subcellular compartments, without requiring manual assignment or supervised learning of these compartments. Using the budding yeast Saccharomyces cerevisiae as a model system, we show that PLAST is more accurate than existing, qualitative protein localization annotations in identifying known co-localized proteins. Furthermore, we demonstrate that PLAST can reveal protein localization-function relationships that are not obvious from these annotations. First, we identified proteins that have similar localization patterns and participate in closely-related biological processes, but do not necessarily form stable complexes with each other or localize at the same organelles. Second, we found an association between spatial and functional divergences of proteins during evolution. Surprisingly, as proteins with common ancestors evolve, they tend to develop more diverged subcellular localization patterns, but still occupy similar numbers of compartments. This suggests that divergence of protein localization might be more frequently due to the development of more specific localization patterns over ancestral compartments than the occupation of new compartments. PLAST enables systematic and quantitative analyses of protein localization-function relationships, and will be useful to elucidate protein

  9. Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses

    ERIC Educational Resources Information Center

    Griffin, Kimberly A.; Museus, Samuel D.

    2011-01-01

    In this article, the authors discuss the utility of combining quantitative and qualitative methods in conducting intersectional analyses. First, they discuss some of the paradigmatic underpinnings of qualitative and quantitative research, and how these methods can be used in intersectional analyses. They then consider how paradigmatic pragmatism…

  10. Rapid Accurate Identification of Bacterial and Viral Pathogens

    SciTech Connect

    Dunn, John

    2007-03-09

    The goals of this program were to develop two assays for rapid, accurate identification of pathogenic organisms at the strain level. The first assay, "Quantitative Genome Profiling" or QGP, is a real-time PCR assay with a restriction enzyme-based component. Its underlying concept is that certain enzymes should cleave genomic DNA at many sites and that in some cases these cuts will interrupt the connection on the genomic DNA between flanking PCR primer pairs, thereby eliminating selected PCR amplifications. When this occurs, the appearance of the real-time PCR threshold (Ct) signal during DNA amplification is totally eliminated or, if cutting is incomplete, greatly delayed compared to an uncut control. This temporal difference in appearance of the Ct signal relative to undigested control DNA provides a rapid, high-throughput approach for DNA-based identification of different but closely related pathogens depending upon the nucleotide sequence of the target region. The second assay we developed uses the nucleotide sequence of pairs of short identifier tags (~21 bp) to identify DNA molecules. Subtle differences in linked tag pair combinations can also be used to distinguish between closely related isolates.
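
    A toy illustration of the QGP logic, using invented sequences and primer positions: if a restriction site falls between a primer pair on the template, the amplicon is interrupted and the Ct signal is eliminated or delayed, so closely related strains that differ at the site are distinguished.

        def amplifiable(template, fwd_start, rev_end, site):
            """True if no restriction site interrupts the primer-to-primer span."""
            return site not in template[fwd_start:rev_end]

        strain_a = "ATGCCGGAATTCTTAGGCAGT"   # carries a GAATTC (EcoRI) site
        strain_b = "ATGCCGGACTTCTTAGGCAGT"   # one-base change removes the site

        for name, seq in [("A", strain_a), ("B", strain_b)]:
            ok = amplifiable(seq, 2, 19, "GAATTC")
            print(f"strain {name}: Ct signal {'appears' if ok else 'eliminated/delayed'}")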

  11. Accurate multiple network alignment through context-sensitive random walk

    PubMed Central

    2015-01-01

    Background Comparative network analysis can provide an effective means of analyzing large-scale biological networks and gaining novel insights into their structure and organization. Global network alignment aims to predict the best overall mapping between a given set of biological networks, thereby identifying important similarities as well as differences among the networks. It has been shown that network alignment methods can be used to detect pathways or network modules that are conserved across different networks. To date, a number of network alignment algorithms have been proposed based on different formulations and approaches, many of them focusing on pairwise alignment. Results In this work, we propose a novel multiple network alignment algorithm based on a context-sensitive random walk model. The random walker employed in the proposed algorithm switches between two different modes, namely, an individual walk on a single network and a simultaneous walk on two networks. The switching decision is made in a context-sensitive manner by examining the current neighborhood, which is effective for quantitatively estimating the degree of correspondence between nodes that belong to different networks, in a manner that sensibly integrates node similarity and topological similarity. The resulting node correspondence scores are then used to predict the maximum expected accuracy (MEA) alignment of the given networks. Conclusions Performance evaluation based on synthetic networks as well as real protein-protein interaction networks shows that the proposed algorithm can construct more accurate multiple network alignments compared to other leading methods. PMID:25707987
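
    A simplified sketch of random-walk correspondence scoring, closer in spirit to IsoRank than to the paper's exact two-mode walk: pair scores diffuse over neighbor pairs of the two networks and mix with a node-similarity prior. The networks and similarities are toy inputs.

        import numpy as np

        def correspondence_scores(A, B, S, alpha=0.8, iters=200):
            """A, B: adjacency matrices; S: node-similarity prior (|A| x |B|)."""
            Wa = A / A.sum(axis=0, keepdims=True)      # column-normalized walk
            Wb = B / B.sum(axis=0, keepdims=True)
            R = S / S.sum()
            for _ in range(iters):
                R = alpha * (Wa @ R @ Wb.T) + (1 - alpha) * S / S.sum()
                R /= R.sum()
            return R

        A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle
        B = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # star
        S = np.ones((3, 3))
        print(np.round(correspondence_scores(A, B, S), 3))

    The context-sensitive element of the actual method, switching between an individual walk and a simultaneous walk based on the current neighborhood, refines exactly this kind of score before the maximum-expected-accuracy alignment step.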

  12. Subvoxel accurate graph search using non-Euclidean graph space.

    PubMed

    Abràmoff, Michael D; Wu, Xiaodong; Lee, Kyungmoo; Tang, Li

    2014-01-01

    Graph search is attractive for the quantitative analysis of volumetric medical images, and especially for layered tissues, because it allows globally optimal solutions in low-order polynomial time. However, because nodes of graphs typically encode evenly distributed voxels of the volume with arcs connecting orthogonally sampled voxels in Euclidean space, segmentation cannot achieve greater precision than a single unit, i.e. the distance between two adjoining nodes, and partial volume effects are ignored. We generalize the graph to non-Euclidean space by allowing non-equidistant spacing between nodes, so that subvoxel accurate segmentation is achievable. Because the number of nodes and edges in the graph remains the same, running time and memory use are similar, while all the advantages of graph search, including global optimality and computational efficiency, are retained. A deformation field calculated from the volume data adaptively changes regional node density so that node density varies with the inverse of the expected cost. We validated our approach using optical coherence tomography (OCT) images of the retina and 3-D MR of the arterial wall, and achieved statistically significant increased accuracy. Our approach allows improved accuracy in volume data acquired with the same hardware, and also preserves accuracy with lower-resolution, more cost-effective image acquisition equipment. The method is not limited to any specific imaging modality and is readily extensible to higher dimensions.

  13. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

    The accuracy and validity of tooth arrangement, and the lack of positional information relating dental root to jaw, are key problems in tooth arrangement technology. This paper aims to describe a newly developed virtual, personalized and accurate tooth arrangement system based on complete information about dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined by the anatomical reference points. The matching mathematical model of the teeth pattern and the principle of specific pose transformation of a rigid body are fully utilized. The positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relation between root and jaw is favorable. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.

  14. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)

  15. On Quantitative Rorschach Scales.

    ERIC Educational Resources Information Center

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  16. RECENT ADVANCES IN QUANTITATIVE NEUROPROTEOMICS

    PubMed Central

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2014-01-01

    The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders including schizophrenia and drug addiction, as well as neurodegenerative diseases including Parkinson’s disease and Alzheimer’s disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). While the biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge; nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to

  17. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
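
    The link between the lineshape and the Boltzmann constant comes through Doppler-broadening thermometry: for an absorber of mass m at temperature T, the Gaussian (Doppler) component of the line has a width given by the standard relation (stated here as background)

        \Delta\nu_{\mathrm{FWHM}} \;=\; \nu_{0}\,\sqrt{\frac{8 \ln 2 \; k_{B} T}{m c^{2}}}

    so a shot-noise-limited measurement of the width at known T, m, and line-center frequency determines k_B, which is how the 6 p.p.m. precision quoted above is obtained.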

  18. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
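
    For reference, the two path-based estimators named above are usually written as follows (standard definitions, not specific to this paper):

        \Delta F \;=\; \int_{0}^{1} \Bigl\langle \frac{\partial H(\lambda)}{\partial \lambda} \Bigr\rangle_{\lambda} \, d\lambda
        \qquad \text{(thermodynamic integration)}

        \Delta F \;=\; -\,k_{B} T \,\ln \Bigl\langle e^{-\Delta U / k_{B} T} \Bigr\rangle_{0}
        \qquad \text{(free energy perturbation)}

    Both require adequate phase-space overlap between neighboring states along the path, which is why the smooth, short dihedral-restraint paths constructed above improve convergence.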

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  20. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
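
    For contrast with the O(1) algorithm, here is the brute-force O(S)-per-pixel definition that the paper accelerates, with Gaussian spatial and range kernels; this is the target filter, not the paper's fast approximation.

        import numpy as np

        def bilateral(img, sigma_s, sigma_r, radius):
            ax = np.arange(-radius, radius + 1)
            xx, yy = np.meshgrid(ax, ax)
            spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # spatial kernel
            pad = np.pad(img, radius, mode="reflect")
            out = np.empty_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rng_k = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                    w = spatial * rng_k                 # combined weights
                    out[i, j] = (w * patch).sum() / w.sum()
            return out

        img = np.random.default_rng(4).random((32, 32)) * 255
        print(bilateral(img, sigma_s=3.0, sigma_r=30.0, radius=6).shape)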

  1. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes in the 500 km environs [1,2]. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is evaluated under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  2. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  3. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  4. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  5. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, that is, well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show how calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be integrated into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
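
    A minimal sketch of the kind of calibration the abstract alludes to, assuming static readings collected in many orientations and a simple axis-aligned bias/scale model; the spherical constraint is that calibrated readings must have magnitude g. The actual procedures of the paper are more elaborate, and all names here are illustrative:

    ```python
    import numpy as np

    G = 9.80665  # standard gravity, m/s^2

    def calibrate_static(samples):
        """Least-squares bias/scale calibration from static readings.
        samples: (n, 3) raw accelerometer readings in many orientations.
        Fits the axis-aligned ellipsoid A x^2 + B y^2 + C z^2 + D x + E y + F z = 1,
        then maps it onto the sphere |a| = G (the spherical constraint)."""
        x, y, z = samples.T
        M = np.column_stack([x**2, y**2, z**2, x, y, z])
        p, *_ = np.linalg.lstsq(M, np.ones(len(samples)), rcond=None)
        A, B, C, D, E, F = p
        bias = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
        K = 1.0 + A * bias[0]**2 + B * bias[1]**2 + C * bias[2]**2
        gain = np.sqrt(K / np.array([A, B, C])) / G  # semi-axes divided by g
        return bias, gain

    def apply_calibration(raw, bias, gain):
        # Calibrated static readings should satisfy |a| ~ G.
        return (raw - bias) / gain
    ```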

  6. Quantitative receptor autoradiography

    SciTech Connect

    Boast, C.A.; Snowhill, E.W.; Altar, C.A.

    1986-01-01

    Quantitative receptor autoradiography addresses technical and scientific advances in the sphere of quantitative autoradiography. The volume opens with an overview of the field from a historical and critical perspective. Following is a detailed discussion of in vitro data obtained from a variety of neurotransmitter systems. The next section explores applications of autoradiography, and the final two chapters consider experimental models. Methodological considerations are emphasized, including the use of computers for image analysis.

  7. Accurate measurement of the relative abundance of different DNA species in complex DNA mixtures.

    PubMed

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-06-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation for accurate quantitation of the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27, in the form of the relative abundances of their transcripts, as well as the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters.
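
    A hypothetical worked example of the competitor-array arithmetic described above: the measured target:competitor signal ratio per assay, rescaled by the known competitor ratios defined on the array, yields the relative abundances of the targets. All numbers below are invented for illustration:

    ```python
    # target/competitor signal ratio measured per assay (illustrative values)
    measured = {"G6pdx": 0.8, "Ubc": 2.4, "Rps27": 1.2}
    # competitor:competitor ratios defined by construction of the array
    competitor_ratio = {"G6pdx": 1.0, "Ubc": 1.0, "Rps27": 2.0}

    abundance = {g: measured[g] * competitor_ratio[g] for g in measured}
    total = sum(abundance.values())
    # Fractional expression signature of the three targets in this tissue.
    signature = {g: v / total for g, v in abundance.items()}
    print(signature)
    ```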

  8. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    PubMed Central

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation for accurate quantitation of the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27, in the form of the relative abundances of their transcripts, as well as the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters. PMID:22334570

  9. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.

  10. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
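
    For concreteness, the sketch below implements a single-lag relative of SOBI (the AMUSE estimator): whiten the channels, then diagonalize one symmetrized time-lagged covariance. Full SOBI jointly diagonalizes covariances at many lags, which is omitted here for brevity; this is an illustrative stand-in, not the project's code:

    ```python
    import numpy as np

    def amuse(X, lag=1):
        """Single-lag second-order blind identification (AMUSE variant).
        X: (channels, samples) EEG array. Returns estimated sources and
        the mixing matrix mapping sources back to channels."""
        X = X - X.mean(axis=1, keepdims=True)
        # Whiten using the zero-lag covariance.
        C0 = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(C0)
        W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
        Z = W @ X
        # Symmetrized lagged covariance of the whitened data.
        C = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
        C = (C + C.T) / 2
        _, U = np.linalg.eigh(C)          # rotation that diagonalizes C
        sources = U.T @ Z                 # estimated source activations
        mixing = np.linalg.pinv(U.T @ W)  # columns are scalp projections
        return sources, mixing
    ```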

  11. Accurate and comprehensive sequencing of personal genomes.

    PubMed

    Ajay, Subramanian S; Parker, Stephen C J; Abaan, Hatice Ozel; Fajardo, Karin V Fuentes; Margulies, Elliott H

    2011-09-01

    As whole-genome sequencing becomes commoditized and we begin to sequence and analyze personal genomes for clinical and diagnostic purposes, it is necessary to understand what constitutes a complete sequencing experiment for determining genotypes and detecting single-nucleotide variants. Here, we show that the current recommendation of ∼30× coverage is not adequate to produce genotype calls across a large fraction of the genome with acceptably low error rates. Our results are based on analyses of a clinical sample sequenced on two related Illumina platforms, GAII(x) and HiSeq 2000, to a very high depth (126×). We used these data to establish genotype-calling filters that dramatically increase accuracy. We also empirically determined how the callable portion of the genome varies as a function of the amount of sequence data used. These results help provide a "sequencing guide" for future whole-genome sequencing decisions and metrics by which coverage statistics should be reported.

  12. Laboratory and field validation of a Cry1Ab protein quantitation method for water.

    PubMed

    Strain, Katherine E; Whiting, Sara A; Lydy, Michael J

    2014-10-01

    The widespread planting of crops expressing insecticidal proteins derived from the soil bacterium Bacillus thuringiensis (Bt) has given rise to concerns regarding potential exposure to non-target species. These proteins are released from the plant throughout the growing season into soil and surface runoff and may enter adjacent waterways via runoff, erosion, aerial deposition of particulates, or plant debris. It is crucial to be able to accurately quantify Bt protein concentrations in the environment to aid in risk analyses and decision making. Enzyme-linked immunosorbent assay (ELISA) is commonly used for quantitation of Bt proteins in the environment; however, there are no published methods detailing and validating the extraction and quantitation of Bt proteins in water. The objective of the current study was to optimize the extraction of a Bt protein, Cry1Ab, from three water matrices and validate the ELISA method for specificity, precision, accuracy, stability, and sensitivity. Recovery of the Cry1Ab protein was matrix-dependent and ranged from 40 to 88% in the validated matrices, with an overall method detection limit of 2.1 ng/L. Precision among two plates and within a single plate was confirmed with a coefficient of variation less than 20%. The ELISA method was verified in field and laboratory samples, demonstrating the utility of the validated method. The implementation of a validated extraction and quantitation protocol adds consistency and reliability to field-collected data regarding transgenic products. PMID:25059137
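
    The validation statistics quoted above reduce to simple formulas. The sketch below uses the conventional definitions, including the EPA-style method detection limit from replicate low-level spikes; the study's exact protocol may differ in detail:

    ```python
    import numpy as np
    from scipy import stats

    def recovery_percent(measured, spiked):
        """Mean recovery of a known spike, in percent."""
        return 100.0 * np.mean(measured) / spiked

    def coefficient_of_variation(replicates):
        """Precision within or between plates; acceptance here was CV < 20%."""
        r = np.asarray(replicates, dtype=float)
        return 100.0 * r.std(ddof=1) / r.mean()

    def method_detection_limit(low_spike_replicates, alpha=0.01):
        """Conventional MDL: one-sided Student t (99%) times the replicate SD."""
        r = np.asarray(low_spike_replicates, dtype=float)
        t = stats.t.ppf(1 - alpha, df=len(r) - 1)
        return t * r.std(ddof=1)
    ```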

  13. Identification and Quantitation of Flavanols and Proanthocyanidins in Foods: How Good are the Datas?

    PubMed Central

    Kelm, Mark A.; Hammerstone, John F.; Schmitz, Harold H.

    2005-01-01

    Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible. PMID:15712597

  14. Identification and quantitation of flavanols and proanthocyanidins in foods: how good are the datas?

    PubMed

    Kelm, Mark A; Hammerstone, John F; Schmitz, Harold H

    2005-03-01

    Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible.

  15. NOx analyser interference from alkenes

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance

  16. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  17. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  18. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  19. Accurate Thermal Conductivities from First Principles

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian

    2015-03-01

    In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
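
    A minimal sketch of the Green-Kubo evaluation described above, assuming the heat-flux time series has already been extracted from the MD run; units, sampling interval, and the correlation window are the caller's responsibility, and the conductivity is read off the plateau of the running integral:

    ```python
    import numpy as np

    kB = 1.380649e-23  # Boltzmann constant, J/K

    def green_kubo_kappa(J, dt, volume, T, n_corr):
        """Thermal conductivity via the Green-Kubo relation
            kappa = 1/(3 V kB T^2) * integral_0^t <J(0).J(t')> dt'.
        J      : (n_steps, 3) heat-flux vector of the cell, SI units (W*m)
        dt     : sampling interval in s; n_corr: correlation window in steps
        Returns the autocorrelation and the running integral (W/(m K))."""
        n = len(J)
        # <J(0).J(t)> averaged over all available time origins.
        acf = np.array([np.mean(np.sum(J[: n - k] * J[k:], axis=1))
                        for k in range(n_corr)])
        running = np.cumsum(acf) * dt / (3.0 * volume * kB * T**2)
        return acf, running  # kappa is the plateau value of `running`
    ```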

  20. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and dose delivered to the film.
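
    A hypothetical sketch of the correction the authors call for, assuming a previously measured per-channel LSE table over lateral position and dose; all numbers are placeholders, and in practice the dose estimate and the correction would be iterated since the correction itself depends on dose:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder calibration table for one color channel:
    # table[i, j] = measured_signal / true_signal at (lateral_mm[i], dose_gy[j]).
    lateral_mm = np.linspace(-150, 150, 7)    # positions across the scan axis
    dose_gy = np.array([0.2, 1.0, 3.0, 9.0])  # calibration dose levels
    table = np.ones((7, 4))
    table[0, -1] = table[-1, -1] = 1.14       # e.g. up to 14% at 9 Gy near the edges

    lse = RegularGridInterpolator((lateral_mm, dose_gy), table,
                                  bounds_error=False, fill_value=None)

    def correct_pixel(signal, x_mm, dose_estimate):
        """Divide out the interpolated LSE factor for one color channel."""
        return signal / lse((x_mm, dose_estimate))
    ```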

  1. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and dose delivered to the film.

  2. Population variability complicates the accurate detection of climate change responses.

    PubMed

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses.
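
    A toy version of the resurvey problem, assuming a stationary lognormal population with no true climate-change trend: comparing two snapshots is essentially a coin flip, which illustrates point (i) above. This is a deliberately simplified stand-in for the paper's much more detailed simulations:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def apparent_decline_probability(pv_sigma, n_trials=10_000):
        """Sample a stationary population once 'pre' and once 'post' and
        report how often a decline would be (wrongly) inferred.
        pv_sigma: SD of log abundance, i.e. interannual population variability."""
        mean_log_abundance = np.log(100.0)
        pre = rng.lognormal(mean_log_abundance, pv_sigma, n_trials)
        post = rng.lognormal(mean_log_abundance, pv_sigma, n_trials)
        return np.mean(post < pre)  # ~0.5 whenever pv_sigma > 0

    for sigma in (0.1, 0.5, 1.0):
        print(sigma, apparent_decline_probability(sigma))
    ```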

  3. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading back to the initial conditions, motor equivalent deviations were dominant. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
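
    The motor-equivalent/non-motor-equivalent decomposition reduces to projecting a deviation vector onto the null space and row space of the task Jacobian. A sketch for the four-finger total-force task, with an invented deviation vector; for the moment-of-force variable, J would contain the moment arms instead of ones:

    ```python
    import numpy as np

    # Total force is F = J @ f for finger-force vector f.
    J = np.ones((1, 4))
    P_range = J.T @ np.linalg.inv(J @ J.T) @ J  # projects onto force-changing axis
    P_null = np.eye(4) - P_range                # projects onto the ME subspace

    # Example deviation of finger forces after a perturbation (invented numbers).
    df = np.array([0.6, -0.4, 0.1, -0.3])
    d_me, d_nme = P_null @ df, P_range @ df
    print(np.linalg.norm(d_me), np.linalg.norm(d_nme))  # ME vs nME magnitude
    ```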

  4. Motor equivalence during multi-finger accurate force production.

    PubMed

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M; Latash, Mark L

    2015-02-01

    We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The "inverse piano" apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading back to the initial conditions, motor equivalent deviations were dominant. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes in neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311

  5. Quantitative aspects of septicemia.

    PubMed Central

    Yagupsky, P; Nolte, F S

    1990-01-01

    For years, quantitative blood cultures found only limited use as aids in the diagnosis and management of septic patients because the available methods were cumbersome, labor intensive, and practical only for relatively small volumes of blood. The development and subsequent commercial availability of lysis-centrifugation direct plating methods for blood cultures have addressed many of the shortcomings of the older methods. The lysis-centrifugation method has demonstrated good performance relative to broth-based blood culture methods. As a result, quantitative blood cultures have found widespread use in clinical microbiology laboratories. Most episodes of clinically significant bacteremia in adults are characterized by low numbers of bacteria per milliliter of blood. In children, the magnitude of bacteremia is generally much higher, with the highest numbers of bacteria found in the blood of septic neonates. The magnitude of bacteremia correlates with the severity of disease in children and with mortality rates in adults, but other factors play more important roles in determining the patient's outcome. Serial quantitative blood cultures have been used to monitor the in vivo efficacy of antibiotic therapy in patients with slowly resolving sepsis, such as disseminated Mycobacterium avium-M. intracellulare complex infections. Quantitative blood culture methods were used in early studies of bacterial endocarditis, and the results significantly contributed to our understanding of the pathophysiology of this disease. Comparison of paired quantitative blood cultures obtained from a peripheral vein and the central venous catheter has been used to help identify patients with catheter-related sepsis and is the only method that does not require removal of the catheter to establish the diagnosis. Quantitation of bacteria in the blood can also help distinguish contaminated from truly positive blood cultures; however, no quantitative criteria can invariably differentiate

  6. ICSH recommendations for assessing automated high-performance liquid chromatography and capillary electrophoresis equipment for the quantitation of HbA2.

    PubMed

    Stephens, A D; Colah, R; Fucharoen, S; Hoyer, J; Keren, D; McFarlane, A; Perrett, D; Wild, B J

    2015-10-01

    Automated high-performance liquid chromatography and capillary electrophoresis are used to quantitate the proportion of hemoglobin A2 (HbA2) in blood samples in order to enable screening and diagnosis of carriers of β-thalassemia. Since there is only a very small difference in HbA2 levels between people who are carriers and people who are not, such analyses need to be both precise and accurate. This paper examines the different parameters of such equipment and discusses how they should be assessed.

  7. Using Microsoft Office Excel 2007 to conduct generalized matching analyses.

    PubMed

    Reed, Derek D

    2009-01-01

    The generalized matching equation is a robust and empirically supported means of analyzing relations between reinforcement and behavior. Unfortunately, no simple task analysis is available to behavior analysts interested in using the matching equation to evaluate data in clinical or applied settings. This technical article presents a task analysis for the use of Microsoft Excel to analyze and plot the generalized matching equation. Using a data-based case example and a step-by-step guide for completing the analysis, these instructions are intended to promote the use of quantitative analyses by researchers with little to no experience in quantitative analyses or the matching law.
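
    The generalized matching equation, log(B1/B2) = a·log(R1/R2) + log(b), is linear in the logarithms, so sensitivity a and bias b fall out of an ordinary least-squares fit. Python stands in here for the article's Excel walkthrough; the response and reinforcer counts are invented for illustration:

    ```python
    import numpy as np

    # Responses (B) and reinforcers (R) on two alternatives across sessions.
    B1 = np.array([48, 60, 22, 80, 35])
    B2 = np.array([52, 30, 70, 21, 40])
    R1 = np.array([20, 35, 10, 40, 15])
    R2 = np.array([22, 15, 33, 11, 18])

    x = np.log10(R1 / R2)
    y = np.log10(B1 / B2)
    a, log_b = np.polyfit(x, y, 1)          # slope = sensitivity, intercept = log bias
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"sensitivity a={a:.2f}, bias b={10**log_b:.2f}, R^2={r2:.2f}")
    ```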

  8. Indian Ocean analyses

    NASA Technical Reports Server (NTRS)

    Meyers, Gary

    1992-01-01

    The background and goals of Indian Ocean thermal sampling are discussed from the perspective of a national project which has research goals relevant to variation of climate in Australia. The critical areas of SST variation are identified. The first goal of thermal sampling at this stage is to develop a climatology of thermal structure in the areas and a description of the annual variation of major currents. The sampling strategy is reviewed. Dense XBT sampling is required to achieve accurate, monthly maps of isotherm-depth because of the high level of noise in the measurements caused by aliasing of small scale variation. In the Indian Ocean ship routes dictate where adequate sampling can be achieved. An efficient sampling rate on available routes is determined based on objective analysis. The statistical structure required for objective analysis is described and compared at 95 locations in the tropical Pacific and 107 in the tropical Indian Oceans. XBT data management and quality control methods at CSIRO are reviewed. Results on the mean and annual variation of temperature and baroclinic structure in the South Equatorial Current and Pacific/Indian Ocean Throughflow are presented for the region between northwest Australia and Java-Timor. The mean relative geostrophic transport (0/400 db) of Throughflow is approximately 5 x 106 m3/sec. A nearly equal volume transport is associated with the reference velocity at 400 db. The Throughflow feeds the South Equatorial Current, which has maximum westward flow in August/September, at the end of the southeasterly Monsoon season. A strong semiannual oscillation in the South Java Current is documented. The results are in good agreement with the Semtner and Chervin (1988) ocean general circulation model. The talk concludes with comments on data inadequacies (insufficient coverage, timeliness) particular to the Indian Ocean and suggestions on the future role that can be played by Data Centers, particularly with regard to quality

  9. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  10. How accurate are Scottish cancer registration data?

    PubMed Central

    Brewster, D.; Crichton, J.; Muir, C.

    1994-01-01

    In order to assess the accuracy of Scottish cancer registration data, a random sample of 2,200 registrations, attributed to the year 1990, was generated. Relevant medical records were available for review in 2,021 (92%) cases. Registration details were reabstracted from available records and compared with data in the registry. Discrepancies in identifying items of data (surname, forename, sex and date of birth) were found in 3.5% of cases. Most were trivial and would not disturb record linkage. Discrepancy rates of 7.1% in post code of residence at the time of diagnosis (excluding differences arising through boundary changes), 11.0% in anniversary date (excluding differences of 6 weeks or less), 7.7% in histological verification status, 5.4% in ICD-9 site codes (the first three digits) and 14.5% in ICD-O morphology codes (excluding 'inferred' morphology codes) were recorded. Overall, serious discrepancies were judged to have occurred in 2.8% of cases. In many respects, therefore, Scottish cancer registration data show a high level of accuracy that compares favourably to the reported accuracy of the few other cancer registries undertaking such analyses. PMID:7947104

  11. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
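
    A condensed sketch of the modeling pipeline described above, with synthetic stand-ins for the predictors and the observed mud/sand/gravel fractions; gravel is used as the ALR denominator, and one random forest models each of the two log-ratios:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def alr(comp, eps=1e-6):
        """Additive log-ratio transform of (mud, sand, gravel) fractions,
        with gravel as the denominator part."""
        comp = np.clip(comp, eps, None)
        return np.log(comp[:, :2] / comp[:, 2:3])

    def alr_inverse(z):
        e = np.exp(np.column_stack([z, np.zeros(len(z))]))
        return e / e.sum(axis=1, keepdims=True)

    # Synthetic stand-ins: X = environmental predictors (hydrodynamic, optical,
    # bathymetric layers); Y = observed fractions summing to 1.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 6))
    Y = rng.dirichlet([2, 5, 3], size=500)

    models = [RandomForestRegressor(n_estimators=300, random_state=0)
              .fit(X, alr(Y)[:, j]) for j in range(2)]

    def predict_composition(X_new):
        z = np.column_stack([m.predict(X_new) for m in models])
        return alr_inverse(z)  # rows are predicted (mud, sand, gravel) fractions
    ```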

  12. Quantitative analysis of flagellar proteins in Drosophila sperm tails.

    PubMed

    Mendes Maia, Teresa; Paul-Gilloteaux, Perrine; Basto, Renata

    2015-01-01

    The cilium has a well-defined structure, which can still accommodate some morphological and molecular composition diversity to suit the functional requirements of different cell types. The sperm flagellum of the fruit fly Drosophila melanogaster appears as a good model to study the genetic regulation of axoneme assembly and motility, due to the wealth of genetic tools publicly available for this organism. In addition, the fruit fly's sperm flagellum displays quite a long axoneme (∼1.8 mm), which may facilitate both histological and biochemical analyses. Here, we present a protocol for imaging and quantitatively analyzing proteins that associate with the differentiating and mature sperm flagella of the fly. We will use as an example the quantification of tubulin polyglycylation in wild-type testes and in Bug22 mutant testes, which present defects in the deposition of this posttranslational modification. During sperm biogenesis, flagella appear tightly bundled, which makes it more challenging to get accurate measurements of protein levels from immunostained specimens. The method we present is based on the use of a novel semiautomated macro installed in the image processing software ImageJ. It allows measuring fluorescence levels in closely associated sperm tails, through an exact distinction between positive and background signals, and provides background-corrected pixel intensity values that can directly be used for data analysis. PMID:25837396

  13. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    PubMed

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040

  14. Quantitative lipopolysaccharide analysis using HPLC/MS/MS and its combination with the limulus amebocyte lysate assay.

    PubMed

    Pais de Barros, Jean-Paul; Gautier, Thomas; Sali, Wahib; Adrie, Christophe; Choubley, Hélène; Charron, Emilie; Lalande, Caroline; Le Guern, Naig; Deckert, Valérie; Monchi, Mehran; Quenot, Jean-Pierre; Lagrost, Laurent

    2015-07-01

    Quantitation of plasma lipopolysaccharides (LPSs) might be used to document Gram-negative bacterial infection. In the present work, LPS-derived 3-hydroxymyristate was extracted from plasma samples with an organic solvent, separated by reversed phase HPLC, and quantitated by MS/MS. This mass assay was combined with the limulus amebocyte lysate (LAL) bioassay to monitor neutralization of LPS activity in biological samples. The described HPLC/MS/MS method is a reliable, practical, accurate, and sensitive tool to quantitate LPS. The combination of the LAL and HPLC/MS/MS analyses provided new evidence for the intrinsic capacity of plasma lipoproteins and phospholipid transfer protein to neutralize the activity of LPS. In a subset of patients with systemic inflammatory response syndrome, with documented infection but with a negative plasma LAL test, significant amounts of LPS were measured by the HPLC/MS/MS method. Patients with the highest plasma LPS concentration were more severely ill. HPLC/MS/MS is a relevant method to quantitate endotoxin in a sample, to assess the efficacy of LPS neutralization, and to evaluate the proinflammatory potential of LPS in vivo. PMID:26023073

  15. Quantitative lipopolysaccharide analysis using HPLC/MS/MS and its combination with the limulus amebocyte lysate assay[S

    PubMed Central

    Pais de Barros, Jean-Paul; Gautier, Thomas; Sali, Wahib; Adrie, Christophe; Choubley, Hélène; Charron, Emilie; Lalande, Caroline; Le Guern, Naig; Deckert, Valérie; Monchi, Mehran; Quenot, Jean-Pierre; Lagrost, Laurent

    2015-01-01

    Quantitation of plasma lipopolysaccharides (LPSs) might be used to document Gram-negative bacterial infection. In the present work, LPS-derived 3-hydroxymyristate was extracted from plasma samples with an organic solvent, separated by reversed phase HPLC, and quantitated by MS/MS. This mass assay was combined with the limulus amebocyte lysate (LAL) bioassay to monitor neutralization of LPS activity in biological samples. The described HPLC/MS/MS method is a reliable, practical, accurate, and sensitive tool to quantitate LPS. The combination of the LAL and HPLC/MS/MS analyses provided new evidence for the intrinsic capacity of plasma lipoproteins and phospholipid transfer protein to neutralize the activity of LPS. In a subset of patients with systemic inflammatory response syndrome, with documented infection but with a negative plasma LAL test, significant amounts of LPS were measured by the HPLC/MS/MS method. Patients with the highest plasma LPS concentration were more severely ill. HPLC/MS/MS is a relevant method to quantitate endotoxin in a sample, to assess the efficacy of LPS neutralization, and to evaluate the proinflammatory potential of LPS in vivo. PMID:26023073

  16. Quantitative Glycomics Strategies*

    PubMed Central

    Mechref, Yehia; Hu, Yunli; Desantos-Garcia, Janie L.; Hussein, Ahmed; Tang, Haixu

    2013-01-01

    The correlations between protein glycosylation and many biological processes and diseases are increasing the demand for quantitative glycomics strategies enabling sensitive monitoring of changes in the abundance and structure of glycans. This is currently attained through multiple strategies employing several analytical techniques such as capillary electrophoresis, liquid chromatography, and mass spectrometry. The detection and quantification of glycans often involve labeling with ionic and/or hydrophobic reagents. This step is needed in order to enhance detection in spectroscopic and mass spectrometric measurements. Recently, labeling with stable isotopic reagents has also been presented as a very viable strategy enabling relative quantitation. The different strategies available for reliable and sensitive quantitative glycomics are herein described and discussed. PMID:23325767

  17. Estimating quantitative genetic parameters in wild populations: a comparison of pedigree and genomic approaches

    PubMed Central

    Bérénos, Camillo; Ellis, Philip A; Pilkington, Jill G; Pemberton, Josephine M

    2014-01-01

    The estimation of quantitative genetic parameters in wild populations is generally limited by the accuracy and completeness of the available pedigree information. Using relatedness at genomewide markers can potentially remove this limitation and lead to less biased and more precise estimates. We estimated heritability, maternal genetic effects and genetic correlations for body size traits in an unmanaged long-term study population of Soay sheep on St Kilda using three increasingly complete and accurate estimates of relatedness: (i) Pedigree 1, using observation-derived maternal links and microsatellite-derived paternal links; (ii) Pedigree 2, using SNP-derived assignment of both maternity and paternity; and (iii) whole-genome relatedness at 37 037 autosomal SNPs. In initial analyses, heritability estimates were strikingly similar for all three methods, while standard errors were systematically lower in analyses based on Pedigree 2 and genomic relatedness. Genetic correlations were generally strong, differed little between the three estimates of relatedness and the standard errors declined only very slightly with improved relatedness information. When partitioning maternal effects into separate genetic and environmental components, maternal genetic effects found in juvenile traits increased substantially across the three relatedness estimates. Heritability declined compared to parallel models where only a maternal environment effect was fitted, suggesting that maternal genetic effects are confounded with direct genetic effects and that more accurate estimates of relatedness were better able to separate maternal genetic effects from direct genetic effects. We found that the heritability captured by SNP markers asymptoted at about half the SNPs available, suggesting that denser marker panels are not necessarily required for precise and unbiased heritability estimates. Finally, we present guidelines for the use of genomic relatedness in future quantitative genetics
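
    For reference, the genomic-relatedness side of such an analysis often starts from an estimator like VanRaden's first method; the paper's exact choice may differ, so the sketch below is one common convention rather than the study's code:

    ```python
    import numpy as np

    def genomic_relatedness(M):
        """VanRaden's first genomic relationship matrix from a genotype
        matrix M (n individuals x m SNPs, coded 0/1/2 copies of the
        reference allele): G = Z Z' / (2 * sum p(1-p))."""
        p = M.mean(axis=0) / 2.0   # allele frequencies
        Z = M - 2.0 * p            # center by expected genotype
        denom = 2.0 * np.sum(p * (1.0 - p))
        return Z @ Z.T / denom

    # Usage: G replaces the pedigree-derived relationship matrix in an
    # animal-model REML fit when estimating heritability of body size traits.
    ```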

  18. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    farmers carried out quantitative visual observations all independently from each other. All observers assessed five sites, having a sand, peat or clay soil. For almost all quantitative visual observations the spread of observed values was low (coefficient of variation < 1.0), except for the number of biopores and gley mottles. Furthermore, farmers' observed mean values were significantly higher than soil scientists' mean values for soil structure, amount of gley mottles and compaction. This study showed that VSA could be a valuable tool to assess soil quality. Subjectivity, due to the background of the observer, might influence the outcome of visual assessment of some soil properties. In countries where soil analyses can easily be carried out, VSA might be a good complement to available soil chemical analyses, and in countries where it is not feasible to carry out soil analyses, VSA might be a good starting point for assessing soil quality.
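
    The spread statistic quoted above, the coefficient of variation (standard deviation divided by the mean), is simple to reproduce; a minimal sketch with hypothetical observer scores for one site and property:

    ```python
    import numpy as np

    # Hypothetical scores from six observers for one site and property.
    scores = np.array([3.0, 3.5, 4.0, 3.0, 4.5, 3.5])
    cv = scores.std(ddof=1) / scores.mean()
    print(f"coefficient of variation: {cv:.2f}")  # < 1.0 means low spread
    ```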

  19. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications 1,2,3. For example, to analyze exoplanets, atmospheric models have been developed, creating a need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. This large amount of data can thus be interpreted only with reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions which are quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients, which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  20. Quantitative photoacoustic tomography

    PubMed Central

    Yuan, Zhen; Jiang, Huabei

    2009-01-01

    In this paper, several algorithms that allow for quantitative photoacoustic reconstruction of tissue optical, acoustic and physiological properties are described in a finite-element method based framework. These quantitative reconstruction algorithms are compared, and the merits and limitations associated with these methods are discussed. In addition, a multispectral approach is presented for concurrent reconstructions of multiple parameters including deoxyhaemoglobin, oxyhaemoglobin and water concentrations as well as acoustic speed. Simulation and in vivo experiments are used to demonstrate the effectiveness of the reconstruction algorithms presented. PMID:19581254

  1. FAME: Software for analysing rock microstructures

    NASA Astrophysics Data System (ADS)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistical and plotting tools allow graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME are demonstrated on quartz and deuterium ice samples.

  2. Gas-phase purification enables accurate, large-scale, multiplexed proteome quantification with isobaric tagging

    PubMed Central

    Wenger, Craig D; Lee, M Violet; Hebert, Alexander S; McAlister, Graeme C; Phanstiel, Douglas H; Westphall, Michael S; Coon, Joshua J

    2011-01-01

    We describe a mass spectrometry method, QuantMode, which improves the accuracy of isobaric tag–based quantification by alleviating the pervasive problem of precursor interference—co-isolation of impurities—through gas-phase purification. QuantMode analysis of a yeast sample ‘contaminated’ with interfering human peptides showed substantially improved quantitative accuracy compared to a standard scan, with a small loss of spectral identifications. This technique will allow large-scale, multiplexed quantitative proteomics analyses using isobaric tagging. PMID:21963608

  3. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests to confirm that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
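
    The sequential step is, at its core, a regression comparison of plot variables between consecutive code versions, with any drift beyond round-off flagged. The RELAP5-3D tooling itself is not shown here; the following is a generic sketch of the idea with hypothetical result vectors.

    ```python
    import numpy as np

    def sequential_verify(old, new, rtol=1e-12):
        """Compare plot-variable values from consecutive code versions;
        differences beyond round-off flag an unintended change."""
        old, new = np.asarray(old), np.asarray(new)
        if old.shape != new.shape:
            raise ValueError("result sets differ in shape")
        bad = ~np.isclose(old, new, rtol=rtol, atol=0.0)
        return int(bad.sum()), float(np.abs(old - new).max())

    # Hypothetical values from version N and version N+1 of one transient.
    n_bad, max_diff = sequential_verify([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
    print(n_bad, max_diff)
    ```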

  5. Waste Stream Analyses for Nuclear Fuel Cycles

    SciTech Connect

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options are insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options are generally not yet available.

  6. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  7. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  8. Evaluating quantitative proton-density-mapping methods.

    PubMed

    Mezer, Aviv; Rokem, Ariel; Berman, Shai; Hastie, Trevor; Wandell, Brian A

    2016-10-01

    Quantitative magnetic resonance imaging (qMRI) aims to quantify tissue parameters by eliminating instrumental bias. We describe qMRI theory, simulations, and software designed to estimate proton density (PD), the apparent local concentration of water protons in the living human brain. First, we show that, in the absence of noise, multichannel coil data contain enough information to separate PD and coil sensitivity, a limiting instrumental bias. Second, we show that, in the presence of noise, regularization by a constraint on the relationship between T1 and PD produces accurate coil sensitivity and PD maps. The ability to measure PD quantitatively has applications in the analysis of in-vivo human brain tissue and enables multisite comparisons between individuals and across instruments. Hum Brain Mapp 37:3623-3635, 2016. © 2016 Wiley Periodicals, Inc.
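
    The first claim, that noise-free multichannel data suffice to separate PD and coil sensitivity, can be illustrated with a toy one-dimensional solve: ratios of coil images cancel PD, so parametric sensitivity profiles can be recovered and PD read off. The linear sensitivity model and all numbers below are assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    # Toy 1-D problem: two coils with (assumed) linear sensitivities,
    # one unknown PD profile, noise-free data.
    x = np.linspace(0.0, 1.0, 200)
    pd = 0.7 + 0.3 * np.sin(3.0 * x)          # invented proton density
    s1, s2 = 1.0 + 0.5 * x, 1.2 - 0.4 * x     # invented coil sensitivities
    m1, m2 = s1 * pd, s2 * pd                 # measured coil images

    # m1/m2 = s1/s2  =>  m2*(1 + a1*x) = m1*(b0 + b1*x), fixing s1(0)=1
    # to remove the global scale ambiguity; solve by least squares.
    A = np.column_stack([m2 * x, -m1, -m1 * x])
    a1, b0, b1 = np.linalg.lstsq(A, -m2, rcond=None)[0]
    pd_hat = m1 / (1.0 + a1 * x)
    print(np.allclose(pd_hat, pd))            # True: separation succeeded
    ```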

  9. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    SciTech Connect

    Bevill, Aaron M; Bledsoe, Keith C

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
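
    The confidence-interval algorithm described rests on a chi-squared scan: evaluate the goodness of fit of the forward model over a grid of candidate holdup masses and keep those within a threshold of the minimum. A generic sketch with a hypothetical linear imager response (not the calibrated model from the study):

    ```python
    import numpy as np

    def chi2(mass, data, sigma, forward):
        return np.sum(((data - forward(mass)) / sigma) ** 2)

    def mass_interval(data, sigma, forward, masses, delta=1.0):
        """Chi-squared scan: masses within `delta` of the minimum
        (delta = 1 is roughly a 68% interval for one parameter)."""
        chi = np.array([chi2(m, data, sigma, forward) for m in masses])
        kept = masses[chi <= chi.min() + delta]
        return kept.min(), kept.max()

    # Hypothetical linear imager response over three detector pixels.
    forward = lambda m: m * np.array([40.0, 25.0, 10.0]) + 5.0
    rng = np.random.default_rng(3)
    data = forward(2.0) + rng.normal(0.0, 3.0, 3)
    lo, hi = mass_interval(data, 3.0, forward, np.linspace(0.0, 5.0, 1001))
    print(f"holdup mass in [{lo:.2f}, {hi:.2f}] (truth: 2.0)")
    ```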

  11. Quantitative Decision Making.

    ERIC Educational Resources Information Center

    Baldwin, Grover H.

    The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…

  12. Quantitative Simulation Games

    NASA Astrophysics Data System (ADS)

    Černý, Pavol; Henzinger, Thomas A.; Radhakrishna, Arjun

    While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of "fit" or "desirability." In this article, we explore several ways in which the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the "robustness" of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the "seriousness" of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.

  13. Nanoliter high throughput quantitative PCR

    PubMed Central

    Morrison, Tom; Hurley, James; Garcia, Javier; Yoder, Karl; Katz, Arrin; Roberts, Douglas; Cho, Jamie; Kanigan, Tanya; Ilyin, Sergey E.; Horowitz, Daniel; Dixon, James M.; Brenan, Colin J.H.

    2006-01-01

    Understanding biological complexity arising from patterns of gene expression requires accurate and precise measurement of RNA levels across large numbers of genes simultaneously. Real-time PCR (RT-PCR) in a microtiter plate is the preferred method for quantitative transcriptional analysis, but scaling RT-PCR to higher throughputs in this fluidic format is intrinsically limited by cost and logistic considerations. Hybridization microarrays measure the transcription of many thousands of genes simultaneously yet are limited by low sensitivity, dynamic range, accuracy and sample throughput. The hybrid approach described here combines the superior accuracy, precision and dynamic range of RT-PCR with the parallelism of a microarray in an array of 3072 real-time, 33 nl polymerase chain reactions (RT-PCRs) the size of a microscope slide. RT-PCR is demonstrated with an accuracy and precision equivalent to the same assay in a 384-well microplate, but in a 64-fold smaller reaction volume, with a 24-fold higher analytical throughput and a workflow compatible with standard microplate protocols. PMID:17000636

  14. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  15. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    SciTech Connect

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.

  16. Quantitative transverse flow measurement using OCT speckle decorrelation analysis

    PubMed Central

    Liu, Xuan; Huang, Yong; Ramella-Roman, Jessica C.; Mathews, Scott A.; Kang, Jin U.

    2014-01-01

    We propose an inter-Ascan speckle decorrelation based method that can quantitatively assess blood flow normal to the direction of the OCT imaging beam. To validate this method, we performed a systematic study using both phantom and in vivo animal models. Results show that our speckle analysis method can accurately extract transverse flow speed with high spatial and temporal resolution. PMID:23455305
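
    The measured quantity is the decorrelation of speckle between consecutive A-scans, which increases with transverse flow speed. A minimal sketch of the correlation computation follows; mapping decorrelation to absolute speed requires a phantom calibration, as the authors describe, and is not reproduced here.

    ```python
    import numpy as np

    def decorrelation(a, b):
        """1 minus the Pearson correlation of two consecutive A-scan
        intensity profiles; larger values mean faster transverse motion."""
        a, b = a - a.mean(), b - b.mean()
        return 1.0 - (a @ b) / np.sqrt((a @ a) * (b @ b))

    rng = np.random.default_rng(4)
    scan1 = rng.rayleigh(1.0, 512)                      # toy speckle profile
    scan2 = 0.8 * scan1 + 0.2 * rng.rayleigh(1.0, 512)  # partly decorrelated
    print(f"inter-A-scan decorrelation: {decorrelation(scan1, scan2):.3f}")
    ```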

  17. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.

  18. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as by histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with the OPT data, more so than with histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  20. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    PubMed

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  1. Note on the chromatographic analyses of marine polyunsaturated fatty acids

    USGS Publications Warehouse

    Schultz, D.M.; Quinn, J.G.

    1977-01-01

    Gas-liquid chromatography was used to study the effects of saponification/methylation and thin-layer chromatographic isolation on the analyses of polyunsaturated fatty acids. Using selected procedures, the qualitative and quantitative distribution of these acids in marine organisms can be determined with a high degree of accuracy. © 1977 Springer-Verlag.

  2. DEMOGRAPHY AND VIABILITY ANALYSES OF A DIAMONDBACK TERRAPIN POPULATION

    EPA Science Inventory

    The diamondback terrapin Malaclemys terrapin is a long-lived species with special management requirements, but quantitative analyses to support management are lacking. I analyzed mark-recapture data and constructed an age-classified matrix population model to determine the status...
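
    An age-classified matrix model of the kind described projects abundance with a Leslie matrix whose dominant eigenvalue is the asymptotic population growth rate. A sketch with hypothetical vital rates (not the terrapin estimates):

    ```python
    import numpy as np

    # Hypothetical 4-age-class Leslie matrix: fecundities in the first row,
    # survival probabilities on the subdiagonal (invented values).
    L = np.array([
        [0.0, 0.0, 1.5, 3.0],
        [0.3, 0.0, 0.0, 0.0],
        [0.0, 0.6, 0.0, 0.0],
        [0.0, 0.0, 0.8, 0.0],
    ])

    eig = np.linalg.eigvals(L)
    lam = eig[np.argmax(np.abs(eig))].real    # dominant eigenvalue
    print(f"lambda = {lam:.3f} ({'growing' if lam > 1.0 else 'declining'})")
    ```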

  3. Guidelines for Meta-Analyses of Counseling Psychology Research

    ERIC Educational Resources Information Center

    Quintana, Stephen M.; Minami, Takuya

    2006-01-01

    This article conceptually describes the steps in conducting quantitative meta-analyses of counseling psychology research with minimal reliance on statistical formulas. The authors identify sources that describe the statistical formulas necessary for various meta-analytic calculations and describe recent developments in meta-analytic techniques. The…
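
    The central calculation in such meta-analyses is an inverse-variance weighted mean of study effect sizes. A minimal fixed-effect sketch with invented effects and variances:

    ```python
    import numpy as np

    # Invented study effect sizes (e.g., standardized mean differences)
    # and their sampling variances.
    effects = np.array([0.30, 0.45, 0.12, 0.50])
    variances = np.array([0.02, 0.05, 0.01, 0.04])

    w = 1.0 / variances                       # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"pooled effect = {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}")
    ```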

  4. A Rapid and Accurate Extraction Procedure for Analysing Free Amino Acids in Meat Samples by GC-MS

    PubMed Central

    Barroso, Miguel A.; Ruiz, Jorge; Antequera, Teresa

    2015-01-01

    This study evaluated the use of a mixer mill as the homogenization tool for the extraction of free amino acids in meat samples, with the main goal of analyzing a large number of samples in the shortest time and minimizing sample amount and solvent volume. Ground samples (0.2 g) were mixed with 1.5 mL of 0.1 M HCl and homogenized in the mixer mill. The final biphasic system was separated by centrifugation. The supernatant was deproteinized, derivatized and analyzed by gas chromatography. This procedure showed a high extraction capacity, especially in samples with high free amino acid content (recovery = 88.73–104.94%). It also showed low limits of detection and quantification (3.8 · 10−4–6.6 · 10−4 μg μL−1 and 1.3 · 10−3–2.2 · 10−2 μg μL−1, resp.) for most amino acids, adequate precision (2.15–20.15% run-to-run), and a linear response for all amino acids (R2 = 0.741–0.998) in the range of 1–100 µg mL−1. Moreover, it takes less time and requires smaller amounts of sample and solvent than conventional techniques. Thus, this is a cost- and time-efficient homogenization tool for the extraction of free amino acids from meat samples, and an adequate option for routine analysis. PMID:25873963
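
    Detection and quantification limits of the kind reported are conventionally taken from the calibration curve as 3.3·σ/S and 10·σ/S, with σ the residual standard deviation and S the slope; whether the authors used exactly this convention is not stated. A sketch with invented calibration data:

    ```python
    import numpy as np

    # Invented calibration: concentration (ug/mL) vs. GC-MS peak area.
    conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    area = np.array([10.2, 51.5, 99.0, 251.0, 502.0, 998.0])

    slope, intercept = np.polyfit(conc, area, 1)
    sigma = (area - (slope * conc + intercept)).std(ddof=2)  # residual SD

    print(f"LOD = {3.3 * sigma / slope:.3g} ug/mL, "
          f"LOQ = {10.0 * sigma / slope:.3g} ug/mL")
    ```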

  5. Leadership and Culture-Building in Schools: Quantitative and Qualitative Understandings.

    ERIC Educational Resources Information Center

    Sashkin, Marshall; Sashkin, Molly G.

    Understanding effective school leadership as a function of culture building through quantitative and qualitative analyses is the purpose of this paper. The two-part quantitative phase of the research focused on statistical measures of culture and leadership behavior directed toward culture building in the school. The first quantitative part…

  6. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, −26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR
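
    Of the corrections compared, the triple-energy-window estimate is the easiest to reproduce: scatter in the photopeak is interpolated from two narrow flanking windows as (C_low/w_low + C_high/w_high)·w_peak/2. A sketch with hypothetical counts (the window widths are assumptions, not the study's settings):

    ```python
    def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
        """Triple-energy-window estimate of scatter in the photopeak:
        trapezoidal interpolation between two narrow flanking windows."""
        return (c_low / w_low + c_high / w_high) * w_peak / 2.0

    # Hypothetical per-pixel counts around the 364 keV I-131 photopeak.
    c_peak, c_low, c_high = 1000.0, 20.0, 10.0    # counts
    w_peak, w_low, w_high = 58.0, 7.0, 7.0        # window widths, keV (assumed)
    primary = c_peak - tew_scatter(c_low, c_high, w_low, w_high, w_peak)
    print(f"scatter-corrected counts: {primary:.0f}")
    ```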

  9. Berkeley Quantitative Genome Browser

    SciTech Connect

    Hechmer, Aaron

    2008-02-29

    The Berkeley Quantitative Genome Browser provides graphical browsing functionality for genomic data organized, at a minimum, by sequence and position. While supporting the annotation browsing features typical of many other genomic browsers, additional emphasis is placed on viewing and utilizing quantitative data. Data may be read from GFF, SGR, FASTA or any column delimited format. Once the data has been read into the browser's buffer, it may be searched, filtered or subjected to mathematical transformation. The browser also supplies some graphical design manipulation functionality geared towards preparing figures for presentations or publication. A plug-in mechanism enables development outside the core functionality that adds more advanced or esoteric analysis capabilities. BBrowse's development and distribution is open-source and has been built to run on Linux, OSX and MS Windows operating systems.

  10. Primary enzyme quantitation

    DOEpatents

    Saunders, G.C.

    1982-03-04

    The disclosure relates to the quantitation of a primary enzyme concentration by utilizing a substrate for the primary enzyme labeled with a second enzyme which is an indicator enzyme. Enzyme catalysis of the substrate occurs and results in release of the indicator enzyme in an amount directly proportional to the amount of primary enzyme present. By quantifying the free indicator enzyme one determines the amount of primary enzyme present.

  11. Quantitative social science

    NASA Astrophysics Data System (ADS)

    Weidlich, W.

    1987-03-01

    General concepts for the quantitative description of the dynamics of social processes are introduced. They allow for embedding social science into the conceptual framework of synergetics. Equations of motion for the socioconfiguration are derived on the stochastic and quasideterministic level. As an application the migration of interacting human populations is treated. The solutions of the nonlinear migratory equations include limit cycles and strange attractors. The empiric evaluation of interregional migratory dynamics is exemplified in the case of Germany.

  12. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and a 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high performance computing. JenPep (http://www.jenner.ac.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934
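
    The Free-Wilson part of the 2D-QSAR factors binding affinity into additive per-position residue contributions, estimated by linear regression on indicator variables. A toy sketch for 9-mer peptides with random data (not JenPep), omitting the 1-2 and 1-3 interaction terms:

    ```python
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def one_hot(peptide):
        """Indicator encoding: one block of 20 per peptide position."""
        v = np.zeros(len(peptide) * len(AA))
        for pos, aa in enumerate(peptide):
            v[pos * len(AA) + AA.index(aa)] = 1.0
        return v

    rng = np.random.default_rng(5)
    peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(200)]
    X = np.array([one_hot(p) for p in peptides])
    y = rng.normal(6.0, 1.0, len(peptides))      # invented binding affinities

    # Ridge-regularized least squares: the coefficients are the additive
    # per-position residue contributions of the Free-Wilson model.
    lam = 1.0
    coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print(coef.reshape(9, len(AA)).shape)        # (positions, residues)
    ```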

  13. Modern quantitative schlieren techniques

    NASA Astrophysics Data System (ADS)

    Hargather, Michael; Settles, Gary

    2010-11-01

    Schlieren optical techniques have traditionally been used to qualitatively visualize refractive flowfields in transparent media. Modern schlieren optics, however, are increasingly focused on obtaining quantitative information such as temperature and density fields in a flow -- once the sole purview of interferometry -- without the need for coherent illumination. Quantitative data are obtained from schlieren images by integrating the measured refractive index gradient to obtain the refractive index field in an image. Ultimately this is converted to a density or temperature field using the Gladstone-Dale relationship, an equation of state, and geometry assumptions for the flowfield of interest. Several quantitative schlieren methods are reviewed here, including background-oriented schlieren (BOS), schlieren using a weak lens as a "standard," and "rainbow schlieren." Results are presented for the application of these techniques to measure density and temperature fields across a supersonic turbulent boundary layer and a low-speed free-convection boundary layer in air. Modern equipment, including digital cameras, LED light sources, and computer software that make this possible are also discussed.
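
    The processing chain sketched above, integrating the measured refractive-index gradient and converting index to density with the Gladstone-Dale relation n - 1 = K·ρ, is compact enough to show in full. A one-dimensional toy example, assuming the Gladstone-Dale constant of air and an invented gradient profile:

    ```python
    import numpy as np

    K_GD = 2.26e-4     # Gladstone-Dale constant of air, m^3/kg (approximate)
    N_REF = 1.000262   # refractive index at the reference edge (assumed)

    # Invented measured gradient dn/dy across a 10 mm boundary layer.
    y = np.linspace(0.0, 0.01, 100)                  # metres
    dn_dy = -2.0e-2 * np.exp(-y / 0.003)             # from calibrated schlieren

    # Trapezoidal integration of the gradient from the reference edge,
    # then Gladstone-Dale: rho = (n - 1) / K.
    steps = 0.5 * (dn_dy[1:] + dn_dy[:-1]) * np.diff(y)
    n = N_REF + np.concatenate([[0.0], np.cumsum(steps)])
    rho = (n - 1.0) / K_GD
    print(f"density: {rho[0]:.3f} -> {rho[-1]:.3f} kg/m^3 across the layer")
    ```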

  14. Quantitative analysis of myocardial tissue with digital autofluorescence microscopy

    PubMed Central

    Jensen, Thomas; Holten-Rossing, Henrik; Svendsen, Ida M H; Jacobsen, Christina; Vainer, Ben

    2016-01-01

    Background: The opportunity offered by whole slide scanners of automated histological analysis implies an ever increasing importance of digital pathology. To go beyond the importance of conventional pathology, however, digital pathology may need a basic histological starting point similar to that of hematoxylin and eosin staining in conventional pathology. This study presents an automated fluorescence-based microscopy approach providing highly detailed morphological data from unstained microsections. This data may provide a basic histological starting point from which further digital analysis including staining may benefit. Methods: This study explores the inherent tissue fluorescence, also known as autofluorescence, as a means of quantitating cardiac tissue components in histological microsections. Data acquisition using a commercially available whole slide scanner and an image-based quantitation algorithm are presented. Results: It is shown that the autofluorescence intensity of unstained microsections at two different wavelengths is a suitable starting point for automated digital analysis of myocytes, fibrous tissue, lipofuscin, and the extracellular compartment. The output of the method is absolute quantitation along with accurate outlines of the above-mentioned components. The digital quantitations are verified by comparison to point-grid quantitations performed on the microsections after Van Gieson staining. Conclusion: The presented method is aptly described as a prestain multicomponent quantitation and outlining tool for histological sections of cardiac tissue. The main perspective is the opportunity for combination with digital analysis of stained microsections, for which the method may provide an accurate digital framework. PMID:27141321
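
    The quantitation step reduces to classifying pixels by their autofluorescence intensities at the two wavelengths and reporting component area fractions. The published algorithm is more involved; the following is a minimal two-channel sketch with invented images and thresholds:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    ch1 = rng.gamma(2.0, 40.0, (256, 256))   # toy intensity, wavelength 1
    ch2 = rng.gamma(2.0, 30.0, (256, 256))   # toy intensity, wavelength 2

    # Invented decision rules: bright in ch1 -> myocyte; bright only in
    # ch2 -> fibrous tissue; the remainder -> extracellular compartment.
    myocyte = ch1 > 120.0
    fibrous = (ch2 > 90.0) & ~myocyte
    extracellular = ~(myocyte | fibrous)

    for name, mask in [("myocytes", myocyte), ("fibrous tissue", fibrous),
                       ("extracellular", extracellular)]:
        print(f"{name}: {100.0 * mask.mean():.1f}% of section area")
    ```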

  15. Accurate identification of waveform of evoked potentials by component decomposition using discrete cosine transform modeling.

    PubMed

    Bai, O; Nakamura, M; Kanda, M; Nagamine, T; Shibasaki, H

    2001-11-01

    This study introduces a method for accurate identification of the waveform of evoked potentials by decomposing the component responses. The decomposition was achieved by zero-pole modeling of the evoked potentials in the discrete cosine transform (DCT) domain. It was found that the DCT coefficients of a component response in the evoked potentials could be modeled sufficiently by a second-order transfer function in the DCT domain. The decomposition of the component responses was carried out by partial-fraction expansion of the estimated model for the evoked potentials, and the effectiveness of the decomposition method was evaluated both qualitatively and quantitatively. Because of the overlap of the different component responses, the proposed method enables an accurate identification of the evoked potentials, which is useful for clinical and neurophysiological investigations.
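
    The modeling step can be prototyped by fitting the autoregressive part of a second-order zero-pole model to the DCT coefficients of the recording; each conjugate pole pair then characterizes one component response. A rough sketch on a toy damped oscillation (the full method fits a higher-order model and splits it by partial-fraction expansion):

    ```python
    import numpy as np
    from scipy.fft import dct

    # Toy "evoked potential": one damped oscillation plus noise.
    rng = np.random.default_rng(7)
    t = np.arange(256)
    x = np.exp(-t / 60.0) * np.sin(2 * np.pi * t / 40.0)
    x += 0.01 * rng.normal(size=t.size)

    y = dct(x, type=2, norm="ortho")          # DCT-domain coefficients

    # Fit the AR part of a second-order model, y[k] ~ -a1*y[k-1] - a2*y[k-2],
    # by linear least squares (Prony-style).
    A = np.column_stack([-y[1:-1], -y[:-2]])
    (a1, a2), *_ = np.linalg.lstsq(A, y[2:], rcond=None)
    poles = np.roots([1.0, a1, a2])
    print(poles)   # one conjugate pole pair <-> one component response
    ```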

  16. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high temperature list of line positions and intensities for H2 32S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and includes 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  17. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    PubMed Central

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-01-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements. PMID:26911709

  18. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations in the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of the wavefront's gradient estimation from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete, mathematically grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
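
    The final integration of the estimated partial derivatives can be carried out with the classic Fourier-domain least-squares integrator (Frankot-Chellappa); whether this is the authors' exact variant is not stated, so treat the following as a generic sketch on a toy paraboloid standing in for a mirror surface:

    ```python
    import numpy as np

    def fourier_integrate(gx, gy):
        """Least-squares integration of a gradient field (Frankot-Chellappa):
        returns z with dz/dx ~ gx and dz/dy ~ gy (per-pixel units)."""
        ny, nx = gx.shape
        wx, wy = np.meshgrid(2.0 * np.pi * np.fft.fftfreq(nx),
                             2.0 * np.pi * np.fft.fftfreq(ny))
        gx_f, gy_f = np.fft.fft2(gx), np.fft.fft2(gy)
        denom = wx**2 + wy**2
        denom[0, 0] = 1.0                      # avoid dividing the DC term by 0
        z_f = (-1j * wx * gx_f - 1j * wy * gy_f) / denom
        z_f[0, 0] = 0.0                        # arbitrary surface offset
        return np.real(np.fft.ifft2(z_f))

    # Toy "mirror" surface and its per-pixel gradients.
    n = 128
    yy, xx = np.mgrid[0:n, 0:n]
    z = 1e-3 * ((xx - n / 2) ** 2 + (yy - n / 2) ** 2)   # paraboloid sag
    gy, gx = np.gradient(z)
    z_hat = fourier_integrate(gx, gy)
    err = (z_hat - z_hat.mean()) - (z - z.mean())
    print(f"max |error| = {np.abs(err).max():.3g}")  # boundary effects dominate
    ```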

  20. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been taken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
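
    The verification statistics named, mean error (bias) for continuous variables and hit rate and false alarm ratio from a 2x2 contingency table, are straightforward to compute. A sketch with invented observation/forecast pairs and one precipitation threshold:

    ```python
    import numpy as np

    obs = np.array([0.0, 2.1, 0.0, 5.3, 0.4, 0.0, 1.2])    # observed, mm
    fcst = np.array([0.2, 1.5, 0.0, 6.0, 0.0, 0.3, 2.0])   # forecast, mm

    bias = np.mean(fcst - obs)                 # mean error for continuous data

    thr = 0.1                                  # event: precipitation > 0.1 mm
    hits = np.sum((fcst > thr) & (obs > thr))
    false_alarms = np.sum((fcst > thr) & (obs <= thr))
    misses = np.sum((fcst <= thr) & (obs > thr))

    hr = hits / (hits + misses)                # hit rate
    far = false_alarms / (hits + false_alarms) # false alarm ratio
    print(f"bias = {bias:+.2f} mm, hit rate = {hr:.2f}, FAR = {far:.2f}")
    ```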

  1. PCaAnalyser: a 2D-image analysis based module for effective determination of prostate cancer progression in 3D culture.

    PubMed

    Hoque, Md Tamjidul; Windus, Louisa C E; Lovitt, Carrie J; Avery, Vicky M

    2013-01-01

    Three-dimensional (3D) in vitro cell based assays for Prostate Cancer (PCa) research are rapidly becoming the preferred alternative to conventional 2D monolayer cultures. 3D assays more precisely mimic the microenvironment found in vivo, and thus are ideally suited to evaluate compounds and their suitability for progression in the drug discovery pipeline. To achieve the desired high throughput needed for most screening programs, automated quantification of 3D cultures is required. Towards this end, this paper reports on the development of a prototype analysis module for an automated high-content-analysis (HCA) system, which allows for accurate and fast investigation of in vitro 3D cell culture models for PCa. The Java-based program, which we have named PCaAnalyser, uses novel algorithms that allow accurate and rapid quantitation of protein expression in 3D cell culture. As currently configured, the PCaAnalyser can quantify a range of biological parameters including: nuclei-count, nuclei-spheroid membership prediction, various function-based classifications of peripheral and non-peripheral areas to measure expression of biomarkers and protein constituents known to be associated with PCa progression, as well as effective segregation of cellular objects across a range of signal-to-noise ratios. In addition, the PCaAnalyser architecture is highly flexible, operating as a single independent analysis as well as in batch mode; essential for High-Throughput-Screening (HTS). Utilising the PCaAnalyser, accurate and rapid analysis is provided in an automated, high-throughput manner, and reproducible analysis of the distribution and intensity of well-established markers associated with PCa progression in a range of metastatic PCa cell lines (DU145 and PC3) in a 3D model is demonstrated.
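
    The nuclei count underlying such modules is typically a threshold-plus-connected-components operation. PCaAnalyser itself is Java-based and its algorithms are novel; the following sketch shows only the generic idea, on a toy image with a hypothetical threshold.

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy fluorescence image: three Gaussian "nuclei" on noisy background.
    rng = np.random.default_rng(8)
    img = rng.normal(10.0, 2.0, (128, 128))
    yy, xx = np.mgrid[0:128, 0:128]
    for cy, cx in [(30, 40), (70, 90), (100, 30)]:
        img += 60.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

    mask = img > 30.0                        # hypothetical intensity threshold
    labels, n_nuclei = ndimage.label(mask)   # connected-component labelling
    areas = ndimage.sum(mask, labels, index=range(1, n_nuclei + 1))
    print(n_nuclei, areas)                   # count and per-nucleus pixel areas
    ```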

  2. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, owing both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
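
    The far-field relationship described above reduces to a one-line calculation. The following sketch illustrates the stated ratio method and is not the authors' code; the acetylene tracer and the background-correction step are assumptions, and the molar masses convert a molar mixing ratio to a mass emission rate.

        M_CH4 = 16.04   # g/mol
        M_C2H2 = 26.04  # g/mol, assuming an acetylene tracer

        def methane_emission_rate(c_ch4, c_tracer, bg_ch4, bg_tracer, q_tracer):
            """Tracer dilution estimate of the methane emission rate.

            c_ch4, c_tracer: downwind mixing ratios (e.g., ppb);
            bg_ch4, bg_tracer: upwind background mixing ratios;
            q_tracer: known tracer release rate (kg/h). Returns kg/h of CH4.
            """
            molar_ratio = (c_ch4 - bg_ch4) / (c_tracer - bg_tracer)
            return molar_ratio * (M_CH4 / M_C2H2) * q_tracer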

  3. Accurate description of calcium solvation in concentrated aqueous solutions.

    PubMed

    Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel

    2014-07-17

    Calcium is one of the biologically most important ions; however, its accurate description by classical molecular dynamics simulations is complicated by strong electrostatic and polarization interactions with surroundings due to its divalent nature. Here, we explore the recently suggested approach for effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.
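
    A minimal sketch of the charge-rescaling idea referred to above, assuming the electronic continuum correction in which ionic charges are scaled by the inverse square root of the electronic part of the water dielectric constant (approximately 1.78):

        q_{\mathrm{eff}} = \frac{q}{\sqrt{\varepsilon_{\mathrm{el}}}} \approx \frac{q}{\sqrt{1.78}} \approx 0.75\, q, \qquad q_{\mathrm{eff}}(\mathrm{Ca}^{2+}) \approx +1.5\, e

    The rescaled charge mimics, in a mean-field way, the electronic polarization of the surrounding water that nonpolarizable force fields otherwise neglect.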

  4. Quantitative SPECT techniques.

    PubMed

    Watson, D D

    1999-07-01

    Quantitative imaging involves first, a set of measurements that characterize an image. There are several variations of technique, but the basic measurements that are used for single photon emission computed tomography (SPECT) perfusion images are reasonably standardized. Quantification currently provides only relative tracer activity within the myocardial regions defined by an individual SPECT acquisition. Absolute quantification is still a work in progress. Quantitative comparison of absolute changes in tracer uptake comparing a stress and rest study or preintervention and postintervention study would be useful and could be done, but most commercial systems do not maintain the data normalization that is necessary for this. Measurements of regional and global function are now possible with electrocardiography (ECG) gating, and this provides clinically useful adjunctive data. Techniques for measuring ventricular function are evolving and promise to provide clinically useful accuracy. The computer can classify images as normal or abnormal by comparison with a normal database. The criteria for this classification involve more than just checking the normal limits. The images should be analyzed to measure how far they deviate from normal, and this information can be used in conjunction with pretest likelihood to indicate the level of statistical certainty that an individual patient has a true positive or true negative test. The interface between the computer and the clinician interpreter is an important part of the process. Especially when both perfusion and function are being determined, the ability of the interpreter to correctly assimilate the data is essential to the use of the quantitative process. As we become more facile with performing and recording objective measurements, the significance of the measurements in terms of risk evaluation, viability assessment, and outcome should be continually enhanced. PMID:10433336

  5. Quantitative rainbow schlieren deflectometry

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Klimek, Robert B.; Buchele, Donald R.

    1995-01-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment.
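
    As a hedged illustration of hue-based deflectometry (not the authors' implementation), a pixel's hue can be mapped to a position on the graded filter and hence to a ray deflection angle, assuming a linear hue calibration and small angles; the calibration constants below are placeholders.

        import colorsys

        HUE_MIN, HUE_MAX = 0.0, 0.7   # assumed hue range spanning the filter
        H_MAX = 2.0e-3                # assumed filter half-height (m)
        F = 0.5                       # assumed decollimating lens focal length (m)

        def deflection_from_rgb(r, g, b):
            """Ray deflection angle (rad) from normalized RGB values in [0, 1]."""
            hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
            # Linear map from hue to transverse position on the rainbow filter.
            y = (hue - HUE_MIN) / (HUE_MAX - HUE_MIN) * 2.0 * H_MAX - H_MAX
            return y / F  # small-angle approximation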

  6. Quantitative non-destructive testing

    NASA Technical Reports Server (NTRS)

    Welch, C. S.

    1985-01-01

    The work undertaken during this period included two primary efforts. The first is a continuation of theoretical development from the previous year of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS), which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second is an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort is aimed at NDE techniques applicable to composite materials in structural applications. The theoretical development described produced several models of temperature patterns over several geometries and material types. Agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates, and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.

  7. Quantitative imaging with a mobile phone microscope.

    PubMed

    Skandarajah, Arunan; Reber, Clay D; Switz, Neil A; Fletcher, Daniel A

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.

  8. Quantitative Imaging with a Mobile Phone Microscope

    PubMed Central

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  9. Project analysis and integration economic analyses summary

    NASA Technical Reports Server (NTRS)

    Macomber, H. L.

    1986-01-01

    An economic-analysis summary was presented for the manufacture of crystalline-silicon modules, involving silicon ingot/sheet growth, slicing, cell manufacture, and module assembly. Economic analyses provided useful quantitative input for complex decision-making within the Flat-plate Solar Array (FSA) Project; yardsticks for design and performance for industry; and a demonstration of how to evaluate and understand the worth of research and development, both to JPL and to other government agencies and programs. It was concluded that future research and development funds for photovoltaics must be provided by the Federal Government because the solar industry today does not reap enough profits from its present-day sales of photovoltaic equipment.

  10. Reference map for liquid chromatography-mass spectrometry-based quantitative proteomics.

    PubMed

    Kim, Yeoun Jin; Feild, Brian; Fitzhugh, William; Heidbrink, Jenny L; Duff, James W; Heil, Jeremy; Ruben, Steven M; He, Tao

    2009-10-15

    The accurate mass and time (AMT) tag strategy has been recognized as a powerful tool for high-throughput analysis in liquid chromatography-mass spectrometry (LC-MS)-based proteomics. Due to the complexity of the human proteome, this strategy requires highly accurate mass measurements for confident identifications. We have developed a method of building a reference map that allows relaxed criteria for mass errors yet delivers high confidence for peptide identifications. The samples used for generating the peptide database were produced by collecting cysteine-containing peptides from T47D cells and then fractionating the peptides using strong cationic exchange chromatography (SCX). LC-tandem mass spectrometry (MS/MS) data from the SCX fractions were combined to create a comprehensive reference map. After the reference map was built, it was possible to skip the SCX step in further proteomic analyses. We found that the reference-driven identification increases the overall throughput and proteomic coverage by identifying peptides with low intensity or complex interference. The use of the reference map also facilitates the quantitation process by allowing extraction of peptide intensities of interest and incorporating models of theoretical isotope distribution.
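
    A minimal sketch of AMT-style matching under stated assumptions (the tolerance values and reference tuples are illustrative, not the paper's): an observed LC-MS feature is assigned to a reference peptide when both its monoisotopic mass and its normalized elution time (NET) fall within tolerance windows.

        def match_amt(feature, reference, ppm_tol=10.0, net_tol=0.02):
            """Return reference peptide ids matching an observed feature.

            feature: (mass_da, net) tuple for an observed LC-MS feature;
            reference: iterable of (peptide_id, mass_da, net) tuples.
            """
            mass, net = feature
            hits = []
            for pep_id, ref_mass, ref_net in reference:
                ppm_err = abs(mass - ref_mass) / ref_mass * 1e6
                if ppm_err <= ppm_tol and abs(net - ref_net) <= net_tol:
                    hits.append(pep_id)
            return hits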

  11. Sample normalization methods in quantitative metabolomics.

    PubMed

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has been sometimes ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts.
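
    As one concrete example of the class of methods the review surveys, sum (total-intensity) normalization rescales each sample so that its total signal matches a common reference; the sketch below is a generic illustration, not code from the review.

        import numpy as np

        def sum_normalize(intensities):
            """Normalize a samples-by-metabolites intensity matrix.

            Each row is rescaled so its total intensity equals the median
            total across samples, reducing the effect of total-sample-amount
            variation on individual metabolite comparisons.
            """
            X = np.asarray(intensities, dtype=float)
            totals = X.sum(axis=1)
            return X * (np.median(totals) / totals)[:, None]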

  12. A Quantitative System-Scale Characterization of the Metabolism of Clostridium acetobutylicum

    PubMed Central

    Yoo, Minyeong; Bestel-Corre, Gwenaelle; Croux, Christian; Riviere, Antoine; Meynial-Salles, Isabelle

    2015-01-01

    ABSTRACT Engineering industrial microorganisms for ambitious applications, for example, the production of second-generation biofuels such as butanol, is impeded by a lack of knowledge of primary metabolism and its regulation. A quantitative system-scale analysis was applied to the biofuel-producing bacterium Clostridium acetobutylicum, a microorganism used for the industrial production of solvent. An improved genome-scale model, iCac967, was first developed based on thorough biochemical characterizations of 15 key metabolic enzymes and on extensive literature analysis to acquire accurate fluxomic data. In parallel, quantitative transcriptomic and proteomic analyses were performed to assess the number of mRNA molecules per cell for all genes under acidogenic, solventogenic, and alcohologenic steady-state conditions as well as the number of cytosolic protein molecules per cell for approximately 700 genes under at least one of the three steady-state conditions. A complete fluxomic, transcriptomic, and proteomic analysis applied to different metabolic states allowed us to better understand the regulation of primary metabolism. Moreover, this analysis enabled the functional characterization of numerous enzymes involved in primary metabolism, including (i) the enzymes involved in the two different butanol pathways and their cofactor specificities, (ii) the primary hydrogenase and its redox partner, (iii) the major butyryl coenzyme A (butyryl-CoA) dehydrogenase, and (iv) the major glyceraldehyde-3-phosphate dehydrogenase. This study provides important information for further metabolic engineering of C. acetobutylicum to develop a commercial process for the production of n-butanol. PMID:26604256

  13. Evaluation of the quantitative performances of supercritical fluid chromatography: from method development to validation.

    PubMed

    Dispas, Amandine; Lebrun, Pierre; Ziemons, Eric; Marini, Roland; Rozet, Eric; Hubert, Philippe

    2014-08-01

    Recently, the number of papers about SFC has increased drastically, but scientists have not truly focused their work on the quantitative performance of this technique. In order to demonstrate the potential of UHPSFC, the present work discusses the different steps of the analytical life cycle of a method: from development to validation and application. Moreover, the quantitative performance of UHPSFC was evaluated in comparison with UHPLC, which is the main technique used for quality control in the pharmaceutical industry and can therefore be considered a reference. The methods were developed using a Design Space strategy, leading to the optimization of robust methods. In this context, when the Design Space optimization demonstrates guaranteed quality, no further robustness study is required prior to validation. The methods were then geometrically transferred in order to reduce the analysis time. The UHPSFC and UHPLC methods were validated based on the total error approach using accuracy profiles. Even if UHPLC showed better precision and sensitivity, the UHPSFC method is able to give accurate results over a dosing range larger than the 80-120% range required by the European Medicines Agency. Consequently, UHPSFC results are valid and could be used for the control of the active substance in a finished pharmaceutical product. Finally, the validated UHPSFC method was used to analyse real samples and gave results similar to those of the reference method (UHPLC). PMID:24513349

  14. Validating quantitative precipitation forecast for the Flood Meteorological Office, Patna region during 2011-2014

    NASA Astrophysics Data System (ADS)

    Giri, R. K.; Panda, Jagabandhu; Rath, Sudhansu S.; Kumar, Ravindra

    2016-06-01

    In order to issue accurate flood warnings, better or more appropriate quantitative forecasting of precipitation is required. In view of this, the present study validates the quantitative precipitation forecast (QPF) issued during the southwest monsoon season for six river catchments (basins) under the Flood Meteorological Office, Patna region. The forecast is analysed statistically by computing various skill scores for six different precipitation ranges during the years 2011-2014. The analysis of the QPF validation indicates that the multi-model ensemble (MME) based forecasting is more reliable in the precipitation ranges of 1-10 and 11-25 mm. However, the reliability decreases for higher ranges of rainfall and also for the lowest range, i.e., below 1 mm. In order to test the synoptic analogue method based MME forecasting of QPF during an extreme weather event, a case study of tropical cyclone Phailin is performed. It is realized that in the case of extreme events like cyclonic storms, the MME forecasting is qualitatively useful for issuing warnings for the occurrence of floods, though it may not be reliable for the QPF. However, QPF may be improved using satellite and radar products.

  15. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

    Hyperspectral imaging is a non-destructive optical analysis technique that can, for instance, be used to obtain information from cultural heritage objects that is unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and to study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength-tunable narrow-bandwidth light source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions of interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects on artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms.
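
    The basic quantitative step described above, extracting a mean spectral reflectance curve from a user-defined region of interest, is straightforward once the calibrated hypercube is available. The sketch below assumes a (rows, columns, 70-band) reflectance array and a boolean ROI mask, and is an illustration rather than the instrument's software.

        import numpy as np

        def roi_mean_spectrum(cube, mask):
            """Mean and standard deviation reflectance spectra over an ROI.

            cube: (rows, cols, bands) calibrated reflectance image;
            mask: (rows, cols) boolean region-of-interest selector.
            """
            pixels = cube[mask]  # -> (n_roi_pixels, bands)
            return pixels.mean(axis=0), pixels.std(axis=0)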

  16. EEO Implications of Job Analyses.

    ERIC Educational Resources Information Center

    Lacy, D. Patrick, Jr.

    1979-01-01

    Discusses job analyses as they relate to the requirements of Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, and the Rehabilitation Act of 1973. Argues that job analyses can establish the job-relatedness of entrance requirements and aid in defenses against charges of discrimination. Journal availability: see EA 511 615.

  17. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  18. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  19. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  1. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance (an empirically based method and an atmospheric-model-based method) to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
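
    Linear spectral unmixing, one of the techniques applied above, models each calibrated reflectance spectrum as a non-negative mixture of endmember spectra. A generic sketch (not the study's implementation) using non-negative least squares:

        import numpy as np
        from scipy.optimize import nnls

        def unmix(pixel, endmembers):
            """Estimate endmember abundances for one reflectance spectrum.

            pixel: (bands,) reflectance vector; endmembers: (bands, n_members)
            matrix of library spectra. Returns abundances normalized to sum 1.
            """
            abundances, _residual = nnls(endmembers, pixel)
            return abundances / abundances.sum()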

  2. Nonspectroscopic imaging for quantitative chlorophyll sensing

    NASA Astrophysics Data System (ADS)

    Kim, Taehoon; Kim, Jeong-Im; Visbal-Onufrak, Michelle A.; Chapple, Clint; Kim, Young L.

    2016-01-01

    Nondestructive imaging of physiological changes in plants has been intensively used as an invaluable tool for visualizing heterogeneous responses to various types of abiotic and biotic stress. However, conventional approaches often have intrinsic limitations for quantitative analyses, requiring bulky and expensive optical instruments for capturing full spectral information. We report a spectrometerless (or spectrometer-free) reflectance imaging method that allows for nondestructive and quantitative chlorophyll imaging in individual leaves in situ in a handheld device format. The combination of a handheld-type imaging system and a hyperspectral reconstruction algorithm from an RGB camera offers simple instrumentation and operation while avoiding the use of an imaging spectrograph or tunable color filter. This platform could potentially be integrated into a compact, inexpensive, and portable system, while being of great value in high-throughput phenotyping facilities and laboratory settings.

  3. Using GPS To Teach More Than Accurate Positions.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Guth, Peter L.

    2002-01-01

    Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope" assumptions, and…

  4. Validating carbonation parameters of alkaline solid wastes via integrated thermal analyses: Principles and applications.

    PubMed

    Pan, Shu-Yuan; Chang, E-E; Kim, Hyunook; Chen, Yi-Hung; Chiang, Pen-Chi

    2016-04-15

    Accelerated carbonation of alkaline solid wastes is an attractive method for CO2 capture and utilization. However, the evaluation criteria for CaCO3 content in solid wastes and the way thermal analysis profiles are interpreted were found to be quite different across the literature. In this investigation, integrated thermal analyses for determining carbonation parameters in basic oxygen furnace slag (BOFS) were proposed based on thermogravimetric (TG), derivative thermogravimetric (DTG), and differential scanning calorimetry (DSC) analyses. A modified method of TG-DTG interpretation was proposed that considers the consecutive weight loss of the sample within 200-900°C, because the decomposition of various hydrated compounds causes variances in estimates made using conventional methods of TG interpretation. Different quantities of reference CaCO3 standards, carbonated BOFS samples and synthetic CaCO3/BOFS mixtures were prepared for evaluating the data quality of the modified TG-DTG interpretation, in terms of precision and accuracy. The quantitative results of the modified TG-DTG method were also validated by DSC analysis. In addition, to confirm the TG-DTG results, evolved gas analysis was performed by mass spectrometry and Fourier transform infrared spectroscopy to detect the gaseous compounds released during heating. Furthermore, the decomposition kinetics and thermodynamics of CaCO3 in BOFS were evaluated using the Arrhenius equation and the Kissinger equation. The proposed integrated thermal analyses for determining CaCO3 content in alkaline wastes were precise and accurate, thereby enabling effective assessment of the CO2 capture capacity of alkaline wastes for mineral carbonation. PMID:26785217
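
    The core TG calculation is a stoichiometric conversion of the CO2 mass loss into a CaCO3 content. The sketch below is a generic illustration rather than the paper's modified interpretation; the 500-800°C decomposition window is an assumption that would in practice be read off the DTG curve.

        M_CACO3, M_CO2 = 100.09, 44.01  # molar masses, g/mol

        def caco3_weight_percent(mass_at_500c, mass_at_800c, initial_mass):
            """CaCO3 content (wt%) from the TG mass loss attributed to
            CaCO3 -> CaO + CO2 over an assumed decomposition window.
            All masses in consistent units (e.g., mg)."""
            co2_loss = mass_at_500c - mass_at_800c
            return co2_loss * (M_CACO3 / M_CO2) / initial_mass * 100.0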

  5. Dissolved methane profiles in marine sediments observed in situ differ greatly from analyses of recovered cores

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Brewer, P. G.; Hester, K.; Ussler, W.; Walz, P. M.; Peltzer, E. T.; Ripmeester, J.

    2009-12-01

    The flux of dissolved methane through continental margin sediments is of importance in marine geochemistry due to its role in massive hydrate formation, with enigmatic climate consequences, and for the huge and complex microbial assemblage it supports. Yet the actual dissolved methane concentration driving this flux is poorly known, since strong degassing during sample recovery from depth is commonplace. Thus, pore water analyses from high-CH4 environments typically show values clustered around the one-atmosphere equilibrium value of 1-2 mM, erasing the original pore water profile and frustrating model calculations. We show that accurate measurements of pore water profiles of dissolved CH4, SO4, and H2S can be made rapidly in situ using a Raman-based probe. While Raman spectra were formerly believed to yield only qualitative data, we show that quantitative data may be readily obtained by using a peak area ratio technique referenced to known H2O bands and a form of Beer's law. Results from Hydrate Ridge, Oregon clearly show coherent profiles of all three species in this high-flux environment, and while in situ Raman and conventional analyses of SO4 in recovered cores agree well, very large differences in CH4 are found. The in situ CH4 results show up to 35 mM in the upper 30 cm of seafloor sediments and are inversely correlated with SO4. This is below the methane hydrate saturation value, yet disturbing the sediments clearly released hydrate fragments, suggesting that true saturation values may exist only in the hydrate molecular boundary layer, and that lower values may typically characterize the bulk pore fluid of hydrate-hosting sediments. The in situ Raman measurement protocols developed take only a few minutes. Profiles obtained in situ showed minimal fluorescence, while pore water samples from recovered cores quickly developed strong fluorescence, making laboratory analyses using Raman spectroscopy challenging and raising questions over the reaction sequence responsible for
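
    The peak-area-ratio approach mentioned above amounts to a linear, Beer's-law-like calibration against a known water band; a minimal sketch, with the calibration constant k an assumption to be determined from standards of known concentration:

        def ch4_concentration(area_ch4, area_h2o, k=1.0):
            """Dissolved CH4 concentration (mM) from Raman band areas.

            area_ch4: integrated CH4 band area; area_h2o: integrated area of
            a known H2O reference band; k: calibration constant (mM per unit
            area ratio) measured on standards.
            """
            return k * (area_ch4 / area_h2o)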

  6. Accuracy of finite element analyses of CT scans in predictions of vertebral failure patterns under axial compression and anterior flexion.

    PubMed

    Jackman, Timothy M; DelMonaco, Alex M; Morgan, Elise F

    2016-01-25

    Finite element (FE) models built from quantitative computed tomography (QCT) scans can provide patient-specific estimates of bone strength and fracture risk in the spine. While prior studies demonstrate accurate QCT-based FE predictions of vertebral stiffness and strength, the accuracy of the predicted failure patterns, i.e., the locations where failure occurs within the vertebra and the way in which the vertebra deforms as failure progresses, is less clear. This study used digital volume correlation (DVC) analyses of time-lapse micro-computed tomography (μCT) images acquired during mechanical testing (compression and anterior flexion) of thoracic spine segments (T7-T9, n=28) to measure displacements occurring throughout the T8 vertebral body at the ultimate point. These displacements were compared to those simulated by QCT-based FE analyses of T8. We hypothesized that the FE predictions would be more accurate when the boundary conditions are based on measurements of pressure distributions within intervertebral discs of a similar level of disc degeneration, rather than boundary conditions representing rigid platens. The FE simulations captured some of the general, qualitative features of the failure patterns; however, displacement errors ranged from 12% to 279%. Contrary to our hypothesis, no differences in displacement errors were found when using boundary conditions representing measurements of disc pressure vs. rigid platens. The smallest displacement errors were obtained using boundary conditions that were measured directly by DVC at the T8 endplates. These findings indicate that further work is needed to develop methods of identifying physiological loading conditions for the vertebral body, for the purpose of achieving robust, patient-specific FE analyses of failure mechanisms.

  7. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  8. Quantitative radionuclide angiocardiography

    SciTech Connect

    Scholz, P.M.; Rerych, S.K.; Moran, J.F.; Newman, G.E.; Douglas, J.M.; Sabiston, D.C. Jr.; Jones, R.H.

    1980-01-01

    This study introduces a new method for calculating actual left ventricular volumes and cardiac output from data recorded during a single transit of a radionuclide bolus through the heart, and describes in detail current radionuclide angiocardiography methodology. A group of 64 healthy adults with a wide age range were studied to define the normal range of hemodynamic parameters determined by the technique. Radionuclide angiocardiograms were performed in patients undergoing cardiac catheterization to validate the measurements. In 33 patients studied by both techniques on the same day, a close correlation was documented for measurement of ejection fraction and end-diastolic volume. To validate the method of volumetric cardiac output calculation, 33 simultaneous radionuclide and indocyanine green dye determinations of cardiac output were performed in 18 normal young adults. These independent comparisons of radionuclide measurements with two separate methods document that initial-transit radionuclide angiocardiography accurately assesses left ventricular function.

  9. Quantitative velocity modulation spectroscopy

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; McCall, Benjamin J.

    2016-05-01

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined.

  10. Feed analyses and their interpretation.

    PubMed

    Hall, Mary Beth

    2014-11-01

    Compositional analysis is central to determining the nutritional value of feedstuffs for use in ration formulation. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance and analytical variability of the assays, and whether an analysis is suitable to be applied to a particular feedstuff. Commercial analyses presently available for carbohydrates, protein, and fats have improved nutritionally pertinent description of feed fractions. Factors affecting interpretation of feed analyses and the nutritional relevance and application of currently available analyses are discussed.

  11. Quantitative MRI techniques of cartilage composition

    PubMed Central

    Matzat, Stephen J.; van Tiel, Jasper; Gold, Garry E.

    2013-01-01

    Due to aging populations and increasing rates of obesity in the developed world, the prevalence of osteoarthritis (OA) is continually increasing. Decreasing the societal and patient burden of this disease motivates research in prevention, early detection of OA, and novel treatment strategies against OA. One key facet of this effort is the need to track the degradation of tissues within joints, especially cartilage. Currently, conventional imaging techniques provide accurate means to detect morphological deterioration of cartilage in the later stages of OA, but these methods are not sensitive to the subtle biochemical changes during early disease stages. Novel quantitative techniques with magnetic resonance imaging (MRI) provide direct and indirect assessments of cartilage composition, and thus allow for earlier detection and tracking of OA. This review describes the most prominent quantitative MRI techniques to date—dGEMRIC, T2 mapping, T1rho mapping, and sodium imaging. Other, less-validated methods for quantifying cartilage composition are also described—Ultrashort echo time (UTE), gagCEST, and diffusion-weighted imaging (DWI). For each technique, this article discusses the proposed biochemical correlates, as well its advantages and limitations for clinical and research use. The article concludes with a detailed discussion of how the field of quantitative MRI has progressed to provide information regarding two specific patient populations through clinical research—patients with anterior cruciate ligament rupture and patients with impingement in the hip. While quantitative imaging techniques continue to rapidly evolve, specific challenges for each technique as well as challenges to clinical applications remain. PMID:23833729

  12. [A novel quantitative PCR with fluorogenic probe].

    PubMed

    Isono, K

    1997-03-01

    The polymerase chain reaction (PCR) is a powerful tool for amplifying small amounts of DNA or RNA for various molecular analyses. However, in these analyses, PCR only provides qualitative results. The availability of quantitative PCR provides valuable additional information in various applications. It is difficult to establish absolute quantitation because PCR amplification is a complicated reaction process of exponential growth. To trace the amplification process, the initial amount of template and the efficiency of amplification in each cycle have to be determined. Conventional methods have not achieved absolute quantitative analysis. The ABI PRISM 7700 Sequence Detection System has solved these problems with real-time monitoring of the PCR process. The real-time detection system provides the information essential to quantify the initial target copy number, because it can draw an amplification curve. Using the 5' nuclease assay, a specific fluorescent signal is generated and measured at every cycle during a run. This system can perform a variety of applications including quantitation, allele discrimination, PCR optimization and viral screening. Using the ABI PRISM 7700 Sequence Detection System, the rice genome has been quantitatively analyzed. To monitor maturation of the chloroplast genome from the proplastid during germ development, 5' nuclease assays were set up for the Cab and rbcL genes, which are located in the nuclear genome and chloroplast genome, respectively. Cab was used as an internal standard for normalization of cell numbers. The maturation process of the chloroplast was estimated using the ratio of gene dosage, [rbcL]/[Cab]. After development of the cotyledon, a significant increase in copy numbers of the chloroplast was observed. These results indicate that a light-induced chloroplast maturation process is coupled with an increase in chloroplast genome copy numbers.
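
    A gene-dosage ratio of this kind is commonly estimated from real-time amplification curves via the difference in threshold cycles. A minimal sketch, assuming equal, near-ideal amplification efficiencies for both assays (the abstract does not state the exact calculation used):

        def gene_dosage_ratio(ct_rbcl, ct_cab, efficiency=2.0):
            """Estimate the [rbcL]/[Cab] copy-number ratio from threshold cycles.

            With ideal doubling each cycle (efficiency = 2.0), a target that
            crosses the threshold one cycle earlier is twice as abundant.
            """
            return efficiency ** (ct_cab - ct_rbcl)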

  13. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
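
    For reference, the classical closed-form encircled-energy result for the aberration-free circular-aperture PSF (a standard textbook formula, reproduced here for illustration rather than taken from the paper) is easy to evaluate numerically:

        import numpy as np
        from scipy.special import j0, j1

        def encircled_energy(r, wavelength, f_number):
            """Fraction of total energy within radius r of the Airy PSF:
            E(v) = 1 - J0(v)**2 - J1(v)**2, with v = pi * r / (wavelength * F#).
            """
            v = np.pi * r / (wavelength * f_number)
            return 1.0 - j0(v) ** 2 - j1(v) ** 2

    Evaluated at the first dark ring (r = 1.22 * wavelength * F#), this gives the familiar ~83.8% of the total energy.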

  14. Nonlinear shell analyses of the space shuttle solid rocket boosters

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, Ronnie E.; Nemeth, Michael P.

    1989-01-01

    A variety of structural analyses have been performed on the Solid Rocket Boosters (SRB's) to provide information that would contribute to the understanding of the failure which destroyed the Space Shuttle Challenger. This paper describes nonlinear shell analyses that were performed to characterize the behavior of an overall SRB structure and a segment of the SRB in the vicinity of the External Tank Attachment (ETA) ring. Shell finite element models were used that would accurately reflect the global load transfer in an SRB in a manner such that nonlinear shell collapse and ovalization could be assessed. The purpose of these analyses was to calculate the overall deflection and stress distributions for these SRB models when subjected to mechanical loads corresponding to critical times during the launch sequence. Static analyses of these SRB models were performed using a snapshot picture of the loads. Analytical results obtained using these models show no evidence of nonlinear shell collapse for the pre-liftoff loading cases considered.

  15. An Accurate ab initio Quartic Force Field and Vibrational Frequencies for CH4 and Isotopomers

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Martin, Jan M. L.; Taylor, Peter R.

    1995-01-01

    A very accurate ab initio quartic force field for CH4 and its isotopomers is presented. The quartic force field was determined with the singles and doubles coupled-cluster procedure that includes a quasiperturbative estimate of the effects of connected triple excitations, CCSD(T), using the correlation consistent polarized valence triple zeta, cc-pVTZ, basis set. Improved quadratic force constants were evaluated with the correlation consistent polarized valence quadruple zeta, cc-pVQZ, basis set. Fundamental vibrational frequencies are determined using second-order perturbation theory anharmonic analyses. All fundamentals of CH4 and isotopomers for which accurate experimental values exist, and for which there is not a large Fermi resonance, are predicted to within +/- 6 cm(exp -1). It is thus concluded that our predictions for the harmonic frequencies and the anharmonic constants are the most accurate estimates available. It is also shown that using cubic and quartic force constants determined with the correlation consistent polarized double zeta, cc-pVDZ, basis set in conjunction with the cc-pVQZ quadratic force constants and equilibrium geometry leads to accurate predictions for the fundamental vibrational frequencies of methane, suggesting that this approach may be a viable alternative for larger molecules. Using CCSD(T), core correlation is found to reduce the CH4 r(e) by 0.0015 A. Our best estimate for r(e) is 1.0862 +/- 0.0005 A.

  16. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realizations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realizations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only a 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  17. Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS

    NASA Technical Reports Server (NTRS)

    Long, S. M.; Grosfils, E. B.

    2005-01-01

    Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted, based on pixel latitude, back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.
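
    A sketch of the kind of conversion described, with clearly hypothetical calibration constants (the abstract does not give the Magellan DN scaling, so dn_offset and db_per_dn below are placeholders to be replaced with values from the Magellan documentation): each DN is mapped to a dB offset and referenced to the Muhleman law evaluated at the pixel's incidence angle.

        import numpy as np

        def muhleman_sigma0(theta):
            """Muhleman-law backscatter coefficient at incidence angle theta (rad)."""
            c, s = np.cos(theta), np.sin(theta)
            return 0.0118 * c / (s + 0.111 * c) ** 3

        def dn_to_sigma0(dn, theta, dn_offset=101.0, db_per_dn=0.2):
            """Backscatter coefficient from a Magellan image DN (placeholder scaling)."""
            offset_db = (dn - dn_offset) * db_per_dn
            return muhleman_sigma0(theta) * 10.0 ** (offset_db / 10.0)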

  18. Stable isotopic analyses in paleoclimatic reconstruction

    SciTech Connect

    Wigand, P.E.

    1995-09-01

    Most traditional paleoclimatic proxy data have inherent time lags between climatic input and system response that constrain their use in accurate reconstruction of paleoclimate chronology, the scaling of its variability, and the elucidation of the processes that determine its impact on the biotic and abiotic environment. With the exception of dendroclimatology, and studies of short-lived organisms and pollen recovered from annually varved lacustrine sediments, significant periods of time ranging from years, to centuries, to millennia may intervene between climate change and its first manifestation in paleoclimatic proxy data records. Reconstructions of past climate through changes in plant community composition derived from pollen sequences and plant remains from ancient woodrat middens, wet environments and dry caves all suffer from these lags. However, stable isotopic analyses can provide a more immediate indication of biotic response to climate change. Evidence of the past physiological response of organisms to changes in effective precipitation as climate varies can be provided by analyses of the stable isotopic content of plant macrofossils from various contexts. These analyses consider variation in the stable isotopic (hydrogen, oxygen and carbon) content of plant tissues as it reflects (1) past global or local temperature through changes in meteoric (rainfall) water chemistry in the case of the first two isotopes, and (2) plant stress through changes in plant respiration/transpiration processes under differing water availability and varying atmospheric CO2 composition (which itself may actually be a net result of biotic response to climate change). Studies currently being conducted in the Intermountain West indicate both long- and short-term responses that, when calibrated with modern analogue studies, have the potential of revealing not only the timing of climate events, but their direction, magnitude and rapidity.

  19. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than calibration within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult issue. In this paper, we implement thin-plate-theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
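
    For context, the beam-theory baseline that the plate-theory model refines is the standard stiffness of an end-loaded rectangular cantilever (a textbook formula, not the paper's result):

        k_{\mathrm{beam}} = \frac{E\, w\, t^{3}}{4\, L^{3}}

    where E is Young's modulus and w, t and L are the cantilever width, thickness and length; the plate-theory treatment multiplies this baseline by a correction that depends on the Poisson's ratio, the normalized dimensions and the load position.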