Science.gov

Sample records for accurate quantitative analyses

  1. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
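
One concrete instance of the "meaningful parameter" problem the authors raise: raw read counts are not comparable across taxa, because larger genomes recruit more reads per cell. A minimal sketch of one common fix, length-normalized relative abundance; the taxon names and genome lengths below are hypothetical:

```python
# Sketch of a comparability-preserving abundance statistic (not the authors'
# pipeline): dividing each taxon's read count by its genome length estimates
# per-cell coverage, and renormalizing gives relative cell abundance.

def relative_cell_abundance(read_counts, genome_lengths):
    """Convert per-taxon read counts to length-normalized relative abundances."""
    coverage = {taxon: read_counts[taxon] / genome_lengths[taxon]
                for taxon in read_counts}
    total = sum(coverage.values())
    return {taxon: cov / total for taxon, cov in coverage.items()}

reads = {"taxonA": 10000, "taxonB": 10000}        # identical read counts...
lengths = {"taxonA": 2e6, "taxonB": 8e6}          # ...but 4x different genomes
abund = relative_cell_abundance(reads, lengths)   # taxonA dominates per cell
```

With equal read counts, the taxon with the 4x smaller genome comes out 4x more abundant per cell, which is exactly the distortion raw counts hide.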

  2. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using ground-truth samples comprising known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white-light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground truth for these samples.
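
One of the preprocessing steps listed, color crosstalk compensation, can be illustrated as a 2x2 linear unmixing. This is a generic sketch, not the authors' custom software, and the crosstalk fractions are hypothetical calibration values:

```python
# Sketch of colour crosstalk compensation: if a known fraction of each dye's
# signal bleeds into the other channel, the measured (red, green) pair is a
# linear mixture [[1, g2r], [r2g, 1]] of the true intensities, which can be
# recovered by inverting that 2x2 matrix.

def unmix(measured_red, measured_green, red_into_green, green_into_red):
    """Recover true (red, green) intensities from crosstalk-mixed measurements."""
    det = 1.0 - red_into_green * green_into_red
    true_red = (measured_red - green_into_red * measured_green) / det
    true_green = (measured_green - red_into_green * measured_red) / det
    return true_red, true_green

# Hypothetical calibration: 5% of red bleeds into green, 10% of green into red.
true_red, true_green = unmix(105.0, 55.0, red_into_green=0.05, green_into_red=0.1)
```

Forward-mixing true intensities of (100, 50) with these fractions gives measurements of (105, 55), and the unmixing recovers the originals exactly.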

  3. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci from being performed in practice, at least if permutation testing is used to determine significance. More elaborate global optimization approaches, for example DIRECT, have previously been adapted to QTL search problems, and dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization: regions in the search space can be permanently excluded (pruned) during the optimization process, so heuristic termination criteria can be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results showing that for simultaneous mapping of three QTL using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTL, which could be used to quickly detect strong candidate eQTL networks. PMID:23919387
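
The pruning idea can be illustrated in one dimension: a Lipschitz constant K turns each sampled function value into a certified upper bound over a whole interval, and any interval whose bound cannot beat the incumbent best value is discarded for good. This is a minimal sketch of the principle, not the PruneDIRECT algorithm itself, which operates on a transformed QTL objective:

```python
# Sketch of Lipschitz pruning in an interval-subdivision search. If
# |f(x) - f(y)| <= K*|x - y|, then no point within (b-a)/2 of the centre c of
# [a, b] can exceed f(c) + K*(b-a)/2, so intervals whose bound falls below the
# best value found so far are pruned permanently -- no heuristic stop needed.

def lipschitz_prune_search(f, lo, hi, K, depth=12):
    """Maximize f on [lo, hi] by interval subdivision with Lipschitz pruning."""
    best_x, best_f = (lo + hi) / 2, f((lo + hi) / 2)
    intervals = [(lo, hi)]
    for _ in range(depth):
        survivors = []
        for a, b in intervals:
            c = (a + b) / 2
            fc = f(c)
            if fc > best_f:
                best_x, best_f = c, fc
            # Keep only intervals that might still contain a better point.
            if fc + K * (b - a) / 2 > best_f:
                survivors.extend([(a, c), (c, b)])
        intervals = survivors
    return best_x, best_f

# Toy objective with maximum at x = 0.3; K = 2 bounds |f'| on [0, 1].
best_x, best_f = lipschitz_prune_search(lambda x: -(x - 0.3) ** 2, 0.0, 1.0, K=2.0)
```

Because pruning decisions use a valid bound rather than a heuristic, the surviving intervals are guaranteed to contain the global maximum, mirroring the paper's claim of equivalence to exhaustive search.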

  4. Quantitative DNA Analyses for Airborne Birch Pollen

    PubMed Central

    Müller-Germann, Isabell; Vogel, Bernhard; Vogel, Heike; Pauling, Andreas; Fröhlich-Nowoisky, Janine; Pöschl, Ulrich; Després, Viviane R.

    2015-01-01

    Birch trees produce large amounts of highly allergenic pollen grains that are distributed by wind and impact human health by causing seasonal hay fever, pollen-related asthma, and other allergic diseases. Traditionally, pollen forecasts are based on conventional microscopic counting techniques that are labor-intensive and limited in the reliable identification of species. Molecular biological techniques provide an alternative approach that is less labor-intensive and enables identification of any species by its genetic fingerprint. A particularly promising method is quantitative real-time polymerase chain reaction (qPCR), which can be used to determine the number of DNA copies and thus pollen grains in air filter samples. During the birch pollination season in 2010 in Mainz, Germany, we collected air filter samples of fine (<3 μm) and coarse air particulate matter. These were analyzed by qPCR using two different primer pairs: one for a single-copy gene (BP8) and the other for a multi-copy gene (ITS). The BP8 gene was better suited for reliable qPCR results, and the qPCR results obtained for coarse particulate matter correlated well with the birch pollen forecasting results of the regional air quality model COSMO-ART. As expected from the size of birch pollen grains (~23 μm), the concentration of DNA in fine particulate matter was lower than in the coarse particle fraction: for the ITS region the ratio was 64, while for the single-copy gene BP8 it was only 51. The possible presence of so-called sub-pollen particles in the fine particle fraction is, however, interesting even in low concentrations. These particles are known to be highly allergenic, to reach deep into the airways, and often to cause severe health problems. In conclusion, the results of this exploratory study open up the possibility of predicting and quantifying the pollen concentration in the atmosphere more precisely in the future. PMID:26492534
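
The absolute-quantitation step that qPCR studies of this kind rely on can be sketched generically: a dilution series of known copy numbers yields a standard curve of quantification cycle (Cq) against log10(copies), which is then inverted for unknown samples. All numbers below are hypothetical, chosen to give an ideal slope of about -3.32 cycles per decade:

```python
# Sketch of absolute qPCR quantitation via a standard curve (not the study's
# pipeline): fit Cq = slope * log10(copies) + intercept on known standards,
# then invert the line to estimate copy number from a measured Cq.
import math

def fit_standard_curve(copies, cq_values):
    """Least-squares fit of Cq against log10(copy number)."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cq_values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cq_values))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Amplification efficiency implied by the slope (1.0 == perfect doubling).
    efficiency = 10 ** (-1 / slope) - 1
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve to estimate copy number from a measured Cq."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical 10-fold dilution series with an ideal -3.32 cycles/decade slope.
standards = [1e6, 1e5, 1e4, 1e3]
cqs = [15.0, 18.32, 21.64, 24.96]
slope, intercept, eff = fit_standard_curve(standards, cqs)
```

An efficiency near 1.0 (perfect doubling each cycle) corresponds to the textbook slope of -3.32; real assays are judged partly by how close they come to this.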

  5. Are Patient-Specific Joint and Inertial Parameters Necessary for Accurate Inverse Dynamics Analyses of Gait?

    PubMed Central

    Reinbolt, Jeffrey A.; Haftka, Raphael T.; Chmielewski, Terese L.; Fregly, Benjamin J.

    2013-01-01

    Variations in joint parameter values (axis positions and orientations in body segments) and inertial parameter values (segment masses, mass centers, and moments of inertia) as well as kinematic noise alter the results of inverse dynamics analyses of gait. Three-dimensional linkage models with joint constraints have been proposed as one way to minimize the effects of noisy kinematic data. Such models can also be used to perform gait optimizations to predict post-treatment function given pre-treatment gait data. This study evaluates whether accurate patient-specific joint and inertial parameter values are needed in three-dimensional linkage models to produce accurate inverse dynamics results for gait. The study was performed in two stages. First, we used optimization analyses to evaluate whether patient-specific joint and inertial parameter values can be calibrated accurately from noisy kinematic data, and second, we used Monte Carlo analyses to evaluate how errors in joint and inertial parameter values affect inverse dynamics calculations. Both stages were performed using a dynamic, 27 degree-of-freedom, full-body linkage model and synthetic (i.e., computer generated) gait data corresponding to a nominal experimental gait motion. In general, joint but not inertial parameter values could be found accurately from noisy kinematic data. Root-mean-square (RMS) errors were 3° and 4 mm for joint parameter values and 1 kg, 22 mm, and 74,500 kg*mm2 for inertial parameter values. Furthermore, errors in joint but not inertial parameter values had a significant effect on calculated lower-extremity inverse dynamics joint torques. The worst RMS torque error averaged 4% bodyweight*height (BW*H) due to joint parameter variations but less than 0.25% BW*H due to inertial parameter variations. These results suggest that inverse dynamics analyses of gait utilizing linkage models with joint constraints should calibrate the model’s joint parameter values to obtain accurate joint

  6. Labeling of virus components for advanced, quantitative imaging analyses.

    PubMed

    Sakin, Volkan; Paci, Giulia; Lemke, Edward A; Müller, Barbara

    2016-07-01

    In recent years, investigation of virus-cell interactions has moved from ensemble measurements to imaging analyses at the single-particle level. Advanced fluorescence microscopy techniques provide single-molecule sensitivity and subdiffraction spatial resolution, allowing observation of subviral details and individual replication events to obtain detailed quantitative information. To exploit the full potential of these techniques, virologists need to employ novel labeling strategies, taking into account specific constraints imposed by viruses, as well as unique requirements of microscopic methods. Here, we compare strengths and limitations of various labeling methods, exemplify virological questions that were successfully addressed, and discuss challenges and future potential of novel approaches in virus imaging. PMID:26987299

  7. Designer cantilevers for even more accurate quantitative measurements of biological systems with multifrequency AFM

    NASA Astrophysics Data System (ADS)

    Contera, S.

    2016-04-01

    Multifrequency excitation/monitoring of cantilevers has made it possible both to achieve fast, relatively simple, nanometre-resolution quantitative mapping of the mechanical properties of biological systems in solution using atomic force microscopy (AFM), and to achieve single-molecule-resolution detection by nanomechanical biosensors. A recent paper by Penedo et al [2015 Nanotechnology 26 485706] has made a significant contribution by developing simple methods to improve the signal-to-noise ratio in liquid environments by selectively enhancing cantilever modes, which will lead to even more accurate quantitative measurements.

  8. Qualitative and quantitative fracture analyses of high-strength ceramics.

    PubMed

    Øilo, Marit; Tvinnereim, Helene M; Gjerdet, Nils R

    2009-04-01

    The aims of this study were to assess the applicability and repeatability of qualitative and quantitative analyses of the fracture patterns of four different high-strength ceramics. Ten bar-shaped specimens of four high-strength ceramics with different material composition and fabrication methods had been fractured by three-point bending in water (n = 40). Commonly used fractographic patterns for brittle materials, such as mirror and mist, were used to characterize and quantify the fractured surfaces of these specimens. The analyses were performed twice, on separate occasions, by the same operator. Assessment of the association between fractographic patterns and fracture stress was carried out, and repeatability assessments of the measurements were performed. The fracture initiator site and the common fractographic markers surrounding this site were found in all specimens. Statistically significant correlations were found between certain fracture patterns and stress at fracture. The repeatability of the measurements of the different fractographic patterns varied among the materials. Fracture analyses seem applicable as a tool to determine the fracture initiation site and to estimate the force vectors involved in the fracture of dental high-strength ceramics. PMID:19320729

  9. Fractal and Lacunarity Analyses: Quantitative Characterization of Hierarchical Surface Topographies.

    PubMed

    Ling, Edwin J Y; Servio, Phillip; Kietzig, Anne-Marie

    2016-02-01

    Biomimetic hierarchical surface structures that exhibit features having multiple length scales have been used in many technological and engineering applications. Their surface topographies are most commonly analyzed using scanning electron microscopy (SEM), which only allows for qualitative visual assessments. Here we introduce fractal and lacunarity analyses as a method of characterizing the SEM images of hierarchical surface structures in a quantitative manner. Taking femtosecond laser-irradiated metals as an example, our results illustrate that, while the fractal dimension is a poor descriptor of surface complexity, lacunarity analysis can successfully quantify the spatial texture of an SEM image; this, in turn, provides a convenient means of reporting changes in surface topography with respect to changes in processing parameters. Furthermore, lacunarity plots are shown to be sensitive to the different length scales present within a hierarchical structure due to the reversal of lacunarity trends at specific magnifications where new features become resolvable. Finally, we have established a consistent method of detecting pattern sizes in an image from the oscillation of lacunarity plots. Therefore, we promote the adoption of lacunarity analysis as a powerful tool for quantitative characterization of, but not limited to, multi-scale hierarchical surface topographies. PMID:26758776
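
The gliding-box lacunarity statistic the paper promotes can be sketched for a binary image. This is a generic Allain-and-Cloitre-style implementation, not the authors' code: for each box size r, an r x r window slides over the image, the "mass" (foreground pixel count) is recorded at each position, and lacunarity is the second moment of the masses over the squared first moment:

```python
# Sketch of gliding-box lacunarity on a 2D binary image (list of 0/1 rows):
# lacunarity(r) = <M^2> / <M>^2, equivalently 1 + var(M)/mean(M)^2, where M is
# the box mass. Uniform textures give 1; gappier textures score higher.

def lacunarity(image, r):
    """Gliding-box lacunarity of a binary image for box size r."""
    h, w = len(image), len(image[0])
    masses = []
    for i in range(h - r + 1):
        for j in range(w - r + 1):
            masses.append(sum(image[i + di][j + dj]
                              for di in range(r) for dj in range(r)))
    n = len(masses)
    mean = sum(masses) / n
    mean_sq = sum(m * m for m in masses) / n
    return mean_sq / (mean * mean)

uniform = [[1] * 8 for _ in range(8)]                     # solid block
checker = [[(i + j) % 2 for j in range(8)] for i in range(8)]  # 1-pixel texture
```

The checkerboard illustrates the scale sensitivity the abstract describes: at r = 1 its lacunarity is 2 (maximal gappiness at the pixel scale), but at r = 2 every box holds exactly two foreground pixels and lacunarity drops to 1, because the feature size has become resolvable.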

  10. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both, while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15 %). This technique demonstrated the importance of location of the histological section, demonstrating that up to a 30 % offset can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants. PMID:27153828

  11. Quantitative analyses for elucidating mechanisms of cell fate commitment in the mouse blastocyst

    NASA Astrophysics Data System (ADS)

    Saiz, Néstor; Kang, Minjung; Puliafito, Alberto; Schrode, Nadine; Xenopoulos, Panagiotis; Lou, Xinghua; Di Talia, Stefano; Hadjantonakis, Anna-Katerina

    2015-03-01

    In recent years we have witnessed a shift from qualitative image analysis towards higher resolution, quantitative analyses of imaging data in developmental biology. This shift has been fueled by technological advances in both imaging and analysis software. We have recently developed a tool for accurate, semi-automated nuclear segmentation of imaging data from early mouse embryos and embryonic stem cells. We have applied this software to the study of the first lineage decisions that take place during mouse development and established analysis pipelines for both static and time-lapse imaging experiments. In this paper we summarize the conclusions from these studies to illustrate how quantitative, single-cell level analysis of imaging data can unveil biological processes that cannot be revealed by traditional qualitative studies.

  12. Quantitative analyses of matching-to-sample performance.

    PubMed Central

    Jones, B M

    2003-01-01

    Six pigeons performed a simultaneous matching-to-sample (MTS) task involving patterns of dots on a liquid-crystal display. Two samples and two comparisons differed in terms of the density of pixels visible through pecking keys mounted in front of the display. Selections of Comparison 1 after Sample 1, and of Comparison 2 after Sample 2, produced intermittent access to food, and errors always produced a time-out. The disparity between the samples and between the comparisons varied across sets of conditions. The ratio of food deliveries for the two correct responses varied over a wide range within each set of conditions, and one condition arranged extinction for correct responses following Sample 1. The quantitative models proposed by Davison and Tustin (1978), Alsop (1991), and Davison (1991) failed to predict performance in some extreme reinforcer-ratio conditions because comparison choice approached indifference (and strong position biases emerged) when the sample clearly signaled a low (or zero) rate of reinforcement. An alternative conceptualization of the reinforcement contingencies operating in MTS tasks is advanced and was supported by further analyses of the data. This model relates the differential responding between the comparisons following each sample to the differential reinforcement for correct responses following that sample. PMID:12908761
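
The class of models the abstract evaluates can be rendered schematically: in a Davison-and-Tustin-style account, the log ratio of comparison choices is the sum of a sensitivity-scaled log reinforcer ratio and a sample-discriminability term whose sign depends on which sample was shown. The function below is a simplified illustration of that general form, not the authors' exact published equations:

```python
# Schematic sketch (not the published equations) of a Davison & Tustin (1978)
# style matching-to-sample model: comparison choice after a sample reflects
# the log reinforcer ratio for the two correct responses, scaled by a
# sensitivity parameter, plus a discriminability term log d whose sign flips
# with the sample presented.

def predicted_log_choice_ratio(log_reinforcer_ratio, sensitivity, log_d, sample):
    """Predicted log(B1/B2) following Sample 1 (+log d) or Sample 2 (-log d)."""
    sign = 1.0 if sample == 1 else -1.0
    return sensitivity * log_reinforcer_ratio + sign * log_d
```

With equal reinforcement (log ratio 0) the prediction is pure discrimination, +/- log d; as the reinforcer ratio becomes extreme it swamps the discriminability term, which is the regime where the abstract reports these models break down.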

  13. A quantitative approach to analysing cortisol response in the horse.

    PubMed

    Ekstrand, C; Ingvast-Larsson, C; Olsén, L; Hedeland, M; Bondesson, U; Gabrielsson, J

    2016-06-01

    The cortisol response to glucocorticoid intervention has, in spite of several studies in horses, not been fully characterized with regard to the determinants of onset, intensity and duration of response. Therefore, dexamethasone and cortisol response data were collected in a study applying a constant rate infusion regimen of dexamethasone (0.17, 1.7 and 17 μg/kg) to six Standardbreds. Plasma was analysed for dexamethasone and cortisol concentrations using UHPLC-MS/MS. Dexamethasone displayed linear kinetics within the concentration range studied. A turnover model of oscillatory behaviour accurately mimicked cortisol data. The mean baseline concentration range was 34-57 μg/L, the fractional turnover rate 0.47-1.5 1/h, the amplitude parameter 6.8-24 μg/L, the maximum inhibitory capacity 0.77-0.97, the drug potency 6-65 ng/L and the sigmoidicity factor 0.7-30. This analysis provided a better understanding of the time course of the cortisol response in horses. This includes baseline variability within and between horses and determinants of the equilibrium concentration-response relationship. The analysis also challenged a protocol for a dexamethasone suppression test design and indicated future improvement to increase the predictability of the test. PMID:26542753

  14. Accurate and molecular-size-tolerant NMR quantitation of diverse components in solution

    PubMed Central

    Okamura, Hideyasu; Nishimura, Hiroshi; Nagata, Takashi; Kigawa, Takanori; Watanabe, Takashi; Katahira, Masato

    2016-01-01

    Determining the amount of each component of interest in a mixture is a fundamental first step in characterizing the nature of a solution and in developing possible means of utilizing its components. Similarly, determining the composition of units in complex polymers, or polymer mixtures, is crucial. Although NMR is recognized as one of the most powerful methods to achieve this and is widely used in many fields, variation in the molecular sizes or the relative mobilities of components skews quantitation due to the size-dependent decay of magnetization. Here, a method to accurately determine the amount of each component by NMR was developed. This method was validated using a solution that contains biomass-related components whose molecular sizes greatly differ. The method is also tolerant of other factors that skew quantitation, such as variation in the one-bond C–H coupling constant. The developed method is the first and only way to reliably overcome the skewed quantitation caused by several different factors to provide basic information on the correct amount of each component in a solution. PMID:26883279

  15. Quantitation and accurate mass analysis of pesticides in vegetables by LC/TOF-MS.

    PubMed

    Ferrer, Imma; Thurman, E Michael; Fernández-Alba, Amadeo R

    2005-05-01

    A quantitative method consisting of solvent extraction followed by liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) analysis was developed for the identification and quantitation of three chloronicotinyl pesticides (imidacloprid, acetamiprid, thiacloprid) commonly used on salad vegetables. Accurate mass measurements within 3 ppm error were obtained for all the pesticides studied in various vegetable matrixes (cucumber, tomato, lettuce, pepper), which allowed an unequivocal identification of the target pesticides. Calibration curves covering 2 orders of magnitude were linear over the concentration range studied, thus showing the quantitative ability of TOF-MS as a monitoring tool for pesticides in vegetables. Matrix effects were also evaluated using matrix-matched standards, showing no significant interferences between matrixes and clean extracts. Intraday reproducibility was 2-3% relative standard deviation (RSD) and interday values were 5% RSD. The precision (standard deviation) of the mass measurements was evaluated and was less than 0.23 mDa between days. Detection limits of the chloronicotinyl insecticides in salad vegetables ranged from 0.002 to 0.01 mg/kg. These concentrations are equal to or lower than the limits set by EU directives for controlled pesticides in vegetables, showing that LC/TOF-MS analysis is a powerful tool for identification of pesticides in vegetables. Robustness and applicability of the method were validated by the analysis of market vegetable samples. Concentrations found in these samples were in the range of 0.02-0.17 mg/kg of vegetable. PMID:15859598
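
The "within 3 ppm" criterion refers to relative mass accuracy, computed as the deviation between measured and theoretical m/z scaled to parts per million. The theoretical value below is the standard monoisotopic m/z for protonated imidacloprid ([M+H]+); the measured value is invented for illustration:

```python
# Accurate-mass error in parts per million, the acceptance metric behind the
# abstract's "within 3 ppm" claim. The measured m/z here is hypothetical.

def ppm_error(measured_mz, theoretical_mz):
    """Relative mass accuracy in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

theoretical = 256.0596   # monoisotopic [M+H]+ m/z for imidacloprid (C9H10ClN5O2)
measured = 256.0601      # hypothetical instrument reading
err = ppm_error(measured, theoretical)   # ~ +2 ppm: passes a 3 ppm criterion
```

At m/z 256, a 3 ppm tolerance corresponds to less than 0.8 mDa of absolute error, which is why sub-millidalton precision (the abstract's 0.23 mDa) matters.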

  16. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied in ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  17. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  19. Empirically derived phenotypic subgroups – qualitative and quantitative trait analyses

    PubMed Central

    Wilcox, Marsha A; Wyszynski, Diego F; Panhuysen, Carolien I; Ma, Qianli; Yip, Agustin; Farrell, John; Farrer, Lindsay A

    2003-01-01

    Background: The Framingham Heart Study has contributed a great deal to advances in medicine. Most of the phenotypes investigated have been univariate traits (quantitative or qualitative). The aims of this study are to derive multivariate traits by identifying homogeneous groups of people and assigning both qualitative and quantitative trait scores; to assess the heritability of the derived traits; and to conduct both qualitative and quantitative linkage analysis on one of the heritable traits. Methods: Multiple correspondence analysis, a nonparametric analogue of principal components analysis, was used for data reduction. Two-stage clustering, using both k-means and agglomerative hierarchical clustering, was used to cluster individuals based upon axes (factor) scores obtained from the data reduction. Probability of cluster membership was calculated using binary logistic regression. Heritability was calculated using SOLAR, which was also used for the quantitative trait analysis. GENEHUNTER-PLUS was used for the qualitative trait analysis. Results: We found four phenotypically distinct groups. Membership in the smallest group was heritable (38%, p < 1 × 10^-6) and had characteristics consistent with atherogenic dyslipidemia. We found both qualitative and quantitative LOD scores above 3 on chromosomes 11 and 14 (11q13, 14q23, 14q31). There were two Kong & Cox LOD scores above 1.0 on chromosome 6 (6p21) and chromosome 11 (11q23). Conclusion: This approach may be useful for the identification of genetic heterogeneity in complex phenotypes by clarifying the phenotype definition prior to linkage analysis. Some of our findings are in regions linked to elements of atherogenic dyslipidemia and related diagnoses; some may be novel, or may be false positives. PMID:14975083

  20. Bright-field quantitative phase microscopy (BFQPM) for accurate phase imaging using conventional microscopy hardware

    NASA Astrophysics Data System (ADS)

    Jenkins, Micah; Gaylord, Thomas K.

    2015-03-01

    Most quantitative phase microscopy methods require the use of custom-built or modified microscopic configurations which are not typically available to most bio/pathologists. There are, however, phase retrieval algorithms which utilize defocused bright-field images as input data and are therefore implementable in existing laboratory environments. Among these, deterministic methods such as those based on inverting the transport-of-intensity equation (TIE) or a phase contrast transfer function (PCTF) are particularly attractive due to their compatibility with Köhler illuminated systems and numerical simplicity. Recently, a new method has been proposed, called multi-filter phase imaging with partially coherent light (MFPI-PC), which alleviates the inherent noise/resolution trade-off in solving the TIE by utilizing a large number of defocused bright-field images spaced equally about the focal plane. Despite greatly improving the state-of-the-art, the method has many shortcomings including the impracticality of high-speed acquisition, inefficient sampling, and attenuated response at high frequencies due to aperture effects. In this report, we present a new method, called bright-field quantitative phase microscopy (BFQPM), which efficiently utilizes a small number of defocused bright-field images and recovers frequencies out to the partially coherent diffraction limit. The method is based on a noise-minimized inversion of a PCTF derived for each finite defocus distance. We present simulation results which indicate nanoscale optical path length sensitivity and improved performance over MFPI-PC. We also provide experimental results imaging live bovine mesenchymal stem cells at sub-second temporal resolution. In all, BFQPM enables fast and accurate phase imaging with unprecedented spatial resolution using widely available bright-field microscopy hardware.

  1. Cultivation and quantitative proteomic analyses of acidophilic microbial communities

    SciTech Connect

    Belnap, Christopher P.; Pan, Chongle; Verberkmoes, Nathan C; Power, Mary E.; Samatova, Nagiza F; Carver, Rudolf L.; Hettich, Robert L.; Banfield, Jillian F.

    2010-01-01

    Acid mine drainage (AMD), an extreme environment characterized by low pH and high metal concentrations, can support dense acidophilic microbial biofilm communities that rely on chemoautotrophic production based on iron oxidation. Field determined production rates indicate that, despite the extreme conditions, these communities are sufficiently well adapted to their habitats to achieve primary production rates comparable to those of microbial communities occurring in some non-extreme environments. To enable laboratory studies of growth, production and ecology of AMD microbial communities, a culturing system was designed to reproduce natural biofilms, including organisms recalcitrant to cultivation. A comprehensive metabolic labeling-based quantitative proteomic analysis was used to verify that natural and laboratory communities were comparable at the functional level. Results confirmed that the composition and core metabolic activities of laboratory-grown communities were similar to a natural community, including the presence of active, low abundance bacteria and archaea that have not yet been isolated. However, laboratory growth rates were slow compared with natural communities, and this correlated with increased abundance of stress response proteins for the dominant bacteria in laboratory communities. Modification of cultivation conditions reduced the abundance of stress response proteins and increased laboratory community growth rates. The research presented here represents the first description of the application of a metabolic labeling-based quantitative proteomic analysis at the community level and resulted in a model microbial community system ideal for testing physiological and ecological hypotheses.
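
    Metabolic labeling-based quantitation, as used in this study, compares labeled and unlabeled peptide signals and rolls them up to the protein level. A common robust roll-up is the median of peptide-level log2 ratios; the sketch below is illustrative of that generic step, not the paper's exact pipeline:

```python
import math

def protein_log2_ratio(peptide_ratios):
    """Summarize a protein's peptide-level abundance ratios (e.g. from
    metabolic labeling of one community relative to another) as the median
    of the log2 ratios -- a standard robust roll-up, assumed here."""
    logs = sorted(math.log2(r) for r in peptide_ratios)
    n = len(logs)
    mid = n // 2
    return logs[mid] if n % 2 else 0.5 * (logs[mid - 1] + logs[mid])
```

    A protein whose peptides all show a twofold increase yields a log2 ratio of 1.0, which is how an elevated stress-response protein would surface in such an analysis.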

  2. Performance Assessment in Fingerprinting and Multi Component Quantitative NMR Analyses.

    PubMed

    Gallo, Vito; Intini, Nicola; Mastrorilli, Piero; Latronico, Mario; Scapicchio, Pasquale; Triggiani, Maurizio; Bevilacqua, Vitoantonio; Fanizzi, Paolo; Acquotti, Domenico; Airoldi, Cristina; Arnesano, Fabio; Assfalg, Michael; Benevelli, Francesca; Bertelli, Davide; Cagliani, Laura R; Casadei, Luca; Cesare Marincola, Flaminia; Colafemmina, Giuseppe; Consonni, Roberto; Cosentino, Cesare; Davalli, Silvia; De Pascali, Sandra A; D'Aiuto, Virginia; Faccini, Andrea; Gobetto, Roberto; Lamanna, Raffaele; Liguori, Francesca; Longobardi, Francesco; Mallamace, Domenico; Mazzei, Pierluigi; Menegazzo, Ileana; Milone, Salvatore; Mucci, Adele; Napoli, Claudia; Pertinhez, Thelma; Rizzuti, Antonino; Rocchigiani, Luca; Schievano, Elisabetta; Sciubba, Fabio; Sobolev, Anatoly; Tenori, Leonardo; Valerio, Mariacristina

    2015-07-01

    An interlaboratory comparison (ILC) was organized with the aim to set up quality control indicators suitable for multicomponent quantitative analysis by nuclear magnetic resonance (NMR) spectroscopy. A total of 36 NMR data sets (corresponding to 1260 NMR spectra) were produced by 30 participants using 34 NMR spectrometers. The calibration line method was chosen for the quantification of a five-component model mixture. Results show that quantitative NMR is a robust quantification tool and that 26 out of 36 data sets resulted in statistically equivalent calibration lines for all considered NMR signals. The performance of each laboratory was assessed by means of a new performance index (named Qp-score) which is related to the difference between the experimental and the consensus values of the slope of the calibration lines. Laboratories endowed with a Qp-score falling within the suitable acceptability range are qualified to produce NMR spectra that can be considered statistically equivalent in terms of relative intensities of the signals. In addition, the specific response of nuclei to the experimental excitation/relaxation conditions was addressed by means of the parameter named NR. NR is related to the difference between the theoretical and the consensus slopes of the calibration lines and is specific for each signal produced by a well-defined set of acquisition parameters. PMID:26020452
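
    The performance assessment described here compares each laboratory's calibration-line slope to a consensus. A minimal sketch of that comparison follows; the z-like formulation (slope deviation in consensus-SD units) is an assumed simplification of the Qp-score defined in the paper:

```python
def slope(xs, ys):
    """Least-squares slope of a calibration line through (x, y) points."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def qp_score(lab_slope, consensus_slope, consensus_sd):
    """How many consensus standard deviations a laboratory's calibration
    slope deviates from the consensus slope (illustrative index)."""
    return (lab_slope - consensus_slope) / consensus_sd
```

    A laboratory whose slope of 2.1 sits against a consensus of 2.0 with SD 0.05 scores 2.0, i.e. two standard deviations high.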

  3. Quantitative Proteome Analysis of Human Plasma Following in vivo Lipopolysaccharide Administration using 16O/18O Labeling and the Accurate Mass and Time Tag Approach

    PubMed Central

    Qian, Wei-Jun; Monroe, Matthew E.; Liu, Tao; Jacobs, Jon M.; Anderson, Gordon A.; Shen, Yufeng; Moore, Ronald J.; Anderson, David J.; Zhang, Rui; Calvano, Steve E.; Lowry, Stephen F.; Xiao, Wenzhong; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Camp, David G.; Smith, Richard D.

    2007-01-01

    Identification of novel diagnostic or therapeutic biomarkers from human blood plasma would benefit significantly from quantitative measurements of the proteome constituents over a range of physiological conditions. Herein we describe an initial demonstration of proteome-wide quantitative analysis of human plasma. The approach utilizes post-digestion trypsin-catalyzed 16O/18O peptide labeling, two-dimensional liquid chromatography (LC)-Fourier transform ion cyclotron resonance (FTICR) mass spectrometry, and the accurate mass and time (AMT) tag strategy to identify and quantify peptides/proteins from complex samples. A peptide accurate mass and LC-elution time AMT tag database was initially generated using tandem mass spectrometry (MS/MS) following extensive multidimensional LC separations to provide the basis for subsequent peptide identifications. The AMT tag database contains >8,000 putative identified peptides, providing 938 confident plasma protein identifications. The quantitative approach was applied, without depletion of highly abundant proteins, for comparative analyses of plasma samples from an individual prior to and 9 h after lipopolysaccharide (LPS) administration. Accurate quantification of changes in protein abundance was demonstrated by both 1:1 labeling of control plasma and the comparison between the plasma samples following LPS administration. A total of 429 distinct plasma proteins were quantified from the comparative analyses and the protein abundances for 25 proteins, including several known inflammatory response mediators, were observed to change significantly following LPS administration. PMID:15753121
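
    The AMT tag strategy identifies peptides by looking up an observed (accurate mass, normalized elution time) feature in a database within tight tolerances. A minimal sketch of that lookup, with tolerance values and peptide names chosen purely for illustration:

```python
def match_amt(observed, database, ppm_tol=5.0, net_tol=0.02):
    """Match an observed (monoisotopic mass, normalized elution time)
    feature against an AMT tag database; returns peptide IDs whose mass
    agrees within ppm_tol (parts per million) and whose elution time
    agrees within net_tol."""
    mass, net = observed
    hits = []
    for peptide, (db_mass, db_net) in database.items():
        ppm = abs(mass - db_mass) / db_mass * 1.0e6
        if ppm <= ppm_tol and abs(net - db_net) <= net_tol:
            hits.append(peptide)
    return hits
```

    A feature at 1000.502 Da eluting at 0.41 matches a database entry at (1000.5, 0.42) within 2 ppm and 0.01 time units, while a 1200 Da entry is rejected outright.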

  4. Understanding silicate hydration from quantitative analyses of hydrating tricalcium silicates

    PubMed Central

    Pustovgar, Elizaveta; Sangodkar, Rahul P.; Andreev, Andrey S.; Palacios, Marta; Chmelka, Bradley F.; Flatt, Robert J.; d'Espinose de Lacaillerie, Jean-Baptiste

    2016-01-01

    Silicate hydration is prevalent in natural and technological processes such as mineral weathering, glass alteration, zeolite syntheses and cement hydration. Tricalcium silicate (Ca3SiO5), the main constituent of Portland cement, is amongst the most reactive silicates in water. Despite its widespread industrial use, the reaction of Ca3SiO5 with water to form calcium-silicate-hydrates (C-S-H) still hosts many open questions. Here, we show that solid-state nuclear magnetic resonance measurements of 29Si-enriched triclinic Ca3SiO5 enable the quantitative monitoring of the hydration process in terms of transient local molecular composition, extent of silicate hydration and polymerization. This provides insights on the relative influence of surface hydroxylation and hydrate precipitation on the hydration rate. When the rate drops, the amount of hydroxylated Ca3SiO5 decreases, thus demonstrating the partial passivation of the surface during the deceleration stage. Moreover, the relative quantities of monomers, dimers, pentamers and octamers in the C-S-H structure are measured. PMID:27009966

  5. Understanding silicate hydration from quantitative analyses of hydrating tricalcium silicates.

    PubMed

    Pustovgar, Elizaveta; Sangodkar, Rahul P; Andreev, Andrey S; Palacios, Marta; Chmelka, Bradley F; Flatt, Robert J; d'Espinose de Lacaillerie, Jean-Baptiste

    2016-01-01

    Silicate hydration is prevalent in natural and technological processes such as mineral weathering, glass alteration, zeolite syntheses and cement hydration. Tricalcium silicate (Ca3SiO5), the main constituent of Portland cement, is amongst the most reactive silicates in water. Despite its widespread industrial use, the reaction of Ca3SiO5 with water to form calcium-silicate-hydrates (C-S-H) still hosts many open questions. Here, we show that solid-state nuclear magnetic resonance measurements of 29Si-enriched triclinic Ca3SiO5 enable the quantitative monitoring of the hydration process in terms of transient local molecular composition, extent of silicate hydration and polymerization. This provides insights on the relative influence of surface hydroxylation and hydrate precipitation on the hydration rate. When the rate drops, the amount of hydroxylated Ca3SiO5 decreases, thus demonstrating the partial passivation of the surface during the deceleration stage. Moreover, the relative quantities of monomers, dimers, pentamers and octamers in the C-S-H structure are measured. PMID:27009966

  6. Understanding silicate hydration from quantitative analyses of hydrating tricalcium silicates

    NASA Astrophysics Data System (ADS)

    Pustovgar, Elizaveta; Sangodkar, Rahul P.; Andreev, Andrey S.; Palacios, Marta; Chmelka, Bradley F.; Flatt, Robert J.; D'Espinose de Lacaillerie, Jean-Baptiste

    2016-03-01

    Silicate hydration is prevalent in natural and technological processes such as mineral weathering, glass alteration, zeolite syntheses and cement hydration. Tricalcium silicate (Ca3SiO5), the main constituent of Portland cement, is amongst the most reactive silicates in water. Despite its widespread industrial use, the reaction of Ca3SiO5 with water to form calcium-silicate-hydrates (C-S-H) still hosts many open questions. Here, we show that solid-state nuclear magnetic resonance measurements of 29Si-enriched triclinic Ca3SiO5 enable the quantitative monitoring of the hydration process in terms of transient local molecular composition, extent of silicate hydration and polymerization. This provides insights on the relative influence of surface hydroxylation and hydrate precipitation on the hydration rate. When the rate drops, the amount of hydroxylated Ca3SiO5 decreases, thus demonstrating the partial passivation of the surface during the deceleration stage. Moreover, the relative quantities of monomers, dimers, pentamers and octamers in the C-S-H structure are measured.
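
    Relative quantities of chain species like the monomers, dimers, pentamers and octamers measured here are often condensed into a single polymerization statistic. The sketch below computes a number-average chain length; it is an illustrative summary, not a calculation from the paper:

```python
def mean_chain_length(species_counts):
    """Number-average degree of polymerization from relative quantities of
    silicate chain species, given as {chain_length: relative_quantity},
    e.g. {1: monomers, 2: dimers, 5: pentamers, 8: octamers}."""
    chains = sum(species_counts.values())                       # total chains
    units = sum(n * c for n, c in species_counts.items())       # total SiO4 units
    return units / chains
```

    For instance, an equal mix of monomers and dimers averages 1.5 silicate units per chain.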

  7. Multiobjective optimization in quantitative structure-activity relationships: deriving accurate and interpretable QSARs.

    PubMed

    Nicolotti, Orazio; Gillet, Valerie J; Fleming, Peter J; Green, Darren V S

    2002-11-01

    Deriving quantitative structure-activity relationship (QSAR) models that are accurate, reliable, and easily interpretable is a difficult task. In this study, two new methods have been developed that aim to find useful QSAR models that represent an appropriate balance between model accuracy and complexity. Both methods are based on genetic programming (GP). The first method, referred to as genetic QSAR (or GPQSAR), uses a penalty function to control model complexity. GPQSAR is designed to derive a single linear model that represents an appropriate balance between the variance and the number of descriptors selected for the model. The second method, referred to as multiobjective genetic QSAR (MoQSAR), is based on multiobjective GP and represents a new way of thinking of QSAR. Specifically, QSAR is considered as a multiobjective optimization problem that comprises a number of competitive objectives. Typical objectives include model fitting, the total number of terms, and the occurrence of nonlinear terms. MoQSAR results in a family of equivalent QSAR models where each QSAR represents a different tradeoff in the objectives. A practical consideration often overlooked in QSAR studies is the need for the model to promote an understanding of the biochemical response under investigation. To accomplish this, chemically intuitive descriptors are needed but do not always give rise to statistically robust models. This problem is addressed by the addition of a further objective, called chemical desirability, that aims to reward models that consist of descriptors that are easily interpretable by chemists. GPQSAR and MoQSAR have been tested on various data sets including the Selwood data set and two different solubility data sets. The study demonstrates that the MoQSAR method is able to find models that are at least as good as models derived using standard statistical approaches and also yields models that allow a medicinal chemist to trade statistical robustness for chemical
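
    The family of equivalent QSAR models that MoQSAR returns is a Pareto front over competing objectives. The core dominance test can be sketched as follows (a generic multiobjective selection step, with all objectives taken as minimized, e.g. error and term count; not the paper's GP implementation):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (objectives to be minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(models):
    """Models not dominated by any other model: each represents a
    different trade-off among the objectives."""
    return [m for m in models
            if not any(dominates(o, m) for o in models if o is not m)]
```

    Given candidate models scored as (error, number of terms), the front keeps only those where improving one objective requires worsening the other.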

  8. Quantitative analysis of the coral colonization of a fore-reef area near Aqaba (Red Sea)

    NASA Astrophysics Data System (ADS)

    Mergner, H.; Schuhmacher, H.

    1981-09-01

    Previous descriptions of the ecology and zonation of Aqaba reefs (Mergner & Schuhmacher, 1974) are supplemented by this quantitative study of a test quadrat (5×5 m in size), randomly chosen in some 10 m depth in the middle fore reef of a coastal fringing reef. Of the total surface of 25 m2 Cnidaria represent 42.31%, sponges 0.17%, calcareous algae 0.20%, dead coral rock and pebble 30.27% and sand and coral debris 26.15%. The cnidarian cover is roughly equally contributed by 50.86% Scleractinia and 48.61% Alcyonaria, mainly Xeniidae (35.81%). For each species the percentage of the total cover (measured as vertical projection), colony number, average and maximal colony size are given. A total number of 104 cnidarian species was recorded, among which the 78 scleractinian species represent 34 of the 55 coral genera known from the Red Sea. The well balanced regime of moderate light and current conditions which are tolerated both by shallow and deep water species may account for the high species number. Disturbances such as occasional sedimentation, grazing of sea urchins (Diadema setosum) and overgrowth of stony corals by xeniids result in continuous fluctuations of the coral community, in small colony size and in high colony number. Abiotic factors and biotic interactions maintain a diversity (H=3.67) which ranks among the greatest ever found in reef communities. The data obtained from the fore reef square are compared with those of a similar test square in the lagoon of the same reef and with results from transect zonations on the opposite coast of the Gulf of Aqaba. These comparisons indicate that the fore reef harbours the richest coral fauna in the reef. The inventory of coral species at the northern end of the Gulf of Aqaba, one of the northernmost outposts of the coral reef belt, is only little reduced when compared with that of the central Red Sea; this great species diversity is in contrast to the worldwide decrease of species number towards the periphery of
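
    The diversity value reported above (H=3.67) is a Shannon index over species cover or abundance fractions. A minimal computation, assuming natural logarithms (the log base is not stated in this abstract):

```python
import math

def shannon_index(abundances):
    """Shannon diversity H = -sum(p_i * ln p_i) over the relative
    abundances p_i of each species (zeros are skipped)."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)
```

    Four equally abundant species give H = ln 4 ≈ 1.39; a single-species community gives H = 0, so H = 3.67 reflects a very even, species-rich assemblage.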

  9. Quantitative Proteome Analysis of Human Plasma Following in vivo Lipopolysaccharide Administration using O-16/O-18 Labeling and the Accurate Mass and Time Tag Approach

    SciTech Connect

    Qian, Weijun; Monroe, Matthew E.; Liu, Tao; Jacobs, Jon M.; Anderson, Gordon A.; Shen, Yufeng; Moore, Ronald J.; Anderson, David J.; Zhang, Rui; Calvano, Steven E.; Lowry, Stephen F.; Xiao, Wenzhong; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Camp, David G.; Smith, Richard D.

    2005-05-01

    Identification of novel diagnostic or therapeutic biomarkers from human blood plasma would benefit significantly from quantitative measurements of the proteome constituents over a range of physiological conditions. We describe here an initial demonstration of proteome-wide quantitative analysis of human plasma. The approach utilizes post-digestion trypsin-catalyzed 16O/18O labeling, two-dimensional liquid chromatography (LC)-Fourier transform ion cyclotron resonance (FTICR) mass spectrometry, and the accurate mass and time (AMT) tag strategy for identification and quantification of peptides/proteins from complex samples. A peptide mass and time tag database was initially generated using tandem mass spectrometry (MS/MS) following extensive multidimensional LC separations and the database serves as a ‘look-up’ table for peptide identification. The mass and time tag database contains >8,000 putative identified peptides, which yielded 938 confident plasma protein identifications. The quantitative approach was applied to the comparative analyses of plasma samples from an individual prior to and 9 hours after lipopolysaccharide (LPS) administration without depletion of high abundant proteins. Accurate quantification of changes in protein abundance was demonstrated with both 1:1 labeling of control plasma and the comparison between the plasma samples following LPS administration. A total of 429 distinct plasma proteins were quantified from the comparative analyses and the protein abundances for 28 proteins were observed to be significantly changed following LPS administration, including several known inflammatory response mediators.

  10. Development of phantom for quantitative analyses of human dentin mineral density.

    PubMed

    Hayashi-Sakai, Sachiko; Kondo, Tatsuya; Kasuga, Yuto; Sakamoto, Makoto; Endo, Hideaki; Sakai, Jun

    2015-01-01

    The purpose of the present study was to develop a novel phantom, specialized for quantitative analyses of human dentin mineral density using the X-ray attenuation method, that can be scanned together with a sample in the same image. A further attempt was made to demonstrate the intracoronal dentin mineral density in mandibular incisors using this phantom. The phantom was prepared with a 15 mm hole in the center of an acrylic resin bar having an outside diameter of 25 mm, and 8 small holes (diameter, 3 mm) were made at equal intervals around the center. Dipotassium hydrogen phosphate (K2HPO4) solutions were prepared at 0.4, 0.6, 0.8 and 1.0 g/cm3 and placed in these holes. The mean value of the intracoronal dentin mineral density was 1.486 ± 0.016 g/cm3 in the present study. As the results of the present study corresponded to previous reports, this new phantom was considered to be useful. This phantom enables the analysis of samples that are not readily accessible to conventional mechanical tests and may facilitate biomechanical investigations using X-ray images. It was suggested that this system is a simple, accurate and novel mineralization measuring system. PMID:26484556
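
    Scanning the phantom in the same image as the sample allows a per-image calibration: fit a line through the gray values of the known K2HPO4 rods, then read the sample's density off that line. A minimal sketch (the gray values and function names below are hypothetical; only the 0.4-1.0 g/cm3 reference densities come from the abstract):

```python
def fit_line(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def dentin_density(gray_sample, gray_refs, densities=(0.4, 0.6, 0.8, 1.0)):
    """Map a sample's gray value to mineral density (g/cm^3) via the
    phantom's K2HPO4 reference rods scanned in the same image."""
    a, b = fit_line(gray_refs, densities)
    return a + b * gray_sample
```

    With reference rods reading 40, 60, 80 and 100 gray levels, a sample reading 148.6 maps to 1.486 g/cm3, matching the mean intracoronal density reported above.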

  11. Qualitative and quantitative comparative analyses of 3D lidar landslide displacement field measurements

    NASA Astrophysics Data System (ADS)

    Haugen, Benjamin D.

    Landslide ground surface displacements vary at all spatial scales and are an essential component of kinematic and hazards analyses. Unfortunately, survey-based displacement measurements require personnel to enter unsafe terrain and have limited spatial resolution. And while recent advancements in LiDAR technology provide the ability to remotely measure 3D landslide displacements at high spatial resolution, no single method is widely accepted. A series of qualitative metrics for comparing 3D landslide displacement field measurement methods were developed. The metrics were then applied to nine existing LiDAR techniques, and the top-ranking methods -- Iterative Closest Point (ICP) matching and 3D Particle Image Velocimetry (3DPIV) -- were quantitatively compared using synthetic displacement and control survey data from a slow-moving translational landslide in north-central Colorado. 3DPIV was shown to be the most accurate and reliable point cloud-based 3D landslide displacement field measurement method, and the viability of LiDAR-based techniques for measuring 3D motion on landslides was demonstrated.
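
    At the heart of PIV is correlation matching: find the shift that best aligns a patch of the earlier surface with the later one. The toy below does brute-force integer-shift matching on 2D rasters; a real 3DPIV pipeline works on point-cloud-derived data with subpixel refinement, so this is only a conceptual sketch:

```python
def best_shift(ref, cur, max_shift):
    """Brute-force integer (dy, dx) that maximizes the correlation between
    a reference patch and a later patch -- the core idea of PIV matching.
    ref, cur: equal-size 2D lists of intensities."""
    h, w = len(ref), len(ref[0])
    best_score, best_arg = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += ref[y][x] * cur[yy][xx]
            if best_score is None or score > best_score:
                best_score, best_arg = score, (dy, dx)
    return best_arg
```

    A bright feature that moves one cell downslope between acquisitions is recovered as the shift (1, 0).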

  12. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  13. Mass Spectrometry Provides Accurate and Sensitive Quantitation of A2E

    PubMed Central

    Gutierrez, Danielle B.; Blakeley, Lorie; Goletz, Patrice W.; Schey, Kevin L.; Hanneken, Anne; Koutalos, Yiannis; Crouch, Rosalie K.; Ablonczy, Zsolt

    2010-01-01

    Orange autofluorescence from lipofuscin in the lysosomes of the retinal pigment epithelium (RPE) is a hallmark of aging in the eye. One of the major components of lipofuscin is A2E, the levels of which increase with age and in pathologic conditions, such as Stargardt disease or age-related macular degeneration. In vitro studies have suggested that A2E is highly phototoxic and, more specifically, that A2E and its oxidized derivatives contribute to RPE damage and subsequent photoreceptor cell death. To date, absorption spectroscopy has been the primary method to identify and quantitate A2E. Here, a new mass spectrometric method was developed for the specific detection of low levels of A2E and compared to a traditional method of analysis. The new mass spectrometry method allows the detection and quantitation of approximately 10,000-fold less A2E than absorption spectroscopy and the detection and quantitation of low levels of oxidized A2E, with localization of the oxidation sites. This study suggests that identification and quantitation of A2E from tissue extracts by chromatographic absorption spectroscopy overestimates the amount of A2E. This mass spectrometry approach makes it possible to detect low levels of A2E and its oxidized metabolites with greater accuracy than traditional methods, thereby facilitating a more exact analysis of bis-retinoids in animal models of inherited retinal degeneration as well as in normal and diseased human eyes. PMID:20931136

  14. Recycling and Ambivalence: Quantitative and Qualitative Analyses of Household Recycling among Young Adults

    ERIC Educational Resources Information Center

    Ojala, Maria

    2008-01-01

    Theories about ambivalence, as well as quantitative and qualitative empirical approaches, are applied to obtain an understanding of recycling among young adults. A questionnaire was mailed to 422 Swedish young people. Regression analyses showed that a mix of negative emotions (worry) and positive emotions (hope and joy) about the environmental…

  15. Challenges in Higher Education Research: The Use of Quantitative Tools in Comparative Analyses

    ERIC Educational Resources Information Center

    Reale, Emanuela

    2014-01-01

    Although the value of the comparative perspective for the study of higher education is widely recognised, there is little consensus about specific methodological approaches. Quantitative tools are relevant for addressing comparative analyses since they are expected to reduce complexity and to identify and grade similarities…

  16. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, Norbert; Schaffenroth, Veronika; Nieva, Maria-Fernanda

    2015-08-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields that drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true: observed and model spectra can be brought into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for wide applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab-initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be in focus in the era of the upcoming extremely large telescopes.

  17. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    PubMed

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible. PMID:23887277
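
    The diminished sensitivity of static light scattering at low molar masses follows directly from the dilute-limit relation R ≈ K·c·M: at equal mass concentration the signal scales with molar mass. A one-line sketch of that scaling (K is an optical constant, taken as 1 here for illustration; the second virial term is neglected):

```python
def excess_rayleigh(c, M, K=1.0):
    """Dilute-limit static light scattering signal: R ~ K * c * M.
    c: mass concentration, M: molar mass, K: optical constant."""
    return K * c * M
```

    At the same concentration, a 1 kDa oligomer scatters a thousandth of what a 1 MDa polymer does, which is why oligomeric regions strain SLS detection.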

  18. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  19. Highly accurate thermal flow microsensor for continuous and quantitative measurement of cerebral blood flow.

    PubMed

    Li, Chunyan; Wu, Pei-ming; Wu, Zhizhen; Limnuson, Kanokwan; Mehan, Neal; Mozayan, Cameron; Golanov, Eugene V; Ahn, Chong H; Hartings, Jed A; Narayan, Raj K

    2015-10-01

    Cerebral blood flow (CBF) plays a critical role in the exchange of nutrients and metabolites at the capillary level and is tightly regulated to meet the metabolic demands of the brain. After major brain injuries, CBF normally decreases and supporting the injured brain with adequate CBF is a mainstay of therapy after traumatic brain injury. Quantitative and localized measurement of CBF is therefore critically important for evaluation of treatment efficacy and also for understanding of cerebral pathophysiology. We present here an improved thermal flow microsensor and its operation which provides higher accuracy compared to existing devices. The flow microsensor consists of three components, two stacked-up thin film resistive elements serving as composite heater/temperature sensor and one remote resistive element for environmental temperature compensation. It operates in constant-temperature mode (~2 °C above the medium temperature) providing 20 ms temporal resolution. Compared to previous thermal flow microsensor based on self-heating and self-sensing design, the sensor presented provides at least two-fold improvement in accuracy in the range from 0 to 200 ml/100 g/min. This is mainly achieved by using the stacked-up structure, where the heating and sensing are separated to improve the temperature measurement accuracy by minimization of errors introduced by self-heating. PMID:26256480

  20. Quantitative calcium resistivity based method for accurate and scalable water vapor transmission rate measurement.

    PubMed

    Reese, Matthew O; Dameron, Arrelaine A; Kempe, Michael D

    2011-08-01

    The development of flexible organic light emitting diode displays and flexible thin film photovoltaic devices is dependent on the use of flexible, low-cost, optically transparent and durable barriers to moisture and/or oxygen. It is estimated that this will require high moisture barriers with water vapor transmission rates (WVTR) between 10^-4 and 10^-6 g/m^2/day. Thus there is a need to develop a relatively fast, low-cost, and quantitative method to evaluate such low permeation rates. Here, we demonstrate a method where the resistance changes of patterned Ca films, upon reaction with moisture, enable one to calculate a WVTR between 10 and 10^-6 g/m^2/day or better. Samples are configured with variable aperture size such that the sensitivity and/or measurement time of the experiment can be controlled. The samples are connected to a data acquisition system by means of individual signal cables permitting samples to be tested under a variety of conditions in multiple environmental chambers. An edge card connector is used to connect samples to the measurement wires enabling easy switching of samples in and out of test. This measurement method can be conducted with as little as 1 h of labor time per sample. Furthermore, multiple samples can be measured in parallel, making this an inexpensive and high volume method for measuring high moisture barriers. PMID:21895269
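
    A Ca-test WVTR is derived from the drift of the trace's conductance G = 1/R: as water consumes calcium the film thins, and the thinning rate converts to a water flux through the reaction stoichiometry. The sketch below is a simplified, generic version of this analysis; the resistivity value and the uniform-thinning assumption are assumptions, not the paper's calibration:

```python
M_H2O, M_CA = 18.015, 40.078   # molar masses, g/mol
RHO_CA = 1.55                  # calcium density, g/cm^3
RES_CA = 3.4e-6                # calcium resistivity, ohm*cm (assumed value)

def wvtr_g_per_m2_day(dG_dt, length_cm, width_cm, n=2):
    """WVTR from the conductance drift of a Ca trace, assuming
    Ca + 2 H2O -> Ca(OH)2 + H2 (n = 2 waters per Ca) and uniform
    thinning, so dG/dt = (width / (resistivity * length)) * dh/dt."""
    flux = (n * M_H2O / M_CA) * RHO_CA * RES_CA \
        * (length_cm / width_cm) * abs(dG_dt)   # g/cm^2/s of water
    return flux * 1.0e4 * 86400.0               # -> g/m^2/day
```

    The result scales linearly with the conductance drift, so halving the measured drift halves the inferred WVTR.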

  1. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    PubMed

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score
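
    The Ct-based genotype scoring this abstract describes amounts to comparing the threshold cycles of two allele-specific reactions. The decision rule below is a minimal sketch with hypothetical thresholds and labels, not the authors' ASQ protocol:

```python
def call_genotype(ct_ref, ct_alt, delta=3.0, max_ct=35.0):
    """Score a genotype from the Ct values of reference- and alternate-
    allele qPCR reactions. None or Ct > max_ct means no amplification;
    delta is the minimum Ct separation to call a homozygote (both
    thresholds are illustrative assumptions)."""
    ref = ct_ref is not None and ct_ref <= max_ct
    alt = ct_alt is not None and ct_alt <= max_ct
    if ref and alt and abs(ct_ref - ct_alt) < delta:
        return "het"                 # both alleles amplify comparably
    if ref and (not alt or ct_alt - ct_ref >= delta):
        return "hom_ref"             # reference reaction clearly earlier
    if alt and (not ref or ct_ref - ct_alt >= delta):
        return "hom_alt"             # alternate reaction clearly earlier
    return "no_call"                 # neither reaction amplified
```

    Comparable Cts in both reactions call a heterozygote; a single early reaction calls the corresponding homozygote.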

  2. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping

    PubMed Central

    Lee, Han B.; Schwab, Tanya L.; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L.; Cervera, Roberto Lopez; McNulty, Melissa S.; Bostwick, Hannah S.; Clark, Karl J.

    2016-01-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98–100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score
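
    The scoring step described above reduces to comparing the threshold cycles of two allele-specific reactions. A minimal Python sketch of such a call (the Ct cutoff, the Ct difference, and the function name are illustrative assumptions, not the published ASQ protocol):

```python
# Illustrative sketch (not the published ASQ protocol): calling a germline
# genotype from the threshold cycles (Ct) of two allele-specific qPCR
# reactions. The cutoff and delta below are hypothetical.
CT_CUTOFF = 35.0      # reactions beyond this Ct are treated as no amplification
HET_DELTA = 3.0       # max Ct difference for calling both alleles present

def call_genotype(ct_wt, ct_mut):
    """Score a sample as wild-type, heterozygous, or homozygous mutant."""
    wt_pos = ct_wt is not None and ct_wt < CT_CUTOFF
    mut_pos = ct_mut is not None and ct_mut < CT_CUTOFF
    if wt_pos and mut_pos and abs(ct_wt - ct_mut) <= HET_DELTA:
        return "heterozygous"
    if wt_pos and (not mut_pos or ct_mut - ct_wt > HET_DELTA):
        return "wild-type"
    if mut_pos and (not wt_pos or ct_wt - ct_mut > HET_DELTA):
        return "homozygous mutant"
    return "no call"

print(call_genotype(22.1, 22.8))   # both alleles amplify comparably
print(call_genotype(21.5, None))   # only the wild-type allele amplifies
```

    The same comparison works for end-point fluorescence readings, with an intensity threshold in place of the Ct cutoff.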

  3. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
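
    The performance figures quoted above (sensitivity, specificity, PPV, NPV) all derive from the same confusion-matrix counts. A minimal sketch of that calculation, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true cases detected
        "specificity": tn / (tn + fp),   # fraction of non-cases cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only (not the study's data):
m = diagnostic_metrics(tp=92, fp=5, tn=95, fn=8)
print({k: round(v, 2) for k, v in m.items()})
```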

  4. Proteome Analyses Using Accurate Mass and Elution Time Peptide Tags with Capillary LC Time-of-Flight Mass Spectrometry

    SciTech Connect

    Strittmatter, Eric F.; Ferguson, Patrick L.; Tang, Keqi; Smith, Richard D.

    2003-09-01

    We describe the application of capillary liquid chromatography (LC) time-of-flight (TOF) mass spectrometric instrumentation for the rapid characterization of microbial proteomes. Previously (Lipton et al. Proc. Natl Acad. Sci. USA, 99, 2002, 11049) the peptides from a series of growth conditions of Deinococcus radiodurans were characterized using capillary LC MS/MS and accurate mass measurements, which are logged in an accurate mass and time (AMT) tag database. Using this AMT tag database, detected peptides can be assigned from measurements obtained on a TOF due to the additional use of elution time data as a constraint. When peptide matches are obtained using AMT tags (i.e., using both constraints), unique matches of a mass spectral peak occur 88% of the time. Not only are AMT tag matches unique in most cases, but proteome coverage is also high; ~3500 unique peptide AMT tags are found on average per capillary LC run. From the results of the AMT tag database search, ~900 ORFs were detected using LC-TOFMS, with ~500 ORFs covered by at least two AMT tags. These results indicate that AMT tag database searches with modest mass and elution time criteria can provide proteomic information for approximately one thousand proteins in a single run of <3 hours. The advantage of this method over MS/MS-based techniques is the large number of identifications obtained in a single experiment, as well as the basis for improved quantitation. For MS/MS experiments, the number of peptide identifications is severely restricted because of the time required to dissociate the peptides individually. These results demonstrate the utility of the AMT tag approach using capillary LC-TOF MS instruments, and also show that AMT tags developed using other instrumentation can be effectively utilized.
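
    The two-constraint matching the abstract describes can be sketched as a lookup with paired tolerance windows; the tolerances, tag names, and values below are hypothetical, not the database's contents:

```python
# Illustrative sketch of AMT-tag matching: a detected LC-MS feature is assigned
# to a database peptide only when both its measured mass and its normalized
# elution time (NET) fall within tolerance windows. All values are hypothetical.
MASS_PPM = 10.0       # mass tolerance in parts per million
NET_TOL = 0.02        # normalized elution time tolerance

def match_amt(feature, tags):
    """Return the AMT tags consistent with one (mass, NET) feature."""
    mass, net = feature
    hits = []
    for name, tag_mass, tag_net in tags:
        mass_ok = abs(mass - tag_mass) / tag_mass * 1e6 <= MASS_PPM
        if mass_ok and abs(net - tag_net) <= NET_TOL:
            hits.append(name)
    return hits

tags = [("PEPTIDEA", 1234.567, 0.42), ("PEPTIDEB", 1234.571, 0.80)]
# Mass alone is ambiguous (both tags lie within 10 ppm); elution time resolves it:
print(match_amt((1234.568, 0.41), tags))
```

    This is why the elution-time constraint makes most matches unique: near-isobaric peptides rarely share the same retention behavior.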

  5. Quantitative determination and pattern recognition analyses of bioactive marker compounds from Dipsaci Radix by HPLC.

    PubMed

    Zhao, Bing Tian; Jeong, Su Yang; Moon, Dong Cheul; Son, Kun Ho; Son, Jong Keun; Woo, Mi Hee

    2013-11-01

    In this study, quantitative and pattern recognition analyses were developed using HPLC/UV for the quality evaluation of Dipsaci Radix. For quantitative analysis, five major bioactive compounds were assessed. The separation conditions employed for HPLC/UV were optimized using an ODS C18 column (250 × 4.6 mm, 5 μm) with a gradient of acetonitrile and water as the mobile phase at a flow rate of 1.0 mL/min and a detection wavelength of 212 nm. These methods were fully validated with respect to linearity, accuracy, precision, recovery, and robustness. The HPLC/UV method was applied successfully to the quantification of five major compounds in the extract of Dipsaci Radix. The HPLC analytical method for pattern recognition analysis was validated by repeated analysis of 17 Dipsaci Radix and four Phlomidis Radix samples. The results indicate that the established HPLC/UV method is suitable for quantitative analysis. PMID:23877237

  6. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples

    PubMed Central

    Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.

    • The algorithm is straightforward and intuitive, yet fast, accurate, and robust.
    • It relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • It was tested successfully on real mixtures of up to nine components.

    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411
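
    At its core, this kind of FTIR QA treats the measured spectrum as a linear combination of known component spectra and solves for the weights. The paper's algorithm adds error minimization and local adaptive mesh refinement; only the basic least-squares step is sketched here, on synthetic spectra, for two components:

```python
# Minimal sketch of spectral unmixing by linear least squares: express a
# measured mixture spectrum as x*A + y*B for known pure-component spectra
# A and B, and solve the normal equations for (x, y). Synthetic data only.
def unmix_two(component_a, component_b, mixture):
    """Least-squares weights (x, y) for a two-component linear mixture."""
    aa = sum(a * a for a in component_a)
    bb = sum(b * b for b in component_b)
    ab = sum(a * b for a, b in zip(component_a, component_b))
    am = sum(a * m for a, m in zip(component_a, mixture))
    bm = sum(b * m for b, m in zip(component_b, mixture))
    det = aa * bb - ab * ab                  # nonzero for independent spectra
    return ((am * bb - bm * ab) / det, (bm * aa - am * ab) / det)

a = [1.0, 0.2, 0.0, 0.5]          # absorbance of pure component A
b = [0.0, 0.6, 1.0, 0.1]          # absorbance of pure component B
mix = [0.3 * ai + 0.7 * bi for ai, bi in zip(a, b)]  # synthetic 30/70 mixture
print(unmix_two(a, b, mix))        # recovers weights close to (0.3, 0.7)
```

    Real implementations generalize this to many components (and many more wavenumbers) with a matrix solver, but the principle is unchanged.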

  7. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.

  8. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait

    PubMed Central

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y.; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A.; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies and findings were found on simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  9. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait.

    PubMed

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies and findings were found on simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689
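
    The effect of covariate inclusion that motivates this study can be illustrated numerically: regressing out a covariate shrinks the residual variance against which a QTL effect is tested. A sketch with synthetic data (not the study's measurements):

```python
# Illustrative sketch: removing variance explained by a covariate (e.g. a
# proline level) leaves a much smaller residual variance for QTL testing.
# All numbers are synthetic.
def residual_variance(ys, xs=None):
    """Variance of ys, optionally after regressing out covariate xs."""
    n = len(ys)
    if xs is None:
        m = sum(ys) / n
        return sum((y - m) ** 2 for y in ys) / n
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return sum((y - my - b * (x - mx)) ** 2 for x, y in zip(xs, ys)) / n

proline = [1.0, 2.0, 3.0, 4.0, 5.0]
aroma = [2.1, 3.9, 6.2, 8.0, 9.8]          # strongly covariate-driven phenotype
print(residual_variance(aroma))             # total phenotypic variance
print(residual_variance(aroma, proline))    # much smaller after adjustment
```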

  10. A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using quantitative nuclear magnetic resonance.

    PubMed

    Goldoni, Luca; Beringhelli, Tiziana; Rocchia, Walter; Realini, Natalia; Piomelli, Daniele

    2016-05-15

    Absolute analyte quantification by nuclear magnetic resonance (NMR) spectroscopy is rarely pursued in metabolomics, even though this would allow researchers to compare results obtained using different techniques. Here we report on a new protocol that permits, after pH-controlled serum protein removal, the sensitive quantification (limit of detection [LOD] = 5-25 μM) of hydrophilic nutrients and metabolites in the extracellular medium of cells in cultures. The method does not require the use of databases and uses PULCON (pulse length-based concentration determination) quantitative NMR to obtain results that are significantly more accurate and reproducible than those obtained by CPMG (Carr-Purcell-Meiboom-Gill) sequence or post-processing filtering approaches. Three practical applications of the method highlight its flexibility under different cell culture conditions. We identified and quantified (i) metabolic differences between genetically engineered human cell lines, (ii) alterations in cellular metabolism induced by differentiation of mouse myoblasts into myotubes, and (iii) metabolic changes caused by activation of neurotransmitter receptors in mouse myoblasts. Thus, the new protocol offers an easily implementable, efficient, and versatile tool for the investigation of cellular metabolism and signal transduction. PMID:26898303
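
    A simplified form of the PULCON calculation, assuming identical acquisition settings (number of scans, receiver gain, temperature) for reference and sample; the direction of the 90° pulse-length correction and all numeric values here are illustrative assumptions, not the paper's protocol:

```python
# Simplified PULCON-style sketch: with matched acquisition settings, an unknown
# concentration follows from a reference tube via signal integrals normalized
# per contributing proton, corrected by the 90-degree pulse lengths.
def pulcon_conc(c_ref, integral_unk, integral_ref, n_h_unk, n_h_ref,
                p90_unk, p90_ref):
    """Unknown concentration from a reference measured in a separate tube."""
    per_proton_unk = integral_unk / n_h_unk
    per_proton_ref = integral_ref / n_h_ref
    return c_ref * (per_proton_unk / per_proton_ref) * (p90_unk / p90_ref)

# 2 mM reference; the unknown signal is half as intense per proton and the
# pulse lengths match, so the result is 1.0 mM:
print(pulcon_conc(2.0, 450.0, 300.0, 3, 1, 10.0, 10.0))
```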

  11. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with an electrospray ionization (ESI)-tandem mass spectrometry assay to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R² = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5°C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
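
    Internal-standard quantitation of this kind maps the analyte/IS peak-area ratio onto a linear calibration fitted to spiked standards. A generic sketch (calibration points and peak areas are hypothetical, not the paper's data):

```python
# Generic stable-isotope internal-standard quantitation (illustrative only):
# fit a line to known concentrations vs analyte/IS area ratios, then invert it.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration: known concentrations (uM) vs analyte/IS area ratios
conc = [0.1, 1.0, 10.0, 100.0]
ratio = [0.012, 0.101, 1.005, 9.98]
slope, intercept = fit_line(conc, ratio)

def quantify(area_analyte, area_is):
    """Back-calculate concentration (uM) from the analyte/IS area ratio."""
    return (area_analyte / area_is - intercept) / slope

print(round(quantify(5020.0, 1000.0), 1))  # back-calculated concentration, uM
```

    The isotope-labeled IS co-elutes with the analyte, so matrix suppression cancels in the ratio; that is what makes the calibration transferable across plasma samples.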

  12. Importance of housekeeping gene selection for accurate reverse transcription-quantitative polymerase chain reaction in a wound healing model.

    PubMed

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of January/February 2008 until July/August 2009. We determined that ACTβ, GAPDH, 18S, and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. PMID:20731795
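
    A crude version of the stability screen can be sketched by ranking candidate genes on the spread of their Ct values across conditions; dedicated tools such as geNorm or NormFinder use more elaborate pairwise measures, and the Ct values below are hypothetical:

```python
import statistics

# Illustrative stability screen: rank candidate housekeeping genes by the
# standard deviation of their Ct values across wounded and unwounded samples.
# Hypothetical data; real validation should use geNorm/NormFinder-style tools.
ct_values = {
    "GAPDH": [18.2, 18.9, 20.1, 21.0],   # drifts with wounding -> less stable
    "RPLP2": [19.5, 19.6, 19.4, 19.7],
    "TBP":   [24.1, 24.4, 23.9, 24.3],
}

ranked = sorted(ct_values, key=lambda g: statistics.stdev(ct_values[g]))
print(ranked)  # most stable candidate first
```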

  13. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
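
    Copy-number loss in a qPCR assay of this kind is typically inferred with the delta-delta-Ct method: a heterozygous deletion halves the target/reference dosage. A sketch assuming 100% primer efficiency, i.e. a doubling per cycle (the Ct values are hypothetical, not the assay's data):

```python
# Relative copy-number estimate by the delta-delta-Ct method (illustrative
# sketch; assumes 100% primer efficiency and uses hypothetical Ct values).
def copy_ratio(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Target/reference dosage of a sample relative to a normal control."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# A heterozygous 1p36 deletion halves the dosage of a target such as PRKCZ or
# SKI relative to a reference locus, so the ratio approaches 0.5:
print(copy_ratio(26.0, 24.0, 25.0, 24.0))  # deleted sample -> 0.5
print(copy_ratio(25.0, 24.0, 25.0, 24.0))  # normal sample  -> 1.0
```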

  14. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases.

    PubMed

    Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the importance of epidemiological setting on the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020. PMID:26652272

  15. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes. PMID:22340891

  16. Quantitative proteomic analyses of the response of acidophilic microbial communities to different pH conditions

    SciTech Connect

    Belnap, Christopher P.; Pan, Chongle; Denef, Vincent; Samatova, Nagiza F.; Hettich, Robert L.; Banfield, Jillian F.

    2011-01-01

    Extensive genomic characterization of multi-species acid mine drainage microbial consortia combined with laboratory cultivation has enabled the application of quantitative proteomic analyses at the community level. In this study, quantitative proteomic comparisons were used to functionally characterize laboratory-cultivated acidophilic communities sustained in pH 1.45 or 0.85 conditions. The distributions of all proteins identified for individual organisms indicated biases for either high or low pH, and suggest pH-specific niche partitioning for low abundance bacteria and archaea. Although the proteome of the dominant bacterium, Leptospirillum group II, was largely unaffected by pH treatments, analysis of functional categories indicated proteins involved in amino acid and nucleotide metabolism, as well as cell membrane/envelope biogenesis were overrepresented at high pH. Comparison of specific protein abundances indicates higher pH conditions favor Leptospirillum group III, whereas low pH conditions promote the growth of certain archaea. Thus, quantitative proteomic comparisons revealed distinct differences in community composition and metabolic function of individual organisms during different pH treatments. Proteomic analysis revealed other aspects of community function. Different numbers of phage proteins were identified across biological replicates, indicating stochastic spatial heterogeneity of phage outbreaks. Additionally, proteomic data were used to identify a previously unknown genotypic variant of Leptospirillum group II, an indication of selection for a specific Leptospirillum group II population in laboratory communities. Our results confirm the importance of pH and related geochemical factors in fine-tuning acidophilic microbial community structure and function at the species and strain level, and demonstrate the broad utility of proteomics in laboratory community studies.

  17. Quantitative analyses of glass via laser-induced breakdown spectroscopy in argon

    NASA Astrophysics Data System (ADS)

    Gerhard, C.; Hermann, J.; Mercadier, L.; Loewenthal, L.; Axente, E.; Luculescu, C. R.; Sarnet, T.; Sentis, M.; Viöl, W.

    2014-11-01

    We demonstrate that elemental analysis of glass with a measurement precision of about 10% can be performed via calibration-free laser-induced breakdown spectroscopy. To this end, plasma emission spectra recorded during ultraviolet laser ablation of different glasses are compared to the spectral radiance computed for a plasma in local thermodynamic equilibrium. Using an iterative calculation algorithm, we deduce the relative elemental fractions and the plasma properties from the best agreement between measured and computed spectra. The measurement method is validated in two ways. First, the LIBS measurements are performed on fused silica composed of more than 99.9% of SiO2. Second, the oxygen fractions measured for heavy flint and barite crown glasses are compared to the values expected from the glass-composing oxides. The measured compositions are furthermore compared with those obtained by X-ray photoelectron spectroscopy and energy-dispersive X-ray spectroscopy. It is shown that accurate LIBS analyses require spectra recording with short enough delays between laser pulse and detector gate, when the electron density is larger than 10¹⁷ cm⁻³. The results show that laser-induced breakdown spectroscopy based on accurate plasma modeling is suitable for elemental analysis of complex materials such as glasses, with an analytical performance comparable to or even better than that obtained with standard techniques.

  18. Improved centroid moment tensor analyses in the NIED AQUA (Accurate and QUick Analysis system for source parameters)

    NASA Astrophysics Data System (ADS)

    Kimura, H.; Asano, Y.; Matsumoto, T.

    2012-12-01

    The rapid determination of hypocentral parameters and their transmission to the public are valuable components of disaster mitigation. We have operated an automatic system for this purpose—termed the Accurate and QUick Analysis system for source parameters (AQUA)—since 2005 (Matsumura et al., 2006). In this system, the initial hypocenter, the moment tensor (MT), and the centroid moment tensor (CMT) solutions are automatically determined and posted on the NIED Hi-net Web site (www.hinet.bosai.go.jp). This paper describes improvements made to the AQUA to overcome limitations that became apparent after the 2011 Tohoku Earthquake (05:46:17, March 11, 2011 in UTC). The improvements included the processing of NIED F-net velocity-type strong motion records, because NIED F-net broadband seismographs are saturated for great earthquakes such as the 2011 Tohoku Earthquake. These velocity-type strong motion seismographs provide unsaturated records not only for the 2011 Tohoku Earthquake, but also for recording stations located close to the epicenters of M>7 earthquakes. We used 0.005-0.020 Hz records for M>7.5 earthquakes, in contrast to the 0.01-0.05 Hz records employed in the original system. The initial hypocenters determined based on arrival times picked by using seismograms recorded by NIED Hi-net stations can have large errors in terms of magnitude and hypocenter location, especially for great earthquakes or earthquakes located far from the onland Hi-net network. The size of the 2011 Tohoku Earthquake was initially underestimated in the AQUA to be around M5 at the initial stage of rupture. Numerous aftershocks occurred at the outer rise east of the Japan trench, where a great earthquake is anticipated to occur. Hence, we modified the system to repeat the MT analyses assuming a larger size, for all earthquakes for which the magnitude was initially underestimated. We also broadened the search range of centroid depth for earthquakes located far from the onland Hi

  19. Insight in Genome-Wide Association of Metabolite Quantitative Traits by Exome Sequence Analyses

    PubMed Central

    Verhoeven, Aswin; Dharuri, Harish; Amin, Najaf; van Klinken, Jan Bert; Karssen, Lennart C.; de Vries, Boukje; Meissner, Axel; Göraler, Sibel; van den Maagdenberg, Arn M. J. M.; Deelder, André M.; 't Hoen, Peter A. C.; van Duijn, Cornelia M.; van Dijk, Ko Willems

    2015-01-01

    Metabolite quantitative traits carry great promise for epidemiological studies, and their genetic background has been addressed using Genome-Wide Association Studies (GWAS). Thus far, the role of less common variants has not been exhaustively studied. Here, we set out a GWAS for metabolite quantitative traits in serum, followed by exome sequence analysis to zoom in on putative causal variants in the associated genes. 1H Nuclear Magnetic Resonance (1H-NMR) spectroscopy experiments yielded successful quantification of 42 unique metabolites in 2,482 individuals from The Erasmus Rucphen Family (ERF) study. Heritability of metabolites was estimated by SOLAR. GWAS was performed by linear mixed models, using HapMap imputations. Based on physical vicinity and pathway analyses, candidate genes were screened for coding region variation using exome sequence data. Heritability estimates for metabolites ranged between 10% and 52%. GWAS replicated three known loci at metabolome-wide significance: CPS1 with glycine (P-value = 1.27×10^−32), PRODH with proline (P-value = 1.11×10^−19), and SLC16A9 with carnitine level (P-value = 4.81×10^−14), and uncovered a novel association between DMGDH and dimethylglycine level (P-value = 1.65×10^−19). In addition, we found three novel, suggestively significant loci: TNP1 with pyruvate (P-value = 1.26×10^−8), KCNJ16 with 3-hydroxybutyrate (P-value = 1.65×10^−8), and the 2p12 locus with valine (P-value = 3.49×10^−8). Exome sequence analysis identified potentially causal coding and regulatory variants located in the genes CPS1, KCNJ2 and PRODH, and revealed allelic heterogeneity for CPS1 and PRODH. Combined GWAS and exome analysis of metabolites detected by high-resolution 1H-NMR is a robust approach to uncover metabolite quantitative trait loci (mQTL) and the likely causative variants in these loci. It is anticipated that insight into the genetics of intermediate phenotypes will provide additional

  20. Computer-assisted qualitative and quantitative analyses of energy-related complex mixtures

    SciTech Connect

    Stamoudis, V.C.; Picel, K.C.

    1985-10-24

    Recent advances in the efficiency of gas chromatography (GC) columns and improvements in instrument hardware and computer software have facilitated rapid and accurate analysis of complex organic mixtures. By applying manufacturer-supplied software (calibrated-peak methods) and custom software based on retention indices (RI) (Demirgian, 1984; Stamoudis and Demirgian, 1985), most of the classes of chemicals in these mixtures can be rapidly analyzed both qualitatively and quantitatively. Sample prefractionation is essential because it produces simpler mixtures for GC analysis, and it separates constituents by chemical class, which aids automated identification. In the analysis of any new material, existing sample preparation procedures are validated for the material or modified to produce well-resolved chemical class fractions. Representative samples and their subfractions are characterized by GC/mass spectrometry (GC/MS) before analysis by computer-assisted GC. During our studies of the toxicological interactions of chemicals in complex mixtures, we have isolated, subfractionated, and characterized the neutral components of a variety of energy-related materials. Here we present chemical characterization and mutagenicity data of selected fractions from three coal-gasification by-product tars, two from pilot-plant gasifiers, and one from a commercial scale gasifier, and analogous data for aromatic subfractions from two additional pilot gasifiers, as well as one from the commercial gasifier. 22 refs., 3 figs., 2 tabs.

  1. Genome-wide Linkage Analyses of Quantitative and Categorical Autism Subphenotypes

    PubMed Central

    Liu, Xiao-Qing; Paterson, Andrew D.; Szatmari, Peter

    2008-01-01

    Background The search for susceptibility genes in autism and autism spectrum disorders (ASD) has been hindered by the possible small effects of individual genes and by genetic (locus) heterogeneity. To overcome these obstacles, one method is to use autism-related subphenotypes instead of the categorical diagnosis of autism since they may be more directly related to the underlying susceptibility loci. Another strategy is to analyze subsets of families that meet certain clinical criteria to reduce genetic heterogeneity. Methods In this study, using 976 multiplex families from the Autism Genome Project consortium, we performed genome-wide linkage analyses on two quantitative subphenotypes, the total scores of the reciprocal social interaction domain and the restricted, repetitive, and stereotyped patterns of behavior domain from the Autism Diagnostic Interview-Revised. We also selected subsets of ASD families based on four binary subphenotypes, delayed onset of first words, delayed onset of first phrases, verbal status, and IQ ≥ 70. Results When the ASD families with IQ ≥ 70 were used, a logarithm of odds (LOD) score of 4.01 was obtained on chromosome 15q13.3-q14, which was previously linked to schizophrenia. We also obtained a LOD score of 3.40 on chromosome 11p15.4-p15.3 using the ASD families with delayed onset of first phrases. No significant evidence for linkage was obtained for the two quantitative traits. Conclusions This study demonstrates that selection of informative subphenotypes to define a homogeneous set of ASD families could be very important in detecting the susceptibility loci in autism. PMID:18632090

  2. Quantitative sleep stage analyses as a window to neonatal neurologic function

    PubMed Central

    Burns, Joseph W.; Barks, John D.E.; Chervin, Ronald D.

    2014-01-01

    Objective: To test the hypothesis that neonatal sleep physiology reflects cerebral dysfunction, we compared neurologic examination scores to the proportions of recorded sleep/wake states, sleep depth, and sleep fragmentation in critically ill neonates. Methods: Newborn infants (≥35 weeks gestation) who required intensive care and were at risk for seizures were monitored with 8- to 12-hour polysomnograms (PSGs). For each infant, the distribution of sleep-wake states, entropy of the sequence of state transitions, and delta power from the EEG portion of the PSG were quantified. Standardized neurologic examination (Thompson) scores were calculated. Results: Twenty-eight infants participated (mean gestational age 39.0 ± 1.6 weeks). An increased fraction of quiet sleep correlated with worse neurologic examination scores (Spearman rho = 0.54, p = 0.003), but the proportion of active sleep did not (p > 0.1). Higher state entropy corresponded to better examination scores (rho = −0.43, p = 0.023). Decreased delta power during quiet sleep, but not the power at other frequencies, was also associated with worse examination scores (rho = −0.48, p = 0.009). These findings retained significance after adjustment for gestational age or postmenstrual age at the time of the PSG. Sleep stage transition probabilities were also related to examination scores. Conclusions: Among critically ill neonates at risk for CNS dysfunction, several features of recorded sleep—including analyses of sleep stages, depth, and fragmentation—showed associations with neurologic examination scores. Quantitative PSG analyses may add useful objective information to the traditional neurologic assessment of critically ill neonates. PMID:24384644
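    One of the quantities above, the entropy of the sequence of sleep-state transitions, can be sketched in a simple form. The snippet below is an illustrative reading (Shannon entropy of the observed transition distribution), not necessarily the exact statistic used in the study, and the state labels and sequences are invented:

```python
import math
from collections import Counter

def transition_entropy(states):
    """Shannon entropy (bits) of the distribution of consecutive state pairs."""
    pairs = list(zip(states, states[1:]))
    counts = Counter(pairs)
    n = len(pairs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A consolidated record that stays in one state has zero transition entropy;
# a fragmented record cycling through several states scores higher.
consolidated = ["QS"] * 10
fragmented = ["QS", "AS", "W", "QS", "AS", "W", "QS", "AS", "W", "QS"]
```

Under this reading, higher entropy corresponds to a richer, less stereotyped repertoire of state transitions, consistent with the paper's finding that higher state entropy tracked better examination scores.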

  3. Deconvoluting complex tissues for expression quantitative trait locus-based analyses

    PubMed Central

    Seo, Ji-Heui; Li, Qiyuan; Fatima, Aquila; Eklund, Aron; Szallasi, Zoltan; Polyak, Kornelia; Richardson, Andrea L.; Freedman, Matthew L.

    2013-01-01

    Breast cancer genome-wide association studies have pinpointed dozens of variants associated with breast cancer pathogenesis. The majority of risk variants, however, are located outside of known protein-coding regions. Therefore, identifying which genes the risk variants are acting through presents an important challenge. Variants that are associated with mRNA transcript levels are referred to as expression quantitative trait loci (eQTLs). Many studies have demonstrated that eQTL-based strategies provide a direct way to connect a trait-associated locus with its candidate target gene. Performing eQTL-based analyses in human samples is complicated because of the heterogeneous nature of human tissue. We addressed this issue by devising a method to computationally infer the fraction of cell types in normal human breast tissues. We then applied this method to 13 known breast cancer risk loci, which we hypothesized were eQTLs. For each risk locus, we took all known transcripts within a 2 Mb interval and performed an eQTL analysis in 100 reduction mammoplasty cases. A total of 18 significant associations were discovered (eight in the epithelial compartment and 10 in the stromal compartment). This study highlights the ability to perform large-scale eQTL studies in heterogeneous tissues. PMID:23650637
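    The core eQTL association test can be sketched as a per-transcript regression of expression on genotype dosage. This is an illustrative simplification on simulated data (the sample size, effect size, and allele frequencies below are made up), not the authors' pipeline, which additionally deconvolves cell-type fractions:

```python
import numpy as np

# Simulated cohort: genotype dosage (0/1/2 copies of the variant allele)
# and a transcript whose expression depends on it plus noise.
rng = np.random.default_rng(2)
n = 100
genotype = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])
expression = 5.0 + 0.8 * genotype + rng.normal(0.0, 1.0, n)

# Least-squares fit of expression ~ intercept + genotype, with a
# t statistic for the genotype slope (the eQTL effect).
X = np.column_stack([np.ones(n), genotype.astype(float)])
beta, res, *_ = np.linalg.lstsq(X, expression, rcond=None)
sigma2 = res[0] / (n - 2)                            # residual variance
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])  # standard error of slope
t_stat = beta[1] / se
```

In a real analysis this regression would be repeated for every transcript in the 2 Mb interval around each risk locus, with multiple-testing correction over the resulting p-values.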

  4. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

    SciTech Connect

    CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.

    1999-10-01

    The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
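    The per-window CLS fit and the pooling step can be sketched as follows. Inverse-variance weighting is one plausible reading of pooling "in a statistically efficient manner" (an assumption, not the authors' exact scheme), and all spectra below are simulated:

```python
import numpy as np

# Simulate CLS in several spectral windows, then pool each element's
# concentration estimates across windows by inverse-variance weighting.
rng = np.random.default_rng(0)
n_windows, n_channels, n_elements = 5, 30, 3
true_conc = np.array([2.0, 0.5, 1.0])

estimates, variances = [], []
for w in range(n_windows):
    K = rng.random((n_channels, n_elements))      # pure-component spectra, window w
    noise_sd = 0.01 * (w + 1)                     # some windows are noisier
    y = K @ true_conc + rng.normal(0.0, noise_sd, n_channels)
    c_hat, res, *_ = np.linalg.lstsq(K, y, rcond=None)
    sigma2 = res[0] / (n_channels - n_elements)   # residual variance in this window
    var = sigma2 * np.diag(np.linalg.inv(K.T @ K))
    estimates.append(c_hat)
    variances.append(var)

E, V = np.array(estimates), np.array(variances)
pooled = (E / V).sum(axis=0) / (1.0 / V).sum(axis=0)  # inverse-variance weighted mean
```

The pooled vector gives one concentration per element, so no manual adjudication between discrepant windows is needed; noisy windows are automatically down-weighted.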

  5. GIbPSs: a toolkit for fast and accurate analyses of genotyping-by-sequencing data without a reference genome.

    PubMed

    Hapke, A; Thiele, D

    2016-07-01

    Genotyping-by-sequencing (GBS) and related methods are increasingly used for studies of non-model organisms from population genetic to phylogenetic scales. We present GIbPSs, a new genotyping toolkit for the analysis of data from various protocols such as RAD, double-digest RAD, GBS, and two-enzyme GBS without a reference genome. GIbPSs can handle paired-end GBS data and is able to assign reads from both strands of a restriction fragment to the same locus. GIbPSs is most suitable for population genetic and phylogeographic analyses. It avoids genotyping errors due to indel variation by identifying and discarding affected loci. GIbPSs creates a genotype database that offers rich functionality for data filtering and export in numerous formats. We performed comparative analyses of simulated and real GBS data with GIbPSs and another program, pyRAD. This program accounts for indel variation by aligning homologous sequences. GIbPSs performed better than pyRAD in several aspects. It required much less computation time and displayed higher genotyping accuracy. GIbPSs retained smaller numbers of loci overall in analyses of real GBS data. It nevertheless delivered more complete genotype matrices with greater locus overlap between individuals and greater numbers of loci sampled in all individuals. PMID:26858004

  6. A quantitative method to analyse an open-ended questionnaire: A case study about the Boltzmann Factor

    NASA Astrophysics Data System (ADS)

    Rosario Battaglia, Onofrio; Di Paola, Benedetto

    2016-05-01

    This paper describes a quantitative method to analyse an open-ended questionnaire. Student responses to a specially designed written questionnaire are quantitatively analysed by non-hierarchical clustering (the k-means method). Through this we can characterise students' behaviour with respect to their expertise in formulating explanations for phenomena or processes and/or in using a given model in different contexts. The physics topic is the Boltzmann factor, which allows the students to gain a unifying view of different phenomena in different contexts.
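    The clustering step can be illustrated with a minimal NumPy k-means on made-up two-dimensional response codings (the paper's actual coding of student answers into feature vectors is not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each response vector to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids (keep old one if a cluster goes empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Toy data: two groups of "students" coded on two answer dimensions
X = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
              [1.0, 1.1], [0.9, 1.0], [1.1, 0.9]])
centroids, labels = kmeans(X, k=2)
```

On well-separated codings like these, the two recovered clusters would correspond to the two behaviour profiles the questionnaire is designed to distinguish.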

  7. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.

  8. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042
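    The TAF idea in the two records above can be made concrete on simulated counts. A bin's TAF is its mean count divided by the overall mean, so multiplying an annual-average volume by the TAF apportions it to that bin; the diurnal shape and all counts below are invented for illustration:

```python
import numpy as np

# One year of simulated hourly traffic counts with a diurnal cycle.
rng = np.random.default_rng(1)
hours = np.tile(np.arange(24), 365)                         # hour-of-day per record
diurnal = 1000 + 600 * np.sin((hours - 6) / 24.0 * 2 * np.pi)
counts = rng.poisson(diurnal).astype(float)

# Hour-of-day temporal allocation factors: bin mean / overall mean.
overall_mean = counts.mean()
taf = np.array([counts[hours == h].mean() for h in range(24)]) / overall_mean

# Apportion the annual-average hourly volume to hour-of-day estimates.
hourly_estimates = overall_mean * taf
```

The same construction extends to month-of-year and day-of-week factors, and to separate factor sets for total versus commercial vehicles, as the study recommends.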

  9. Quantitative Content Analysis Procedures to Analyse Students' Reflective Essays: A Methodological Review of Psychometric and Edumetric Aspects

    ERIC Educational Resources Information Center

    Poldner, E.; Simons, P. R. J.; Wijngaards, G.; van der Schaaf, M. F.

    2012-01-01

    Reflective essays are a common way to develop higher education students' reflection ability. Researchers frequently analyse reflective essays based on quantitative content analysis procedures (QCA). However, the quality criteria that should be met in QCA are not straightforward. This article aims to: (1) develop a framework of quality requirements…

  10. Study Quality in SLA: An Assessment of Designs, Analyses, and Reporting Practices in Quantitative L2 Research

    ERIC Educational Resources Information Center

    Plonsky, Luke

    2013-01-01

    This study assesses research and reporting practices in quantitative second language (L2) research. A sample of 606 primary studies, published from 1990 to 2010 in "Language Learning and Studies in Second Language Acquisition," was collected and coded for designs, statistical analyses, reporting practices, and outcomes (i.e., effect…

  11. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses.

    PubMed

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425

  12. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses

    PubMed Central

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425

  13. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    PubMed

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose. PMID:27410113

  14. The Neglected Side of the Coin: Quantitative Benefit-risk Analyses in Medical Imaging.

    PubMed

    Zanzonico, Pat B

    2016-03-01

    While it is implicitly recognized that the benefits of diagnostic imaging far outweigh any theoretical radiogenic risks, quantitative estimates of the benefits are rarely, if ever, juxtaposed with quantitative estimates of risk. This alone - expression of benefit in purely qualitative terms versus expression of risk in quantitative, and therefore seemingly more certain, terms - may well contribute to a skewed sense of the relative benefits and risks of diagnostic imaging among healthcare providers as well as patients. The current paper, therefore, briefly compares the benefits of diagnostic imaging in several cases, based on actual mortality or morbidity data if ionizing radiation were not employed, with theoretical estimates of radiogenic cancer mortality based on the "linear no-threshold" (LNT) dose-response model. PMID:26808890
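    For concreteness, a back-of-envelope LNT calculation of the kind such benefit-risk comparisons rest on. The nominal coefficient of roughly 5% excess lifetime cancer mortality per sievert and the example dose are assumptions for illustration, not values taken from this paper:

```python
# Hypothetical LNT-model estimate. RISK_PER_SV is the commonly cited nominal
# ~5%/Sv lifetime cancer mortality coefficient (an assumption for illustration).
RISK_PER_SV = 0.05

def lnt_excess_risk(dose_msv: float) -> float:
    """Theoretical excess lifetime cancer mortality risk under the LNT model."""
    return RISK_PER_SV * (dose_msv / 1000.0)

# e.g., an examination delivering about 10 mSv
risk_10_msv = lnt_excess_risk(10.0)   # 0.0005, i.e., about 1 in 2000
```

The paper's point is that such a theoretical risk number should be juxtaposed with an equally quantitative estimate of the mortality or morbidity avoided by performing the examination.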

  15. Self-aliquoting microarray plates for accurate quantitative matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Pabst, Martin; Fagerer, Stephan R; Köhling, Rudolf; Küster, Simon K; Steinhoff, Robert; Badertscher, Martin; Wahl, Fabian; Dittrich, Petra S; Jefimovs, Konstantins; Zenobi, Renato

    2013-10-15

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a fast analysis tool employed for the detection of a broad range of analytes. However, MALDI-MS has a reputation of not being suitable for quantitative analysis. Inhomogeneous analyte/matrix co-crystallization, spot-to-spot inhomogeneity, as well as a typically low number of replicates are the main contributing factors. Here, we present a novel MALDI sample target for quantitative MALDI-MS applications, which addresses the limitations mentioned above. The platform is based on the recently developed microarray for mass spectrometry (MAMS) technology and contains parallel lanes of hydrophilic reservoirs. Samples are not pipetted manually but deposited by dragging one or several sample droplets with a metal sliding device along these lanes. Sample is rapidly and automatically aliquoted into the sample spots due to the interplay of hydrophilic/hydrophobic interactions. With a few microliters of sample, it is possible to aliquot up to 40 replicates within seconds, each aliquot containing just 10 nL. The analyte droplet dries immediately and homogeneously, and consumption of the whole spot during MALDI-MS analysis is typically accomplished within a few seconds. We evaluated these sample targets with respect to their suitability for use with different samples and matrices. Furthermore, we tested their application for generating calibration curves of standard peptides with α-cyano-4-hydroxycinnamic acid as a matrix. For angiotensin II and [Glu1]-fibrinopeptide B we achieved coefficients of determination (r^2) greater than 0.99 without the use of internal standards. PMID:24003910
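    The calibration-curve evaluation reported above amounts to a least-squares line through (concentration, mean signal) points and its coefficient of determination. The intensities below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: peptide concentration vs. averaged MALDI signal.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([110.0, 205.0, 395.0, 820.0, 1605.0])

# Least-squares line and coefficient of determination r^2.
slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

With many automatic nanoliter aliquots per concentration, the averaged signal becomes stable enough that r^2 > 0.99 is achievable without internal standards, which is the paper's central claim.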

  16. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  17. Robust Algorithm for Alignment of Liquid Chromatography-Mass Spectrometry Analyses in an Accurate Mass and Time Tag Data Analysis Pipeline

    SciTech Connect

    Jaitly, Navdeep; Monroe, Matthew E.; Petyuk, Vladislav A.; Clauss, Therese RW; Adkins, Joshua N.; Smith, Richard D.

    2006-11-01

    Liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) has become a standard technique for analyzing complex peptide mixtures to determine composition and relative quantity. Several high-throughput proteomics techniques attempt to combine complementary results from multiple LC-MS and LC-MS/MS analyses to provide more comprehensive and accurate results. To effectively collate results from these techniques, variations in mass and elution time measurements between related analyses are corrected by using algorithms designed to align the various types of results: LC-MS/MS vs. LC-MS/MS, LC-MS vs. LC-MS/MS, and LC-MS vs. LC-MS. Described herein are new algorithms referred to collectively as Liquid Chromatography based Mass Spectrometric Warping and Alignment of Retention times of Peptides (LCMSWARP) which use a dynamic elution time warping approach similar to traditional algorithms that correct variation in elution time using piecewise linear functions. LCMSWARP is compared to a linear alignment algorithm that assumes a linear transformation of elution time between analyses. LCMSWARP also corrects for drift in mass measurement accuracies that are often seen in an LC-MS analysis due to factors such as analyzer drift. We also describe the alignment of LC-MS results and provide examples of alignment of analyses from different chromatographic systems to demonstrate more complex transformation functions.
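    As a toy illustration of the piecewise linear idea (not LCMSWARP itself), matched anchor peptides between two analyses define a warping of elution times by linear interpolation between anchors; all times below are invented:

```python
import numpy as np

# Elution times (minutes) of the same anchor peptides in two LC-MS runs.
anchors_run = np.array([5.0, 20.0, 40.0, 60.0])   # run to be aligned
anchors_ref = np.array([6.0, 22.5, 41.0, 59.0])   # reference run

def warp(t):
    """Map run elution times onto the reference time axis, interpolating
    linearly between anchors (np.interp clamps values outside the range
    to the boundary anchors)."""
    return np.interp(t, anchors_run, anchors_ref)

aligned = warp(np.array([5.0, 12.5, 30.0, 60.0]))
```

A full aligner would additionally choose the anchor matches themselves (e.g., by dynamic programming over candidate mass-and-time matches) and apply an analogous correction to mass measurement drift.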

  18. Accurate quantitative 13C NMR spectroscopy: repeatability over time of site-specific 13C isotope ratio determination.

    PubMed

    Caytan, Elsa; Botosoa, Eliot P; Silvestre, Virginie; Robins, Richard J; Akoka, Serge; Remaud, Gérald S

    2007-11-01

    The stability over time (repeatability) for the determination of site-specific 13C/12C ratios at natural abundance by quantitative 13C NMR spectroscopy has been tested on three probes: enriched bilabeled [1,2-13C2]ethanol; ethanol at natural abundance; and vanillin at natural abundance. It is shown in all three cases that the standard deviation for a series of measurements taken every 2-3 months over periods between 9 and 13 months is equal to or smaller than the standard deviation calculated from 5-10 replicate measurements made on a single sample. The precision which can be achieved using the present analytical 13C NMR protocol is higher than the prerequisite value of 1-2 per thousand for the determination of site-specific 13C/12C ratios at natural abundance (13C-SNIF-NMR). Hence, this technique permits the discrimination of very small variations in 13C/12C ratios between carbon positions, as found in biogenic natural products. This observed stability over time in 13C NMR spectroscopy indicates that further improvements in precision will depend primarily on improved signal-to-noise ratio. PMID:17900175

  19. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction.

    PubMed

    Motomura, Kaori; Inoue, Kimiko; Ogura, Atsuo

    2016-06-17

    Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high expression group consisting of type 1 and 2 colonies and a lower expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688

  20. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction

    PubMed Central

    MOTOMURA, Kaori; INOUE, Kimiko; OGURA, Atsuo

    2016-01-01

    Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high expression group consisting of type 1 and 2 colonies and a lower expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688
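    The "geometric averaging method" in the two records above is in the spirit of geNorm-style stability ranking. The sketch below (function name `stability_M` and all expression values are hypothetical) computes, for each candidate reference gene, the mean standard deviation of its log2 expression ratios against every other candidate; lower M means more stable:

```python
import numpy as np

def stability_M(expr):
    """geNorm-style stability: expr is (n_genes, n_samples), all values > 0.
    Returns M per gene; lower M indicates a more stable reference gene."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[0]
    M = np.empty(n_genes)
    for i in range(n_genes):
        sds = [np.std(log_expr[i] - log_expr[j], ddof=1)
               for j in range(n_genes) if j != i]
        M[i] = np.mean(sds)
    return M

# Simulated candidates: two tightly behaved genes and one erratic gene.
rng = np.random.default_rng(0)
n_samples = 8
stable = 2.0 ** rng.normal(10.0, 0.05, (2, n_samples))
noisy = 2.0 ** rng.normal(10.0, 1.0, (1, n_samples))
M = stability_M(np.vstack([stable, noisy]))
```

Ranking candidates by M and keeping the most stable ones is how a gene like Gapdh would be selected over an unstable one like Rn18s before normalizing marker-gene expression.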

  1. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR

    PubMed Central

    Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial flower bulb crop, and qRT-PCR is an extremely important technique for tracking gene expression levels. The need for suitable reference genes for normalization has therefore become increasingly pressing, since the expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes, including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, was evaluated in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for the normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will support more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  2. Sample Preparation Approaches for iTRAQ Labeling and Quantitative Proteomic Analyses in Systems Biology.

    PubMed

    Spanos, Christos; Moore, J Bernadette

    2016-01-01

    Among a variety of global quantification strategies utilized in mass spectrometry (MS)-based proteomics, isobaric tags for relative and absolute quantitation (iTRAQ) are an attractive option for examining the relative amounts of proteins in different samples. The inherent complexity of mammalian proteomes and the diversity of protein physicochemical properties mean that complete proteome coverage is still unlikely from a single analytical method. Numerous options exist for reducing protein sample complexity and resolving digested peptides prior to MS analysis. Indeed, the reliability and efficiency of protein identification and quantitation from an iTRAQ workflow strongly depend on sample preparation upstream of MS. Here we describe our methods for: (1) total protein extraction from immortalized cells; (2) subcellular fractionation of murine tissue; (3) protein sample desalting, digestion, and iTRAQ labeling; (4) peptide separation by strong cation-exchange high-performance liquid chromatography; and (5) peptide separation by isoelectric focusing. PMID:26700038

  3. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  4. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. 
Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and
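
    The pKas and reduction potentials benchmarked above follow from computed reaction free energies through standard thermodynamic relations. A minimal sketch, using placeholder ΔG values (not results from the study) and an assumed absolute SHE potential of 4.28 V, one commonly used literature value:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)
T = 298.15           # temperature, K
F = 96.48533         # Faraday constant, kJ/(mol V)

def pka(dg_deprot_kj):
    """pKa from a deprotonation free energy: pKa = dG / (RT ln 10)."""
    return dg_deprot_kj / (R * T * math.log(10))

def reduction_potential(dg_red_kj, n_electrons=1, e_she=4.28):
    """Reduction potential vs. SHE from the free energy of reduction:
    E = -dG/(nF) - E_SHE(abs), with E_SHE(abs) an assumed value."""
    return -dg_red_kj / (n_electrons * F) - e_she

# placeholder free energies, chosen only to exercise the conversions
example_pka = pka(40.0)                    # about 7
example_e = reduction_potential(-420.0)    # about +0.07 V vs. SHE
```

    The RMS errors quoted in the abstract (80 mV, ~2 log units) are errors in exactly these converted quantities.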

  5. NON-INVASIVE RADIOIODINE IMAGING FOR ACCURATE QUANTITATION OF NIS REPORTER GENE EXPRESSION IN TRANSPLANTED HEARTS

    PubMed Central

    Ricci, Davide; Mennander, Ari A; Pham, Linh D; Rao, Vinay P; Miyagi, Naoto; Byrne, Guerard W; Russell, Stephen J; McGregor, Christopher GA

    2008-01-01

    Objectives: We studied the concordance of transgene expression in the transplanted heart using bicistronic adenoviral vectors coding for a transgene of interest (human carcinoembryonic antigen: hCEA; beta human chorionic gonadotropin: βhCG) and for a marker imaging transgene (human sodium iodide symporter: hNIS). Methods: Inbred Lewis rats were used for syngeneic heterotopic cardiac transplantation. Donor rat hearts were perfused ex vivo for 30 minutes prior to transplantation with University of Wisconsin (UW) solution (n=3) or with 10⁹ pfu/ml of adenovirus expressing hNIS (Ad-NIS; n=6), hNIS-hCEA (Ad-NIS-CEA; n=6) or hNIS-βhCG (Ad-NIS-CG; n=6). On post-operative days (POD) 5, 10 and 15, all animals underwent micro-SPECT/CT imaging of the donor hearts after tail vein injection of 1000 μCi of ¹²³I, with blood sample collection for hCEA and βhCG quantification. Results: Significantly higher image intensity was noted in the hearts perfused with Ad-NIS (1.1±0.2; 0.9±0.07), Ad-NIS-CEA (1.2±0.3; 0.9±0.1) and Ad-NIS-CG (1.1±0.1; 0.9±0.1) compared to the UW group (0.44±0.03; 0.47±0.06) on POD 5 and 10 (p<0.05). Serum levels of hCEA and βhCG increased in animals showing high cardiac ¹²³I uptake, but not in those with lower uptake. Above this threshold, image intensities correlated well with serum levels of hCEA and βhCG (R²=0.99 and R²=0.96, respectively). Conclusions: These data demonstrate that hNIS is an excellent reporter gene for the transplanted heart. The expression level of hNIS can be accurately and non-invasively monitored by serial radioisotopic single photon emission computed tomography (SPECT) imaging. High concordance was demonstrated between imaging and soluble marker peptides at the maximum of transgene expression on POD 5. PMID:17980613

  6. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    PubMed

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds); all samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper), and the minimum number of reference genes necessary for normalization was also determined. A single primer pair was selected for amplifying the 18S rRNA gene, whereas the primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms), and 18S 2 and ACT 3 were selected for normalization in axillary buds. No primer pair could be selected as the reference for the seed sample set, and analysing all samples as a single set did not yield any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs using the FLOWERING LOCUS T gene, for use in the normalization of gene expression in kiwifruit. PMID:26897117

  7. Fluvial drainage networks: the fractal approach as an improvement of quantitative geomorphic analyses

    NASA Astrophysics Data System (ADS)

    Melelli, Laura; Liucci, Luisa; Vergari, Francesca; Ciccacci, Sirio; Del Monte, Maurizio

    2014-05-01

    Drainage basins are primary landscape units for geomorphological investigations. Both hillslopes and the river drainage system are fundamental components in drainage basin analysis. Like other geomorphological systems, drainage basins tend toward an equilibrium condition in which the sequence of erosion, transport and sedimentation approaches a state of minimum energy expenditure. This state is revealed by a typical geometry of landforms and of the drainage net. Several morphometric indexes can measure how far a drainage basin is from the theoretical equilibrium configuration, revealing possible external disturbance. In active tectonic areas, drainage basins are of primary importance for highlighting the style, amount and rate of tectonic impulses, and morphometric indexes allow the tectonic activity classes of different sectors in a study area to be estimated. Moreover, river networks are characterized by a self-similar structure; this promotes the use of fractal theory to investigate the system. In this study, fractal techniques are employed together with quantitative geomorphological analysis to study the Upper Tiber Valley (UTV), a tectonic intermontane basin located in the northern Apennines (Umbria, central Italy). The area is the result of different tectonic phases. From the Late Pliocene until the present, the UTV has been strongly controlled by regional uplift and by an extensional phase, with different sets of normal faults playing a fundamental role in basin morphology. Thirty-four basins are taken into account for the quantitative analysis, twenty on the left side of the basin and the others on the right side. Using the fractal dimensions of the drainage networks, the results of Horton's laws, concavity and steepness indexes, and hypsometric curves, this study aims to obtain an evolutionary model of the UTV, in which the uplift is compared to local subsidence induced by normal fault activity.
    The results highlight a well-defined difference between western and eastern tributary basins
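
    The fractal dimension of a digitized drainage network is commonly estimated by box counting. A minimal sketch (the helper name is ours), validated on a line segment whose dimension should come out close to 1:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting dimension of a planar point set,
    e.g. pixels of a digitized drainage network: count occupied boxes
    N(s) at each box size s and fit log N(s) ~ -D log s.
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in box_sizes:
        # assign each point to a box index and count distinct boxes
        boxes = np.unique(np.floor(points / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# sanity check on a straight line segment, whose dimension is 1
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, t])
d = box_counting_dimension(line, box_sizes=[0.2, 0.1, 0.05, 0.025, 0.0125])
```

    A branching network fills the plane more densely than a single channel, so its estimated D falls between 1 and 2.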

  8. Quantitative trait loci analyses and RNA-seq identify genes affecting stress response in rainbow trout

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic analyses have the potential to impact aquaculture production traits by identifying markers as proxies for traits which are expensive or difficult to measure and characterizing genetic variation and biochemical mechanisms underlying phenotypic variation. One such trait is the response of rai...

  9. Genome-Wide Pathway Association Studies of Multiple Correlated Quantitative Phenotypes Using Principle Component Analyses

    PubMed Central

    Zhang, Feng; Guo, Xiong; Wu, Shixun; Han, Jing; Liu, Yongjun; Shen, Hui; Deng, Hong-Wen

    2012-01-01

    Genome-wide pathway association studies provide novel insight into the biological mechanisms underlying complex diseases. Current pathway association studies primarily focus on a single important disease phenotype, which is sometimes insufficient to characterize the clinical manifestations of complex diseases. We present a multi-phenotype pathway association study (MPPAS) approach using principal component analysis (PCA). In our approach, PCA is first applied to multiple correlated quantitative phenotypes to extract a set of orthogonal phenotypic components. The extracted phenotypic components are then used for pathway association analysis instead of the original quantitative phenotypes. Four statistics were proposed for PCA-based MPPAS in this study. Simulations using real data from the HapMap project were conducted to evaluate the power and type I error rates of PCA-based MPPAS under various scenarios considering sample sizes and additive and interactive genetic effects. A real genome-wide association study data set of bone mineral density (BMD) at the hip and spine was also analyzed by PCA-based MPPAS. Simulation studies illustrated the performance of PCA-based MPPAS for identifying the causal pathways underlying complex diseases. Genome-wide MPPAS of BMD detected associations between BMD and the KENNY_CTNNB1_TARGETS_UP and LONGEVITYPATHWAY pathways in this study. We aim to provide an applicable MPPAS approach, which may help to gain a deeper understanding of the potential biological mechanisms behind association results for complex diseases. PMID:23285279
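
    The first step of the approach, extracting orthogonal components from correlated phenotypes, can be sketched with plain SVD-based PCA. The toy data and variable names are ours, not the paper's:

```python
import numpy as np

def phenotype_pcs(phenos):
    """PCA of correlated quantitative phenotypes via SVD.

    phenos: (n_subjects, n_phenotypes). Returns orthogonal component
    scores and the fraction of variance each component explains.
    """
    x = phenos - phenos.mean(axis=0)          # center each phenotype
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    scores = u * s                            # orthogonal phenotypic components
    var_explained = s ** 2 / np.sum(s ** 2)
    return scores, var_explained

# toy data: two strongly correlated "BMD-like" phenotypes
rng = np.random.default_rng(1)
shared = rng.normal(size=(200, 1))
phenos = np.hstack([shared + 0.3 * rng.normal(size=(200, 1)),
                    shared + 0.3 * rng.normal(size=(200, 1))])
scores, var_explained = phenotype_pcs(phenos)
# the first component captures most of the shared variation and would
# replace the raw phenotypes in the downstream pathway association test
```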

  10. Interfacial undercooling in solidification of colloidal suspensions: analyses with quantitative measurements

    PubMed Central

    You, Jiaxue; Wang, Lilin; Wang, Zhijun; Li, Junjie; Wang, Jincheng; Lin, Xin; Huang, Weidong

    2016-01-01

    Interfacial undercooling in the complex solidification of colloidal suspensions is of significance and remains a puzzling problem. Two types of interfacial undercooling are thought to be involved in the freezing of colloidal suspensions, i.e., solute constitutional supercooling (SCS) caused by additives in the solvent and particulate constitutional supercooling (PCS) caused by particles. However, quantitative identification of the interfacial undercooling in the solidification of colloidal suspensions is still absent; thus, the question of which type of undercooling is dominant in this complex system remains unanswered. Here, we quantitatively measured the static and dynamic interface undercoolings of SCS and PCS in ideal and practical colloidal systems. We show that the interfacial undercooling primarily comes from SCS caused by the additives in the solvent, while PCS is minor. This finding implies that the thermodynamic effect of particles through PCS is not the fundamental physical mechanism for pattern formation of cellular growth and lamellar structures in the solidification of colloidal suspensions, a general case of the ice-templating method. Instead, the patterns in the ice-templating method can be controlled effectively by adjusting the additives. PMID:27329394

  11. Quantitative analyses of tartaric acid based on terahertz time domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Cao, Binghua; Fan, Mengbao

    2010-10-01

    Terahertz radiation occupies the region of the electromagnetic spectrum between microwaves and the infrared. Quantitative analysis based on terahertz spectroscopy is very important for the application of terahertz techniques, but how to realize it is still under study. L-tartaric acid is widely used as an acidulant in beverages and other foods, such as soft drinks, wine, candy, bread and some colloidal sweetmeats. In this paper, terahertz time-domain spectroscopy is applied to quantify tartaric acid. Two methods are employed to process the terahertz spectra of samples with different tartaric acid contents. The first is linear regression combined with correlation analysis. The second is partial least squares (PLS), in which the absorption spectra in the 0.8-1.4 THz region are used to quantify the tartaric acid. To compare the performance of these two approaches, the relative errors of the two methods are analyzed. For this experiment, the first method performs better than the second. However, the first method is suitable for the quantitative analysis of materials with obvious terahertz absorption peaks, while the second is more appropriate for materials without them.
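
    The first method, a linear-regression calibration against known concentrations, can be sketched as follows. All concentrations and absorbances below are fabricated placeholders, not measurements from the paper:

```python
import numpy as np

# Calibration samples: regress a terahertz absorption feature (e.g. peak
# absorbance) on the known tartaric acid content of each sample.
content = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # % tartaric acid
absorbance = np.array([0.21, 0.39, 0.62, 0.80, 1.01])   # peak absorbance

# fit the calibration line: absorbance = slope * content + intercept
slope, intercept = np.polyfit(content, absorbance, 1)

def predict_content(a):
    """Invert the calibration line to estimate content from absorbance."""
    return (a - intercept) / slope

estimate = predict_content(0.50)   # unknown sample, around 25 % here
```

    The correlation coefficient of the fit indicates whether the chosen absorption feature tracks concentration well enough for this method to apply.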

  12. Interfacial undercooling in solidification of colloidal suspensions: analyses with quantitative measurements.

    PubMed

    You, Jiaxue; Wang, Lilin; Wang, Zhijun; Li, Junjie; Wang, Jincheng; Lin, Xin; Huang, Weidong

    2016-01-01

    Interfacial undercooling in the complex solidification of colloidal suspensions is of significance and remains a puzzling problem. Two types of interfacial undercooling are thought to be involved in the freezing of colloidal suspensions, i.e., solute constitutional supercooling (SCS) caused by additives in the solvent and particulate constitutional supercooling (PCS) caused by particles. However, quantitative identification of the interfacial undercooling in the solidification of colloidal suspensions is still absent; thus, the question of which type of undercooling is dominant in this complex system remains unanswered. Here, we quantitatively measured the static and dynamic interface undercoolings of SCS and PCS in ideal and practical colloidal systems. We show that the interfacial undercooling primarily comes from SCS caused by the additives in the solvent, while PCS is minor. This finding implies that the thermodynamic effect of particles through PCS is not the fundamental physical mechanism for pattern formation of cellular growth and lamellar structures in the solidification of colloidal suspensions, a general case of the ice-templating method. Instead, the patterns in the ice-templating method can be controlled effectively by adjusting the additives. PMID:27329394

  13. Quantitative Estimates of Sequence Divergence for Comparative Analyses of Mammalian Genomes

    PubMed Central

    Cooper, Gregory M.; Brudno, Michael; Program, NISC Comparative Sequencing; Green, Eric D.; Batzoglou, Serafim; Sidow, Arend

    2003-01-01

    Comparative sequence analyses on a collection of carefully chosen mammalian genomes could facilitate identification of functional elements within the human genome and allow quantification of evolutionary constraint at the single nucleotide level. High-resolution quantification would be informative for determining the distribution of important positions within functional elements and for evaluating the relative importance of nucleotide sites that carry single nucleotide polymorphisms (SNPs). Because the level of resolution in comparative sequence analyses is a direct function of sequence diversity, we propose that the information content of a candidate mammalian genome be defined as the sequence divergence it would add relative to already-sequenced genomes. We show that reliable estimates of genomic sequence divergence can be obtained from small genomic regions. On the basis of a multiple sequence alignment of ∼1.4 megabases each from eight mammals, we generate such estimates for five unsequenced mammals. Estimates of the neutral divergence in these data suggest that a small number of diverse mammalian genomes in addition to human, mouse, and rat would allow single nucleotide resolution in comparative sequence analyses. [The multiple sequence alignment of the CFTR region and a spreadsheet with the calculations performed, will be available as supplementary information online at www.genome.org.] PMID:12727901
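
    The per-site divergence underlying such estimates can be computed directly from an alignment. A minimal sketch (raw mismatch fraction only; the study's estimates additionally correct for multiple substitutions, which is omitted here):

```python
def pairwise_divergence(seq_a, seq_b):
    """Fraction of aligned, ungapped positions at which two sequences
    differ -- the raw per-site divergence between two aligned sequences.
    """
    # keep only columns where neither sequence has a gap
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != '-' and b != '-']
    if not pairs:
        return 0.0
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

# tiny aligned example: 2 mismatches over 10 ungapped sites
d = pairwise_divergence("ACGT-ACGTAC", "ACGAGACTTAC")  # 0.2
```

    Summing such pairwise divergences over candidate genomes is one way to score the "added divergence" a new genome would contribute.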

  14. Quantitative characterization of metastatic disease in the spine. Part II. Histogram-based analyses

    SciTech Connect

    Whyne, Cari; Hardisty, Michael; Wu, Florence; Skrinskas, Tomas; Clemons, Mark; Gordon, Lyle; Basran, Parminder S.

    2007-08-15

    Radiological imaging is essential to the appropriate management of patients with bone metastasis; however, there have been no widely accepted guidelines as to the optimal method for quantifying the potential impact of skeletal lesions or for evaluating response to treatment. The current inability to rapidly quantify the response of bone metastases excludes patients with cancer and bone disease from participating in clinical trials of many new treatments, as these studies frequently require patients with so-called measurable disease. Computed tomography (CT) can provide excellent skeletal detail with high sensitivity for the diagnosis of bone metastases. The purpose of this study was to establish an objective method to quantitatively characterize disease in the bony spine using CT-based segmentations. It was hypothesized that histogram analysis of CT vertebral density distributions would enable standardized segmentation of tumor tissue and consequently allow quantification of disease in the metastatic spine. Thirty-two healthy vertebral CT scans were first studied to establish a baseline characterization. The histograms of the trabecular centrums were found to be Gaussian distributions (average root-mean-square difference = 30 voxel counts), as expected for a uniform material. Intrapatient vertebral-level similarity was also observed, as the means were not significantly different (p>0.8). Thus, a patient-specific healthy vertebral body histogram is able to characterize healthy trabecular bone throughout that individual's thoracolumbar spine. Eleven metastatically involved vertebrae were analyzed to determine the characteristics of the lytic and blastic bone voxels relative to the healthy bone. Lytic and blastic tumors were segmented as connected areas with voxel intensities between specified thresholds.
    The tested thresholds were μ − 1.0σ, μ − 1.5σ, and μ − 2.0σ for lytic and μ + 2.0σ, μ + 3.0σ, and μ + 3.5σ for blastic tissue, where
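
    The μ ± kσ thresholding can be sketched on a synthetic volume. The density values are illustrative, and k = 1.5 and 3.0 are just two of the threshold choices the study tested:

```python
import numpy as np

def segment_lesions(density, mu, sigma, k_lytic=1.5, k_blastic=3.0):
    """Label voxels as lytic or blastic relative to the patient-specific
    healthy-bone Gaussian (mean mu, sd sigma), mirroring the
    mu - k*sigma / mu + k*sigma thresholds described above.
    """
    lytic = density < mu - k_lytic * sigma
    blastic = density > mu + k_blastic * sigma
    return lytic, blastic

# synthetic vertebra: healthy trabecular bone plus one low-density
# (lytic-like) and one high-density (blastic-like) region
rng = np.random.default_rng(2)
vol = rng.normal(200.0, 30.0, size=(20, 20, 20))
vol[2:5, 2:5, 2:5] = 60.0          # lytic-like region
vol[10:13, 10:13, 10:13] = 500.0   # blastic-like region
lytic, blastic = segment_lesions(vol, mu=200.0, sigma=30.0)
```

    In practice a connected-component step would follow, since isolated healthy voxels also fall outside the thresholds by chance.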

  15. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    PubMed

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

    The Raman spectroscopic method has been quantitatively applied to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites sampled by the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and a validation of the proposed Raman method could be obtained by confirming the agreement between RTE values obtained from different samples. PMID:25673243

  16. Quantitative Analyses of Core Promoters Enable Precise Engineering of Regulated Gene Expression in Mammalian Cells.

    PubMed

    Ede, Christopher; Chen, Ximin; Lin, Meng-Yin; Chen, Yvonne Y

    2016-05-20

    Inducible transcription systems play a crucial role in a wide array of synthetic biology circuits. However, the majority of inducible promoters are constructed from a limited set of tried-and-true promoter parts, which are susceptible to common shortcomings such as high basal expression levels (i.e., leakiness). To expand the toolbox for regulated mammalian gene expression and facilitate the construction of mammalian genetic circuits with precise functionality, we quantitatively characterized a panel of eight core promoters, including sequences with mammalian, viral, and synthetic origins. We demonstrate that this selection of core promoters can provide a wide range of basal gene expression levels and achieve a gradient of fold-inductions spanning 2 orders of magnitude. Furthermore, commonly used parts such as minimal CMV and minimal SV40 promoters were shown to achieve robust gene expression upon induction, but also suffer from high levels of leakiness. In contrast, a synthetic promoter, YB_TATA, was shown to combine low basal expression with high transcription rate in the induced state to achieve significantly higher fold-induction ratios compared to all other promoters tested. These behaviors remain consistent when the promoters are coupled to different genetic outputs and different response elements, as well as across different host-cell types and DNA copy numbers. We apply this quantitative understanding of core promoter properties to the successful engineering of human T cells that respond to antigen stimulation via chimeric antigen receptor signaling specifically under hypoxic environments. Results presented in this study can facilitate the design and calibration of future mammalian synthetic biology systems capable of precisely programmed functionality. PMID:26883397

  17. CT blurring induced bias of quantitative in-stent restenosis analyses

    NASA Astrophysics Data System (ADS)

    Marquering, Henk A.; Stoel, Berend C.; Dijkstra, Jouke; Geleijns, Koos; Persoon, Marion; Jukema, J. Wouter; Streekstra, Geert J.; Reiber, Johan H. C.

    2008-03-01

    Rationale and Objective: In CT systems, blurring is the main limiting factor for imaging in-stent restenosis. The aim of this study is to systematically analyze the effect of blurring-related biases on the quantitative assessment of in-stent restenosis and to evaluate potential correction methods. Methods: 3D analytical models of a blurred, stented vessel are presented to quantify blurring-related artifacts in the stent diameter measurement. Two correction methods are presented for an improved stent diameter measurement. We also examine the suitability of deconvolution techniques for correcting blurring artifacts. Results: Blurring results in a shift of the maximum of the signal intensity towards the center position of the stent, resulting in an underestimation of the stent diameter. This shift can be expressed as a function of the stent radius and the width of the point spread function. The correction for this phenomenon reduces the error by 75 percent. Deconvolution reduces the blurring artifacts but introduces a ringing artifact. Conclusion: The analytical vessel models are well suited to studying the influence of various parameters on blurring-induced artifacts. The blurring-related underestimation of the stent diameter can be significantly reduced using the presented corrections. Care should be taken in choosing suitable deconvolution filters, since they may introduce new artifacts.
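
    The inward peak shift can be reproduced with a 1-D toy model: two thin struts blurred by a Gaussian PSF. The geometry and PSF width below are illustrative choices, not the paper's analytical models:

```python
import numpy as np

# The profile across a stent is modeled as two thin struts at +/- r,
# blurred by a Gaussian PSF. After blurring, the intensity maxima shift
# toward the stent center, so a peak-to-peak diameter measurement
# underestimates the true diameter 2*r.
x = np.linspace(-5.0, 5.0, 10001)   # position, mm
r = 1.5                             # true stent radius, mm
sigma = 1.2                         # PSF standard deviation, mm

# blurred two-strut profile: sum of two Gaussians centred at +/- r
profile = (np.exp(-(x - r) ** 2 / (2 * sigma ** 2)) +
           np.exp(-(x + r) ** 2 / (2 * sigma ** 2)))

# locate the right-hand intensity maximum and the implied diameter
right_peak = x[np.argmax(np.where(x > 0, profile, -np.inf))]
measured_diameter = 2 * right_peak   # comes out below the true 3.0 mm
true_diameter = 2 * r
```

    Correcting with a model of this shift, as a function of r and the PSF width, is the idea behind the corrections the abstract reports.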

  18. Health risks in wastewater irrigation: comparing estimates from quantitative microbial risk analyses and epidemiological studies.

    PubMed

    Mara, D D; Sleigh, P A; Blumenthal, U J; Carr, R M

    2007-03-01

    The combination of standard quantitative microbial risk analysis (QMRA) techniques and 10,000-trial Monte Carlo risk simulations was used to estimate the human health risks associated with the use of wastewater for unrestricted and restricted crop irrigation. A risk of rotavirus infection of 10⁻² per person per year (pppy) was used as the reference level of acceptable risk. Using the model scenario of involuntary soil ingestion for restricted irrigation, the risk of rotavirus infection is approximately 10⁻² pppy when the wastewater contains ≤10⁶ Escherichia coli per 100 ml and when local agricultural practices are highly mechanised. For labour-intensive agriculture the risk of rotavirus infection is approximately 10⁻² pppy when the wastewater contains ≤10⁵ E. coli per 100 ml; however, the wastewater quality should be ≤10⁴ E. coli per 100 ml when children under 15 are exposed. With the model scenario of lettuce consumption for unrestricted irrigation, the use of wastewaters containing ≤10⁴ E. coli per 100 ml results in a rotavirus infection risk of approximately 10⁻² pppy; however, based on epidemiological evidence from Mexico, the current WHO guideline level of ≤1,000 E. coli per 100 ml should be retained for root crops eaten raw. PMID:17402278
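
    The QMRA Monte Carlo machinery can be sketched as follows. Every parameter value here (exposure volumes, pathogen:indicator ratio, number of annual exposure events) is an assumption for illustration, not the paper's scenario; the beta-Poisson coefficients are commonly cited rotavirus values:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 10_000

# sampled exposure: wastewater quality and involuntary soil ingestion
ecoli_per_100ml = 10 ** rng.uniform(3, 4, trials)     # E. coli / 100 ml
soil_ingested_ml = rng.uniform(0.005, 0.05, trials)   # ml per event
ratio_rota_to_ecoli = 1e-5                            # assumed ratio

# rotavirus dose ingested per exposure event
dose = ecoli_per_100ml / 100.0 * soil_ingested_ml * ratio_rota_to_ecoli

# beta-Poisson dose-response, widely used for rotavirus
alpha, n50 = 0.253, 6.17
p_inf_per_event = 1 - (1 + dose * (2 ** (1 / alpha) - 1) / n50) ** -alpha

# annual risk over an assumed 150 exposure events per year
p_annual = 1 - (1 - p_inf_per_event) ** 150
median_annual_risk = float(np.median(p_annual))
```

    The median (or another summary) of `p_annual` is then compared against the 10⁻² pppy reference level of acceptable risk.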

  19. Quantitative analyses of Streptococcus mutans biofilms with quartz crystal microbalance, microjet impingement and confocal microscopy.

    PubMed

    Kreth, J; Hagerman, E; Tam, K; Merritt, J; Wong, D T W; Wu, B M; Myung, N V; Shi, W; Qi, F

    2004-10-01

    Microbial biofilm formation can be influenced by many physiological and genetic factors. The conventional microtiter plate assay provides useful but limited information about biofilm formation. With the fast expansion of the biofilm research field, there are urgent needs for more informative techniques to quantify the major parameters of a biofilm, such as adhesive strength and total biomass. It would be even more ideal if these measurements could be conducted in a real-time, non-invasive manner. In this study, we used quartz crystal microbalance (QCM) and microjet impingement (MJI) to measure total biomass and adhesive strength, respectively, of S. mutans biofilms formed under different sucrose concentrations. In conjunction with confocal laser scanning microscopy (CLSM) and the COMSTAT software, we show that sucrose concentration affects the biofilm strength, total biomass, and architecture in both qualitative and quantitative manners. Our data correlate well with previous observations about the effect of sucrose on the adherence of S. mutans to the tooth surface, and demonstrate that QCM is a useful tool for studying the kinetics of biofilm formation in real time and that MJI is a sensitive, easy-to-use device to measure the adhesive strength of a biofilm. PMID:16429589
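
    The QCM biomass readout rests on the Sauerbrey relation between frequency shift and areal mass. A minimal sketch; the 17.7 ng cm⁻² Hz⁻¹ sensitivity assumes a 5 MHz crystal, and a soft hydrated biofilm only approximately satisfies the rigid-film assumption:

```python
def sauerbrey_mass(delta_f_hz, overtone=1, c_ng_cm2_hz=17.7):
    """Areal mass change from a QCM frequency shift via the Sauerbrey
    relation dm = -C * df / n, valid for thin rigid films.
    C = 17.7 ng/(cm^2 Hz) is the standard sensitivity of a 5 MHz
    crystal; for a viscoelastic biofilm the result should be read as
    an effective mass only.
    """
    return -c_ng_cm2_hz * delta_f_hz / overtone

# a 120 Hz frequency decrease corresponds to 2124 ng/cm^2 deposited
mass = sauerbrey_mass(-120.0)
```

    Tracking `mass` over time as cells attach gives the real-time biofilm-formation kinetics the abstract describes.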

  20. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslides, liquefaction, tsunami, and fire, for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large or significant global earthquakes. An important question is how dominant losses due to secondary effects are, under what conditions, and in which regions, and thus which of these effects should receive higher-priority research efforts in order to enhance PAGER's overall assessment of earthquake losses and alerting for the likelihood of secondary impacts. We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  1. A Quantitative Microtiter Assay for Sialylated Glycoform Analyses Using Lectin Complexes

    PubMed Central

    Srinivasan, Karunya; Washburn, Nathaniel; Sipsey, Sandra F.; Meccariello, Robin; Meador, James W.; Ling, Leona E.; Manning, Anthony M.; Kaundinya, Ganesh V.

    2015-01-01

    Fidelity of glycan structures is a key requirement for biotherapeutics, with carbohydrates playing an important role for therapeutic efficacy. Comprehensive glycan profiling techniques such as liquid chromatography (LC) and mass spectrometry (MS), while providing detailed description of glycan structures, require glycan cleavage, labeling, and paradigms to deconvolute the considerable data sets they generate. On the other hand, lectins as probes on microarrays have recently been used in orthogonal approaches for in situ glycoprofiling but require analyte labeling to take advantage of the capabilities of automated microarray readers and data analysis they afford. Herein, we describe a lectin-based microtiter assay (lectin–enzyme-linked immunosorbent assay [ELISA]) to quantify terminal glycan moieties, applicable to in vitro and in-cell glycan-engineered Fc proteins as well as intact IgGs from intravenous immunoglobulin (IVIG), a blood product containing pooled polyvalent IgG antibodies extracted from plasma from healthy human donors. We corroborate our findings with industry-standard LC-MS profiling. This “customizable” ELISA juxtaposes readouts from multiple lectins, focusing on a subset of glycoforms, and provides the ability to discern single- versus dual-arm glycosylation while defining levels of epitopes at sensitivities comparable to MS. Extendable to other biologics, this ELISA can be used stand-alone or complementary to MS for quantitative glycan analysis. PMID:25851037

  2. Quantitative proteomics analyses of activation states of human THP-1 macrophages.

    PubMed

    Meijer, Kees; Weening, Desiree; de Vries, Marcel P; Priebe, Marion G; Vonk, Roel J; Roelofsen, Han

    2015-10-14

    Macrophages display large functional and phenotypical plasticity. They can adopt a broad range of activation states depending on their microenvironment. Various surface markers are used to characterize these differentially polarized macrophages. However, this is not informative about the functions of the macrophage. In order to better understand the functional changes of macrophages upon differential polarization, we studied differences between LPS- and IL-4-stimulated macrophages. The THP-1 human monocytic cell line was used as a model system. Cells were labeled, differentiated and stimulated with either LPS or IL-4 in a quantitative SILAC proteomics set-up. The resulting sets of proteins were functionally clustered. LPS-stimulated macrophages show increased secretion of proinflammatory peptides, leading to increased pressure on protein biosynthesis and processing. IL-4-stimulated macrophages show upregulation of cell adhesion and extracellular matrix remodeling. Our approach provides an integrated view of polarization-induced functional changes and proves useful for studying functional differences between subsets of macrophages. Moreover, the identified polarization-specific proteins may contribute to a better characterization of different activation states in situ and their role in various inflammatory processes. PMID:26200757

  3. Quantitative and fingerprint analyses of Chinese sweet tea plant (Rubus Suavissimus S. Lee)

    PubMed Central

    Chou, Guixin; Xu, Shun-Jun; Liu, Dong; Koh, Gar Yee; Zhang, Jian; Liu, Zhijun

    2009-01-01

    Quality of botanical food is increasingly assessed by the content of multiple bioactive compounds. In this study we report, for the first time, an HPLC fingerprinting method for the quality evaluation of Rubus suavissimus leaves possessing multiple bioactivities. Five constituents (gallic acid, rutin, ellagic acid, rubusoside, and steviol monoside) were quantified and used in developing qualitative chromatographic fingerprints. The limits of detection and quantification ranged from 0.29 μg/mL to 37.86 μg/mL. The relative standard deviations (RSDs) of intra- and inter-day precisions were no more than 3.14% and 3.01%, respectively. The average recoveries were between 93.1% and 97.5%. The developed method was validated in analyzing fourteen leaf samples with satisfactory results. The contents of the five marker compounds accounted for an average of about 6% w/w, with a variability of 16% among the fourteen samples collected from a single site and year. Gallic acid was the least variable and steviol monoside the most variable of the compounds among the fourteen leaf samples. The characteristic compound rubusoside, which is responsible for the sweet taste, accounted for 5% of leaf weight. The validated method can now be used to quantitatively and qualitatively assess the quality of Rubus suavissimus leaves as a traditional beverage or potential medicine. PMID:19138116
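
    Two of the validation figures reported above, intra-/inter-day RSD and spike recovery, reduce to simple formulas. The peak areas and spike values in this sketch are hypothetical, used only to show the arithmetic behind numbers like the 3.14% RSD and 93.1-97.5% recoveries.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%) = 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def recovery_percent(unspiked, spiked, added):
    """Spike recovery (%) = 100 * (spiked result - unspiked result) / amount added."""
    return 100.0 * (spiked - unspiked) / added

# hypothetical replicate peak areas for one analyte on a single day
areas = [1021.0, 1034.0, 1011.0, 1028.0, 1019.0, 1025.0]
print(f"intra-day RSD: {rsd_percent(areas):.2f}%")

# hypothetical spike experiment: 50 units found, 50 added, 97.4 found after spiking
print(f"recovery: {recovery_percent(50.0, 97.4, 50.0):.1f}%")
```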

  4. Quantitative structure-activity analyses of novel hydroxyphenylurea derivatives as antioxidants.

    PubMed

    Nakao, K; Shimizu, R; Kubota, H; Yasuhara, M; Hashimura, Y; Suzuki, T; Fujita, T; Ohmizu, H

    1998-06-01

    A series of substituted hydroxyphenylureas was synthesized, the chemical structure of which was designed based on structures of natural antioxidants, vitamin E (alpha-tocopherol) and uric acid. They exhibited high inhibitory activity against lipid peroxidation. In order to gain an insight into the mechanism of the inhibition reaction, we analyzed their structure-activity relationships quantitatively. Electronic and steric effects of substituents on the phenolic hydroxyl group were shown to be of importance in governing the inhibitory potency. An increase in the electron donating property of substituents toward the phenolic hydroxyl group enhanced the antioxidative activity by the stabilization of an electron-deficient radical-type transition state. The steric shielding by ortho-substituents stabilized the phenoxy radicals formed following the transition state. Derivatives having the carboxyl group were only weakly active presumably because of an intermolecular ion-dipole interaction of the phenolic hydroxyl group with the carboxylate anion which could retard the formation of the transition state. PMID:9681151

  5. Qualitative and quantitative analyses of alkaloids in Uncaria species by UPLC-ESI-Q-TOF/MS.

    PubMed

    Wang, Hai-Bo; Qi, Wen; Zhang, Lin; Yuan, Dan

    2014-01-01

    An ultra-performance liquid chromatography (UPLC) method coupled with quadrupole time-of-flight mass spectrometry (Q-TOF/MS) has been optimized and established for the rapid analysis of the alkaloids in 22 samples originating from five Uncaria (U.) species. The accurate mass measurement of all the protonated molecules and subsequent fragment ions offers high-quality structural information for the interpretation of fragmentation pathways of the various groups of alkaloids. A total of 19 oxindole alkaloids, 16 indole alkaloids and 1 flavone were identified by co-chromatography of the sample extract with authentic standards, comparison of the retention times, characteristic molecular ions and fragment ions, or were tentatively identified by MS/MS determination. Moreover, the method was validated for the simultaneous quantification of the 24 components within 10.5 min. Potential chemical markers were identified for classification of the U. species samples by principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA). The results demonstrate the similarities and differences in alkaloids among the five U. species, which is helpful for the standardization and quality control of the medicinal material Uncariae Ramulus Cum Uncis (URCU). Furthermore, with multivariate statistical analysis, the determined markers are more definite and useful for chemotaxonomy of the Uncaria genus. PMID:25366313

  6. Laboratory Assay of Brood Care for Quantitative Analyses of Individual Differences in Honey Bee (Apis mellifera) Affiliative Behavior

    PubMed Central

    Shpigler, Hagai Y.; Robinson, Gene E.

    2015-01-01

    Care of offspring is a form of affiliative behavior that is fundamental to studies of animal social behavior. Insects do not figure prominently in this topic because Drosophila melanogaster and other traditional models show little if any paternal or maternal care. However, the eusocial honey bee exhibits cooperative brood care, with larvae receiving intense and continuous care from their adult sisters, but this behavior has not been well studied because a robust quantitative assay does not exist. We present a new laboratory assay that enables quantification of group or individual honey bee brood “nursing behavior” toward a queen larva. In addition to validating the assay, we used it to examine the influence of the age of the larva and the genetic background of the adult bees on nursing performance. This new assay can also be used in the future for mechanistic analyses of eusociality and comparative analyses of affiliative behavior with other animals. PMID:26569402

  7. Quantitative infrared spectroscopy of glucose in blood using partial least-squares analyses

    SciTech Connect

    Ward, K.J.; Haaland, D.M.; Robinson, M.R.; Eaton, R.P.

    1989-01-01

    The concentration of glucose in drawn samples of human blood has been determined using attenuated total reflectance (ATR) Fourier transform infrared (FT-IR) spectroscopy and partial least-squares (PLS) multivariate calibration. A twelve sample calibration set over the physiological glucose range of 50-400 mg/deciliter (dl) resulted in an average error of 5.2 mg/dl. These results were obtained using a cross validated PLS calibration over all infrared data in the frequency range of 950-1200 cm⁻¹. These results are a dramatic improvement relative to those obtained by previous studies of this system using univariate peak height analyses. 3 refs., 3 figs.
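
    The cross-validated calibration step can be illustrated with a minimal leave-one-out loop. Ordinary least squares on a two-band synthetic "spectrum" stands in here for the paper's full-spectrum PLS model, and all data are simulated; the point is only the shape of the cross-validation procedure that yields an average error figure like the reported 5.2 mg/dl.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": absorbance at two bands responds linearly to glucose
# concentration plus noise (an illustrative stand-in for ATR FT-IR data).
conc = np.linspace(50, 400, 12)                      # mg/dl, 12-sample calibration set
X = np.column_stack([0.002 * conc, 0.001 * conc, np.ones_like(conc)])
X[:, :2] += rng.normal(0, 0.01, size=(12, 2))        # measurement noise

def loo_cv_error(X, y):
    """Leave-one-out cross-validation: fit OLS on n-1 samples, predict the
    held-out sample, and report the root-mean-square prediction error."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errors.append(y[i] - X[i] @ coef)
    return float(np.sqrt(np.mean(np.square(errors))))

print(f"cross-validated RMSE: {loo_cv_error(X, conc):.1f} mg/dl")
```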

  8. Comparative Genomics Analyses Reveal Extensive Chromosome Colinearity and Novel Quantitative Trait Loci in Eucalyptus.

    PubMed

    Li, Fagen; Zhou, Changpin; Weng, Qijie; Li, Mei; Yu, Xiaoli; Guo, Yong; Wang, Yu; Zhang, Xiaohong; Gan, Siming

    2015-01-01

    Dense genetic maps, along with quantitative trait loci (QTLs) detected on such maps, are powerful tools for genomics and molecular breeding studies. In the important woody genus Eucalyptus, the recent release of the E. grandis genome sequence allows for sequence-based genomic comparison and searching for positional candidate genes within QTL regions. Here, dense genetic maps were constructed for E. urophylla and E. tereticornis using genomic simple sequence repeat (SSR), expressed sequence tag (EST) derived SSR, EST-derived cleaved amplified polymorphic sequence (EST-CAPS), and diversity arrays technology (DArT) markers. The E. urophylla and E. tereticornis maps comprised 700 and 585 markers across 11 linkage groups, totaling 1,208.2 and 1,241.4 cM in length, respectively. Extensive synteny and colinearity were observed in comparisons with three earlier DArT-based eucalypt maps (two maps of E. grandis × E. urophylla and one map of E. globulus) and with the E. grandis genome sequence. Fifty-three QTLs for growth (10-56 months of age) and wood density (56 months) were identified in 22 discrete regions on both maps, in which only one colocalization was found between growth and wood density. Novel QTLs were revealed as compared with those previously detected on DArT-based maps for similar ages in Eucalyptus. Eleven to 585 positional candidate genes were obtained for each 56-month QTL by aligning the QTL confidence interval with the E. grandis genome. These results will assist in comparative genomics studies, targeted gene characterization, and marker-assisted selection in Eucalyptus and related taxa. PMID:26695430

  10. The superior analyses of igneous rocks from Roth's Tabellen, 1869 to 1884, arranged according to the quantitative system of classification

    USGS Publications Warehouse

    Washington, H.S.

    1904-01-01

    In Professional Paper No. 14 there were collected the chemical analyses of igneous rocks published from 1884 to 1900, inclusive, arranged according to the quantitative system of classification recently proposed by Cross, Iddings, Pirsson, and Washington. In order to supplement this work it has appeared advisable to select the more reliable and complete of the earlier analyses collected by Justus Roth and arrange them also in the same manner for publication. Petrographers would thus have available for use according to the new system almost the entire body of chemical work of real value on igneous rocks, the exceptions being a few analyses published prior to 1900 which may have been overlooked by both Roth and myself. The two collections would form a foundation as broad as possible for future research and discussion. I must express my sense of obligation to the United States Geological Survey for publishing the present collection of analyses, and my thanks to my colleagues in the new system of classification for their friendly advice and assistance. 

  11. Quantitative Proteomic and Genetic Analyses of the Schizophrenia Susceptibility Factor Dysbindin Identify Novel Roles of the BLOC-1 Complex

    PubMed Central

    Gokhale, Avanti; Larimore, Jennifer; Werner, Erica; So, Lomon; De Luca, Andres Moreno; Lese-Martin, Christa; Lupashin, Vladimir V.; Smith, Yoland; Faundez, Victor

    2012-01-01

    The Biogenesis of Lysosome-Related Organelles Complex 1 (BLOC-1) is a protein complex containing the schizophrenia susceptibility factor dysbindin, which is encoded by the gene DTNBP1. However, mechanisms engaged by dysbindin that define schizophrenia susceptibility pathways have not been quantitatively elucidated. Here, we discovered prevalent and novel cellular roles of the BLOC-1 complex in neuronal cells by performing large-scale quantitative proteomics using Stable Isotope Labeling by Amino acids in Cell culture (SILAC), combined with genetic analyses in dysbindin-null mice (Mus musculus) and the genomes of schizophrenia patients. We identified 24 proteins that associate with the BLOC-1 complex, many of which were altered in content/distribution in cells or tissues deficient in BLOC-1. New findings include BLOC-1 interactions with the COG complex, a Golgi apparatus tether, and the antioxidant enzymes peroxiredoxins 1-2. Importantly, loci encoding eight of the 24 proteins are affected by genomic copy number variation in schizophrenia patients. Thus, our quantitative proteomic studies expand the functional repertoire of the BLOC-1 complex and provide insight into putative molecular pathways of schizophrenia susceptibility. PMID:22423091
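
    SILAC quantitation of proteins "altered in content" boils down to heavy/light intensity ratios per protein. A minimal sketch, with hypothetical intensities and an assumed two-fold (|log2| > 1) cutoff; COG4 and PRDX1 are named only because the COG complex and peroxiredoxins appear above, not because these are the study's measured values.

```python
import math

def silac_log2_ratio(heavy, light):
    """log2 heavy/light intensity ratio for one protein; proteins whose
    |log2 ratio| exceeds a chosen cutoff are flagged as altered in content."""
    return math.log2(heavy / light)

# hypothetical summed peptide intensities per protein
ratios = {
    "COG4": silac_log2_ratio(5.1e6, 1.6e6),   # enriched in the heavy condition
    "PRDX1": silac_log2_ratio(2.2e6, 4.5e6),  # depleted in the heavy condition
}
altered = {protein for protein, r in ratios.items() if abs(r) > 1.0}
print(sorted(altered))
```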

  12. Stability Test and Quantitative and Qualitative Analyses of the Amino Acids in Pharmacopuncture Extracted from Scolopendra subspinipes mutilans

    PubMed Central

    Cho, GyeYoon; Han, KyuChul; Yoon, JinYoung

    2015-01-01

    Objectives: Scolopendra subspinipes mutilans (S. subspinipes mutilans) is known as a traditional medicine and includes various amino acids, peptides and proteins. The amino acids in the pharmacopuncture extracted from S. subspinipes mutilans by using derivatization methods were analyzed quantitatively and qualitatively by using high performance liquid chromatography (HPLC) over a 12 month period to confirm its stability. Methods: Amino acids of the pharmacopuncture extracted from S. subspinipes mutilans were derivatized by using O-phthaldialdehyde (OPA) and 9-fluorenylmethoxycarbonyl chloride (FMOC) reagents and were analyzed using HPLC. The amino acids were detected by using a diode array detector (DAD) and a fluorescence detector (FLD) to compare a mixed amino acid standard (STD) to the pharmacopuncture from centipedes. The stability tests on the pharmacopuncture from centipedes were done using HPLC under three conditions: a room temperature test chamber, an acceleration test chamber, and a cold test chamber. Results: The pharmacopuncture from centipedes was prepared by using the method of the Korean Pharmacopuncture Institute (KPI) and through quantitative analyses was shown to contain 9 of the 16 amino acids in the mixed amino acid STD. The amounts of the amino acids in the pharmacopuncture from centipedes were 34.37 ppm of aspartate, 123.72 ppm of arginine, 170.63 ppm of alanine, 59.55 ppm of leucine and 57 ppm of lysine. The relative standard deviation (RSD, %) results for the pharmacopuncture from centipedes had a maximum value of 14.95% and a minimum value of 1.795% across the room temperature, acceleration, and cold test chamber stability tests. Conclusion: Stability tests on, and quantitative and qualitative analyses of, the amino acids in the pharmacopuncture extracted from centipedes by using derivatization methods were performed by using HPLC. Through research, we hope to determine the relationship between time and the

  13. Quantitative analyses of T2-weighted MRI as a potential marker for response to somatostatin analogs in newly diagnosed acromegaly.

    PubMed

    Heck, Ansgar; Emblem, Kyrre E; Casar-Borota, Olivera; Bollerslev, Jens; Ringstad, Geir

    2016-05-01

    In growth hormone (GH)-producing adenomas, T2-weighted MRI signal intensity is a marker for granulation pattern and response to somatostatin analogs (SSA). Prediction of treatment response is necessary for individualized treatment, and T2 intensity assessment might improve preoperative classification of somatotropinomas. The objectives of this study are (I) to explore the feasibility of quantitative T2-weighted MRI histogram analyses in newly diagnosed somatotroph adenomas and their relation to clinical and histological parameters and (II) to compare the quantitative method with conventional, visual assessment of T2 intensity. The study was a retrospective cohort study of 58 newly diagnosed patients. In 34 of these, response to primary SSA treatment after a median of 6 months was evaluated. Parameters from the T2 histogram analyses (T2 intensity ratio and T2 homogeneity ratio) were correlated with visually assessed T2 intensity (hypo-, iso-, hyperintense), baseline characteristics, response to SSA treatment, and histological granulation pattern (anti-Cam5.2). The T2 intensity ratio was lowest in the hypointense tumors and highest in the hyperintense tumors (0.66 ± 0.10 vs. 1.07 ± 0.11; p < 0.001). T2 intensity at baseline correlated with the reduction in GH (r = -0.67; p < 0.001) and IGF-1 (r = -0.36; p = 0.037) after primary SSA treatment (n = 34). The T2 homogeneity ratio correlated with adenoma size reduction (r = -0.45; p = 0.008). Sparsely granulated adenomas had a higher T2 intensity than densely or intermediately granulated adenomas. T2 histogram analyses are an applicable tool to assess T2 intensity in somatotroph adenomas. The quantitatively assessed T2 intensity ratio in GH-producing adenomas correlates with conventional assessment of T2 intensity, baseline characteristics, response to SSA treatment, and histological granulation pattern. PMID:26475495
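
    A T2 intensity ratio of the kind analyzed above is typically a mean tumor signal normalized to a reference tissue. The exact reference region and the category thresholds below are assumptions for illustration, and the voxel intensities are hypothetical.

```python
import statistics

def t2_intensity_ratio(tumor_voxels, reference_voxels):
    """Mean tumor T2 signal divided by mean reference-tissue signal
    (a common normalization; the study's exact reference region is assumed here)."""
    return statistics.mean(tumor_voxels) / statistics.mean(reference_voxels)

def classify(ratio, lo=0.9, hi=1.1):
    """Map the quantitative ratio onto the conventional visual categories
    (thresholds are illustrative, not the study's)."""
    if ratio < lo:
        return "hypointense"
    if ratio > hi:
        return "hyperintense"
    return "isointense"

tumor = [410, 395, 428, 402, 390]        # hypothetical tumor ROI intensities
reference = [610, 598, 605, 612, 600]    # hypothetical reference ROI intensities
r = t2_intensity_ratio(tumor, reference)
print(f"T2 intensity ratio: {r:.2f} -> {classify(r)}")
```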

  14. Tandem Mass Spectrometry Measurement of the Collision Products of Carbamate Anions Derived from CO2 Capture Sorbents: Paving the Way for Accurate Quantitation

    NASA Astrophysics Data System (ADS)

    Jackson, Phil; Fisher, Keith J.; Attalla, Moetaz Ibrahim

    2011-08-01

    The reaction between CO2 and aqueous amines to produce a charged carbamate product plays a crucial role in post-combustion capture chemistry when primary and secondary amines are used. In this paper, we report the low energy negative-ion CID results for several anionic carbamates derived from primary and secondary amines commonly used as post-combustion capture solvents. The study was performed using the modern equivalent of a triple quadrupole instrument equipped with a T-wave collision cell. Deuterium labeling of 2-aminoethanol (1,1,2,2-d4-2-aminoethanol) and computations at the M06-2X/6-311++G(d,p) level were used to confirm the identity of the fragmentation products for 2-hydroxyethylcarbamate (derived from 2-aminoethanol), in particular the ions CN-, NCO- and facile neutral losses of CO2 and water; there is precedent for the latter in condensed phase isocyanate chemistry. The fragmentations of 2-hydroxyethylcarbamate were generalized for carbamate anions derived from other capture amines, including ethylenediamine, diethanolamine, and piperazine. We also report unequivocal evidence for the existence of carbamate anions derived from sterically hindered amines (Tris(2-hydroxymethyl)aminomethane and 2-methyl-2-aminopropanol). For the suite of carbamates investigated, diagnostic losses include the decarboxylation product (-CO2, 44 mass units), loss of 46 mass units and the fragments NCO- (m/z 42) and CN- (m/z 26). We also report low energy CID results for the dicarbamate dianion (-O2CNHC2H4NHCO2-) commonly encountered in CO2 capture solutions utilizing ethylenediamine. Finally, we demonstrate a promising ion chromatography-MS based procedure for the separation and quantitation of aqueous anionic carbamates, which is based on the reported CID findings. The availability of accurate quantitation methods for ionic CO2 capture products could lead to dynamic operational tuning of CO2 capture-plants and, thus, cost-savings via real-time manipulation of solvent
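
    The diagnostic losses listed above (44 mass units for CO2, plus the m/z 42 NCO- and m/z 26 CN- fragments) can be checked against monoisotopic masses. A minimal calculator, neglecting the electron mass, with 2-hydroxyethylcarbamate as the example anion:

```python
# monoisotopic masses of the relevant elements (u)
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def mono_mass(counts):
    """Monoisotopic mass of a formula given as {element: count}."""
    return sum(MONO[el] * n for el, n in counts.items())

# 2-hydroxyethylcarbamate anion HOC2H4NHCO2-, i.e. C3H6NO3 (electron mass neglected)
parent = mono_mass({"C": 3, "H": 6, "N": 1, "O": 3})
co2 = mono_mass({"C": 1, "O": 2})
print(f"parent ~ {parent:.3f}, after -CO2 loss ~ {parent - co2:.3f}")
```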

  15. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
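
    Once unique primers and template pretreatment remove pseudogene co-amplification and dilution bias, the Mt/N ratio itself comes from the two Ct values. A minimal sketch, assuming equal amplification efficiencies for both assays and a diploid single-copy nuclear locus; the Ct values are hypothetical.

```python
def mtdna_copy_number(ct_mito, ct_nuclear, efficiency=2.0):
    """MtDNA copies per cell from qPCR Ct values.

    Assumes primers unique to MtDNA and to a single-copy nuclear locus (as the
    paper requires) and equal amplification efficiency E for both assays:
    Mt/N = E**(Ct_nuclear - Ct_mito), doubled because the nuclear locus is diploid.
    """
    return 2.0 * efficiency ** (ct_nuclear - ct_mito)

# hypothetical Ct values: the mitochondrial target crosses threshold 8 cycles earlier
print(f"MtDNA copies per cell: {mtdna_copy_number(18.0, 26.0):.0f}")
```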

  16. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    PubMed

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or show only moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, let alone offering easy-to-use detection that eliminates labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and a lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene with specific multi-methylated sites, a well-studied tumor suppressor gene, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, an LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined by the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, which is satisfactory for DNA methylation assessment. Moreover, the developed cascade shows high usability as a simple, sensitive, and low-cost tool. Therefore, as a universal platform for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics. PMID:26914373

  17. Rapid Quantitative Analyses of Elements on Herb Medicine and Food Powder Using TEA CO2 Laser-Induced Plasma

    NASA Astrophysics Data System (ADS)

    Khumaeni, Ali; Ramli, Muliadi; Idris, Nasrullah; Lee, Yong Inn; Kurniawan, Koo Hendrik; Lie, Tjung Jie; Deguchi, Yoji; Niki, Hideaki; Kagawa, Kiichiro

    2009-03-01

    A novel technique for rapid quantitative analyses of elements in herb medicine and food powder has successfully been developed. In this technique, the powder samples were plugged into a small hole (2 mm in diameter and 3 mm in depth) and covered by a metal mesh. The transversely excited atmospheric (TEA) CO2 laser (1500 mJ, 200 ns) was focused through the metal mesh onto the powder sample surface at atmospheric pressure in nitrogen surrounding gas. It is hypothesized that the small hole functions to confine the powder particles and suppress blowing-off, while the metal mesh works as the source of electrons to initiate the strong gas breakdown plasma. The confined powder particles are subsequently ablated by the laser irradiation, and the ablated particles move into the strong gas breakdown plasma region to be atomized and excited. Using this method, a quantitative analysis of a milk powder sample containing different concentrations of Ca was successfully demonstrated, resulting in a good linear calibration curve with high precision.
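
    A linear calibration curve like the Ca example above is an ordinary least-squares line through intensity-versus-concentration points, usually judged by its coefficient of determination. All numbers below are hypothetical stand-ins for the milk powder standards.

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = a*x + b, plus the coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical Ca emission intensities vs. concentration in milk powder standards
conc = [0.0, 0.5, 1.0, 2.0, 4.0]           # wt% Ca
intensity = [120, 980, 1890, 3750, 7400]   # arbitrary units
slope, intercept, r2 = linear_fit(conc, intensity)
print(f"slope={slope:.0f}, intercept={intercept:.0f}, R^2={r2:.4f}")
```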

  18. Advances in Quantitative Analyses and Reference Materials Related to Laser Ablation ICP-MS: A Look at Methods and New Directions

    NASA Astrophysics Data System (ADS)

    Koenig, A. E.; Ridley, W. I.

    2009-12-01

    al. 2002), the MACS-1 and MACS-3 Ca carbonate RMs and a prototype Ca phosphate RM. Other work in-house currently includes testing of additional sulfide materials (Fe and Ni sulfides) and a gypsum material. Data for several matrices and RMs will be presented using multiple laser wavelengths. For new methods development regarding quantitative analyses, we have developed several new methods for quantitative trace element mapping in a variety of mineral, biomineral and materials applications. Rapid trace element mapping in bones (Koenig et al. 2009) is not only quantitative for trace elements but provides data that would be difficult to obtain as quickly or accurately by EPMA or other techniques. A method has been developed for rapid mapping of trace elements in building materials and other complex rock materials using a modification of the sum to 100% method presented by others (e.g. Leach and Heftje, 2001). This paper will outline new methods of integrating imaging and analytical data from EPMA, SEM, Raman and other techniques that improve the utility, accuracy and overall science of the subsequent LA-ICP-MS. Additional new directions for quantitative analyses of fluid inclusions, tissues, minerals and biological samples will be discussed.

  19. Comparing the accuracy of quantitative versus qualitative analyses of interim PET to prognosticate Hodgkin lymphoma: a systematic review protocol of diagnostic test accuracy

    PubMed Central

    Procházka, Vít; Klugar, Miloslav; Bachanova, Veronika; Klugarová, Jitka; Tučková, Dagmar; Papajík, Tomáš

    2016-01-01

    Introduction Hodgkin lymphoma is an effectively treated malignancy, yet 20% of patients relapse or are refractory to front-line treatments, with potentially fatal outcomes. Early detection of poor treatment responders is crucial for appropriate application of tailored treatment strategies. Tumour metabolic imaging of Hodgkin lymphoma using visual (qualitative) 18-fluorodeoxyglucose positron emission tomography (FDG-PET) is the gold standard for staging and final outcome assessment, but results gathered during the interim period are less accurate. Analysis of continuous metabolic-morphological data from (quantitative) FDG-PET may enhance the robustness of interim disease monitoring and help improve treatment decision-making. The objective of this review is to compare the diagnostic test accuracy of quantitative versus qualitative interim FDG-PET in the prognostication of patients with Hodgkin lymphoma. Methods The literature on this topic will be reviewed in a 3-step strategy that follows methods described by the Joanna Briggs Institute (JBI). First, the MEDLINE and EMBASE databases will be searched. Second, listed databases for published literature (MEDLINE, Tripdatabase, Pedro, EMBASE, the Cochrane Central Register of Controlled Trials and WoS) and unpublished literature (OpenGrey, Current Controlled Trials, MedNar, ClinicalTrials.gov, COS Conference Papers Index and the International Clinical Trials Registry Platform of the WHO) will be queried. Third, 2 independent reviewers will analyse titles, abstracts and full texts, perform a hand search of relevant studies, and then perform critical appraisal and data extraction from selected studies using the DATARI tool (JBI). If possible, a statistical meta-analysis will be performed on pooled sensitivity and specificity data gathered from the selected studies. Statistical heterogeneity will be assessed. Funnel plots, Begg's rank correlations and Egger's regression tests will be used to detect and/or correct publication

  20. Validation of Reference Genes for Transcriptional Analyses in Pleurotus ostreatus by Using Reverse Transcription-Quantitative PCR

    PubMed Central

    Castanera, Raúl; López-Varas, Leticia; Pisabarro, Antonio G.

    2015-01-01

    Recently, the lignin-degrading basidiomycete Pleurotus ostreatus has become a widely used model organism for fungal genomic and transcriptomic analyses. The increasing interest in this species has led to an increasing number of studies analyzing the transcriptional regulation of multigene families that encode extracellular enzymes. Reverse transcription (RT) followed by real-time PCR is the most suitable technique for analyzing the expression of gene sets under multiple culture conditions. In this work, we tested the suitability of 13 candidate genes for their use as reference genes in P. ostreatus time course cultures for enzyme production. We applied three different statistical algorithms and obtained a combination of stable reference genes for optimal normalization of RT-quantitative PCR assays. This reference index can be used for future transcriptomic analyses and validation of transcriptome sequencing or microarray data. Moreover, we analyzed the expression patterns of a laccase and a manganese peroxidase (lacc10 and mnp3, respectively) in lignocellulose and glucose-based media using submerged, semisolid, and solid-state fermentation. By testing different normalization strategies, we demonstrate that the use of nonvalidated reference genes as internal controls leads to biased results and misinterpretations of the biological responses underlying expression changes. PMID:25862220

  1. Development and validation of a liquid chromatography-tandem mass spectrometric assay for quantitative analyses of triptans in hair.

    PubMed

    Vandelli, Daniele; Palazzoli, Federica; Verri, Patrizia; Rustichelli, Cecilia; Marchesi, Filippo; Ferrari, Anna; Baraldi, Carlo; Giuliani, Enrico; Licata, Manuela; Silingardi, Enrico

    2016-04-01

    Triptans, selective 5-HT1B/1D receptor agonists, are specific drugs widely used for the acute treatment of migraine. Correct intake of triptans is essential for effective treatment; nevertheless, patients often underuse, misuse, overuse or take triptans inconsistently, i.e., not following the prescribed therapy. Drug analysis in hair can be a powerful tool for monitoring patient compliance with therapy, since it greatly increases the time window of detection compared with analyses in biological fluids such as plasma or urine. In the present study, a liquid chromatography-tandem mass spectrometric (LC-MS/MS) method was developed and validated for the quantitative analysis in human hair of five triptans commonly prescribed in Italy: almotriptan (AL), eletriptan (EP), rizatriptan (RIZ), sumatriptan (SUM) and zolmitriptan (ZP). Hair samples were decontaminated and incubated overnight in diluted hydrochloric acid; the extracts were purified on mixed-mode SPE cartridges and analyzed by LC-MS/MS under gradient elution in positive multiple reaction monitoring (MRM) mode. The procedure was fully validated in terms of selectivity, linearity, limit of detection (LOD) and lower limit of quantitation (LLOQ), accuracy, precision, carry-over, recovery, matrix effect and dilution integrity. The method was linear in the range 10-1000 pg/mg hair, with R² values of at least 0.990; the validated LLOQ values were in the range 5-7 pg/mg hair. The method offered satisfactory precision (RSD <10%), accuracy (90-110%) and recovery (>85%). The validated procedure was applied to 147 authentic hair samples from subjects treated at the Headache Centre of Modena University Hospital, in order to verify the possibility of monitoring hair levels of the triptans taken. PMID:26970848
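Precision and accuracy criteria like those quoted above (RSD <10%, accuracy 90-110%) are typically verified on replicate quality-control samples. A generic sketch with hypothetical QC values, not the study's data:

```python
# Sketch: precision (relative standard deviation) and accuracy checks of
# the kind used in bioanalytical method validation. Values are invented.
import statistics

def rsd_percent(values):
    """Sample relative standard deviation, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def accuracy_percent(measured_mean, nominal):
    """Mean measured value as a percentage of the nominal concentration."""
    return 100.0 * measured_mean / nominal

qc_replicates = [98.0, 102.0, 99.5, 101.0, 100.5]  # pg/mg, nominal 100 pg/mg
rsd = rsd_percent(qc_replicates)
acc = accuracy_percent(statistics.mean(qc_replicates), 100.0)
print(round(rsd, 2), round(acc, 1))  # acceptance: RSD < 10%, accuracy 90-110%
```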

  2. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and to aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  4. Specific catalysis of asparaginyl deamidation by carboxylic acids: kinetic, thermodynamic, and quantitative structure-property relationship analyses.

    PubMed

    Connolly, Brian D; Tran, Benjamin; Moore, Jamie M R; Sharma, Vikas K; Kosky, Andrew

    2014-04-01

    Asparaginyl (Asn) deamidation can lead to altered potency, safety, and/or pharmacokinetics of therapeutic protein drugs. In this study, we investigated the effects of several different carboxylic acids on Asn deamidation rates using an IgG1 monoclonal antibody (mAb1*) and a model hexapeptide (peptide1) with the sequence YGKNGG. Thermodynamic analyses of the kinetic data revealed that higher deamidation rates are associated predominantly with more negative ΔS and, to a lesser extent, with more positive ΔH. The observed differences in deamidation rates were attributed to the unique ability of each type of carboxylic acid to stabilize the energetically unfavorable transition-state conformations required for imide formation. Quantitative structure-property relationship (QSPR) analysis using the kinetic data demonstrated that molecular descriptors encoding the geometric spatial distribution of atomic properties on various carboxylic acids are effective determinants of the deamidation reaction. Specifically, the number of O-O and O-H atom pairs on carboxyl and hydroxyl groups with interatomic distances of 4-5 Å on a carboxylic acid buffer appears to determine the rate of deamidation. Collectively, the results from structural and thermodynamic analyses indicate that carboxylic acids presumably form multiple hydrogen bonds and charge-charge interactions with the relevant deamidation site and provide alignment between the reactive atoms on the side chain and backbone. We propose that carboxylic acids catalyze deamidation by stabilizing a specific, energetically unfavorable transition-state conformation of l-asparaginyl intermediate II that readily facilitates bond formation between the γ-carbonyl carbon and the deprotonated backbone nitrogen for cyclic imide formation. PMID:24620787
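A descriptor like the O-O/O-H pair count described above is, generically, a count of atom pairs whose interatomic distance falls in a window. The sketch below illustrates that idea only; the coordinates, labels, and 4-5 Å window applied to them are invented, not taken from the QSPR study.

```python
# Sketch: counting O-O and O-H atom pairs with interatomic distances
# inside a 4-5 Angstrom window. Geometry is hypothetical.
import math
from itertools import combinations

def pairs_in_window(atoms, pair_types, lo=4.0, hi=5.0):
    """atoms: list of (element, (x, y, z)); pair_types: set of frozensets
    of element symbols. Returns the number of qualifying pairs."""
    count = 0
    for (e1, p1), (e2, p2) in combinations(atoms, 2):
        if frozenset((e1, e2)) in pair_types:
            if lo <= math.dist(p1, p2) <= hi:
                count += 1
    return count

atoms = [
    ("O", (0.0, 0.0, 0.0)),   # carboxyl oxygen
    ("O", (4.5, 0.0, 0.0)),   # hydroxyl oxygen, 4.5 A away
    ("H", (4.5, 1.0, 0.0)),   # hydroxyl hydrogen
    ("C", (2.0, 0.0, 0.0)),   # carbon, ignored by the descriptor
]
targets = {frozenset(("O", "O")), frozenset(("O", "H"))}
print(pairs_in_window(atoms, targets))
```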

  5. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background Normalization to reference genes, or housekeeping genes, yields more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies of important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development, and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 showed the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene
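The geNorm algorithm named above ranks candidates by an expression-stability measure M: for each gene, the average standard deviation of its pairwise log2 expression ratios against every other candidate, across samples (lower M = more stable). A minimal sketch of that definition, on invented expression values rather than the cotton data:

```python
# Sketch of a geNorm-style stability measure M. Lower M = more stable.
# Expression values are hypothetical relative quantities.
import math
import statistics

def genorm_m(expr):
    """expr: dict of gene -> list of relative expression values per sample."""
    m = {}
    for g in expr:
        sds = [
            statistics.stdev(math.log2(a / b) for a, b in zip(expr[g], expr[h]))
            for h in expr
            if h != g
        ]
        m[g] = sum(sds) / len(sds)
    return m

expr = {
    "GhUBQ14": [1.0, 1.1, 0.9, 1.0],   # nearly constant
    "GhPP2A1": [2.0, 2.2, 1.8, 2.0],   # covaries with GhUBQ14
    "GhFBX6":  [1.0, 3.0, 0.5, 2.0],   # highly variable
}
m = genorm_m(expr)
worst = max(m, key=m.get)
print(worst)
```

Genes whose expression covaries across samples give ratios with low spread and so score as stable; the erratic candidate is flagged by the highest M.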

  6. Quantitative solid-state 13C nuclear magnetic resonance spectrometric analyses of wood xylem: effect of increasing carbohydrate content

    USGS Publications Warehouse

    Bates, A.L.; Hatcher, P.G.

    1992-01-01

    Isolated lignin with a low carbohydrate content was spiked with increasing amounts of alpha-cellulose, and then analysed by solid-state 13C nuclear magnetic resonance (NMR) using cross-polarization with magic angle spinning (CPMAS) and dipolar dephasing methods in order to assess the quantitative reliability of CPMAS measurement of carbohydrate content and to determine how increasingly intense resonances for carbohydrate carbons affect calculations of the degree of lignin's aromatic ring substitution and methoxyl carbon content. Comparisons were made of the carbohydrate content calculated by NMR with carbohydrate concentrations obtained by phenol-sulfuric acid assay and by calculation from the known amounts of cellulose added. The NMR methods used in this study yield overestimates for carbohydrate carbons due to resonance area overlap from the aliphatic side chain carbons of lignin. When corrections are made for these overlapping resonance areas, the NMR results agree very well with results obtained by other methods. Neither the calculated methoxyl carbon content nor the degree of aromatic ring substitution in lignin, both calculated from dipolar dephasing spectra, change with cellulose content. Likewise, lignin methoxyl content does not correlate with cellulose abundance when measured by integration of CPMAS spectra. © 1992.

  7. Interfacing microwells with nanoliter compartments: a sampler generating high-resolution concentration gradients for quantitative biochemical analyses in droplets.

    PubMed

    Gielen, Fabrice; Buryska, Tomas; Van Vliet, Liisa; Butz, Maren; Damborsky, Jiri; Prokop, Zbynek; Hollfelder, Florian

    2015-01-01

    Analysis of concentration dependencies is key to the quantitative understanding of biological and chemical systems. In experimental tests involving concentration gradients such as inhibitor library screening, the number of data points and the ratio between the stock volume and the volume required in each test determine the quality and efficiency of the information gained. Titerplate assays are currently the most widely used format, even though they require microlitre volumes. Compartmentalization of reactions in pico- to nanoliter water-in-oil droplets in microfluidic devices provides a solution for massive volume reduction. This work addresses the challenge of producing microfluidic-based concentration gradients in a way that every droplet represents one unique reagent combination. We present a simple microcapillary technique able to generate such series of monodisperse water-in-oil droplets (with a frequency of up to 10 Hz) from a sample presented in an open well (e.g., a titerplate). Time-dependent variation of the well content results in microdroplets that represent time capsules of the composition of the source well. By preserving the spatial encoding of the droplets in tubing, each reactor is assigned an accurate concentration value. We used this approach to record kinetic time courses of the haloalkane dehalogenase DbjA and analyzed 150 combinations of enzyme/substrate/inhibitor in less than 5 min, resulting in conclusive Michaelis-Menten and inhibition curves. Avoiding chips and merely requiring two pumps, a magnetic plate with a stirrer, tubing, and a pipet tip, this easy-to-use device rivals the output of much more expensive liquid handling systems using a fraction (∼100-fold less) of the reagents consumed in microwell format. PMID:25496166
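The conclusive Michaelis-Menten curves mentioned above come from fitting rate-versus-substrate data. A minimal sketch, using the classic Lineweaver-Burk linearization on synthetic data (not DbjA measurements; a nonlinear fit would normally be preferred for noisy data):

```python
# Sketch: Michaelis-Menten parameter estimation via the Lineweaver-Burk
# linearization 1/v = (Km/Vmax)(1/[S]) + 1/Vmax. Data are synthetic,
# generated from Vmax = 10 and Km = 2.
import numpy as np

def michaelis_menten_fit(s, v):
    """Return (Vmax, Km) from a linear fit of 1/v against 1/s."""
    slope, intercept = np.polyfit(1.0 / np.asarray(s), 1.0 / np.asarray(v), 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km

s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # substrate concentrations
v = 10.0 * s / (2.0 + s)                  # exact MM rates: Vmax=10, Km=2
vmax, km = michaelis_menten_fit(s, v)
print(round(vmax, 3), round(km, 3))
```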

  8. Value of Quantitative and Qualitative Analyses of Circulating Cell-Free DNA as Diagnostic Tools for Hepatocellular Carcinoma

    PubMed Central

    Liao, Wenjun; Mao, Yilei; Ge, Penglei; Yang, Huayu; Xu, Haifeng; Lu, Xin; Sang, Xinting; Zhong, Shouxian

    2015-01-01

    Abstract Qualitative and quantitative analyses of circulating cell-free DNA (cfDNA) are potential methods for the detection of hepatocellular carcinoma (HCC). Many studies have evaluated these approaches, but the results have been variable. This meta-analysis is the first to synthesize these published results and evaluate the use of circulating cfDNA values for HCC diagnosis. All articles that met our inclusion criteria were assessed using QUADAS guidelines after the literature search. We also investigated 3 subgroups in this meta-analysis: quantitative analysis of abnormal concentrations of circulating cfDNA; qualitative analysis of single-gene methylation alterations; and multiple analyses combined with alpha-fetoprotein (AFP). Statistical analyses were performed using the software Stata 12.0. We synthesized the published results and calculated accuracy measures (pooled sensitivity and specificity, positive/negative likelihood ratios [PLRs/NLRs], diagnostic odds ratios [DORs], and corresponding 95% confidence intervals [95% CIs]). Data were pooled using a bivariate generalized linear mixed model. Furthermore, summary receiver operating characteristic curves and the area under the curve (AUC) were used to summarize overall test performance. Heterogeneity and publication bias were also examined. A total of 2424 subjects, including 1280 HCC patients, from 22 studies were recruited in this meta-analysis. Pooled sensitivity, specificity, PLR, NLR, DOR, and AUC (with 95% CIs) for quantitative analysis were 0.741 (95% CI: 0.610–0.840), 0.851 (95% CI: 0.718–0.927), 4.970 (95% CI: 2.694–9.169), 0.304 (95% CI: 0.205–0.451), 16.347 (95% CI: 8.250–32.388), and 0.86 (95% CI: 0.83–0.89), respectively. For qualitative analysis, the values were 0.538 (95% CI: 0.401–0.669), 0.944 (95% CI: 0.889–0.972), 9.545 (95% CI: 5.298–17.196), 0.490 (95% CI: 0.372–0.646), 19.491 (95% CI: 10.458–36.329), and 0.87 (95% CI: 0.84–0.90), respectively. After combining with AFP assay, the
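The accuracy measures pooled above all derive from 2×2 contingency data. A sketch of the standard definitions, computed on a single hypothetical table (not the meta-analysis data):

```python
# Sketch: diagnostic-accuracy measures from a 2x2 table.
# tp/fp/fn/tn counts are invented for illustration.
def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    plr = sens / (1 - spec)        # positive likelihood ratio
    nlr = (1 - sens) / spec        # negative likelihood ratio
    dor = plr / nlr                # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return sens, spec, plr, nlr, dor

sens, spec, plr, nlr, dor = diagnostic_measures(tp=74, fp=15, fn=26, tn=85)
print(round(sens, 2), round(spec, 2), round(dor, 2))
```

Meta-analyses such as this one pool these per-study quantities (e.g. in a bivariate model) rather than computing them from a single table.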

  9. Genome-Wide Identification and Validation of Reference Genes in Infected Tomato Leaves for Quantitative RT-PCR Analyses

    PubMed Central

    Müller, Oliver A.; Grau, Jan; Thieme, Sabine; Prochaska, Heike; Adlung, Norman; Sorgatz, Anika; Bonas, Ulla

    2015-01-01

    The Gram-negative bacterium Xanthomonas campestris pv. vesicatoria (Xcv) causes bacterial spot disease of pepper and tomato by direct translocation of type III effector proteins into the plant cell cytosol. Once in the plant cell the effectors interfere with host cell processes and manipulate the plant transcriptome. Quantitative RT-PCR (qRT-PCR) is usually the method of choice to analyze transcriptional changes of selected plant genes. Reliable results depend, however, on measuring stably expressed reference genes that serve as internal normalization controls. We identified the most stably expressed tomato genes based on microarray analyses of Xcv-infected tomato leaves and evaluated the reliability of 11 genes for qRT-PCR studies in comparison to four traditionally employed reference genes. Three different statistical algorithms, geNorm, NormFinder and BestKeeper, concordantly determined the superiority of the newly identified reference genes. The most suitable reference genes encode proteins with homology to PHD finger family proteins and the U6 snRNA-associated protein LSm7. In addition, we identified pepper orthologs and validated several genes as reliable normalization controls for qRT-PCR analysis of Xcv-infected pepper plants. The newly identified reference genes will be beneficial for future qRT-PCR studies of the Xcv-tomato and Xcv-pepper pathosystems, as well as for the identification of suitable normalization controls for qRT-PCR studies of other plant-pathogen interactions, especially, if related plant species are used in combination with bacterial pathogens. PMID:26313760

  11. Quantitative analysis of trace elements with HR-ICP-MS Element2: an example of application in the calcite shell of the Great Scallop Pecten maximus.

    NASA Astrophysics Data System (ADS)

    Richard, M.; Chauvaud, L.; Benoit, M.; Thebault, J.; L'Helguen, S.; Hemond, C.; Maguer, J.; Sinquin, G.

    2008-12-01

    Carbonate minerals are abundant at the Earth's surface and are produced by a number of processes, including precipitation from hydrothermal fluids and synthesis by organisms such as corals, foraminifera, molluscs, and bacteria. Consequently, they are found in a large variety of environments. Their isotopic compositions (Sr, C, or O) and trace element concentrations are widely used to understand or reconstruct biological, geological, and biogeochemical processes. A large scientific community regards the elemental composition of bivalve shells as a promising recorder of environmental parameters such as sea surface temperature, salinity, and primary productivity. However, we have compiled evidence that trace element variation within shells can be species-dependent or can change within a complex network of environmental interactions. In this context, a better understanding of the incorporation of elements from seawater into biogenic carbonate is necessary before the use of these proxies can be generalized. Daily shell growth in the calcitic bivalve Pecten maximus has been extensively measured, and these daily growth marks can be used to date each subsequent sample of calcium carbonate. In this study, micro-sampling of carbonate powder along the shell was carried out and the samples were analyzed by high-resolution inductively coupled plasma-mass spectrometry (HR-ICP-MS, Finnigan Element2). This method led to quantitative detection of trace elements in biocarbonates and to the accurate reconstruction of ontogenetic profiles of elemental ratios at a 3-day temporal resolution. Repeated analyses of different growth-layer sections on the same valve showed that the trace elements are homogeneously distributed along the shell. Mo concentrations were reproducible across several scallop individuals from the same location over different years and from different temperate coastal environments. Each profile was characterised by a background level punctuated by sharp episodic peaks occurring in spring (May). Some hypotheses will be

  12. Quantitative evaluation of grain shapes by utilizing elliptic Fourier and principal component analyses: Implications for sedimentary environment discrimination

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Fujiwara, H.; Ohta, T.

    2013-12-01

    Fourier analysis has allowed new advances in determining the shape of sand grains. However, full quantification of grain shapes has not yet been accomplished, because Fourier expansion produces numerous descriptors, making it difficult to give a comprehensive interpretation to the results of Fourier analysis. To overcome this difficulty, this study focuses on the combined application of elliptic Fourier and principal component analyses (EF-PCA). The EF-PCA method reduces the number of extracted Fourier variables and enables visual inspection of the results of Fourier analysis, thereby facilitating the understanding of the sedimentological significance of the results obtained by Fourier expansion. Quartz grains of 0.250-0.355 mm collected from glacial, foreshore, fluvial, and aeolian environments were imaged with a digital microscope at 200× magnification. The elliptic Fourier coefficients of the grain outlines were then analyzed using the program package SHAPE (Iwata and Ukai, 2002). To examine the degree of roundness and surface smoothness of the grains, principal component analysis was performed on both the unstandardized and standardized data matrices obtained by elliptic Fourier analysis. EF-PCA based on the unstandardized data matrix extracted descriptors of the overall form and shape of the grains, because the unstandardized matrix enhances the contribution of large-amplitude, low-frequency trigonometric functions. The shape descriptors extracted by this method can be interpreted as an elongation index (REF1) and multiple bump indices (REF2, REF3, and REF2 + REF3). These descriptors indicate that aeolian, foreshore, and fluvial sediments contain grains with shapes similar to circles, ellipses, and cylinders, respectively. Meanwhile, EF-PCA based on the standardized data matrix enhanced the contribution of low-amplitude, high-frequency trigonometric functions, meaning that
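The PCA step of the EF-PCA pipeline (Fourier coefficients in, a few interpretable components out) can be sketched generically. The coefficient matrix below is random, standing in for real SHAPE output; only the mechanics are illustrated.

```python
# Sketch: PCA of a matrix of elliptic Fourier coefficients
# (rows = grains, columns = coefficients). Input data are random.
import numpy as np

def pca(x, n_components=2):
    """Return component scores and explained-variance ratios via SVD
    of the column-centered data matrix."""
    xc = x - x.mean(axis=0)
    u, sv, vt = np.linalg.svd(xc, full_matrices=False)
    scores = xc @ vt[:n_components].T
    var_ratio = (sv ** 2) / np.sum(sv ** 2)
    return scores, var_ratio[:n_components]

rng = np.random.default_rng(0)
coeffs = rng.normal(size=(50, 20))   # 50 grains x 20 Fourier coefficients
scores, var_ratio = pca(coeffs, n_components=2)
print(scores.shape)
```

In the study's workflow, the resulting component scores (e.g. REF1, REF2) are then interpreted as shape indices such as elongation or bumpiness.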

  13. Diachronous fault array growth within continental rift basins: Quantitative analyses from the East Shetland Basin, northern North Sea

    NASA Astrophysics Data System (ADS)

    Claringbould, Johan; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Odinsen, Tore

    2016-04-01

    The evolution of rift basins has been the subject of many studies; however, these studies have mainly been restricted to investigating the geometry of rift-related fault arrays. The relative timing of development of the individual faults that make up an array is not yet well constrained. First-order tectono-stratigraphic models for rifts predict that normal faults develop broadly synchronously throughout the basin during a temporally distinct 'syn-rift' episode. However, largely due to the mechanical interaction between adjacent structures, distinctly diachronous activity is known to occur at the scale of individual fault segments and systems. Our limited understanding of how individual segments and systems contribute to array-scale strain largely reflects the limited dimensions and resolution of the data available and the methods applied. Here we utilize a regionally extensive subsurface dataset comprising multiple 3D seismic MegaSurveys (10,000 km2), long (>75 km) 2D seismic profiles, and exploration wells to investigate the evolution of the fault array in the East Shetland Basin, North Viking Graben, northern North Sea. Previous studies propose that this basin formed in response to multiphase rifting during two temporally distinct extensional phases in the Permian-Triassic and Middle-to-Late Jurassic, separated by a period of tectonic quiescence and thermal subsidence in the Early Jurassic. We document the timing of growth of individual structures within the rift-related fault array across the East Shetland Basin, constraining the progressive migration of strain from pre-Triassic to Late Jurassic. The methods used include (i) qualitative isochron map analysis, (ii) quantitative analysis of syn-kinematic thickness differences across faults and expansion-index calculations, and (iii) along-fault throw-depth and backstripped displacement-length analyses. In contrast to established models, we demonstrate that the initiation, growth, and cessation of individual fault segments and
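Of the methods listed, the expansion index is the simplest to state: the ratio of hanging-wall to footwall thickness of a stratal unit across a fault, with values above 1 suggesting the fault was active during deposition of that unit. A sketch with invented thicknesses, not East Shetland Basin data:

```python
# Sketch: expansion index = hanging-wall / footwall thickness across a
# fault; EI > 1 implies syn-depositional fault growth. Values are invented.
def expansion_index(hw_thickness, fw_thickness):
    return hw_thickness / fw_thickness

units = {                      # unit -> (hanging wall, footwall), metres
    "Late Jurassic":  (420.0, 180.0),
    "Early Jurassic": (205.0, 200.0),
    "Triassic":       (600.0, 310.0),
}
# Flag units whose expansion index clearly exceeds 1 (threshold arbitrary):
active = [u for u, (hw, fw) in units.items() if expansion_index(hw, fw) > 1.1]
print(active)
```

Applied unit by unit up a fault's stratigraphy, this kind of calculation is what constrains when an individual fault initiated and died.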

  14. 75 FR 29537 - Draft Transportation Conformity Guidance for Quantitative Hot-spot Analyses in PM2.5

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-26

    ... (58 FR 62188) and has subsequently published several amendments. II. Background on the Draft Guidance.... In its March 10, 2006 final rule (71 FR 12468), EPA stated that quantitative PM 2.5 and PM 10 hot... its March 2006 final rule (71 FR 12502), this draft guidance was developed in coordination with...

  15. Marker and real-time quantitative analyses to confirm hemophilia B carrier diagnosis of a complete deletion of the F9 gene.

    PubMed

    Venceslá, Adoración; Barceló, María Jesús; Baena, Manel; Quintana, Manuel; Baiget, Montserrat; Tizzano, Eduardo F

    2007-11-01

    Approximately 3% of hemophilia B patients have major deletions in the F9 gene, half of which are complete. Marker and quantitative PCR analyses were employed for carrier diagnosis in a family of a mentally retarded hemophilia B patient with a total deletion of the F9 gene and neighbor genes. Both methodologies allowed the confirmation of carrier or non-carrier status. PMID:18024414

  16. Quantitative Assessment of Protein Structural Models by Comparison of H/D Exchange MS Data with Exchange Behavior Accurately Predicted by DXCOREX

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Pantazatos, Dennis; Li, Sheng; Hamuro, Yoshitomo; Hilser, Vincent J.; Woods, Virgil L.

    2012-01-01

    Peptide amide hydrogen/deuterium exchange mass spectrometry (DXMS) data are often used to qualitatively support models for protein structure. We have developed and validated a method (DXCOREX) by which exchange data can be used to quantitatively assess the accuracy of three-dimensional (3-D) models of protein structure. The method utilizes the COREX algorithm to predict a protein's amide hydrogen exchange rates by reference to a hypothesized structure, and these values are used to generate a virtual data set (deuteron incorporation per peptide) that can be quantitatively compared with the deuteration level of the peptide probes measured by hydrogen exchange experimentation. The accuracy of DXCOREX was established in studies performed with 13 proteins for which both high-resolution structures and experimental data were available. The DXCOREX-calculated and experimental data for each protein were highly correlated. We then employed correlation analysis of DXCOREX-calculated versus DXMS experimental data to assess the accuracy of a recently proposed structural model for the catalytic domain of a Ca2+-independent phospholipase A2. The model's calculated exchange behavior was highly correlated with the experimental exchange results available for the protein, supporting the accuracy of the proposed model. This method of analysis will substantially increase the precision with which experimental hydrogen exchange data can help decipher challenging questions regarding protein structure and dynamics.
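The correlation analysis described above reduces to comparing two per-peptide vectors. A minimal sketch, with invented deuteration values standing in for DXCOREX-predicted and DXMS-measured data:

```python
import numpy as np

# Hypothetical per-peptide deuteration levels: model-predicted vs. measured.
# The Pearson correlation coefficient quantifies how well a structural
# model's predicted exchange matches experiment (values are illustrative).
predicted = np.array([0.12, 0.35, 0.50, 0.64, 0.81, 0.90])
measured  = np.array([0.10, 0.33, 0.55, 0.60, 0.85, 0.88])

r = np.corrcoef(predicted, measured)[0, 1]
print(f"Pearson r = {r:.3f}")
```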

  17. Characterization of a Highly Conserved Histone Related Protein, Ydl156w, and Its Functional Associations Using Quantitative Proteomic Analyses*

    PubMed Central

    Gilmore, Joshua M.; Sardiu, Mihaela E.; Venkatesh, Swaminathan; Stutzman, Brent; Peak, Allison; Seidel, Chris W.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.

    2012-01-01

    A significant challenge in biology is to functionally annotate novel and uncharacterized proteins. Several approaches are available for deducing the function of proteins in silico based upon sequence homology and physical or genetic interaction, yet these approaches are limited to proteins with well-characterized domains, paralogs, and/or orthologs in other species, and depend on the availability of suitable large-scale data sets. Here, we present a quantitative proteomics approach extending the protein network of the core histones H2A, H2B, H3, and H4 in Saccharomyces cerevisiae, within which a novel associated protein, the previously uncharacterized Ydl156w, was identified. To infer the role of Ydl156w, we designed and applied integrative bioinformatics, quantitative proteomics, and biochemistry approaches. Reciprocal analysis of Ydl156w protein interactions demonstrated a strong association with all four histones and with proteins strongly associated with histones, including Rim1, Rfa2 and 3, Yku70, and Yku80. Through a subsequent combination of the focused quantitative proteomics experiments with available large-scale genetic interaction data and Gene Ontology functional associations, we provided sufficient evidence to associate Ydl156w with multiple processes including chromatin remodeling, transcription, and DNA repair/replication. To gain deeper insight into the role of Ydl156w in histone biology, we investigated the effect of genetic deletion of ydl156w on H4-associated proteins, which led to a dramatic decrease in the association of H4 with RNA polymerase III proteins. The implication of a role for Ydl156w in RNA polymerase III mediated transcription was subsequently verified by RNA-Seq experiments. Finally, using these approaches we generated a refined network of Ydl156w-associated proteins. PMID:22199229

  18. Reduced Number of Pigmented Neurons in the Substantia Nigra of Dystonia Patients? Findings from Extensive Neuropathologic, Immunohistochemistry, and Quantitative Analyses

    PubMed Central

    Iacono, Diego; Geraci-Erck, Maria; Peng, Hui; Rabin, Marcie L.; Kurlan, Roger

    2015-01-01

    Background Dystonias (Dys) represent the third most common movement disorder after essential tremor (ET) and Parkinson's disease (PD). While some pathogenetic mechanisms and genetic causes of Dys have been identified, little is known about their neuropathologic features. Previous neuropathologic studies have reported generically defined neuronal loss in various cerebral regions of Dys brains, mostly in the basal ganglia (BG), and specifically in the substantia nigra (SN). Enlarged pigmented neurons in the SN of Dys patients with and without specific genetic mutations (e.g., GAG deletions in DYT1 dystonia) have also been described. Whether or not Dys brains are associated with decreased numbers or other morphometric changes of specific neuronal types is unknown and has never been addressed with quantitative methodologies. Methods Quantitative immunohistochemistry protocols were used to estimate neuronal counts and volumes of nigral pigmented neurons in 13 SN of Dys patients and 13 SN of age-matched control subjects (C). Results We observed a significant reduction (∼20%) of pigmented neurons in the SN of Dys compared to C (p<0.01). Neither significant volumetric changes nor evident neurodegenerative signs were observed in the remaining pool of nigral pigmented neurons in Dys brains. These novel quantitative findings were confirmed after exclusion of possible co-occurring SN pathologies including Lewy pathology, tau-neurofibrillary tangles, β-amyloid deposits, ubiquitin (ubiq), and phosphorylated-TAR DNA-binding protein 43 (pTDP43)-positive inclusions. Discussion A reduced number of nigral pigmented neurons in the absence of evident neurodegenerative signs in Dys brains could indicate previously unconsidered pathogenetic mechanisms of Dys such as neurodevelopmental defects in the SN. PMID:26069855

  19. Quantitative and qualitative analyses of under-balcony acoustics with real and simulated arrays of multiple sources

    NASA Astrophysics Data System (ADS)

    Kwon, Youngmin

    The objective of this study was to quantitatively and qualitatively identify the acoustics of the under-balcony areas in music performance halls under realistic conditions close to those of an orchestral performance, taking into consideration multiple musical instrument sources and their diverse sound propagation patterns. The study executed monaural and binaural impulse response measurements with an array of sixteen directional sources (loudspeakers) for acoustical assessments. Actual measurements in a performance hall as well as computer simulations were conducted for the quantitative assessments. Psycho-acoustical listening tests were conducted for the qualitative assessments using the music signals binaurally recorded in the hall with the same source array. The results obtained from the multiple directional source tests were analyzed by comparing them to those obtained from tests performed with a single omni-directional source. These two sets of results obtained in the under-balcony area were also compared to those obtained in the main orchestra area. The quantitative results showed that the use of a single source conforming to conventional measurement protocol appears adequate for measurements of room acoustical parameters such as EDT_mid, RT_mid, C80_(500 Hz-2 kHz), IACC_E3, and IACC_L3. These quantitative measures, however, did not always agree with the results of the qualitative assessments. The primary reason is that, in many other respects of acoustical analysis, the acoustical phenomena observed in the multiple-source measurements were not similar to those in the single-source measurements. Remarkable differences were observed in time-domain impulse responses, frequency content, spectral distribution, directional distribution of the early reflections, and in sound energy density over time. Therefore, the room acoustical parameters alone should not be taken as the acoustical representation characterizing a performance hall or a specific area such as the under
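Parameters such as EDT are derived from measured impulse responses. A rough sketch of the standard route (Schroeder backward integration on a purely synthetic exponential decay; the sample rate, decay time, and 10 dB evaluation range are illustrative choices, not values from the study):

```python
import math

# Sketch of reverberation-time estimation from a room impulse response via
# Schroeder backward integration (the impulse response here is synthetic).
fs = 1000                     # sample rate (Hz)
rt_true = 1.2                 # synthetic decay: 60 dB of decay over 1.2 s
h = [math.exp(-6.91 * n / (fs * rt_true)) for n in range(2 * fs)]

# Backward-integrated energy decay curve (EDC), in dB re: total energy
energy = [x * x for x in h]
tail = 0.0
edc = [0.0] * len(energy)
for i in range(len(energy) - 1, -1, -1):
    tail += energy[i]
    edc[i] = tail
edc_db = [10 * math.log10(e / edc[0]) for e in edc]

# EDT: time for the EDC to fall 10 dB, extrapolated to 60 dB (factor 6)
i10 = next(i for i, db in enumerate(edc_db) if db <= -10)
edt = 6 * i10 / fs
print(f"EDT ~= {edt:.2f} s")
```

For a pure exponential decay the EDT recovered this way matches the nominal reverberation time, as the assertion below checks.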

  20. Differential Label-free Quantitative Proteomic Analysis of Shewanella oneidensis Cultured under Aerobic and Suboxic Conditions by Accurate Mass and Time Tag Approach

    SciTech Connect

    Fang, Ruihua; Elias, Dwayne A.; Monroe, Matthew E.; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D.; Callister, Stephen J.; Moore, Ronald J.; Gorby, Yuri A.; Adkins, Joshua N.; Fredrickson, Jim K.; Lipton, Mary S.; Smith, Richard D.

    2006-04-01

    We describe the application of liquid chromatography coupled to mass spectrometry (LC/MS) without the use of stable isotope labeling for differential quantitative proteomics analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and sub-oxic conditions. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used to initially identify peptide sequences, and LC coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR) was used to confirm these identifications, as well as to measure relative peptide abundances. In total, 2343 peptides covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly in abundance, as judged by statistical approaches such as SAM, while another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis was transitioned from aerobic to sub-oxic conditions.
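The kind of relative-abundance comparison described above is commonly reported as a log2 fold change between conditions. A hedged sketch with invented protein names and intensities (the real study used peptide-level statistics such as SAM):

```python
import math

# Illustrative sketch: relative abundance ratio between suboxic and aerobic
# cultures for two hypothetical proteins, expressed as log2 fold change.
# A 10-fold increase corresponds to log2 FC of about +3.32.
abundances = {
    "anaerobic reductase (hypothetical)": (1500.0, 150.0),  # (suboxic, aerobic)
    "housekeeping protein (hypothetical)": (820.0, 800.0),
}

for protein, (suboxic, aerobic) in abundances.items():
    log2fc = math.log2(suboxic / aerobic)
    print(f"{protein}: log2 FC = {log2fc:+.2f}")
```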

  1. AquaLite, a bioluminescent label for immunoassay and nucleic acid detection: quantitative analyses at the attomol level

    NASA Astrophysics Data System (ADS)

    Smith, David F.; Stults, Nancy L.

    1996-04-01

    AquaLite® is a direct, bioluminescent label capable of detecting attomol levels of analyte in clinical immunoassays and assays for the quantitative measurement of nucleic acids. Bioluminescent immunoassays (BIAs) require no radioisotopes and avoid complex fluorescent measurements and many of the variables of indirect enzyme immunoassays (EIAs). AquaLite, a recombinant form of the photoprotein aequorin from a bioluminescent jellyfish, is coupled directly to antibodies to prepare bioluminescent conjugates for assay development. When the AquaLite-antibody complex is exposed to a solution containing calcium ions, a flash of blue light (λmax = 469 nm) is generated. The light signal is measured in commercially available luminometers that simultaneously inject a calcium solution and detect subattomol photoprotein levels in either test tubes or microtiter plates. Immunometric or 'sandwich' type assays are available for the quantitative measurement of human endocrine hormones and nucleic acids. The AquaLite TSH assay can detect 1 attomol of thyroid stimulating hormone (TSH) in 0.2 mL of human serum and is a useful clinical tool for diagnosing hyperthyroid patients. AquaLite-based nucleic acid detection permits quantifying attomol levels of specific nucleic acid markers and represents a possible solution to the difficult problem of quantifying the targets of nucleic acid amplification methods.
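Quantitation with a luminescent label of this kind rests on a calibration curve: light signal versus known analyte amounts, with unknowns read back off the fit. A sketch with entirely hypothetical signal values (real assays also handle background and non-linearity at the range edges):

```python
import numpy as np

# Hypothetical calibration: integrated light signal (relative light units,
# RLU) vs. analyte amount (attomol). A linear fit lets an unknown sample
# be quantified from its measured signal.
amount_amol = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
signal_rlu  = np.array([210.0, 405.0, 1010.0, 1995.0, 4020.0])

slope, intercept = np.polyfit(amount_amol, signal_rlu, 1)

unknown_rlu = 1500.0  # hypothetical measured signal
estimated = (unknown_rlu - intercept) / slope
print(f"slope = {slope:.1f} RLU/amol, estimate = {estimated:.2f} amol")
```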

  2. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT-, NG-, and UU-positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired-sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
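The paired-sample t-test used above compares the same samples at two time points. A minimal sketch with invented log10 DNA loads (eight hypothetical samples; the study used 70 and a full statistics package would also report the P value):

```python
import math
import statistics

# Illustrative paired comparison: hypothetical log10 DNA loads for the
# same eight swab samples measured at day 0 and again at day 28.
day0  = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 5.9]
day28 = [5.2, 4.7, 6.1, 5.4, 5.0, 5.6, 5.3, 5.9]

diffs = [b - a for a, b in zip(day0, day28)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t = mean_d / (sd_d / math.sqrt(len(diffs)))
print(f"mean difference = {mean_d:.3f}, paired t = {t:.2f}")
# |t| well below ~2.36 (critical value, df = 7) -> no significant change
```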

  3. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction

    PubMed Central

    Lu, Y.; Rong, C.Z.; Zhao, J.Y.; Lao, X.J.; Xie, L.; Li, S.; Qin, X.

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT-, NG-, and UU-positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired-sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005

  4. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and their content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance for genetic engineering approaches aiming to improve strawberry tolerance. However, studying gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper, and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable for normalizing expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis-epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may produce erroneous results. This study is the first survey of the stability of reference genes in strawberry cultivars under osmotic stresses and provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts. PMID:25445290
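The comparative delta-Ct method named above can be sketched in a few lines: for each candidate reference gene, take the standard deviation of its delta-Ct against every other candidate across samples, and rank genes by the mean of those SDs (lower = more stable). All Ct values below are invented, and geNorm/NormFinder/BestKeeper each use different statistics:

```python
import statistics

# Hypothetical Ct values for three candidate reference genes across four
# samples. Gene names echo the abstract; the numbers are illustrative only.
ct = {
    "DBP":    [22.1, 22.3, 22.0, 22.2],
    "HISTH4": [19.5, 19.8, 19.4, 19.6],
    "GAPDH":  [18.0, 19.2, 17.5, 18.9],   # deliberately noisy
}

def stability(gene):
    """Mean SD of delta-Ct between `gene` and every other candidate."""
    sds = []
    for other in ct:
        if other == gene:
            continue
        dct = [a - b for a, b in zip(ct[gene], ct[other])]
        sds.append(statistics.stdev(dct))
    return statistics.mean(sds)

ranked = sorted(ct, key=stability)  # most stable first
print("most stable -> least stable:", ranked)
```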

  5. Identification of Phosphorylated Cyclin-Dependent Kinase 1 Associated with Colorectal Cancer Survival Using Label-Free Quantitative Analyses

    PubMed Central

    Tyan, Yu-Chang; Hsiao, Eric S. L.; Chu, Po-Chen; Lee, Chung-Ta; Lee, Jenq-Chang; Chen, Yi-Ming Arthur; Liao, Pao-Chi

    2016-01-01

    Colorectal cancer is the most common form of cancer in the world, and the five-year survival rate is estimated to be almost 90% in the early stages. Therefore, the identification of potential biomarkers to assess the prognosis of early-stage colorectal cancer patients is critical for further clinical treatment. Dysregulated tyrosine phosphorylation, a significant regulator of signaling in cellular pathways, has been found in several diseases. In this study, we characterized the tyrosine phosphoproteome of colorectal cancer cell lines with different progression abilities (SW480 and SW620). We identified a total of 280 phosphotyrosine (pTyr) peptides comprising 287 pTyr sites from 261 proteins. Label-free quantitative analysis revealed differential levels of a total of 103 pTyr peptides between SW480 and SW620 cells. We showed that the cyclin-dependent kinase 1 (CDK1) pTyr15 level in SW480 cells was 3.3-fold greater than in SW620 cells, and these data corresponded with the label-free mass spectrometry-based proteomic quantification analysis. A high CDK1 pTyr15 level was associated with prolonged disease-free survival for stage II colorectal cancer patients (n = 79). Taken together, our results suggest that the CDK1 pTyr15 protein is a potential indicator of the progression of colorectal cancer. PMID:27383761

  6. LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells

    PubMed Central

    Wu, Tzu-Ching; Belteton, Samuel A.; Szymanski, Daniel B.; Umulis, David M.

    2016-01-01

    Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363
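The convex-hull idea at the heart of LobeFinder can be illustrated in miniature: hull vertices of a lobed outline approximate lobe tips, with nearby vertices merged into one lobe. This is a rough, hypothetical sketch; the published algorithm differs in detail (e.g., how lobes are validated against the cell boundary):

```python
import math

# Rough sketch of a convex-hull approach to lobe counting (not the
# published LobeFinder algorithm): hull vertices of a lobed cell outline
# mark candidate lobe tips; vertices closer than merge_dist collapse
# into a single lobe.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def count_lobes(outline, merge_dist=1.5):
    hull = convex_hull(outline)
    lobes = 0
    for i, v in enumerate(hull):
        if math.dist(v, hull[i - 1]) > merge_dist:  # far from previous tip
            lobes += 1
    return lobes

# Hypothetical four-lobed cell outline (a "plus" shape)
outline = [(0, 3), (1, 1), (3, 0), (1, -1), (0, -3), (-1, -1), (-3, 0), (-1, 1)]
print("lobes:", count_lobes(outline))
```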

  7. LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells.

    PubMed

    Wu, Tzu-Ching; Belteton, Samuel A; Pack, Jessica; Szymanski, Daniel B; Umulis, David M

    2016-08-01

    Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363

  8. International collaborative study of the endogenous reference gene LAT52 used for qualitative and quantitative analyses of genetically modified tomato.

    PubMed

    Yang, Litao; Zhang, Haibo; Guo, Jinchao; Pan, Liangwen; Zhang, Dabing

    2008-05-28

    One tomato (Lycopersicon esculentum) gene, LAT52, was proved to be a suitable endogenous reference gene for genetically modified (GM) tomato detection in a previous study. Herein are reported the results of a collaborative ring trial for international validation of the LAT52 gene as an endogenous reference gene, together with its analytical systems; 14 GMO detection laboratories from 8 countries were invited, and results were finally received from 13. These data confirmed the species specificity of the LAT52 gene by testing 10 plant genomic DNAs, as well as its low allelic variation and stable single copy number among 12 different tomato cultivars. Furthermore, the limit of detection of the LAT52 qualitative PCR was proved to be 0.1%, which corresponds to 11 copies of haploid tomato genomic DNA, and the limit of quantification of the quantitative PCR system was about 10 copies of haploid tomato genomic DNA, with acceptable PCR efficiency and linearity. Additionally, the bias between the test and true values of 8 blind samples ranged from 1.94 to 10.64%. All of these validation results indicate that the LAT52 gene is suitable for use as an endogenous reference gene for the identification and quantification of GM tomato and its derivatives. PMID:18442244
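The quantification bias reported above is a simple relative-error statistic. A sketch with hypothetical blind-sample values (the sample names, true GM percentages, and measurements are invented for illustration):

```python
# Illustrative quantification-bias check for hypothetical blind samples:
# bias (%) = |measured - true| / true * 100. Ring trials typically judge
# such values against an acceptance criterion.
samples = {
    "blind-1": (1.0, 1.05),   # (true GM %, measured GM %)
    "blind-2": (3.0, 2.85),
    "blind-3": (5.0, 5.53),
}

for name, (true_pct, measured_pct) in samples.items():
    bias = abs(measured_pct - true_pct) / true_pct * 100
    print(f"{name}: bias = {bias:.2f}%")
```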

  9. Comprehensive and quantitative proteomic analyses of zebrafish plasma reveals conserved protein profiles between genders and between zebrafish and human

    PubMed Central

    Li, Caixia; Tan, Xing Fei; Lim, Teck Kwang; Lin, Qingsong; Gong, Zhiyuan

    2016-01-01

    Omic approaches have been increasingly used in the zebrafish model for holistic understanding of molecular events and mechanisms of tissue functions. However, plasma is rarely used for omic profiling because of the technical challenges in collecting sufficient blood. In this study, we employed two mass spectrometric (MS) approaches for a comprehensive characterization of the zebrafish plasma proteome, i.e. conventional shotgun liquid chromatography-tandem mass spectrometry (LC-MS/MS) for an overview study and quantitative SWATH (Sequential Window Acquisition of all THeoretical fragment-ion spectra) for comparison between genders. A total of 959 proteins were identified in the shotgun profiling, with estimated concentrations spanning almost five orders of magnitude. Other than the presence of a few highly abundant female egg yolk precursor proteins (vitellogenins), the proteomic profiles of male and female plasmas were very similar in both number and abundance, and there were basically no other highly gender-biased proteins. The types of plasma proteins based on IPA (Ingenuity Pathway Analysis) classification and their tissue sources of production were also very similar. Furthermore, the zebrafish plasma proteome shares significant similarities with the human plasma proteome, in particular among the top abundant proteins, including apolipoproteins and complements. Thus, the current study provides a valuable dataset for future evaluation of plasma proteins in zebrafish. PMID:27071722

  10. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of a system comprising COTS products and standards that were not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not have been originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a 'stack-tree topology' that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
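A back-of-envelope illustration of the kind of reliability comparison such evaluations rest on, not the paper's actual model: assuming independent exponential failures, redundancy raises the probability that at least one path survives the mission. The failure rate and mission duration below are invented:

```python
import math

# Back-of-envelope sketch (not the paper's model): mission reliability
# for a single bus vs. a dual redundant bus, assuming independent
# exponential failures with rate lam (per hour).

def reliability_single(lam, t):
    return math.exp(-lam * t)

def reliability_dual(lam, t):
    # System survives if at least one of two independent buses survives.
    r = reliability_single(lam, t)
    return 1 - (1 - r) ** 2

lam = 1e-6           # hypothetical failure rate per hour
t = 10 * 365 * 24    # ten-year deep-space mission, in hours
print(f"single: {reliability_single(lam, t):.4f}")
print(f"dual:   {reliability_dual(lam, t):.4f}")
```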

  11. Comprehensive and quantitative proteomic analyses of zebrafish plasma reveals conserved protein profiles between genders and between zebrafish and human.

    PubMed

    Li, Caixia; Tan, Xing Fei; Lim, Teck Kwang; Lin, Qingsong; Gong, Zhiyuan

    2016-01-01

    Omic approaches have been increasingly used in the zebrafish model for holistic understanding of molecular events and mechanisms of tissue functions. However, plasma is rarely used for omic profiling because of the technical challenges in collecting sufficient blood. In this study, we employed two mass spectrometric (MS) approaches for a comprehensive characterization of the zebrafish plasma proteome, i.e. conventional shotgun liquid chromatography-tandem mass spectrometry (LC-MS/MS) for an overview study and quantitative SWATH (Sequential Window Acquisition of all THeoretical fragment-ion spectra) for comparison between genders. A total of 959 proteins were identified in the shotgun profiling, with estimated concentrations spanning almost five orders of magnitude. Other than the presence of a few highly abundant female egg yolk precursor proteins (vitellogenins), the proteomic profiles of male and female plasmas were very similar in both number and abundance, and there were basically no other highly gender-biased proteins. The types of plasma proteins based on IPA (Ingenuity Pathway Analysis) classification and their tissue sources of production were also very similar. Furthermore, the zebrafish plasma proteome shares significant similarities with the human plasma proteome, in particular among the top abundant proteins, including apolipoproteins and complements. Thus, the current study provides a valuable dataset for future evaluation of plasma proteins in zebrafish. PMID:27071722

  12. Quantitative anatomical and behavioral analyses of regeneration and collateral sprouting following spinal cord transection in the nurse shark (ginglymostoma cirratum).

    PubMed

    Gelderd, J B

    1979-01-01

    The spinal cord was transected at the mid-thoracic level in 32 nurse sharks. Four animals per group were sacrificed at intervals of 10, 20, 30, 40, 60 and 90 days postoperative. Two groups of fish underwent a subsequent spinal cord retransection at the same site at 90 days and were sacrificed 10 and 20 days later. Three sections of spinal cord were removed from each shark for histological analysis. Behaviorally, timed trials for swimming speed and a strength test for contraction of the axial musculature caudal to the lesion site were performed at 5-day postoperative intervals. Histological analysis showed little regeneration (9-13 percent) of two descending tracts 90 days following the lesion and no return of rostrally controlled movements caudal to the lesion. However, synaptic readjustment did occur caudal to the lesion. This phenomenon was attributed to local segmental sprouting of adjacent, intact nerve fibers. A close correlation was shown between this synaptic readjustment and the strength of uncontrollable undulatory movements seen caudal to the lesion site following spinal cord transection. The relationship of regeneration and collateral sprouting to quantitative behavioral changes is discussed. PMID:543459

  13. Quantitative analysis of coral communities of the Sanganeb Atoll (central Red Sea). I. The community structure of outer and inner reefs under differing hydrodynamic exposure [Quantitative Analyse von Korallengemeinschaften des Sanganeb-Atolls (mittleres Rotes Meer). I. Die Besiedlungsstruktur hydrodynamisch unterschiedlich exponierter Außen- und Innenriffe]

    NASA Astrophysics Data System (ADS)

    Mergner, H.; Schuhmacher, H.

    1985-12-01

    The Sanganeb-Atoll off Port Sudan is an elongate annular reef which rests on a probably raised block in the fracture zone along the Red Sea graben. Its gross morphology was most likely formed by subaerial erosion during low sea-level conditions. Features of its topography and hydrography are described. The prevailing wind waves are from the NE; hence, the outer and inner reef slopes are exposed to different hydrodynamic conditions. The sessile benthos was analysed using the quadrat method. Four test quadrats (5×5 m each) were selected on the outer and inner slopes at a depth of 10 m along a SSW-NNE transect across the atoll. Cnidaria were by far the dominant group; coralline algae, Porifera, Bryozoa and Ascidia together accounted for just under 3% of the living cover. Light and temperature intensities did not differ significantly at the sites studied; water movement, however, decreased in the following order: TQ IV (outer NE side of the reef ring) was exposed to strong swell and surf; TQ II (inner side of the SW ring) was met by a strong long-reef current; TQ I was situated on the outer lee of the SW atoll ring and TQ III in the inner lee of the NE side. This hydrodynamic gradient correlates with the composition of the coral communities: from predominantly branching Scleractinia (staghorn-like and other Acropora species and Pocillopora) in TQ IV, through a Lobophyllia, Porites and Xenia-dominated community in TQ II, and a mixed community with an increasing percentage of xeniid and alcyoniid soft corals in TQ I, to a community in TQ III dominated by the soft corals Sinularia and Dendronephthya. The cnidarian cover ranged between 42.4 and 56.6%, whereby the two exposed test quadrats had a higher living coverage than the protected ones. In total, 2649 colonies comprising 124 species of stony, soft and hydrocorals were recorded by an elaborate method of accurate in-situ mapping.
The 90 scleractinian species found include 3 species new to the Red Sea and 11 hitherto

  14. Quantitative and qualitative validations of a sonication-based DNA extraction approach for PCR-based molecular biological analyses.

    PubMed

    Dai, Xiaohu; Chen, Sisi; Li, Ning; Yan, Han

    2016-05-15

The aim of this study was to comprehensively validate the sonication-based DNA extraction method as a replacement for the so-called 'standard DNA extraction method' - the commercial kit method. Microbial cells in digested sludge samples, which contain relatively high amounts of PCR-inhibitory substances such as humic acid and protein, were used as the experimental material. A procedure involving solid/liquid separation of the sludge sample and dilution of both DNA templates and inhibitors, the minimum template amount for PCR-based analyses, and an in-depth understanding of bias from pyrosequencing analysis were obtained, confirming the suitability of the sonication-based DNA extraction method. PMID:26774955

  15. Quantitative and molecular analyses of mutation in a pSV2gpt transformed CHO cell line

    SciTech Connect

    Stankowski, L.F. Jr.; Tindall, K.R.; Hsie, A.W.

    1983-01-01

Following DNA-mediated gene transfer we have isolated a cell line useful for studying gene mutation at the molecular level. This line, AS52, derived from a hypoxanthine-guanine phosphoribosyl transferase (HGPRT)-deficient Chinese hamster ovary (CHO) cell line, carries a single copy of the E. coli xanthine-guanine phosphoribosyl transferase (XGPRT) gene (gpt) and exhibits a spontaneous mutant frequency of 20 TG-resistant mutants per 10⁶ clonable cells. As with HGPRT⁻ mutants, XGPRT⁻ mutants can be selected in 6-thioguanine. AS52 (XGPRT⁺) and wild-type CHO (HGPRT⁺) cells exhibit almost identical cytotoxic responses to various agents. We observed significant differences in mutation induction by UV light and ethyl methanesulfonate (EMS). Ratios of XGPRT⁻ to HGPRT⁻ mutants induced per unit dose (J/m² for UV light and μg/ml for EMS) are 1.4 and 0.70, respectively. Preliminary Southern blot hybridization analyses have been performed on 30 XGPRT⁻ AS52 mutants. A majority of spontaneous mutants have deletions ranging in size from 1 to 4 kilobases (9/19) up to complete loss of gpt sequences (4/19); the remainder have no detectable (5/19) or only minor (1/19) alterations. 5/5 UV-induced and 5/6 EMS-induced mutants do not show a detectable change. Similar analyses are underway for mutations induced by x-irradiation and ICR 191 treatment.
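The mutant-frequency figure quoted above (20 TG-resistant mutants per million clonable cells) rests on simple arithmetic that can be sketched as follows; the colony counts and cloning efficiency in this example are hypothetical illustrative values, not data from the study.

```python
# Sketch of the mutant-frequency arithmetic used in forward-mutation
# assays such as AS52/gpt: mutants per million clonable cells, corrected
# for cloning efficiency. All counts below are hypothetical.

def mutant_frequency_per_1e6(mutant_colonies, cells_selected, cloning_efficiency):
    """Mutants per 10^6 clonable cells.

    cells_selected     : cells plated into selection (e.g. 6-thioguanine)
    cloning_efficiency : fraction of cells able to form colonies (0-1)
    """
    clonable = cells_selected * cloning_efficiency
    return 1e6 * mutant_colonies / clonable

# Hypothetical example: 16 resistant colonies from 1e6 cells plated
# at 80% cloning efficiency -> 20 mutants per 10^6 clonable cells.
print(mutant_frequency_per_1e6(16, 1_000_000, 0.80))  # 20.0
```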

  16. Quantitative Plasmon Mode and Surface-Enhanced Raman Scattering Analyses of Strongly Coupled Plasmonic Nanotrimers with Diverse Geometries.

    PubMed

    Lee, Haemi; Kim, Gyeong-Hwan; Lee, Jung-Hoon; Kim, Nam Hoon; Nam, Jwa-Min; Suh, Yung Doug

    2015-07-01

Here, we quantitatively monitored and analyzed the spectral redistributions of the coupled plasmonic modes of trimeric Au nanostructures with two ∼1 nm interparticle gaps and single-dye-labeled DNA in each gap as a function of varying trimer symmetries. Our precise Mie scattering measurement with laser-scanning-assisted dark-field microscopy allows us to individually visualize the orientations of the radiation fields of the coupled plasmon modes of the trimers and to analyze the magnitude and direction of the surface-enhanced Raman scattering (SERS) signals from the individual plasmonic trimers. We found that the geometric transition from acute-angled trimer to linear trimer induces a red shift of the longitudinally polarized mode and a blue shift of the axially polarized mode. The finite element method (FEM) calculation results show the distinct "on" and "off" of the plasmonic modes at the two gaps of the trimer. Importantly, the single-molecule-level systematic correlation studies among the near-field, far-field, and surface-enhanced Raman scattering reveal that the SERS signals from the trimers are determined by the more strongly excited of the two competing coupled plasmon modes, the longitudinal and the axial. Further, the FEM calculations revealed that even a 0.5 nm or smaller discrepancy in the sizes of the two gaps of a linear trimer leads to a >10-fold difference in the SERS signal. Given that two gap sizes are unlikely to be exactly the same in actual experiments, one of the two gaps plays a more significant role in generating the SERS signal. Overall, this work provides the knowledge and handles for understanding and systematically controlling the magnitude and polarization direction of both the plasmonic response and the SERS signal from trimeric nanostructures, and establishes a platform for exploring the optical properties and applications of plasmonically coupled trimers and higher multimeric nanostructures. PMID:26075353

  17. Quantitative Signaling and Structure-Activity Analyses Demonstrate Functional Selectivity at the Nociceptin/Orphanin FQ Opioid Receptor.

    PubMed

    Chang, Steven D; Mascarella, S Wayne; Spangler, Skylar M; Gurevich, Vsevolod V; Navarro, Hernan A; Carroll, F Ivy; Bruchas, Michael R

    2015-09-01

Comprehensive studies that consolidate selective ligands, quantitative comparisons of G protein versus arrestin-2/3 coupling, and structure-activity relationship models for G protein-coupled receptor (GPCR) systems are rarely undertaken. Here we examine biased signaling at the nociceptin/orphanin FQ opioid receptor (NOPR), the most recently identified member of the opioid receptor family. Using real-time, live-cell assays, we identified the signaling profiles of several NOPR-selective ligands in upstream GPCR signaling (G protein and arrestin pathways) to determine their relative transduction coefficients and signaling bias. Complementing this analysis, we designed novel ligands on the basis of NOPR antagonist J-113,397 [(±)-1-[(3R*,4R*)-1-(cyclooctylmethyl)-3-(hydroxymethyl)-4-piperidinyl]-3-ethyl-1,3-dihydro-2H-benzimidazol-2-one] to explore structure-activity relationships. Our study shows that NOPR is capable of biased signaling, and further, the NOPR-selective ligands MCOPPB [1-[1-(1-methylcyclooctyl)-4-piperidinyl]-2-(3R)-3-piperidinyl-1H-benzimidazole trihydrochloride] and NNC 63-0532 [8-(1-naphthalenylmethyl)-4-oxo-1-phenyl-1,3,8-triazaspiro[4.5]decane-3-acetic acid, methyl ester] are G protein-biased agonists. Additionally, minor structural modification of J-113,397 can dramatically shift signaling from antagonist to partial agonist activity. We explore these findings with in silico modeling of binding poses. This work is the first to demonstrate functional selectivity and identification of biased ligands at the nociceptin opioid receptor. PMID:26134494

  18. Quantitative determination of the oxidation state of iron in biotite using x-ray photoelectron spectroscopy: II. In situ analyses

    SciTech Connect

    Raeburn, S.P. |; Ilton, E.S.; Veblen, D.R.

    1997-11-01

X-ray photoelectron spectroscopy (XPS) was used to determine Fe(III)/ΣFe in individual biotite crystals in thin sections of ten metapelites and one syenite. The in situ XPS analyses of Fe(III)/ΣFe in biotite crystals in the metapelites were compared with published Fe(III)/ΣFe values determined by Moessbauer spectroscopy (MS) for mineral separates from the same hand samples. The difference between Fe(III)/ΣFe by the two techniques was greatest for samples with the lowest Fe(III)/ΣFe (by MS). For eight metamorphic biotites with Fe(III)/ΣFe = 9-27%, comparison of the two techniques yielded a linear correlation of r = 0.94 and a statistically acceptable fit of [Fe(III)/ΣFe]_XPS = [Fe(III)/ΣFe]_MS. The difference between Fe(III)/ΣFe by the two techniques was greater for two samples with Fe(III)/ΣFe ≤ 6% (by MS). For biotite in the syenite sample, Fe(III)/ΣFe determined by in situ XPS and by bulk wet chemistry/electron probe microanalysis were similar. This contribution demonstrates that XPS can be used to analyze bulk Fe(III)/ΣFe in minerals in thin sections when appropriate precautions are taken to avoid oxidation of the near-surface during preparation of samples. 25 refs., 3 figs., 4 tabs.

  19. Quantitative and pattern recognition analyses of magnoflorine, spinosin, 6'''-feruloyl spinosin and jujuboside A by HPLC in Zizyphi Semen.

    PubMed

    Kim, Won Il; Zhao, Bing Tian; Zhang, Hai Yan; Lee, Je Hyun; Son, Jong Keun; Woo, Mi Hee

    2014-01-01

Two rapid and simple HPLC methods, one with a UV detector to determine three main compounds (magnoflorine, spinosin and 6'''-feruloyl spinosin) and one with an evaporative light scattering detector (ELSD) to determine jujuboside A, were developed for the chemical analyses of Zizyphi Semen. Magnoflorine, spinosin, and 6'''-feruloyl spinosin were separated on a YMC J'sphere ODS-H80 column (250 mm × 4.6 mm, 4 μm) by gradient elution followed by isocratic elution, using methanol with 0.1% formic acid and water with 0.1% formic acid as the mobile phase. The flow rate was 1.0 mL/min. Jujuboside A was separated by HPLC-ELSD on a YoungJinBioChrom Aegispak C18-L column (250 mm × 4.6 mm, 5 μm) in a gradient elution using methanol with 0.1% formic acid (A) and water with 0.1% formic acid (B) as the mobile phase. These two methods were fully validated with respect to linearity, precision, accuracy, stability, and robustness. The HPLC methods were applied successfully to quantify the four compounds in a Zizyphi Semen extract. The HPLC analytical methods were validated for pattern recognition analysis by repeated analysis of 91 seed samples corresponding to 48 Zizyphus jujuba var. spinosa (J01-J48) and 43 Zizyphus mauritiana (M01-M43). The results indicate that these methods are suitable for a quality evaluation of Zizyphi Semen. PMID:24310099

  20. Quantitative and qualitative analyses of DNA fragments based on electrical charge detection on a portable electrophoresis device.

    PubMed

    Chen, Gin-Shin; Chen, Sheng-Fu; Lu, Chih-Cheng

    2004-01-01

A concept for electrophoretic analysis of DNA fragments by directly detecting their electrical charges is proposed. The arrival time and voltage of charged DNA fragments with different charge-to-mass ratios can be detected using custom-made microelectronic circuits. This time and voltage information conveys the size and intensity information normally acquired from a conventional slab-gel image by fluorescent labeling. A prototype of the portable electrophoresis device, consisting of a flow channel with dimensions of 35 mm (length) × 0.5 mm (width) × 0.2 mm (depth) on an acrylic substrate, and a detection circuit with an amplification gain of 10,000 and an analog filter bandwidth between 0.1 Hz and 10 Hz, has been developed. A simple experiment was carried out to demonstrate the feasibility of the proposed idea. A volume of 2 μL of DNA ladder (1 Kb Plus DNA Ladder, Invitrogen, U.S.A.) at a diluted concentration of 0.1 μg/μL was loaded into the reservoir while applying an electric field of 12.5 V/cm across the flow channel, which was filled only with TBE solution. The preliminary results showed that the developed electrophoresis device can pick up the electrical signals of unseparated DNA fragments with a total mass of 0.2 μg, with a signal magnitude of 0.6 V. Micro flow channels fabricated by an excimer-laser machine and a low-noise amplifier with higher gain, e.g. 100,000, are in progress. Moreover, HEC (hydroxyethylcellulose) solution will be utilized as the medium in the microchannels for DNA fragment separation. PMID:17270819
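The detection principle above rests on ordinary electrophoretic kinematics: in a uniform field E, a fragment with mobility μ travels at v = μE and reaches the detector after t = L/(μE). A minimal sketch using the paper's channel length and field strength, but with hypothetical mobility values:

```python
# Back-of-envelope sketch of the arrival-time idea behind charge-based
# detection: fragments with higher electrophoretic mobility reach the
# detector sooner under the same field. Mobility values are hypothetical.

def arrival_time_s(channel_cm, field_v_per_cm, mobility_cm2_per_vs):
    """t = L / (mu * E) for a fragment of mobility mu in field E."""
    return channel_cm / (mobility_cm2_per_vs * field_v_per_cm)

# Device geometry from the abstract: 35 mm channel, 12.5 V/cm field.
L_cm, E = 3.5, 12.5
for mu in (1.0e-4, 5.0e-5):  # hypothetical free-solution mobilities
    print(f"mu={mu:.1e} cm^2/Vs -> t={arrival_time_s(L_cm, E, mu):.0f} s")
```

Fragments with a lower charge-to-mass ratio (lower mobility) arrive later, which is exactly the separation the arrival-time readout exploits.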

  1. Gamma-Diketone central neuropathy: quantitative analyses of cytoskeletal components in myelinated axons of the rat rubrospinal tract.

    PubMed

    Lopachin, Richard M; Jortner, Bernard S; Reid, Maria L; Monir, Alim

    2005-12-01

    Loss of axon caliber is a primary component of gamma-diketone neuropathy [LoPachin RM, DeCaprio AP. gamma-Diketone central neuropathy: axon atrophy and the role of cytoskeletal protein adduction. Toxicol Appl Pharmacol 2004;199:20-34]. It is possible that this effect is mediated by changes in the density of cytoskeletal components and corresponding spatial relationships. To examine this possibility, morphometric methods were used to quantify the effects of 2,5-hexanedione (HD) intoxication on neurofilament-microtubule densities and nearest neighbor distances in myelinated rubrospinal axons. Rats were exposed to HD at one of two daily dose-rates (175 or 400 mg/kg per day, gavage) until a moderate level of neurotoxicity was achieved (99 or 21 days of intoxication, respectively) as determined by gait analysis and measurements of hindlimb grip strength. Results indicate that, regardless of dose-rate, HD intoxication did not cause changes in axonal neurofilament (NF) density, but did significantly increase microtubule (MT) density. No consistent alterations in interneurofilament or NF-MT distances were detected by ultrastructural morphometric analyses. These data suggest that the axon atrophy induced by HD was not mediated by major disruptions of stationary cytoskeletal organization. Recent biochemical studies of spinal cord from HD intoxicated rats showed that, although the NF protein content in the stationary cytoskeleton (polymer fraction) was not affected, the mobile subunit pool was depleted substantially [LoPachin RM, He D, Reid ML, Opanashuk LA. 2,5-Hexanedione-induced changes in the monomeric neurofilament protein content of rat spinal cord fractions. Toxicol Appl Pharmacol 2004;198:61-73]. The stability of the polymer fraction during HD intoxication is consistent with the absence of significant ultrastructural modifications noted in the present study. Together, these findings implicate loss of mobile NF proteins as the primary mechanism of axon atrophy. PMID

  2. Qualitative and quantitative analyses of magmatic stoping in the roof of the Proterozoic Åva ring complex

    NASA Astrophysics Data System (ADS)

    Krumbholz, Michael; Burchardt, Steffi

    2013-04-01

Daly (1903) defined magmatic stoping as magma emplacement due to the detachment of blocks of magma-chamber roof and wall rocks and their incorporation into the magma chamber. Stoping itself involves a number of interrelated processes, e.g. hydraulic fracturing, partial melting, and explosive exfoliation, that are a product of the complex thermal, mechanical, and chemical interaction of magma and the country rocks. However, the individual processes, as well as the influence of the main controlling parameters, are poorly understood. This makes it difficult to quantify the contribution of magmatic stoping as a magma-emplacement process, which has resulted in vigorous debates about its efficiency and overall significance. To resolve this controversy, detailed qualitative and quantitative studies to better understand the processes involved and the interaction of forces are essential. We studied strongly foliated amphibolite-facies volcaniclastic metasedimentary rocks that were intruded by granitic magmas of the Åva ring complex (Finland), a 1.76 Ga intrusion that formed at 5 to 6 km depth (Eklund and Shebanov, 2005). In the roof region of the main intrusion, the country rock is strongly fragmented and incorporated into the granite as xenoliths ranging in size (area) from tens of m² to mm². We systematically recorded subhorizontal, glacially polished coastal outcrops that contain thousands of xenoliths. The xenoliths show signs of brittle deformation resulting in intense fragmentation caused by the intrusion of granitic veins and dykelets, i.e. the fragments are angular. Larger blocks are often split along the foliation and are surrounded by a cloud of smaller blocks. In many places, the blocks still fit together like a jigsaw puzzle, while in other domains they appear to have tumbled around. In contrast, some outcrops contain rounded xenolithic blocks that show signs of ductile deformation. From the outcrop maps, we carefully recorded all xenoliths to

  3. Evidence for simvastatin anti-inflammatory actions based on quantitative analyses of NETosis and other inflammation/oxidation markers

    PubMed Central

    Al-Ghoul, Walid M.; Kim, Margarita S.; Fazal, Nadeem; Azim, Anser C.; Ali, Ashraf

    2014-01-01

Simvastatin (SMV) has been shown to exhibit promising anti-inflammatory properties alongside its classic cholesterol-lowering action. We tested these emerging effects in a major thermal injury mouse model (3rd degree scald, ~20% TBSA) with previously documented, inflammation-mediated intestinal defects. Neutrophil extracellular trap (NET) inflammation measurement methods were used alongside classic gut mucosa inflammation and leakiness measurements, with exogenous melatonin treatment as a positive control. Our hypothesis is that simvastatin has protective therapeutic effects against early postburn gut mucosa inflammation and leakiness. To test this hypothesis, we compared untreated thermal injury (TI) adult male mice with TI littermates treated with simvastatin (0.2 mg/kg i.p., TI + SMV) immediately following burn injury and two hours before being sacrificed the day after; melatonin-treated (1.86 mg/kg i.p., TI + Mel) mice were compared as a positive control. Mice were assessed for the following: (1) tissue oxidation and neutrophil infiltration in terminal ileum mucosa using classic carbonyl, Gr-1, and myeloperoxidase immunohistochemical or biochemical assays, (2) NETosis in terminal ileum and colon mucosa homogenates and in peritoneal fluid and blood samples utilizing flow cytometric analyses of the surrogate NETosis biomarkers picogreen and Gr-1, and (3) transepithelial gut leakiness as measured in terminal ileum and colon with FITC-dextran and transepithelial electrical resistance (TEER). Our results reveal that simvastatin and melatonin exhibit consistently comparable therapeutic protective effects against the following: (1) gut mucosa oxidative stress as revealed in the terminal ileum by markers of protein carbonylation as well as myeloperoxidase (MPO) and Gr-1 infiltration, (2) NETosis as revealed in the gut milieu, peritoneal lavage and plasma utilizing picogreen and Gr-1 flow cytometry and microscopy, and (3) transepithelial gut leakiness as

  4. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then, the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan, was determined, and average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
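The dn/dc-based quantification step can be sketched roughly as follows. This is not the authors' code: the detector constant, peak area, and spike mass are hypothetical, and a real analysis would integrate the RID chromatogram per HPSEC fraction. The point is that because the RID response is proportional to concentration times dn/dc, dividing by a universal dn/dc yields mass without an analyte-specific calibration curve.

```python
# Sketch of dn/dc-based quantification from an RID trace, as in the
# HPSEC-MALLS-RID approach. Calibration constant and peak areas are
# hypothetical illustrative values.

def mass_from_rid(peak_area, dn_dc, k_ri):
    """Injected mass (mg) from an integrated RID peak.

    peak_area : integrated RID signal (RIU * mL)
    dn_dc     : refractive index increment of the analyte (mL/g)
    k_ri      : instrument-specific RID calibration constant
    """
    return k_ri * peak_area / dn_dc

def recovery(measured_mg, spiked_mg):
    """Percent recovery, the accuracy metric reported above."""
    return 100.0 * measured_mg / spiked_mg

m = mass_from_rid(peak_area=0.0290, dn_dc=0.145, k_ri=1.0)  # hypothetical
print(round(recovery(m, 0.21), 1))  # recovery vs. a 0.21 mg spike
```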

  5. Till formation under a soft-bedded palaeo-ice stream of the Scandinavian Ice Sheet, constrained using qualitative and quantitative microstructural analyses

    NASA Astrophysics Data System (ADS)

    Narloch, Włodzimierz; Piotrowski, Jan A.; Wysota, Wojciech; Tylmann, Karol

    2015-08-01

This study combines micro- and macroscale studies, laboratory experiments and quantitative analyses to decipher processes of till formation under a palaeo-ice stream and the nature of subglacial sediment deformation. Till micromorphology (grain lineations, grain stacks, turbate structures, crushed grains, intraclasts and domains), grain-size and till fabric data are used to investigate a basal till generated by the Vistula Ice Stream of the Scandinavian Ice Sheet during the last glaciation in north-central Poland. A comparison of microstructures from the in situ basal till and laboratory-sheared till experiments shows statistical relationships between the number of grain lineations and grain stacks, and between the number of grain lineations and turbate structures. Microstructures in the in situ till document both brittle and ductile styles of deformation, possibly due to fluctuating basal water pressures beneath the ice stream. No systematic vertical and lateral trends are detected in the parameters investigated in the in situ till, which suggests a subglacial mosaic of relatively stable and unstable areas. This situation can be explained by an unscaled space-transgressive model of subglacial till formation whereby at any given point in time different processes operated in different places under the ice sheet, possibly related to the distance from the ice margin and water pressure at the ice-bed interface. A new quantitative measure reflecting the relationship between the number of grain lineations and grain stacks may be helpful in discriminating between pervasive and non-pervasive deformation and constraining the degree of stress heterogeneity within a deformed bed. Independent strain magnitude estimations revealed by a quantitative analysis of micro- and macro-particle data show low cumulative strain in the ice-stream till, on the order of 10-10².

  6. Sensitivity Analyses of Exposure Estimates from a Quantitative Job-exposure Matrix (SYN-JEM) for Use in Community-based Studies

    PubMed Central

    Peters, Susan

    2013-01-01

Objectives: We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. Methods: SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m³ for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case–control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by Pearson correlation (Rp) and differences in unit of exposure (mg/m³-years). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Results: Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m³-years, with a median of 1.76 mg/m³-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26–33% difference), but without changing the relative ranking (Rp = 0.99). Conclusions: Exposure estimates
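The core arithmetic of applying a JEM to an occupational history, and of comparing two JEM variants by Pearson correlation, can be sketched as below. The job codes, intensities, and histories are hypothetical toy values; SYN-JEM's actual axes (job, region, and year) are far richer than this flat matrix.

```python
# Sketch of JEM-based cumulative exposure and Pearson comparison of two
# JEM variants. All job codes, intensities and histories are hypothetical.

from math import sqrt

def cumulative_exposure(history, jem):
    """Sum of intensity (mg/m3) x duration (years) over a work history."""
    return sum(years * jem.get(job, 0.0) for job, years in history)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical JEMs: job code -> RCS intensity in mg/m3.
jem_a = {"miner": 0.10, "mason": 0.05, "clerk": 0.0}
jem_b = {"miner": 0.12, "mason": 0.04, "clerk": 0.0}  # alternative model

histories = [
    [("miner", 10), ("clerk", 5)],
    [("mason", 20)],
    [("miner", 3), ("mason", 7)],
]
a = [cumulative_exposure(h, jem_a) for h in histories]
b = [cumulative_exposure(h, jem_b) for h in histories]
print([round(v, 2) for v in a], round(pearson_r(a, b), 3))
```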

  7. Analysis of glycosidically bound aroma precursors in tea leaves. 1. Qualitative and quantitative analyses of glycosides with aglycons as aroma compounds.

    PubMed

    Wang, D; Yoshimura, T; Kubota, K; Kobayashi, A

    2000-11-01

Twenty-six synthetic glycosides whose aglycons are the main tea aroma compounds ((Z)-3-hexenol, benzyl alcohol, 2-phenylethanol, methyl salicylate, geraniol, linalool, and four isomers of linalool oxide) were synthesized in our laboratory as authentic compounds. These compounds were used to carry out a direct qualitative and quantitative determination of the glycosides as aroma precursors in different tea cultivars by capillary gas chromatographic-mass spectrometric (GC-MS) analyses after trifluoroacetyl conversion of the tea glycosidic fractions. Eleven beta-D-glucopyranosides, 10 beta-primeverosides (6-O-beta-D-xylopyranosyl-beta-D-glucopyranosides) with aglycons as the above alcohols, and geranyl beta-vicianoside (6-O-alpha-L-arabinopyranosyl-beta-D-glucopyranoside) were identified (tentatively in the case of methyl salicylate beta-primeveroside) in fresh tea leaves and quantified on the basis of calibration curves established using the synthetic compounds. Primeverosides were more abundant than glucosides in each cultivar we investigated for making green tea, oolong tea, and black tea. Separation of the diastereoisomers of linalool and the four isomers of linalool oxides by GC analyses is also discussed. PMID:11087494

  8. LC-MS/MS method development and validation for quantitative analyses of 2-aminothiazoline-4-carboxylic acid - a new cyanide exposure marker in post mortem blood.

    PubMed

    Giebułtowicz, Joanna; Rużycka, Monika; Fudalej, Marcin; Krajewski, Paweł; Wroczyński, Piotr

    2016-04-01

2-Aminothiazoline-4-carboxylic acid (ATCA) is a hydrogen cyanide metabolite that has been found to be a reliable biomarker of cyanide poisoning because of its long-term stability in biological material. There are several methods for ATCA determination; however, they are restricted to extraction on mixed-mode cation-exchange sorbents. To date, there has been no reliable method for ATCA determination in whole blood, the material most frequently used in forensic analysis. This novel method for ATCA determination in post mortem specimens includes protein precipitation, derivatization of interfering compounds, and their subsequent extraction with ethyl acetate. ATCA was quantitatively analyzed via high performance liquid chromatography-tandem mass spectrometry with positive electrospray ionization detection using a hydrophilic interaction liquid chromatography column. The method satisfied all validation criteria and was tested on real samples with satisfactory results. Therefore, this analytical approach has been proven to be a tool for measuring endogenous levels of ATCA in post mortem specimens. To conclude, a novel, accurate and sensitive method for ATCA determination in post mortem blood was developed. The establishment of the method provides new possibilities in the field of forensic science. PMID:26838446

  9. Quantitative x-ray diffraction analyses of samples used for sorption studies by the Isotope and Nuclear Chemistry Division, Los Alamos National Laboratory

    SciTech Connect

    Chipera, S.J.; Bish, D.L.

    1989-09-01

Yucca Mountain, Nevada, is currently being investigated to determine its suitability to host our nation's first geologic high-level nuclear waste repository. As part of an effort to determine how radionuclides will interact with rocks at Yucca Mountain, the Isotope and Nuclear Chemistry (INC) Division of Los Alamos National Laboratory has conducted numerous batch sorption experiments using core samples from Yucca Mountain. To better understand the interaction between the rocks and radionuclides, we have analyzed the samples used by INC with quantitative x-ray diffraction methods. Our analytical methods accurately determine the presence or absence of major phases, but we have not identified phases present below ~1 wt %. These results should aid in understanding and predicting the potential interactions between radionuclides and the rocks at Yucca Mountain, although the mineralogic complexity of the samples and the lack of information on trace phases suggest that pure mineral studies may be necessary for a more complete understanding. 12 refs., 1 fig., 1 tab.

  10. Haplotype and quantitative transcript analyses of Portuguese breast/ovarian cancer families with the BRCA1 R71G founder mutation of Galician origin.

    PubMed

    Santos, Catarina; Peixoto, Ana; Rocha, Patrícia; Vega, Ana; Soares, Maria José; Cerveira, Nuno; Bizarro, Susana; Pinheiro, Manuela; Pereira, Deolinda; Rodrigues, Helena; Castro, Fernando; Henrique, Rui; Teixeira, Manuel R

    2009-01-01

We investigated the functional effect of the missense variant c.211A>G (R71G), localized at position -2 of the exon 5 donor splice site in the BRCA1 gene, and evaluated whether Portuguese and Galician families with this mutation share a common ancestry. Three unrelated Portuguese breast/ovarian cancer families carrying this variant were studied through qualitative and quantitative transcript analyses. We also evaluated the presence of loss of heterozygosity and the histopathologic characteristics of the carcinomas in those families. Informative families (two from Portugal and one from Galicia) were genotyped for polymorphic microsatellite markers flanking BRCA1 to reconstruct haplotypes. Qualitative RNA analysis revealed the presence of two alternative transcripts both in carriers of the BRCA1 R71G variant and in controls. Semi-quantitative fragment analysis and real-time RT-PCR showed a significant increase of the transcript with an out-of-frame deletion of the last 22 nt of exon 5 (BRCA1-Delta22ntex5) and a decrease of the full-length transcript (BRCA1-ex5FL) in patients carrying the R71G mutation as compared to controls, whereas no significant differences were found for the transcript with in-frame skipping of exon 5 (BRCA1-Deltaex5). One haplotype was found to segregate in the two informative Portuguese families and in the Galician family. We demonstrate that disruption of alternative transcript ratios is the mechanism causing hereditary breast/ovarian cancer associated with the BRCA1 R71G mutation. Furthermore, our findings indicate a common ancestry of the Portuguese and Galician families sharing this mutation. PMID:19123044

  11. Advances in the Quantitative Characterization of the Shape of Ash-Sized Pyroclast Populations: Fractal Analyses Coupled to Micro- and Nano-Computed Tomography Techniques

    NASA Astrophysics Data System (ADS)

    Rausch, J.; Vonlanthen, P.; Grobety, B. H.

    2014-12-01

The quantification of shape parameters in pyroclasts is fundamental to infer the dominant type of magma fragmentation (magmatic vs. phreatomagmatic), as well as the behavior of volcanic plumes and clouds in the atmosphere. In a case study aiming at reconstructing the fragmentation mechanisms triggering maar eruptions in two geologically and compositionally distinctive volcanic fields (West and East Eifel, Germany), the shapes of a large number of ash particle contours obtained from SEM images were analyzed by a dilation-based fractal method. Volcanic particle contours are pseudo-fractals showing mostly two distinct slopes in Richardson plots, related to the fractal dimensions D1 (small-scale "textural" dimension) and D2 (large-scale "morphological" dimension). The validity of the data obtained from 2D sections was tested by analyzing SEM micro-CT slices of one particle cut in different orientations and positions. Results for West Eifel maar particles yield large D1 values (> 1.023), resembling typical values of magmatic particles, which are characterized by a complex shape, especially at small scales. In contrast, the D1 values of ash particles from one East Eifel maar deposit are much smaller, coinciding with the fractal dimensions obtained from phreatomagmatic end-member particles. These quantitative morphological analyses suggest that the studied maar eruptions were triggered by two different fragmentation processes: phreatomagmatic in the East Eifel and magmatic in the West Eifel. The application of fractal analysis to quantitatively characterize the shape of pyroclasts and the linking of fractal dimensions to specific fragmentation processes has turned out to be a very promising tool for studying the fragmentation history of any volcanic eruption. The next step is to extend the morphological analysis of volcanic particles to three dimensions. SEM micro-CT, already applied in this study, offers the required resolution, but is not suitable for the analysis of a large
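The fractal dimensions D1 and D2 come from the slopes of a Richardson plot: log of the measured perimeter against log of the measurement scale. A minimal sketch with synthetic data is below; it assumes a single power-law regime, whereas the pseudo-fractal contours described above are fitted piecewise over two scale ranges to obtain D1 and D2 separately.

```python
# Minimal sketch of extracting a fractal dimension from a Richardson
# plot, as used for ash-particle contours. Ruler sizes and perimeters
# are synthetic; real data come from dilating SEM contour images.

from math import log

def richardson_dimension(steps, perimeters):
    """Least-squares slope of log(P) vs log(s); D = 1 - slope."""
    xs = [log(s) for s in steps]
    ys = [log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope

# Synthetic contour obeying P(s) = s**(1 - D) with D = 1.26 (Koch-like).
D_true = 1.26
steps = [1.0, 0.5, 0.25, 0.125]
perimeters = [s ** (1.0 - D_true) for s in steps]
print(round(richardson_dimension(steps, perimeters), 2))  # recovers 1.26
```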

  12. NEW TARGET AND CONTROL ASSAYS FOR QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS OF ENTEROCOCCI IN WATER

    EPA Science Inventory

    Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...

  13. Quantitative in vivo Analyses Reveal Calcium-dependent Phosphorylation Sites and Identifies a Novel Component of the Toxoplasma Invasion Motor Complex

    PubMed Central

    Nebl, Thomas; Prieto, Judith Helena; Kapp, Eugene; Smith, Brian J.; Williams, Melanie J.; Yates, John R.; Cowman, Alan F.; Tonkin, Christopher J.

    2011-01-01

    Apicomplexan parasites depend on the invasion of host cells for survival and proliferation. Calcium-dependent signaling pathways appear to be essential for micronemal release and gliding motility, yet the targets of the activated kinases remain largely unknown. We have characterized calcium-dependent phosphorylation events during Toxoplasma host cell invasion. Stimulation of live tachyzoites with Ca2+-mobilizing drugs leads to phosphorylation of numerous parasite proteins, as shown by differential 2-DE display of 32P-labeled protein extracts. Multi-dimensional Protein Identification Technology (MudPIT) identified ∼546 phosphorylation sites on over 300 Toxoplasma proteins, including 10 sites on the actomyosin invasion motor. Using stable isotope labeling with amino acids in cell culture (SILAC)-based quantitative LC-MS/MS analyses, we monitored changes in the abundance and phosphorylation of the invasion motor complex and defined Ca2+-dependent phosphorylation patterns on three of its components: GAP45, MLC1, and MyoA. Furthermore, calcium-dependent phosphorylation of six residues across GAP45, MLC1 and MyoA is correlated with invasion motor activity. By analyzing proteins that appear to associate more strongly with the invasion motor upon calcium stimulation, we have also identified a novel 15-kDa Calmodulin-like protein that likely represents the MyoA Essential Light Chain of the Toxoplasma invasion motor. This suggests that invasion motor activity could be regulated not only by phosphorylation but also by the direct binding of calcium ions to this new component. PMID:21980283

  14. Synthesis, pharmacological characterization, and quantitative structure-activity relationship analyses of 3,7,9,9-tetraalkylbispidines: derivatives with specific bradycardic activity.

    PubMed

    Schön, U; Antel, J; Brückner, R; Messinger, J; Franke, R; Gruska, A

    1998-01-29

    A series of 3,7,9,9-tetraalkyl-3,7-diazabicyclo[3.3.1]nonane derivatives (bispidines) was synthesized and identified as potential antiischemic agents. Pharmacological experiments in vitro as well as in vivo are described, and the results are listed. For selection of those compounds fitting best to the desired profile of a specific bradycardic antianginal agent (decrease in heart rate without affecting contractility and blood pressure), these results were scored and ranked. Quantitative structure-activity relationship (QSAR) analyses were performed and discussed a posteriori by means of Hansch, nonelementary discriminant, and factor analysis to gain insight into the molecular features determining the biological profile. Highly significant equations were obtained, indicating hydrophobic and steric effects. Both pharmacological ranking and QSAR considerations showed compound 6 as the optimum within the structural class under investigation. Compound 6 (tedisamil, KC8857) has been selected as the most promising compound and was chosen for further pharmacological and clinical investigations as an antiischemic drug. PMID:9464363

  15. Development of a new real-time TaqMan PCR assay for quantitative analyses of Candida albicans resistance genes expression.

    PubMed

    Kofla, Grzegorz; Ruhnke, Markus

    2007-01-01

    Candida albicans is an important opportunistic pathogen that can cause serious fungal disease in immunocompromised patients, including cancer patients, transplant recipients, patients receiving immunosuppressive therapy, those with human immunodeficiency virus infection, and those undergoing major surgery. Its disease spectrum ranges from mucosal to systemic infections, and first-line treatment is still based on fluconazole, a triazole derivative with potent antifungal activity against most C. albicans strains. Nevertheless, the emergence of fluconazole-resistant C. albicans strains can lead to treatment failures and thus become a clinical problem in the management of such infections. For that reason we consider it important to study mechanisms inducing azole resistance and the possibilities to influence this process. In this work we give a short report on a real-time PCR (TaqMan) assay, which can be used for quantitative analyses of the expression levels of MDR1, CDR1 and ERG11, genes thought to contribute to the development of resistance. We show some results achieved with that assay in fluconazole-susceptible and -resistant strains that confirm results seen earlier in experiments using Northern blot hybridisation and prove that the comparative DeltaCt method is valid for our system. PMID:16945439
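
The comparative DeltaCt method validated here reduces to a one-line calculation. A minimal sketch with hypothetical Ct values (not data from the study), assuming amplification efficiency close to 100% for both assays:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative DeltaCt (2^-ddCt) fold change of a target gene in a sample
    versus a calibrator sample, normalized to a reference (housekeeping) gene."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# hypothetical Ct values: target gene in a resistant strain (Ct 22, reference 18)
# versus a susceptible calibrator strain (Ct 25, reference 18)
fold = relative_expression(22.0, 18.0, 25.0, 18.0)   # -> 8.0 (8-fold up)
```

With these illustrative numbers the target appears 8-fold overexpressed in the resistant strain relative to the susceptible calibrator.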

  16. A High-Density Genetic Map with Array-Based Markers Facilitates Structural and Quantitative Trait Locus Analyses of the Common Wheat Genome

    PubMed Central

    Iehisa, Julio Cesar Masaru; Ohno, Ryoko; Kimura, Tatsuro; Enoki, Hiroyuki; Nishimura, Satoru; Okamoto, Yuki; Nasuda, Shuhei; Takumi, Shigeo

    2014-01-01

    The large genome and allohexaploidy of common wheat have complicated construction of a high-density genetic map. Although improvements in the throughput of next-generation sequencing (NGS) technologies have made it possible to obtain a large amount of genotyping data for an entire mapping population by direct sequencing, even in hexaploid wheat, a significant number of missing data points often remain due to the low coverage of sequencing. In the present study, a microarray-based polymorphism detection system was developed using NGS data obtained from complexity-reduced genomic DNA of two common wheat cultivars, Chinese Spring (CS) and Mironovskaya 808. After design and selection of polymorphic probes, 13,056 new markers were added to the linkage map of a recombinant inbred mapping population between CS and Mironovskaya 808. On average, 2.49 missing data points per marker were observed in the 201 recombinant inbred lines, with a maximum of 42. Around 40% of the new markers were derived from genic regions and 11% from repetitive regions. The low number of retroelements indicated that the new polymorphic markers were mainly derived from the less repetitive region of the wheat genome. Around 25% of the mapped sequences were useful for alignment with the physical map of barley. Quantitative trait locus (QTL) analyses of 14 agronomically important traits related to flowering, spikes, and seeds demonstrated that the new high-density map showed improved QTL detection, resolution, and accuracy over the original simple sequence repeat map. PMID:24972598

  17. Quantitative analyses of postmortem heat shock protein mRNA profiles in the occipital lobes of human cerebral cortices: implications in cause of death.

    PubMed

    Chung, Ukhee; Seo, Joong-Seok; Kim, Yu-Hoon; Son, Gi Hoon; Hwang, Juck-Joon

    2012-11-01

    Quantitative RNA analyses of autopsy materials to diagnose the cause and mechanism of death are challenging tasks in the field of forensic molecular pathology. Alterations in mRNA profiles can be induced by cellular stress responses during supravital reactions as well as by lethal insults at the time of death. Here, we demonstrate that several gene transcripts encoding heat shock proteins (HSPs), a gene family primarily responsible for cellular stress responses, can be differentially expressed in the occipital region of postmortem human cerebral cortices with regard to the cause of death. HSPA2 mRNA levels were higher in subjects who died due to mechanical asphyxiation (ASP), compared with those who died by traumatic injury (TI). By contrast, HSPA7 and A13 gene transcripts were much higher in the TI group than in the ASP and sudden cardiac death (SCD) groups. More importantly, relative abundances between such HSP mRNA species exhibit a stronger correlation to, and thus provide more discriminative information on, the death process than does routine normalization to a housekeeping gene. Therefore, the present study proposes alterations in HSP mRNA composition in the occipital lobe as potential forensic biological markers, which may implicate the cause and process of death. PMID:23135635

  18. Quantitative pharmacological analyses of the interaction between flumazenil and midazolam in monkeys discriminating midazolam: Determination of the functional half life of flumazenil.

    PubMed

    Zanettini, Claudio; France, Charles P; Gerak, Lisa R

    2014-01-15

    The duration of action of a drug is commonly estimated using plasma concentration, which is not always practical to obtain, nor always an accurate estimate of functional half-life. For example, flumazenil is used clinically to reverse the effects of benzodiazepines like midazolam; however, its elimination can be altered by other drugs, including some benzodiazepines, thereby altering its half-life. This study used Schild analyses to characterize antagonism of midazolam by flumazenil and determine the functional half-life of flumazenil. Four monkeys discriminated 0.178 mg/kg midazolam while responding under a fixed-ratio 10 schedule of stimulus-shock termination; flumazenil was given at various times before determination of a midazolam dose-effect curve. There was a time-related decrease in the magnitude of shift of the midazolam dose-effect curve as the interval between flumazenil and midazolam increased. The potency of flumazenil, estimated by apparent pA2 values (95% CI), was 7.30 (7.12, 7.49), 7.17 (7.03, 7.31), 6.91 (6.72, 7.10) and 6.80 (6.67, 6.92) at 15, 30, 60, and 120 min after flumazenil administration, respectively. The functional half-life of flumazenil, derived from potency estimates, was 57±13 min. Thus, increasing the interval between flumazenil and midazolam causes orderly decreases in flumazenil potency; however, across a broad range of conditions, the qualitative nature of the interaction does not change, as indicated by slopes of Schild plots at all time points that are not different from unity. Differences in potency of flumazenil are therefore due to elimination of flumazenil and not due to pharmacodynamic changes over time. PMID:24216249
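
The half-life derivation can be sketched from the published pA2 values alone. A minimal sketch, assuming simple first-order flumazenil elimination: apparent pA2 (-log10 of the apparent KB) then declines linearly with time, by log10(2) per half-life, so an unweighted straight-line fit recovers the functional half-life (this is a simplification of the potency-based analysis in the study):

```python
import numpy as np

# Apparent pA2 values reported for flumazenil at each pretreatment time (min)
t = np.array([15.0, 30.0, 60.0, 120.0])
pA2 = np.array([7.30, 7.17, 6.91, 6.80])

# pA2 = -log10(apparent KB). Under first-order elimination the apparent KB
# doubles every half-life, so pA2 falls linearly by log10(2) per half-life.
slope, intercept = np.polyfit(t, pA2, 1)
t_half = np.log10(2.0) / abs(slope)    # functional half-life, in minutes
```

This crude fit gives roughly 65 min, inside the reported 57±13 min interval; the study's estimate differs because it is derived directly from the potency analysis rather than from an unweighted straight-line fit.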

  19. Quantitative analyses of axonal endings in the central nucleus of the inferior colliculus and distribution of 3H-labeling after injections in the dorsal cochlear nucleus.

    PubMed

    Oliver, D L

    1985-07-15

    Quantitative analyses of electron microscopic (EM) autoradiographs were used to identify the afferents from the dorsal cochlear nucleus in the central nucleus of the inferior colliculus (IC) in the cat. In order to localize the sources of radioactivity, material from axonal transport experiments was analyzed by means of a hypothetical grain procedure which takes the cross-scatter of beta particles into account. Measurements of the synaptic vesicles in axonal endings and a cluster analysis were used to identify different groups of endings. In order to determine which types of endings arise in the dorsal cochlear nucleus, axonal endings labeled after axonal transport and unlabeled endings were characterized and compared to the groups defined by the cluster analysis. Axonal endings with round synaptic vesicles were labeled with more than 2 grains/μm², which was about 30% of the radioactivity in the central nucleus of the IC. This was six to seven times greater than if the radioactivity had been randomly distributed. Other tissue compartments usually had less radioactivity. Some myelinated and unmyelinated axons were labeled, but, as a group, they had lower amounts of radioactivity than predicted by random labeling. In most cases, only low levels of activity were found in glial and postsynaptic structures. Five groups of axonal endings in the medial part of the central nucleus were identified by an analysis which clustered similar types of endings. The variance of the longest axis, the mean diameter, the variance of area, and the mean area of the synaptic vesicles were the variables most useful in distinguishing these five groups. Axonal endings with round synaptic vesicles were classified as small, large, or very large, while endings with pleomorphic vesicles were either large or small. Using measurements of the cross-sectional diameter of dendritic microtubules, samples of digitized axonal endings from normal and experimental cases were normalized and

  20. Evaluation of a real-time quantitative PCR method with propidium monazide treatment for analyses of viable fecal indicator bacteria in wastewater samples

    EPA Science Inventory

    The U.S. EPA is currently evaluating rapid, real-time quantitative PCR (qPCR) methods for determining recreational water quality based on measurements of fecal indicator bacteria DNA sequences. In order to potentially use qPCR for other Clean Water Act needs, such as updating cri...

  1. Quantitative Analyses about Market- and Prevalence-Based Needs for Adapted Physical Education Teachers in the Public Schools in the United States

    ERIC Educational Resources Information Center

    Zhang, Jiabei

    2011-01-01

    The purpose of this study was to analyze quantitative needs for more adapted physical education (APE) teachers based on both market- and prevalence-based models. The market-based need for more APE teachers was examined based on APE teacher positions funded, while the prevalence-based need for additional APE teachers was analyzed based on students…

  2. High resolution rare-earth elements analyses of natural apatite and its application in geo-sciences: Combined micro-PIXE, quantitative CL spectroscopy and electron spin resonance analyses

    NASA Astrophysics Data System (ADS)

    Habermann, D.; Götte, T.; Meijer, J.; Stephan, A.; Richter, D. K.; Niklas, J. R.

    2000-03-01

    The rare-earth element (REE) distribution in natural apatite was analysed by micro-PIXE, cathodoluminescence (CL) microscopy and spectroscopy, and electron spin resonance (ESR) spectroscopy. The micro-PIXE analyses of an apatite crystal from Cerro de Mercado (Mexico) and the summary of 20 analyses of six francolite (conodonts of Triassic age) samples indicate that most REEs are enriched in apatite and francolite relative to the average shale standard (NASC). The analyses of fossil francolite reveal that its REE distribution is not in balance with the REE distribution of seawater and fish bone debris. CL mapping shows a strongly inhomogeneous lateral REE distribution in fossil conodont material, which is most probably not a vital effect. Therefore, the resulting REE signal from fossil francolite is the sum of vital and post-mortem incorporation. The charge compensation necessary for the substitution of divalent Ca by trivalent REEs is provided by different kinds of electron defects and defect ions.

  3. Radiation applications in art and archaeometry. X-ray fluorescence applications to archaeometry. Possibility of obtaining non-destructive quantitative analyses

    NASA Astrophysics Data System (ADS)

    Milazzo, Mario

    2004-01-01

    The possibility of obtaining quantitative XRF analyses in archaeometric applications is considered for the following cases: metallic objects with an irregular surface (coins, for instance); metallic objects with a natural or artificial patina on the surface; and glass or ceramic samples, for which the problems for quantitative analysis arise from the non-detectability of low-Z matrix elements. The fundamental parameter method for quantitative XRF analysis is based on a numerical procedure involving the relative values of XRF line intensities. As a consequence, it can also be applied to the experimental XRF spectra obtained for metallic objects if the correction for the irregular shape consists only of a constant factor, which does not affect the relative XRF intensities. This is in fact possible under not very restrictive conditions for the experimental setup. The fineness of coins with a superficial patina can be evaluated from measurements of the Rayleigh-to-Compton scattering intensity ratio at an incident energy higher than that of the characteristic X-rays. For glasses and ceramics, measurements of the Compton-scattered intensity of the exciting radiation and the use of a proper scaling law make it possible to evaluate the matrix absorption coefficients at all characteristic X-ray line energies.

  4. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    For better performance of organic thin-film transistors (TFTs), gate-insulator surface treatments are often applied. However, the origin of the resulting mobility increase has not been well understood, because the mobility-limiting factors have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of the crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier-transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. With these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility by these surface treatments is due only to the increase in the size of the crystal-growth domains, or the decrease in the number of energy barriers at domain boundaries, in the TFT channel.

  5. Applications of Quaternary stratigraphic, soil-geomorphic, and quantitative geomorphic analyses to the evaluation of tectonic activity and landscape evolution in the Upper Coastal Plain, South Carolina

    SciTech Connect

    Hanson, K.L.; Bullard, T.F.; de Wit, M.W.; Stieve, A.L.

    1993-07-01

    Geomorphic analyses combined with mapping of fluvial terraces and upland geomorphic surfaces provide new approaches and data for evaluating the Quaternary activity of post-Cretaceous faults that are recognized in subsurface data at the Savannah River Site in the Upper Coastal Plain of southwestern South Carolina. Analyses of longitudinal stream and terrace profiles, regional slope maps, and drainage basin morphometry indicate long-term uplift and southeast tilt of the site region. Preliminary results of drainage basin characterization suggest an apparent rejuvenation of drainages along the trace of the Pen Branch fault (a Tertiary reactivated reverse fault that initiated as a basin-margin normal fault along the northern boundary of the Triassic Dunbarton Basin). This apparent rejuvenation of drainages may be the result of nontectonic geomorphic processes or local tectonic uplift and tilting within a framework of regional uplift.

  6. 3D numerical analyses for the quantitative risk assessment of subsidence and water flood due to the partial collapse of an abandoned gypsum mine.

    NASA Astrophysics Data System (ADS)

    Castellanza, R.; Orlandi, G. M.; di Prisco, C.; Frigerio, G.; Flessati, L.; Fernandez Merodo, J. A.; Agliardi, F.; Grisi, S.; Crosta, G. B.

    2015-09-01

    Since its abandonment in the 1970s, the room-and-pillar mining system located in S. Lazzaro di Savena (BO, Italy), developed on three levels, has been progressively affected by degradation processes due to water infiltration. The mine is located underneath a residential area, causing significant concern to the local municipality. On the basis of in situ surveys and laboratory and in situ geomechanical tests, some critical scenarios were adopted in the analyses to simulate the progressive collapse of pillars and roofs in the most critical sectors of the mine. A first set of numerical analyses using 3D geotechnical FEM codes was performed to predict the extent of the subsidence area and its interaction with buildings. Second, 3D CFD analyses were used to evaluate the amount of water that could be ejected from the mine and flood the downstream village. The predicted extent of the subsidence area, together with the predicted amount of ejected water, has been used to design possible remedial measures.

  7. Fast and Reliable Quantitative Peptidomics with labelpepmatch.

    PubMed

    Verdonck, Rik; De Haes, Wouter; Cardoen, Dries; Menschaert, Gerben; Huhn, Thomas; Landuyt, Bart; Baggerman, Geert; Boonen, Kurt; Wenseleers, Tom; Schoofs, Liliane

    2016-03-01

    The use of stable isotope tags in quantitative peptidomics offers many advantages, but the laborious identification of matching sets of labeled peptide peaks is still a major bottleneck. Here we present labelpepmatch, an R-package for fast and straightforward analysis of LC-MS spectra of labeled peptides. This open-source tool offers fast and accurate identification of peak pairs alongside an appropriate framework for statistical inference on quantitative peptidomics data, based on techniques from other -omics disciplines. A relevant case study on the desert locust Schistocerca gregaria proves our pipeline to be a reliable tool for quick but thorough explorative analyses. PMID:26828777
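
The core of the peak-pair identification that labelpepmatch automates can be sketched as a mass-difference search. A minimal sketch, assuming singly charged peaks and a hypothetical label mass difference of 4.02 Da; the package's actual matching and statistics are considerably more elaborate:

```python
def match_label_pairs(mz_values, delta, tol=0.02):
    """Return (light, heavy) peak pairs whose m/z difference matches the
    label mass difference `delta` to within `tol` (singly charged peaks)."""
    peaks = sorted(mz_values)
    pairs = []
    for i, light in enumerate(peaks):
        for heavy in peaks[i + 1:]:
            if abs((heavy - light) - delta) <= tol:
                pairs.append((light, heavy))
    return pairs

# hypothetical spectrum containing two labeled pairs ~4.02 Da apart
pairs = match_label_pairs([500.00, 504.02, 610.50, 614.53], delta=4.02)
```

The matched light/heavy intensity pairs are what subsequently feed the statistical comparison between labeled conditions.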

  8. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.

  9. Regulation of flavonol content and composition in (Syrah×Pinot Noir) mature grapes: integration of transcriptional profiling and metabolic quantitative trait locus analyses.

    PubMed

    Malacarne, Giulia; Costantini, Laura; Coller, Emanuela; Battilana, Juri; Velasco, Riccardo; Vrhovsek, Urska; Grando, Maria Stella; Moser, Claudio

    2015-08-01

    Flavonols are a ubiquitous class of flavonoids that accumulate preferentially in flowers and mature berries. Besides their photo-protective function, they play a fundamental role during winemaking, stabilizing the colour by co-pigmentation with anthocyanins and contributing to organoleptic characteristics. Although the general flavonol pathway has been genetically and biochemically elucidated, the genetic control of flavonol content and composition at harvest is still not clear. To this purpose, the grapes of 170 segregating F1 individuals from a 'Syrah'×'Pinot Noir' population were evaluated at the mature stage for the content of six flavonol aglycons in four seasons. Metabolic data in combination with genetic data enabled the identification of 16 mQTLs (metabolic quantitative trait loci). For the first time, major genetic control by the linkage group 2 (LG 2)/MYBA region on flavonol variation, in particular of tri-hydroxylated flavonols, is demonstrated. Moreover, seven regions specifically associated with the fine control of flavonol biosynthesis are identified. Gene expression profiling of two groups of individuals significantly divergent for their skin flavonol content identified a large set of differentially modulated transcripts. Among these, the transcripts coding for MYB and bZIP transcription factors, methyltransferases, and glucosyltransferases specific for flavonols, proteins, and factors belonging to the UV-B signalling pathway and co-localizing with the QTL regions are proposed as candidate genes for the fine regulation of flavonol content and composition in mature grapes. PMID:26071529

  10. Quantitative In Vivo Fluorescence Cross-Correlation Analyses Highlight the Importance of Competitive Effects in the Regulation of Protein-Protein Interactions

    PubMed Central

    Sadaie, Wakako; Harada, Yoshie; Matsuda, Michiyuki

    2014-01-01

    Computer-assisted simulation is a promising approach for clarifying complicated signaling networks. However, this approach is currently limited by a deficiency of kinetic parameters determined in living cells. To overcome this problem, we applied fluorescence cross-correlation spectroscopy (FCCS) to measure dissociation constant (Kd) values of signaling molecule complexes in living cells (in vivo Kd). Among the pairs of fluorescent molecules tested, that of monomerized enhanced green fluorescent protein (mEGFP) and HaloTag-tetramethylrhodamine was most suitable for the measurement of in vivo Kd by FCCS. Using this pair, we determined 22 in vivo Kd values of signaling molecule complexes comprising the epidermal growth factor receptor (EGFR)–Ras–extracellular signal-regulated kinase (ERK) mitogen-activated protein (MAP) kinase pathway. With these parameters, we developed a kinetic simulation model of the EGFR-Ras-ERK MAP kinase pathway and uncovered a potential role played by stoichiometry in Shc binding to EGFR during the peak activations of Ras, MEK, and ERK. Intriguingly, most of the in vivo Kd values determined in this study were higher than the in vitro Kd values reported previously, suggesting the significance of competitive bindings inside cells. These in vivo Kd values will provide a sound basis for the quantitative understanding of signal transduction. PMID:24958104
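
FCCS yields concentrations of the free and cross-correlating (bound) species, from which an in vivo Kd follows directly. A minimal sketch with hypothetical concentrations (not values from the study):

```python
def dissociation_constant(free_a, free_b, complex_ab):
    """Kd = [A][B] / [AB] for the binding equilibrium A + B <-> AB; all
    concentrations in the same units (here nM), so Kd is in those units."""
    return free_a * free_b / complex_ab

# hypothetical FCCS readout: 80 nM free bait, 50 nM free prey,
# 20 nM cross-correlating (bound) complex
kd = dissociation_constant(80.0, 50.0, 20.0)   # -> 200.0 nM
```

A larger Kd means weaker binding; competition from unlabeled endogenous partners, as the abstract suggests, shifts the apparent in vivo Kd upward relative to in vitro values.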


  12. Bearing, azimuth and drainage (bAd) calculator: A new GIS supported tool for quantitative analyses of drainage networks and watershed parameters

    NASA Astrophysics Data System (ADS)

    Dinesh, A. C.; Joseph Markose, Vipin; Jayappa, K. S.

    2012-11-01

    We present 'bAd Calculator', new software developed in Visual Basic that can be applied to the analysis of various drainage basin parameters, directional aspects, etc. The graphical user interface of bAd Calculator can be used for several applications, such as determination of the bearing and azimuth of linear features and their representation in rose diagrams. Various drainage basin parameters, such as drainage density (Dd), stream frequency (Fs), bifurcation ratio (Rb), mean bifurcation ratio (Rbm), stream length ratio (Rl), drainage texture (T), texture ratio (Rt), dissection index (DI), length of overland flow (Lg), RHO coefficient, circularity ratio (Rc), and hypsometric integral (HI), can be easily calculated using the software. The software provides a point-and-click technique for rapid acquisition of watershed parameters with user-specified grids/sub-basins.
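
Most of the listed parameters reduce to simple ratios of stream counts, channel lengths, and basin area (after Horton and Strahler). A minimal sketch of three of them, with hypothetical basin numbers (not output of bAd Calculator):

```python
def drainage_density(total_stream_length_km, basin_area_km2):
    """Dd: total channel length per unit basin area (km per km^2)."""
    return total_stream_length_km / basin_area_km2

def stream_frequency(n_streams, basin_area_km2):
    """Fs: number of stream segments per unit basin area."""
    return n_streams / basin_area_km2

def bifurcation_ratios(counts_by_order):
    """Rb for successive Strahler orders u: N_u / N_(u+1)."""
    return [n / m for n, m in zip(counts_by_order, counts_by_order[1:])]

# hypothetical fourth-order basin
counts = [68, 16, 4, 1]              # stream counts for orders 1..4
rb = bifurcation_ratios(counts)      # [4.25, 4.0, 4.0]
rbm = sum(rb) / len(rb)              # mean bifurcation ratio Rbm
dd = drainage_density(210.0, 84.0)   # 2.5 km/km^2
fs = stream_frequency(sum(counts), 84.0)
```

A GIS tool like bAd Calculator automates exactly these tallies over user-specified grids or sub-basins, after the stream network has been ordered.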

  13. Expression of the G2-M checkpoint regulators cyclin B1 and cdc2 in nonmalignant and malignant human breast lesions: immunocytochemical and quantitative image analyses.

    PubMed Central

    Kawamoto, H.; Koizumi, H.; Uchikoshi, T.

    1997-01-01

    We investigated the in vivo expression of cyclin B1 and Cdc2 (key molecules for G2-M transition during the cell cycle) in nonmalignant and cancerous human breast lesions using immunohistochemistry and quantitative proliferative index (PI) analysis. Breast epithelial cells co-expressed cyclin B1 and Cdc2 in their cytoplasm in the G2 phase and in their nuclei in the M phase. Cyclin B1, but not Cdc2, immunostaining rapidly disappeared from the nuclei during the mitotic metaphase to anaphase transition. Static image analysis revealed the mean proliferative index for cyclin B1/cdc2 for each type of lesion to be as follows: normal glands (n = 20), 2.0/2.5%; benign lesions, including typical ductal hyperplasia (n = 76), 2.5/5.8%; atypical ductal hyperplasia (n = 21), 3.0/6.6%; carcinomas in situ (n = 70), 7.4/14.0%; and invasive carcinomas (n = 58), 10.0/22.9%. Proliferative index data for atypical hyperplasia were virtually identical to those for benign lesions and were significantly lower than those for breast cancer, suggesting that expression levels of cyclin B1 and Cdc2 may be used to distinguish premalignant human breast lesions from advanced disease. Furthermore, the proliferative index for cyclin B1 for comedo-type ductal carcinomas in situ agreed with that for invasive ductal carcinomas (mean, 10.1% versus 9.5%), apparently explaining the clinicopathological aggressiveness of this tumor at the molecular level. PMID:9006317

  14. The quantitative spectrum of inositol phosphate metabolites in avian erythrocytes, analysed by proton n.m.r. and h.p.l.c. with direct isomer detection.

    PubMed Central

    Radenberg, T; Scholz, P; Bergmann, G; Mayr, G W

    1989-01-01

    The spectrum of inositol phosphate isomers present in avian erythrocytes was investigated in qualitative and quantitative terms. Inositol phosphates were isolated in micromolar quantities from turkey blood by anion-exchange chromatography on Q-Sepharose and subjected to proton n.m.r. and h.p.l.c. analysis. We employed a h.p.l.c. technique with a novel, recently described complexometric post-column detection system, called 'metal-dye detection' [Mayr (1988) Biochem. J. 254, 585-591], which enabled us to identify non-radioactively labelled inositol phosphate isomers and to determine their masses. The results indicate that avian erythrocytes contain the same inositol phosphate isomers as mammalian cells. Denoted by the 'lowest-locant rule' [NC-IUB Recommendations (1988) Biochem. J. 258, 1-2] irrespective of true enantiomerism, these are Ins(1,4)P2, Ins(1,6)P2, Ins(1,3,4)P3, Ins(1,4,5)P3, Ins(1,3,4,5)P4, Ins(1,3,4,6)P4, Ins(1,4,5,6)P4, Ins(1,3,4,5,6)P5, and InsP6. Furthermore, we identified two inositol trisphosphate isomers hitherto not described for mammalian cells, namely Ins(1,5,6)P3 and Ins(2,4,5)P3. The possible position of these two isomers in inositol phosphate metabolism and implications resulting from absolute abundances of inositol phosphates are discussed. PMID:2604720

  15. TopCAT and PySESA: Open-source software tools for point cloud decimation, roughness analyses, and quantitative description of terrestrial surfaces

    NASA Astrophysics Data System (ADS)

    Hensleigh, J.; Buscombe, D.; Wheaton, J. M.; Brasington, J.; Welcker, C. W.; Anderson, K.

    2015-12-01

    The increasing use of high-resolution topography (HRT) constructed from point clouds obtained from technology such as LiDAR, SoNAR, SAR, SfM and a variety of range-imaging techniques has created a demand for custom analytical tools and software for point cloud decimation (data thinning and gridding) and spatially explicit statistical analysis of terrestrial surfaces. We present a number of analytical and computational tools designed to quantify surface roughness and texture, directly from point clouds in a variety of ways (using spatial- and frequency-domain statistics). TopCAT (Topographic Point Cloud Analysis Toolkit; Brasington et al., 2012) and PySESA (Python program for Spatially Explicit Spectral Analysis) both work by applying a small moving window to (x,y,z) data to calculate a suite of (spatial and spectral domain) statistics, which are then spatially referenced on a regular (x,y) grid at a user-defined resolution. Collectively, these tools facilitate quantitative description of surfaces and may allow, for example, fully automated texture characterization and segmentation, roughness and grain size calculation, and feature detection and classification, on very large point clouds with great computational efficiency. Using tools such as these, it may be possible to detect geomorphic change in surfaces which have undergone minimal elevation difference, for example deflation surfaces which have coarsened but undergone no net elevation change, or surfaces which have eroded and accreted, leaving behind a different textural surface expression than before. The functionalities of the two toolboxes are illustrated with example high-resolution bathymetric point cloud data collected with multibeam echosounder, and topographic data collected with LiDAR.
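The moving-window idea described above can be sketched in a few lines. This is not the TopCAT/PySESA interface; the point format, window shape, and the choice of standard deviation of elevations as the roughness statistic are all illustrative assumptions:

```python
import statistics

def window_roughness(points, x0, y0, w):
    # Collect z-values of points whose (x, y) coordinates fall inside a
    # w-by-w square window centred on (x0, y0), and report their
    # population standard deviation as a simple spatial-domain
    # roughness statistic for that grid node.
    z = [p[2] for p in points
         if abs(p[0] - x0) <= w / 2 and abs(p[1] - y0) <= w / 2]
    return statistics.pstdev(z) if len(z) > 1 else 0.0
```

Evaluating this at every node of a regular (x, y) grid yields a spatially referenced roughness raster of the kind the toolkits produce.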

  16. Optimisation and validation of a quantitative and confirmatory LC-MS method for multi-residue analyses of β-lactam and tetracycline antibiotics in bovine muscle.

    PubMed

    Rezende, C P; Almeida, M P; Brito, R B; Nonaka, C K; Leite, M O

    2012-01-01

    A multi-residue method for the determination of the β-lactam antibiotics ampicillin, cefazolin, cloxacillin, dicloxacillin, nafcillin, oxacillin, penicillin G, penicillin V and the tetracyclines chlortetracycline, tetracycline and oxytetracycline was optimised and validated in bovine muscle. The method is based on the extraction of the residues from muscle using water/acetonitrile (2/8, v/v) with subsequent use of dispersive solid-phase C18 and hexane for purification. Extracts were analysed using ultra-performance liquid chromatography (UPLC-MS/MS) coupled with the mass spectrometer in positive electrospray ionisation mode (ESI+) for all analytes. The method was validated according to the requirements of European Commission Decision 2002/657/EC. The validation results were obtained within the range of 0-1.5 times the MRL, with recoveries varying from 90% to 110% and CV < 20% (n = 54), except for cloxacillin, dicloxacillin and nafcillin. However, matrix interference was observed. The decision limit (CCα) ranged from 10% to 15% of the MRL. The uncertainty measurement was estimated based on both bottom-up and top-down strategies and the uncertainty values were found to be lower than 20% of the MRL. The method has a simple extraction procedure whereby analytes are separated with reasonable resolutions in a single 11-min chromatographic run. According to the validation results, this method is suitable for monitoring β-lactams and tetracyclines according to the National Program for Residue and Contaminant Control - Brazil (NPRC-Brazil) in bovine muscle. PMID:22070766

  17. Integration of CO2 flux and remotely-sensed data for primary production and ecosystem respiration analyses in the Northern Great Plains: potential for quantitative spatial extrapolation

    USGS Publications Warehouse

    Gilmanov, Tagir G.; Tieszen, Larry L.; Wylie, Bruce K.; Flanagan, Larry B.; Frank, Albert B.; Haferkamp, Marshall R.; Meyers, Tilden P.; Morgan, Jack A.

    2005-01-01

    Aim  Extrapolation of tower CO2 fluxes will be greatly facilitated if robust relationships between flux components and remotely sensed factors are established. Long-term measurements at five Northern Great Plains locations were used to obtain relationships between CO2 fluxes and photosynthetically active radiation (Q), other on-site factors, and Normalized Difference Vegetation Index (NDVI) from the SPOT VEGETATION data set. Location  CO2 flux data from the following stations and years were analysed: Lethbridge, Alberta 1998–2001; Fort Peck, MT 2000, 2002; Miles City, MT 2000–01; Mandan, ND 1999–2001; and Cheyenne, WY 1997–98. Results  Analyses based on light-response functions allowed partitioning net CO2 flux (F) into gross primary productivity (Pg) and ecosystem respiration (Re). Weekly averages of daytime respiration, γday, estimated from light responses were closely correlated with weekly averages of measured night-time respiration, γnight (R2 0.64 to 0.95). Daytime respiration tended to be higher than night-time respiration, and regressions of γday on γnight for all sites were different from 1 : 1 relationships. Over 13 site-years, gross primary production varied from 459 to 2491 g CO2 m−2 year−1, ecosystem respiration from 996 to 1881 g CO2 m−2 year−1, and net ecosystem exchange from −537 (source) to +610 g CO2 m−2 year−1 (sink). Maximum daily ecological light-use efficiencies, ɛd,max = Pg/Q, were in the range 0.014 to 0.032 mol CO2 (mol incident quanta)−1. Main conclusions  Ten-day average Pg was significantly more highly correlated with NDVI than 10-day average daytime flux, Pd (R2 = 0.46 to 0.77 for Pg-NDVI and 0.05 to 0.58 for Pd-NDVI relationships). Ten-day average Re was also positively correlated with NDVI, with R2 values from 0.57 to 0.77. Patterns of the relationships of Pg and Re with NDVI and other factors indicate possibilities for establishing multivariate
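The flux partitioning described above rests on the identity F = Pg − Re, with Pg obtained from a fitted light-response function. A minimal sketch using a rectangular-hyperbola light response; the functional form and the parameter names (alpha, Pmax) are assumptions for illustration, since the abstract does not specify the model used:

```python
def net_flux(Q, alpha, Pmax, Re):
    # Rectangular-hyperbola light response: gross uptake Pg saturates
    # with photosynthetically active radiation Q; net flux is gross
    # uptake minus ecosystem respiration.
    Pg = alpha * Q * Pmax / (alpha * Q + Pmax)
    return Pg - Re

def gross_production(F, Re):
    # Partition a measured net flux into gross primary production:
    # Pg = F + Re
    return F + Re
```

At Q = 0 the model reduces to F = −Re, which is how night-time measurements constrain respiration in such fits.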

  18. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
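The core of the method above, attributing any intensity that exceeds a superimposed single-component standard to the remaining components, can be sketched as follows. The array representation of scans and the clipping at zero are illustrative assumptions, not the paper's exact procedure:

```python
def excess_intensity(mixed_scan, standard_scan):
    # Intensity above the aligned pure single-component standard
    # (e.g., primer alone) is attributed to the other, overlapping
    # components in the lane; negative residuals are clipped to zero.
    return [max(m - s, 0.0) for m, s in zip(mixed_scan, standard_scan)]

def relative_fraction(component_area, total_area):
    # Relative molarity follows from the ratio of integrated areas.
    return component_area / total_area
```

Summing the excess over all pixels gives the area of the second component, from which relative molarity of the two species is computed.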

  19. Digital image analyser for autoradiography

    SciTech Connect

    Muth, R.A.; Plotnick, J.

    1985-05-01

    The most critical parameter in quantitative autoradiography for assay of tissue concentrations of tracers is the ability to obtain precise and accurate measurements of optical density of the images. Existing high precision systems for image analysis, rotating drum densitometers, are expensive, suffer from mechanical problems and are slow. More moderately priced and reliable video camera based systems are available, but their outputs generally do not have the uniformity and stability necessary for high resolution quantitative autoradiography. The authors have designed and constructed an image analyser optimized for quantitative single and multiple tracer autoradiography which the authors refer to as a memory-mapped charge-coupled device scanner (MM-CCD). The input is from a linear array of CCDs which is used to optically scan the autoradiograph. Images are digitized into 512 x 512 picture elements with 256 gray levels and the data is stored in buffer video memory in less than two seconds. Images can then be transferred to RAM memory by direct memory-mapping for further processing. Arterial blood curve data and optical density-calibrated standards data can be entered and the optical density images can be converted automatically to tracer concentration or functional images. In double tracer studies, images produced from both exposures can be stored and processed in RAM to yield ''pure'' individual tracer concentration or functional images. Any processed image can be transmitted back to the buffer memory to be viewed on a monitor and processed for region of interest analysis.

  20. Quantitative immunohistochemical analyses of the expression of E-cadherin, thrombomodulin, CD44H and CD44v6 in primary tumours of pharynx/larynx squamous cell carcinoma and their lymph node metastases.

    PubMed

    Hernández Gaspar, R; de los Toyos, J R; Alvarez Marcos, C; Riera, J R; Sampedro, A

    1999-01-01

    The quantitative expression of E-cadherin, thrombomodulin, CD44H and CD44v6 in 32 specimens of primary tumours of pharynx/larynx squamous cell carcinoma and their lymph node metastases was studied by immunohistochemistry. With the aim of obtaining comparative and objective data, image acquisition conditions were kept unaltered for all the measurements and the immunostaining intensity was quantified by applying an image processing system. On the one hand, correlations were only observed between CD44H and CD44v6, both in primary tumours and metastases, and between E-cadherin and TM in metastases. On the other hand, statistical analyses of paired data did not show significant differences in the expression of these markers between the two tumour sites. In agreement with previous reports, E-cadherin expression, like that of TM, was rather low or negative in primary tumours and metastases of the three poorly differentiated specimens we studied, although some of these samples showed intermediate immunostaining levels of CD44H/CD44v6. It may be concluded from the present study that the quantitative expression of these adhesion molecules in well established lymph node metastases of pharynx/larynx squamous cell carcinoma is essentially unaltered in relation to their primary sites. PMID:10609562

  1. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  2. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
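The role of the median function can be illustrated with the simplest monotone Hermite slope choice, a minmod limiter written as a three-point median. This is the basic second-order construction whose constraint the paper relaxes, not the paper's uniformly third/fourth-order algorithm:

```python
def median3(a, b, c):
    # Median of three values.
    return sorted([a, b, c])[1]

def monotone_slopes(x, y):
    # Secant slopes of the data, then a derivative at each interior
    # node equal to median(0, left secant, right secant): this is the
    # minmod choice, which forces a zero slope at local extrema and
    # thereby guarantees a monotone piecewise cubic Hermite interpolant.
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    return ([d[0]]
            + [median3(0.0, d[i - 1], d[i]) for i in range(1, len(d))]
            + [d[-1]])
```

Forcing zero slopes at extrema is exactly what caps accuracy at second order there, motivating the paper's geometric relaxation.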

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  4. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using the GPS.

  5. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. A simple quantitative method analysing amikacin, gentamicin, and vancomycin levels in human newborn plasma using ion-pair liquid chromatography/tandem mass spectrometry and its applicability to a clinical study.

    PubMed

    Bijleveld, Yuma; de Haan, Timo; Toersche, Jan; Jorjani, Sona; van der Lee, Johanna; Groenendaal, Floris; Dijk, Peter; van Heijst, Arno; Gavilanes, Antonio W D; de Jonge, Rogier; Dijkman, Koen P; van Straaten, Henrica; Rijken, Monique; Zonnenberg, Inge; Cools, Filip; Nuytemans, Debbie; Mathôt, Ron

    2014-03-01

    Neuroprotective controlled therapeutic hypothermia is the standard of care for newborns suffering perinatal asphyxia. Antibiotic drugs, such as amikacin, gentamicin, and vancomycin are frequently administered during controlled hypothermia, which possibly alters their pharmacokinetic (PK) and pharmacodynamic (PD) profiles. In order to examine this effect an LC-MS/MS method for the simultaneous quantification of amikacin, the major gentamicin components (gentamicin C, C1a and C2), and vancomycin in plasma was developed. In 25 μL of plasma, proteins were precipitated with trichloroacetic acid (TCA) and detection of the components was achieved using ion-pair reversed phase chromatography coupled with electrospray ionization tandem mass spectrometry. The chromatographic runtime was 7.5 min per sample. Calibration standards were prepared over a range of 0.3-50 mg L(-1) for amikacin and gentamicin and 1.0-100 mg L(-1) for vancomycin. At LLOQ accuracy was between 103 and 120% and imprecision was less than 19%. For concentrations above LLOQ accuracy ranged from 98% to 102% and imprecision was less than 6%. Process efficiency, ionization efficiency, and recovery were acceptable. Samples and stock solutions were stable during the time periods and at the different temperatures examined. The applicability of the method was shown by analysing plasma samples from 3 neonatal patients. The developed method allows accurate and precise simultaneous quantification of amikacin, gentamicin, and vancomycin in a small volume (25 μL) of plasma. PMID:24548921

  7. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  8. Quantitative film radiography

    SciTech Connect

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-02-26

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects.

  9. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable for systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  10. The Use of a Quantitative Cysteinyl-peptide Enrichment Technology for High-Throughput Quantitative Proteomics

    SciTech Connect

    Liu, Tao; Qian, Weijun; Camp, David G.; Smith, Richard D.

    2007-01-02

    Quantitative proteomic measurements are of significant interest in studies aimed at discovering disease biomarkers and providing new insights into biological pathways. A quantitative cysteinyl-peptide enrichment technology (QCET) can be employed to achieve higher efficiency, greater dynamic range, and higher throughput in quantitative proteomic studies that utilize stable-isotope labeling techniques combined with high-resolution liquid chromatography (LC)-mass spectrometry (MS) measurements. The QCET approach involves specific 16O/18O labeling of tryptic peptides, high-efficiency enrichment of cysteinyl-peptides, and confident protein identification and quantification from high resolution LC-Fourier transform ion cyclotron resonance mass spectrometry (FTICR) measurements and a previously established database of accurate mass and elution time information. This methodology is demonstrated by using proteome profiling of naïve and in vitro-differentiated human mammary epithelial cells (HMEC) as an example, which initially resulted in the identification and quantification of 603 proteins in a single LC-FTICR analysis. QCET provides not only highly efficient enrichment of cysteinyl-peptides for more extensive proteome coverage and improved labeling efficiency for better quantitative measurements, but more importantly, a high-throughput strategy suitable for quantitative proteome analysis where extensive or parallel proteomic measurements are required, such as in time course studies of specific pathways and clinical sample analyses for biomarker discovery.

  11. Quantitative analysis of endogenous compounds.

    PubMed

    Thakare, Rhishikesh; Chhonker, Yashpal S; Gautam, Nagsen; Alamoudi, Jawaher Abdullah; Alnouti, Yazen

    2016-09-01

    Accurate quantitative analysis of endogenous analytes is essential for several clinical and non-clinical applications. LC-MS/MS is the technique of choice for quantitative analyses. Absolute quantification by LC/MS requires preparing standard curves in the same matrix as the study samples so that the matrix effect and the extraction efficiency for analytes are the same in both the standard and study samples. However, by definition, analyte-free biological matrices do not exist for endogenous compounds. To address the lack of blank matrices for the quantification of endogenous compounds by LC-MS/MS, four approaches are used: the standard addition, the background subtraction, the surrogate matrix, and the surrogate analyte methods. This review article presents an overview of these approaches, cites and summarizes their applications, and compares their advantages and disadvantages. In addition, we discuss in detail the validation requirements and compatibility with FDA guidelines to ensure method reliability in quantifying endogenous compounds. The standard addition, background subtraction, and surrogate analyte approaches allow the use of the same matrix for the calibration curve as the one to be analyzed in the test samples. However, in the surrogate matrix approach, various matrices such as artificial, stripped, and neat matrices are used as surrogate matrices for the actual matrix of study samples. For the surrogate analyte approach, it is required to demonstrate similarity in matrix effect and recovery between surrogate and authentic endogenous analytes. Similarly, for the surrogate matrix approach, it is required to demonstrate similar matrix effect and extraction recovery in both the surrogate and original matrices. All these methods represent indirect approaches to quantify endogenous compounds and, regardless of what approach is followed, it has to be shown that none of the validation criteria have been compromised due to the indirect analyses. PMID
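Of the four approaches listed, standard addition lends itself to a short illustration: known amounts of analyte are spiked into aliquots of the study matrix, signal is regressed on added concentration, and the endogenous concentration is read from the magnitude of the x-intercept. A minimal sketch with synthetic numbers and ordinary least squares (the function name is an assumption):

```python
def standard_addition_conc(added, signal):
    # Fit a least-squares line through (added concentration, signal).
    # With a linear detector response, signal = slope * (added + c0),
    # so the endogenous concentration c0 is the magnitude of the
    # x-intercept: c0 = intercept / slope.
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    slope = (sum((a - mx) * (s - my) for a, s in zip(added, signal))
             / sum((a - mx) ** 2 for a in added))
    intercept = my - slope * mx
    return intercept / slope
```

Because the calibration is built in the study matrix itself, matrix effect and extraction efficiency are identical for standards and samples, which is the point of the approach.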

  12. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  13. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  14. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  15. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
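A single Richardson extrapolation step, the device combined with finite differences above, fits in a few lines. The central-difference derivative is only an illustrative carrier for the idea, not the paper's full Schrödinger-equation scheme:

```python
def central_diff(f, x, h):
    # Second-order central difference approximation to f'(x),
    # with leading error proportional to h**2.
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f_h, f_h2, p=2):
    # One Richardson step: combine estimates computed at step h and
    # h/2 so that the leading O(h**p) error term cancels, raising the
    # order of accuracy of the result.
    return (2 ** p * f_h2 - f_h) / (2 ** p - 1)
```

For f(t) = t**3 at x = 1 the central difference gives 3 + h**2 exactly, so one Richardson step with p = 2 recovers the true derivative 3 to rounding error.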

  16. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environment information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain to understand system-level behaviors from the molecular-level knowledge of biology and to unravel possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent works in understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on addressing the questions of how cells process chemical information and adapt to varying environments, and what the thermodynamic limits of key regulatory functions, such as adaptation, are.

  17. Software for quantitative trait analysis

    PubMed Central

    2005-01-01

    This paper provides a brief overview of software currently available for the genetic analysis of quantitative traits in humans. Programs that implement variance components, Markov Chain Monte Carlo (MCMC), Haseman-Elston (H-E) and penetrance model-based linkage analyses are discussed, as are programs for measured genotype association analyses and quantitative trait transmission disequilibrium tests. The software compared includes LINKAGE, FASTLINK, PAP, SOLAR, SEGPATH, ACT, Mx, MERLIN, GENEHUNTER, Loki, Mendel, SAGE, QTDT and FBAT. Where possible, the paper provides URLs for acquiring these programs through the internet, details of the platforms for which the software is available and the types of analyses performed. PMID:16197737

  18. On study design in neuroimaging heritability analyses

    NASA Astrophysics Data System (ADS)

    Koran, Mary Ellen; Li, Bo; Jahanshad, Neda; Thornton-Wells, Tricia A.; Glahn, David C.; Thompson, Paul M.; Blangero, John; Nichols, Thomas E.; Kochunov, Peter; Landman, Bennett A.

    2014-03-01

    Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
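    The simulation strategy described (Monte Carlo phenotypes with known heritability, then checking the estimator) can be illustrated with a toy version. This sketch is hypothetical, not the SOLAR-ECLIPSE code, and is in Python rather than R: it simulates twin-pair phenotypes under a purely additive model and recovers heritability with Falconer's classic estimator.

```python
import random

def pearson(pairs):
    """Pearson correlation between first and second members of each pair."""
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate_twins(n_pairs, h2, rng):
    """Phenotype = additive genetic value + environment, total variance 1.
    MZ twins share all additive genetic variance, DZ twins share half."""
    se = (1.0 - h2) ** 0.5
    sg = h2 ** 0.5
    half = (h2 / 2.0) ** 0.5
    mz, dz = [], []
    for _ in range(n_pairs):
        g = rng.gauss(0.0, sg)
        mz.append((g + rng.gauss(0.0, se), g + rng.gauss(0.0, se)))
        shared = rng.gauss(0.0, half)  # the shared half of DZ genetic variance
        dz.append((shared + rng.gauss(0.0, half) + rng.gauss(0.0, se),
                   shared + rng.gauss(0.0, half) + rng.gauss(0.0, se)))
    return mz, dz

def falconer_h2(mz, dz):
    """Falconer's estimator: h2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (pearson(mz) - pearson(dz))
```

    Repeating such a simulation over many sample sizes and pedigree structures is exactly how the bias and variability of an estimator like SOLAR's can be characterized.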

  19. Quantitative Thinking.

    ERIC Educational Resources Information Center

    DuBridge, Lee A.

    An appeal for more research to determine how to educate children as effectively as possible is made. Mathematics teachers can readily examine the educational problems of today in their classrooms since learning progress in mathematics can easily be measured and evaluated. Since mathematics teachers have learned to think in quantitative terms and…

  20. On Quantitizing

    ERIC Educational Resources Information Center

    Sandelowski, Margarete; Voils, Corrine I.; Knafl, George

    2009-01-01

    "Quantitizing", commonly understood to refer to the numerical translation, transformation, or conversion of qualitative data, has become a staple of mixed methods research. Typically glossed are the foundational assumptions, judgments, and compromises involved in converting disparate data sets into each other and whether such conversions advance…

  1. QUANTITATIVE MORPHOLOGY

    EPA Science Inventory

    Abstract: In toxicology, the role of quantitative assessment of brain morphology can be understood in the context of two types of treatment-related alterations. One type of alteration is specifically associated with treatment and is not observed in control animals. Measurement ...

  2. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    PubMed

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study described qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful, simple, and accurate detection strategy for unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid. PMID:25624198
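    A calibrator plasmid of this kind supports quantitation through a standard curve. The sketch below is illustrative, not taken from the paper: it fits Ct against log10 copy number for a dilution series of the plasmid, then inverts the fit to estimate copies in an unknown sample.

```python
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = a + b * log10(copies) from a dilution series.
    A slope near -3.32 corresponds to ~100% amplification efficiency."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def copies_from_ct(ct, a, b):
    """Invert the standard curve to estimate copy number from a measured Ct."""
    return 10 ** ((ct - a) / b)
```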

  3. Solution structure of the Z-DNA binding domain of PKR-like protein kinase from Carassius auratus and quantitative analyses of the intermediate complex during B–Z transition

    PubMed Central

    Lee, Ae-Ree; Park, Chin-Ju; Cheong, Hae-Kap; Ryu, Kyoung-Seok; Park, Jin-Wan; Kwon, Mun-Young; Lee, Janghyun; Kim, Kyeong Kyu; Choi, Byong-Seok; Lee, Joon-Hwa

    2016-01-01

    Z-DNA binding proteins (ZBPs) play important roles in RNA editing, innate immune response and viral infection. Structural and biophysical studies show that ZBPs initially form an intermediate complex with B-DNA for B–Z conversion. However, a comprehensive understanding of the mechanism of Z-DNA binding and B–Z transition is still lacking, due to the absence of structural information on the intermediate complex. Here, we report the solution structure of the Zα domain of the ZBP-containing protein kinase from Carassius auratus (caZαPKZ). We quantitatively determined the binding affinity of caZαPKZ for both B-DNA and Z-DNA and characterized its B–Z transition activity, which is modulated by varying the salt concentration. Our results suggest that the intermediate complex formed by caZαPKZ and B-DNA can be used as a molecular ruler to measure the degree to which DNA transitions to the Z isoform. PMID:26792893

  4. Solution structure of the Z-DNA binding domain of PKR-like protein kinase from Carassius auratus and quantitative analyses of the intermediate complex during B-Z transition.

    PubMed

    Lee, Ae-Ree; Park, Chin-Ju; Cheong, Hae-Kap; Ryu, Kyoung-Seok; Park, Jin-Wan; Kwon, Mun-Young; Lee, Janghyun; Kim, Kyeong Kyu; Choi, Byong-Seok; Lee, Joon-Hwa

    2016-04-01

    Z-DNA binding proteins (ZBPs) play important roles in RNA editing, innate immune response and viral infection. Structural and biophysical studies show that ZBPs initially form an intermediate complex with B-DNA for B-Z conversion. However, a comprehensive understanding of the mechanism of Z-DNA binding and B-Z transition is still lacking, due to the absence of structural information on the intermediate complex. Here, we report the solution structure of the Zα domain of the ZBP-containing protein kinase from Carassius auratus (caZαPKZ). We quantitatively determined the binding affinity of caZαPKZ for both B-DNA and Z-DNA and characterized its B-Z transition activity, which is modulated by varying the salt concentration. Our results suggest that the intermediate complex formed by caZαPKZ and B-DNA can be used as a molecular ruler to measure the degree to which DNA transitions to the Z isoform. PMID:26792893

  5. X-Ray Map Analyser: A new ArcGIS® based tool for the quantitative statistical data handling of X-ray maps (Geo- and material-science applications)

    NASA Astrophysics Data System (ADS)

    Ortolano, Gaetano; Zappalà, Luigi; Mazzoleni, Paolo

    2014-11-01

    A new semi-automated image processing procedure based on multivariate statistical analysis of X-ray maps of petrological and material science interest has been developed to generate high contrast pseudo-coloured images highlighting the element distribution between and within detected mineral phases. This new tool package, developed in Python and integrated with ArcGIS®, generates in only a few minutes several graphical outputs useful for classifying chemically homogeneous zones as well as extracting quantitative information through the statistical data handling of X-ray maps. The code, largely based on the use of functions implemented in ArcGIS® 9.3 equipped with Spatial Analyst and Data Management licences, has been suitably integrated with original cyclic functions that hugely reduce the time taken to complete lengthy procedures. In particular these tools, after the acquisition of any kind of multispectral images, allow fast and powerful data processing for efficient illustration and documentation of key compositional and microtextural relationships in rocks and materials.
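    The multivariate-classification step behind tools like this can be illustrated with a minimal k-means over per-pixel element-intensity vectors. This is a generic Python sketch, not the ArcGIS®-based tool itself, and the element counts below are invented.

```python
import random

def kmeans(points, k, iters=25, seed=0):
    """Minimal k-means: cluster per-pixel element-intensity vectors
    (e.g. X-ray counts per element) into k chemically homogeneous groups.
    Returns the final cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    dim = len(points[0])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each pixel to its nearest center (squared distance)
            best = min(range(k),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(p, centers[j])))
            clusters[best].append(p)
        for j in range(k):
            if clusters[j]:  # recompute each center as its cluster mean
                centers[j] = tuple(sum(q[d] for q in clusters[j]) / len(clusters[j])
                                   for d in range(dim))
    return centers
```

    Each recovered center is an average element signature; mapping pixels to their nearest center yields the pseudo-coloured phase map the abstract describes.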

  6. Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses

    EPA Science Inventory

    Abstract: Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative...

  7. Quantitative analysis of coral communities of the Sanganeb Atoll (central Red Sea). II. Comparison with a reef area off Aqaba (northern Red Sea) at the northern margin of the Indo-Pacific reef belt

    NASA Astrophysics Data System (ADS)

    Schuhmacher, H.; Mergner, H.

    1985-12-01

    Quantitative studies of coral communities in the central and northern Red Sea were designed for comparison of the community structure in both areas. The central Red Sea provides reef-building Scleractinia and reef-inhabiting Alcyonaria with optimal temperature conditions, whereas the north tip of the Gulf of Aqaba (29°30' N) represents the northernmost outpost of coral reefs in the Indian Ocean. It is generally assumed that coral diversity decreases towards the margins of the global reef-belt. In the Red Sea, generic diversity of hermatypic Scleractinia slightly decreases from the central to the northern part (51 : 48 genera); but cnidarian species abundance (species number per 25 m2 area) was found to increase from 62 to 98 species and the Shannon-Wiener diversity index increased from 2.58 to 3.67 with regard to colony number. The mean colony size was 189 cm2 at Sanganeb-Atoll, but only 52 cm2 at Aqaba. The mean numbers of colonies were inversely related: 662 per 25 m2 at Sanganeb-Atoll and 2028 at Aqaba. Uninhabited parts of the studied areas amounted to 47 % at Sanganeb-Atoll and to 56 % at Aqaba. The community structure of the studied areas indicates that occasional perturbations prevent the progress of the community towards a low-diversity equilibrium state. Since severe hydrodynamic damage is extremely rare in 10 m depth, major disturbances may occur by sedimentation, by the interference of grazers (e. g. Diadema setosum) and due to overgrowth by space-competitors (mainly soft corals). These events are to be regarded as throwbacks in the process of monopolization of the area by well adapted species. Recovery from such perturbations (i.e. recolonization of dead areas) obviously takes place at different velocities in the northern and central Red Sea, for the mean water temperature at Aqaba is 5 °C lower than in the central Red Sea. Hence the process of taking over a given space by a few species proceeds further in the central Red Sea than at its northern end
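    The Shannon-Wiener index reported above (2.58 at Sanganeb-Atoll vs. 3.67 at Aqaba) is computed from colony counts per species. A minimal sketch, using invented counts rather than the study's data:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i), where p_i is the
    proportion of colonies belonging to species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

    For n equally abundant species H' equals ln(n), its maximum; skewed communities score lower, which is why the more even Aqaba community yields the higher index.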

  8. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphical data are not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  9. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  10. Broadband rotor noise analyses

    NASA Technical Reports Server (NTRS)

    George, A. R.; Chou, S. T.

    1984-01-01

    The various mechanisms which generate broadband noise on a range of rotors studied include load fluctuations due to inflow turbulence, due to turbulent boundary layers passing the blades' trailing edges, and due to tip vortex formation. Existing analyses are used and extensions to them are developed to make more accurate predictions of rotor noise spectra and to determine which mechanisms are important in which circumstances. Calculations based on the various prediction methods were compared with existing experiments. The present analyses are adequate to predict the spectra from a wide variety of experiments on fans, full scale and model scale helicopter rotors, wind turbines, and propellers to within about 5 to 10 dB. Better knowledge of the inflow turbulence improves the accuracy of the predictions. Results indicate that inflow turbulence noise depends strongly on ambient conditions and dominates at low frequencies. Trailing edge noise and tip vortex noise are important at higher frequencies if inflow turbulence is weak. Boundary layer trailing edge noise, important for large rotors, increases slowly with angle of attack but not as rapidly as tip vortex noise.

  11. Broadband rotor noise analyses

    NASA Astrophysics Data System (ADS)

    George, A. R.; Chou, S. T.

    1984-04-01

    The various mechanisms which generate broadband noise on a range of rotors studied include load fluctuations due to inflow turbulence, due to turbulent boundary layers passing the blades' trailing edges, and due to tip vortex formation. Existing analyses are used and extensions to them are developed to make more accurate predictions of rotor noise spectra and to determine which mechanisms are important in which circumstances. Calculations based on the various prediction methods were compared with existing experiments. The present analyses are adequate to predict the spectra from a wide variety of experiments on fans, full scale and model scale helicopter rotors, wind turbines, and propellers to within about 5 to 10 dB. Better knowledge of the inflow turbulence improves the accuracy of the predictions. Results indicate that inflow turbulence noise depends strongly on ambient conditions and dominates at low frequencies. Trailing edge noise and tip vortex noise are important at higher frequencies if inflow turbulence is weak. Boundary layer trailing edge noise, important for large rotors, increases slowly with angle of attack but not as rapidly as tip vortex noise.

  12. Analysis of copy number variation using quantitative interspecies competitive PCR.

    PubMed

    Williams, Nigel M; Williams, Hywel; Majounie, Elisa; Norton, Nadine; Glaser, Beate; Morris, Huw R; Owen, Michael J; O'Donovan, Michael C

    2008-10-01

    Over recent years small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR based method to determine DNA copy number which uses the entire genome of a single chimpanzee as a competitor thus eliminating the requirement for competitive sequences to be synthesized for each assay. This results in the requirement for only a single reference sample for all assays and dramatically increases the potential for large numbers of loci to be analysed in multiplex. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls. PMID:18697816
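    The arithmetic behind such a competitive assay can be sketched as follows. This is an illustrative model, not the authors' protocol: the signal names and the assumption of a two-copy reference locus are mine.

```python
def copy_number(test_human, test_chimp, ref_human, ref_chimp, normal_copies=2.0):
    """Copy number at a test locus from interspecies competitive PCR signals.
    Each human amplicon is competed against the chimpanzee genome, so the
    human/chimp signal ratio cancels per-assay amplification efficiency;
    dividing by the same ratio at a known two-copy reference locus cancels
    differences in input template amount."""
    test_ratio = test_human / test_chimp
    ref_ratio = ref_human / ref_chimp
    return normal_copies * test_ratio / ref_ratio
```

    A heterozygous deletion (as in the 22q11 patients) halves the test-locus ratio relative to the reference and therefore reports one copy.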

  13. [Quantitative ultrasound].

    PubMed

    Barkmann, R; Glüer, C-C

    2006-10-01

    Methods of quantitative ultrasound (QUS) can be used to obtain knowledge about bone fragility. Comprehensive study results exist showing the power of QUS for the estimation of osteoporotic fracture risk. Nevertheless, the variety of technologies, devices, and variables as well as different degrees of validation of the single devices have to be taken into account. Using methods to simulate ultrasound propagation, the complex interaction between ultrasound and bone could be understood and the propagation could be visualized. Preceding widespread clinical use, it has to be clarified if patients with low QUS values will profit from therapy, as it has been shown for DXA. Moreover, the introduction of quality assurance measures is essential. The user should know the limitations of the methods and be able to interpret the results correctly. Applied in an adequate manner QUS methods could then, due to lower costs and absence of ionizing radiation, become important players in osteoporosis management. PMID:16896637

  14. Toward quantitative proteomics of organ substructures: implications for renal physiology.

    PubMed

    Velic, Ana; Macek, Boris; Wagner, Carsten A

    2010-09-01

    Organs are complex structures that consist of multiple tissues with different levels of gene expression. To achieve comprehensive coverage and accurate quantitation data, organs ideally should be separated into morphologic and/or functional substructures before gene or protein expression analysis. However, because of complex morphology and elaborate isolation protocols, to date this often has been difficult to achieve. Kidneys are organs in which functional and morphologic subdivision is especially important. Each subunit of the kidney, the nephron, consists of more than 10 subsegments with distinct morphologic and functional characteristics. For a full understanding of kidney physiology, global gene and protein expression analyses have to be performed at the level of the nephron subsegments; however, such studies have been extremely rare to date. Here we describe the latest approaches in quantitative high-accuracy mass spectrometry-based proteomics and their application to quantitative proteomics studies of the whole kidney and nephron subsegments, both in human beings and in animal models. We compare these studies with similar studies performed on other organ substructures. We argue that the newest technologies used for preparation, processing, and measurement of small amounts of starting material are finally enabling global and subsegment-specific quantitative measurement of protein levels in the kidney and other organs. These new technologies and approaches are making a decisive impact on our understanding of the (patho)physiological processes at the molecular level. PMID:21044760

  15. MASIC: a software program for fast quantitation and flexible visualization of chromatographic profiles from detected LC-MS(/MS) features

    SciTech Connect

    Monroe, Matthew E.; Shaw, Jason L.; Daly, Don S.; Adkins, Joshua N.; Smith, Richard D.

    2008-06-01

    Quantitative analysis of liquid chromatography (LC)- mass spectrometry (MS) and tandem mass spectrometry (MS/MS) data is essential to many proteomics studies. We have developed MASIC to accurately measure peptide abundances and LC elution times in low-resolution LC-MS/MS analyses. This software program uses an efficient processing algorithm to quickly generate mass specific selected ion chromatograms from a dataset and provides an interactive browser that allows users to examine individual chromatograms in a variety of fashions. The improved elution time estimates afforded by MASIC increase the utility of LC-MS/MS data in the accurate mass and time (AMT) tag approach to proteomics.
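    The core of a selected ion chromatogram is a windowed intensity sum per scan, from which the elution apex is read off. A simplified sketch, not MASIC's actual implementation; the scan layout and tolerance are assumptions.

```python
def selected_ion_chromatogram(scans, target_mz, tol=0.5):
    """For each (elution_time, peak_list) scan, sum the intensity of all
    peaks whose m/z lies within +/- tol of target_mz."""
    return [(time, sum(inten for mz, inten in peaks
                       if abs(mz - target_mz) <= tol))
            for time, peaks in scans]

def elution_apex(sic):
    """Elution time of the most intense point of the chromatogram."""
    return max(sic, key=lambda point: point[1])[0]
```

    The apex time is the improved elution-time estimate that feeds the accurate mass and time (AMT) tag matching described above.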

  16. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921

  17. Compilation of Sandia coal char combustion data and kinetic analyses

    SciTech Connect

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.

  18. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. 
As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age.

  19. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for this matrix derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying it to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with it. We show how to generate Derrida plots based on it. We show that its Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on this matrix. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions about the matrix, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
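    The superposition matrix can be illustrated on toy Boolean networks. This is a generic sketch: the two-node update rules below are invented, and the Strong Inhibition rule from the paper is not implemented here.

```python
from itertools import product

def transition_matrix(update_fns):
    """State-transition matrix for a synchronously updated Boolean network,
    one update function per node. Each row is deterministic (a single 1.0);
    states are 0/1 tuples in the order itertools.product generates them."""
    states = list(product((0, 1), repeat=len(update_fns)))
    index = {s: i for i, s in enumerate(states)}
    T = [[0.0] * len(states) for _ in states]
    for s in states:
        nxt = tuple(f(s) for f in update_fns)
        T[index[s]][index[nxt]] = 1.0
    return T

def superpose(matrices):
    """Entry-wise average over a class of networks: the transition-by-
    transition superposition, which remains row-stochastic."""
    n = len(matrices[0])
    m = len(matrices)
    return [[sum(M[i][j] for M in matrices) / m for j in range(n)]
            for i in range(n)]
```

    Fractional entries of the superposed matrix mark transitions on which the class members disagree, which is what entropy-based experiment selection exploits.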

  20. Expanding the Horizons of Quantitative Remote Sensing

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.

    2011-12-01

    Remote sensing of the Earth has made significant progress since its inception in the 1970s. The Landsat, ASTER, and MODIS multi-spectral imagers have provided a global, long-term record of the surface at visible through infrared wavelengths, and meter-scale color images can be acquired of regions of interest. However, these systems, and many of the algorithms used to analyze them, have advanced surprisingly little over the past three decades. Very little hyperspectral data are readily available or widely used, and software analysis tools are typically complex or 'black box'. As a result it is often difficult to make quantitative assessments of surface character - for example, accurate mapping of the composition and abundance of surface components. Ironically, planetary observations often have higher spectral resolution, a broader spectral range, and global coverage, with the result that sophisticated tools are routinely applied to these data to make quantitative mineralogy maps. These analyses are driven by the reality that, except for a tiny area explored by rovers, remote sensing provides the only means to determine surface properties. Improved terrestrial hyperspectral imaging systems have long been proposed and will make great advances. However, these systems remain in the future, and the question remains: what advancements can be made to extract quantitative information from existing data? A case study, inspired by the 1987 work of Sultan et al., was performed to combine all available visible, near-, and thermal-IR multi-spectral data with selected hyperspectral information and limited field verification. Hyperspectral data were obtained from lab observations of collected samples, and the highest spatial resolution images available were used to help interpret the lower-resolution regional imagery. The hyperspectral data were spectrally deconvolved, giving quantitative mineral abundances accurate to 5-10%. These spectra were degraded to the multi-spectral resolution.
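    Linear spectral deconvolution of the kind described reduces, for two endmembers, to a small least-squares problem. An illustrative sketch with made-up spectra, not the study's deconvolution code:

```python
def unmix_two(endmember_a, endmember_b, mixed):
    """Least-squares abundances (f_a, f_b) for a linear two-endmember
    mixture: solve the 2x2 normal equations of
    minimize ||f_a * A + f_b * B - mixed||^2 over the spectral channels."""
    aa = sum(x * x for x in endmember_a)
    bb = sum(x * x for x in endmember_b)
    ab = sum(x * y for x, y in zip(endmember_a, endmember_b))
    am = sum(x * y for x, y in zip(endmember_a, mixed))
    bm = sum(x * y for x, y in zip(endmember_b, mixed))
    det = aa * bb - ab * ab  # nonzero when the endmember spectra differ
    return (am * bb - bm * ab) / det, (bm * aa - am * ab) / det
```

    With library (or lab-measured) endmember spectra per mineral, the recovered fractions are the quantitative abundances; real unmixing adds more endmembers and non-negativity or sum-to-one constraints.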

  1. Quantitative Analyses of Cryptochrome-mBMAL1 Interactions

    PubMed Central

    Czarna, Anna; Breitkreuz, Helena; Mahrenholz, Carsten C.; Arens, Julia; Strauss, Holger M.; Wolf, Eva

    2011-01-01

    The mammalian cryptochromes mCRY1 and mCRY2 act as transcriptional repressors within the 24-h transcription-translational feedback loop of the circadian clock. The C-terminal tail and a preceding predicted coiled coil (CC) of the mCRYs as well as the C-terminal region of the transcription factor mBMAL1 are involved in transcriptional feedback repression. Here we show by fluorescence polarization and isothermal titration calorimetry that purified mCRY1/2CCtail proteins form stable heterodimeric complexes with two C-terminal mBMAL1 fragments. The longer mBMAL1 fragment (BMAL490) includes Lys-537, which is rhythmically acetylated by mCLOCK in vivo. mCRY1 (but not mCRY2) has a lower affinity to BMAL490 than to the shorter mBMAL1 fragment (BMAL577) and a K537Q mutant version of BMAL490. Using peptide scan analysis we identify two mBMAL1 binding epitopes within the coiled coil and tail regions of mCRY1/2 and document the importance of positively charged mCRY1 residues for mBMAL1 binding. A synthetic mCRY coiled coil peptide binds equally well to the short and to the long (wild-type and K537Q mutant) mBMAL1 fragments. In contrast, a peptide including the mCRY1 tail epitope shows a lower affinity to BMAL490 compared with BMAL577 and BMAL490(K537Q). We propose that acetylation of mBMAL1 Lys-537 enhances mCRY1 binding by affecting electrostatic interactions predominantly with the mCRY1 tail. Our data reveal different molecular interactions of the mCRY1/2 tails with mBMAL1, which may contribute to the non-redundant clock functions of mCRY1 and mCRY2. Moreover, our study suggests the design of peptidic inhibitors targeting the interaction of the mCRY1 tail with mBMAL1. PMID:21521686

  2. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.

  3. Quantitative Analyses of the Modes of Deformation in Engineering Thermoplastics

    NASA Astrophysics Data System (ADS)

    Landes, B. G.; Bubeck, R. A.; Scott, R. L.; Heaney, M. D.

    1998-03-01

    Synchrotron-based real-time small-angle X-ray scattering (RTSAXS) studies have been performed on rubber-toughened engineering thermoplastics with amorphous and semi-crystalline matrices. Scattering patterns measured at successive time intervals of 3 ms were analyzed to determine the plastic strain due to crazing. Simultaneous measurements of the absorption of the primary beam by the sample permit the total plastic strain to be concurrently computed. The plastic strain due to other deformation mechanisms (e.g., particle cavitation and macroscopic shear yield) can be determined from the difference between the total and craze-derived plastic strains. The contribution from macroscopic shear deformation can be determined from video-based optical data measured simultaneously with the X-ray data. These types of time-resolved experiments result in the generation of prodigious quantities of data, the analysis of which can considerably delay the determination of key results. A newly developed software package that runs in Windows 95 permits the rapid analysis of the relative contributions of the deformation modes from these time-resolved experiments. Examples of using these techniques on ABS-type and QUESTRA syndiotactic polystyrene type engineering resins will be given.

  4. Knowledge Discovery in Textual Documentation: Qualitative and Quantitative Analyses.

    ERIC Educational Resources Information Center

    Loh, Stanley; De Oliveira, Jose Palazzo M.; Gastal, Fabio Leite

    2001-01-01

    Presents an application of knowledge discovery in texts (KDT) concerning medical records of a psychiatric hospital. The approach helps physicians to extract knowledge about patients and diseases that may be used for epidemiological studies, for training professionals, and to support physicians to diagnose and evaluate diseases. (Author/AEF)

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  10. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. The sampling is managed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of measurements show that reduction values of aperture differ from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivities related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
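    The FFT-based roughness characterisation described above can be sketched in a few lines. The profile below is synthetic (only the 2.5 μm sampling interval comes from the abstract; the waviness amplitude and noise level are invented): the power spectrum of a profile dominated by long-wavelength waviness peaks at a low frequency, mirroring the low-frequency dominance the study reports for real fractures.

```python
import numpy as np

dx = 2.5e-6            # 2.5 um sampling interval, as in the CLSM scans
n = 1024
x = np.arange(n) * dx

# Synthetic profile: 4 cycles of long-wavelength waviness plus fine-scale noise
rng = np.random.default_rng(1)
profile = 5e-6 * np.sin(2 * np.pi * 4 * x / (n * dx)) \
          + 0.2e-6 * rng.standard_normal(n)

# Power spectrum of the de-meaned profile
power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=dx)

dominant = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
# dominant recovers the waviness frequency, 4 / (n * dx) cycles per metre
```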

  11. Quantitative nature of overexpression experiments

    PubMed Central

    Moriya, Hisao

    2015-01-01

    Overexpression experiments are sometimes considered as qualitative experiments designed to identify novel proteins and study their function. However, in order to draw conclusions regarding protein overexpression through association analyses using large-scale biological data sets, we need to recognize the quantitative nature of overexpression experiments. Here I discuss the quantitative features of two different types of overexpression experiment: absolute and relative. I also introduce the four primary mechanisms involved in growth defects caused by protein overexpression: resource overload, stoichiometric imbalance, promiscuous interactions, and pathway modulation associated with the degree of overexpression. PMID:26543202

  12. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with the size of the grain, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
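    As one concrete example of the shape parameters listed above, circularity is commonly defined as 4πA/P², equal to 1 for a perfect circle and smaller for irregular outlines. A minimal sketch on traced boundary coordinates follows; the definition is standard, but the polygon handling and test shapes here are illustrative, not the paper's implementation.

```python
import numpy as np

def circularity(xs, ys):
    """4*pi*Area / Perimeter^2: 1 for a circle, smaller for irregular grains."""
    # Shoelace formula for the polygon area
    area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
    # Perimeter: sum of edge lengths, closing the polygon back to the start
    perim = np.sum(np.hypot(np.diff(np.append(xs, xs[0])),
                            np.diff(np.append(ys, ys[0]))))
    return 4 * np.pi * area / perim ** 2

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = circularity(np.cos(theta), np.sin(theta))                      # ~1.0
square = circularity(np.array([0, 1, 1, 0.]), np.array([0, 0, 1, 1.]))  # pi/4
```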

  13. Accurate ab initio energy gradients in chemical compound space.

    PubMed

    Anatole von Lilienfeld, O

    2009-10-28

    Analytical potential energy derivatives, based on the Hellmann-Feynman theorem, are presented for any pair of isoelectronic compounds. Since energies are not necessarily monotonic functions between compounds, these derivatives can fail to predict the right trends of the effect of alchemical mutation. However, quantitative estimates without additional self-consistency calculations can be made when the Hellmann-Feynman derivative is multiplied with a linearization coefficient that is obtained from a reference pair of compounds. These results suggest that accurate predictions can be made regarding any molecule's energetic properties as long as energies and gradients of three other molecules have been provided. The linearization coefficient can be interpreted as a quantitative measure of chemical similarity. Presented numerical evidence includes predictions of electronic eigenvalues of saturated and aromatic molecular hydrocarbons. PMID:19894922
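    The arithmetic of the linearization scheme can be sketched schematically: a gradient is scaled by a coefficient calibrated on a reference pair of compounds with known energies. All numbers below are invented to illustrate the bookkeeping, not actual quantum-chemical data.

```python
# Schematic of the linearization scheme: a Hellmann-Feynman-style gradient is
# scaled by a coefficient calibrated on a reference pair of compounds.
# All values are invented for illustration.

# Reference pair: known energy change and known analytical gradient
e_ref_a, e_ref_b = -1.00, -1.30
grad_ref = -0.25
coeff = (e_ref_b - e_ref_a) / grad_ref               # linearization coefficient

# Prediction for a chemically similar target pair from its gradient alone
e_target_a, grad_target = -2.00, -0.50
e_target_b_pred = e_target_a + coeff * grad_target
```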

  14. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? or the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output targeting users and policy makers, as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation of the language often employed in communication of climate model output, a language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative and misleading, both for the current climate and as a function of the state of each climate simulation. And thirdly, a general approach for evaluating the relevance of quantitative climate model output

  15. The Chirp - High Resolution, Quantitative Subbottom Profiler.

    NASA Astrophysics Data System (ADS)

    Schock, Steven Gregory

    The chirp sonar is a quantitative subbottom profiler that can generate wide dynamic range, artifact-free seismograms in real time. These high quality seismograms can be used for quantitative analyses, such as reflectivity and attenuation measurements, and sediment classification. Key features of the chirp sonar include (1) a computer-generated FM pilot signal with a large time-bandwidth product that contains amplitude and phase compensation providing exact control of the transmitted acoustic pulse, (2) directional arrays with low backlobe levels, and (3) a towed vehicle designed to scatter bottom multiples. Subbottom profiles, acquired in Narragansett Bay, R.I., demonstrated 20 cm vertical resolution, 62 meter subbottom penetration and significant bottom multiple reduction. A new time domain technique for estimating acoustic attenuation, called the autocorrelation method, is described and compared to well known attenuation measurement techniques. The spectral ratio method is most accurate, followed by the autocorrelation and wavelet matching methods, for estimating the acoustic attenuation coefficient of sediments from reflection profiles. However, the autocorrelation method is the only technique efficient enough to provide an attenuation measurement for every depth increment in each acoustic return in real time. Multiple reflections, gradual impedance changes and windowing sidelobes degrade the attenuation estimates. Chirp sonar remote measurements off Hope Island were used to estimate the attenuation coefficient for clayey silts (0.091 dB/m/kHz by spectral ratio and 0.125 dB/m/kHz by autocorrelation), values which agree with in situ measurements made by Hamilton, but are significantly higher than the attenuation coefficient (0.019 dB/m/kHz, n = 1.50) calculated from laboratory measurements (250-750 kHz) on a core from the Hope Island site.
More ground truth measurements are required to establish the accuracy of remote attenuation measurements using the chirp sonar.
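    The spectral ratio method referred to above can be sketched compactly: the dB ratio of the spectra of two reflections is linear in frequency, and its slope over the two-way path length gives the attenuation coefficient. The spectra below are synthetic and noise-free, with a true coefficient chosen near the sediment values quoted in the abstract; none of the numbers are measured data.

```python
import numpy as np

# Hypothetical example: recover an attenuation coefficient of 0.1 dB/m/kHz
# from the spectra of two reflections separated by 10 m of sediment (20 m two-way).
alpha = 0.1            # dB/m/kHz, assumed true value
path = 20.0            # two-way travel distance in metres
freqs_khz = np.linspace(2.0, 12.0, 50)

upper = np.exp(-((freqs_khz - 7.0) ** 2) / 8.0)          # shallow-reflection spectrum
lower = upper * 10 ** (-alpha * path * freqs_khz / 20)   # deeper reflection, attenuated

# Spectral ratio: the dB ratio is linear in frequency; its slope gives -alpha*path.
ratio_db = 20 * np.log10(lower / upper)
slope, _ = np.polyfit(freqs_khz, ratio_db, 1)
alpha_est = -slope / path
```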

  16. Bioimaging for quantitative phenotype analysis.

    PubMed

    Chen, Weiyang; Xia, Xian; Huang, Yi; Chen, Xingwei; Han, Jing-Dong J

    2016-06-01

    With the development of bio-imaging techniques, an increasing number of studies apply these techniques to generate a myriad of image data. Its applications range from quantification of cellular, tissue, organismal and behavioral phenotypes of model organisms, to human facial phenotypes. The bio-imaging approaches to automatically detect, quantify, and profile phenotypic changes related to specific biological questions open new doors to studying phenotype-genotype associations and to precisely evaluating molecular changes associated with quantitative phenotypes. Here, we review major applications of bioimage-based quantitative phenotype analysis. Specifically, we describe the biological questions and experimental needs addressable by these analyses, computational techniques and tools that are available in these contexts, and the new perspectives on phenotype-genotype association uncovered by such analyses. PMID:26850283

  17. Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells

    PubMed Central

    Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian

    2016-01-01

    Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265

  18. Assessing the reproducibility of discriminant function analyses

    PubMed Central

    Andrew, Rose L.; Albert, Arianne Y.K.; Renaut, Sebastien; Rennison, Diana J.; Bock, Dan G.

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels to the DFA being performed on an unspecified subset of the data or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the

  19. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

    When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution using a rotating polarization radar to enhance the wingbeat frequency method of identification are presented.

  20. Increasing the quantitative bandwidth of NMR measurements.

    PubMed

    Power, J E; Foroozandeh, M; Adams, R W; Nilsson, M; Coombes, S R; Phillips, A R; Morris, G A

    2016-02-18

    The frequency range of quantitative NMR is increased from tens to hundreds of kHz by a new pulse sequence, CHORUS. It uses chirp pulses to excite uniformly over very large bandwidths, yielding accurate integrals even for nuclei such as (19)F that have very wide spectra. PMID:26789115

  1. Quantitative autoradiography of dot blots using a microwell densitometer

    SciTech Connect

    Ross, P.M.; Woodley, K.; Baird, M.

    1989-07-01

    We have established conditions for the quantitation of DNA hybridization by reading dot blot autoradiographs with a microwell plate densitometer. This method is more convenient, as accurate, and more sensitive than counting the spots in a liquid scintillation counter.

  2. A quantitative phosphorus loss assessment tool for agricultural fields

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Conservation and nutrient management planners need an assessment tool to accurately predict phosphorus (P) loss from agricultural lands. Available tools are either qualitative indices with limited capability to quantify offsite water quality impacts or prohibitively complex quantitative process-bas...

  3. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  4. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  5. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  6. Measurement of lentiviral vector titre and copy number by cross-species duplex quantitative PCR.

    PubMed

    Christodoulou, I; Patsali, P; Stephanou, C; Antoniou, M; Kleanthous, M; Lederer, C W

    2016-01-01

    Lentiviruses are the vectors of choice for many preclinical studies and clinical applications of gene therapy. Accurate measurement of biological vector titre before treatment is a prerequisite for vector dosing, and the calculation of vector integration sites per cell after treatment is as critical to the characterisation of modified cell products as it is to long-term follow-up and the assessment of risk and therapeutic efficiency in patients. These analyses are typically based on quantitative real-time PCR (qPCR), but as yet compromise accuracy and comparability between laboratories and experimental systems, the former by using separate simplex reactions for the detection of endogene and lentiviral sequences and the latter by designing different PCR assays for analyses in human cells and animal disease models. In this study, we validate in human and murine cells a qPCR system for the single-tube assessment of lentiviral vector copy numbers that is suitable for analyses in at least 33 different mammalian species, including human and other primates, mouse, pig, cat and domestic ruminants. The established assay combines the accuracy of single-tube quantitation by duplex qPCR with the convenience of one-off assay optimisation for cross-species analyses and with the direct comparability of lentiviral transduction efficiencies in different species. PMID:26202078
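    The single-tube duplex assay yields one Ct for the vector amplicon and one for an endogenous reference in the same reaction; under the usual assumption of ~100% amplification efficiency for both amplicons, copy number per cell follows from the ΔCt. A minimal sketch of that calculation follows; the Ct values and the two-copy diploid reference are illustrative, not taken from the paper.

```python
def vector_copy_number(ct_vector: float, ct_endogene: float,
                       endogene_copies: int = 2) -> float:
    """VCN per cell = endogene copies * 2^(Ct_endogene - Ct_vector),
    assuming ~100% amplification efficiency for both amplicons."""
    return endogene_copies * 2.0 ** (ct_endogene - ct_vector)

# Vector amplicon crossing one cycle after the diploid endogene -> 1 copy/cell
vcn = vector_copy_number(ct_vector=24.0, ct_endogene=23.0)
```

    Real assays correct for amplicon-specific efficiencies (e.g. via standard curves), which is one reason the single-tube duplex format improves comparability over separate simplex reactions.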

  7. Vibration of clamped right triangular thin plates: Accurate simplified solutions

    NASA Astrophysics Data System (ADS)

    Saliba, H. T.

    1994-12-01

    Use of the superposition techniques in the free-vibration analyses of thin plates, as they were first introduced by Gorman, has provided simple and effective solutions to a vast number of rectangular plate problems. A modified superposition method is presented that is a noticeable improvement over existing techniques. An earlier treatment dealt only with simple support conditions, leading to a simple, highly accurate, and very economical solution to the free-vibration problem of simply supported right-angle triangular plates. The modified method presented here is also applicable to clamped-edge conditions.

  8. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  9. Vibration of clamped right triangular thin plates: Accurate simplified solutions

    NASA Astrophysics Data System (ADS)

    Saliba, H. T.

    1994-12-01

    Use of the superposition techniques in the free-vibration analyses of thin plates, as they were first introduced by Gorman, has provided simple and effective solutions to a vast number of rectangular plate problems. The method has also been extended to nonrectangular plates such as triangular and trapezoidal plates. However, serious difficulties were encountered in some of these analyses. These difficulties were discussed and obviated in Saliba, 1990. This reference, however, dealt only with simple support conditions, leading to a simple, highly accurate, and very economical solution to the free-vibration problem of simply supported right-angle triangular plates. The purpose of this Note is to show that the modified superposition method of Saliba, 1990 is also applicable to clamped-edge conditions. This is accomplished through the application of this method to the title problem.

  10. PLIF: A rapid, accurate method to detect and quantitatively assess protein-lipid interactions.

    PubMed

    Ceccato, Laurie; Chicanne, Gaëtan; Nahoum, Virginie; Pons, Véronique; Payrastre, Bernard; Gaits-Iacovoni, Frédérique; Viaud, Julien

    2016-01-01

    Phosphoinositides are a type of cellular phospholipid that regulate signaling in a wide range of cellular and physiological processes through the interaction between their phosphorylated inositol head group and specific domains in various cytosolic proteins. These lipids also influence the activity of transmembrane proteins. Aberrant phosphoinositide signaling is associated with numerous diseases, including cancer, obesity, and diabetes. Thus, identifying phosphoinositide-binding partners and the aspects that define their specificity can direct drug development. However, current methods are costly, time-consuming, or technically challenging and inaccessible to many laboratories. We developed a method called PLIF (for "protein-lipid interaction by fluorescence") that uses fluorescently labeled liposomes and tethered, tagged proteins or peptides to enable fast and reliable determination of protein domain specificity for given phosphoinositides in a membrane environment. We validated PLIF against previously known phosphoinositide-binding partners for various proteins and obtained relative affinity profiles. Moreover, PLIF analysis of the sorting nexin (SNX) family revealed not only that SNXs bound most strongly to phosphatidylinositol 3-phosphate (PtdIns3P or PI3P), which is known from analysis with other methods, but also that they interacted with other phosphoinositides, which had not previously been detected using other techniques. Different phosphoinositide partners, even those with relatively weak binding affinity, could account for the diverse functions of SNXs in vesicular trafficking and protein sorting. Because PLIF is sensitive, semiquantitative, and performed in a high-throughput manner, it may be used to screen for highly specific protein-lipid interaction inhibitors. PMID:27025878

  11. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    ERIC Educational Resources Information Center

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has increased exponentially. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case…

  12. Linkage disequilibrium interval mapping of quantitative trait loci

    PubMed Central

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-01-01

    Background For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Results Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Conclusion Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates. PMID:16542433

  13. Quantitative comparison of delineated structure shape in radiotherapy

    NASA Astrophysics Data System (ADS)

    Price, G. J.; Moore, C. J.

    2006-03-01

There has been an influx of imaging and treatment technologies into cancer radiotherapy over the past fifteen years. The result is that radiation fields can now be accurately shaped to target disease delineated on pre-treatment planning scans whilst sparing critical healthy structures. Two well-known problems remain causes for concern. The first is inter- and intra-observer variability in planning scan delineations; the second is the motion and deformation of a tumour and interacting adjacent organs during the course of radiotherapy, which compromise the planned targeting regime. To be able to properly address these problems, and hence accurately shape the margins of error used to account for them, an intuitive and quantitative system of describing this variability must be used. This paper discusses a method of automatically creating correspondence points over similar non-polar delineation volumes, via spherical parameterisation, so that their shape variability can be analysed as a set of independent one-dimensional statistical problems. The importance of 'pole' selection to initial parameterisation and hence ease of optimisation is highlighted, the use of sparse anatomical landmarks rather than spherical harmonic expansion for establishing point correspondence discussed, and point variability mapping introduced. A case study is presented to illustrate the method. A group of observers were asked to delineate a rectum on a series of time-of-treatment Cone Beam CT scans over a patient's fractionation schedule. The overall observer variability was calculated using the above method and the significance of the organ motion over time evaluated.

  14. Laser Ablation/Ionisation Mass Spectrometry: Sensitive and Quantitative Chemical Depth Profiling of Solid Materials.

    PubMed

    Riedo, Andreas; Grimaudo, Valentine; Moreno-García, Pavel; Neuland, Maike B; Tulej, Marek; Broekmann, Peter; Wurz, Peter

    2016-01-01

Direct quantitative and sensitive chemical analysis of solid materials with high spatial resolution, in both the lateral and vertical directions, is of high importance in various fields of analytical research, ranging from in situ space research to the semiconductor industry. Accurate knowledge of the chemical composition of solid materials allows a better understanding of the physical and chemical processes that formed or altered the material and enables, for example, further improvement of these processes. So far, state-of-the-art techniques such as SIMS, LA-ICP-MS or GD-MS have been applied for chemical analyses in these fields of research. In this report we review the current measurement capability and the applicability of our Laser Ablation/Ionisation Mass Spectrometer (instrument name LMS) for the chemical analysis of solids with high spatial resolution. The most recent chemical analyses conducted on various solid materials, including alloys, fossils, and meteorites, are discussed. PMID:27131112

  15. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions, including snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats, including interactive maps, time series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products.
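The Newtonian nudging assimilation described above relaxes the model state toward observations by adding a fraction of the observation-minus-model innovation. A minimal sketch with hypothetical variable names and gain (not the NOHRSC implementation):

```python
def nudge(model_swe, obs_swe, gain=0.5):
    """Relax a modeled snow-water-equivalent value toward an observation.

    Newtonian nudging adds a correction proportional to the innovation
    (observation minus model); gain in (0, 1] controls how strongly the
    observation pulls the model state toward itself.
    """
    return model_swe + gain * (obs_swe - model_swe)

# Example: the model carries 120 mm SWE, a ground station reports 100 mm.
analysis = nudge(120.0, 100.0, gain=0.5)
print(analysis)  # 110.0
```

With gain = 1 the model state is replaced by the observation outright; smaller gains preserve the model's internal balance, which is the first advantage listed above.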

  16. Speed analyses of stimulus equivalence.

    PubMed Central

    Spencer, T J; Chase, P N

    1996-01-01

The functional substitutability of stimuli in equivalence classes was examined through analyses of the speed of college students' accurate responding. After training subjects to respond to 18 conditional relations, subjects' accuracy and speed of accurate responding were compared across trial types (baseline, symmetry, transitivity, and combined transitivity and symmetry) and nodal distance (one- through five-node transitive and combined transitive and symmetric relations). Differences in accuracy across nodal distance and trial type were significant only on the first tests of equivalence, whereas differences in speed were significant even after extended testing. Response speed was inversely related to the number of nodes on which the tested relations were based. Significant differences in response speed were also found across trial types, except between transitivity and combined trials. To determine the generality of these comparisons, three groups of subjects were included: an instructed group was given an instruction that specified the interchangeability of stimuli related through training; a queried group was queried about the basis for test-trial responding; and a standard group was neither instructed nor queried. There were no significant differences among groups. These results suggest the use of response speed and response accuracy to measure the strength of matching relations. PMID:8636663

  17. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  18. Accurate and efficient spin integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
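Representing spin rotations with unit quaternions, as the integrators above do, keeps repeated rotations cheap and numerically well behaved. The following is a generic illustration of rotating a spin vector with a quaternion, not code from gpuSpinTrack:

```python
import math

def quat_from_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    # about the unit vector `axis`.
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_multiply(a, b):
    # Hamilton product of two quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate_spin(q, s):
    # Rotate spin vector s by unit quaternion q via s' = q (0, s) q*.
    q_conj = (q[0], -q[1], -q[2], -q[3])
    qs = quat_multiply(quat_multiply(q, (0.0, *s)), q_conj)
    return qs[1:]

# A 90-degree rotation about z maps spin (1, 0, 0) to (0, 1, 0).
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
sx, sy, sz = rotate_spin(q, (1.0, 0.0, 0.0))  # ≈ (0.0, 1.0, 0.0)
```

Successive rotations compose by quaternion multiplication before being applied to the spin vector, which is one reason the approach accelerates the computation of long tracking runs.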

  19. Accurate determination of cobalt traces in several biological reference materials.

    PubMed

    Dybczyński, R; Danko, B

    1994-01-01

    A newly devised, very accurate ("definitive") method for the determination of trace amounts of cobalt in biological materials was validated by the analysis of several certified reference materials. The method is based on a combination of neutron activation and selective and quantitative postirradiation isolation of radiocobalt from practically all other radionuclides by ion-exchange and extraction chromatography followed by gamma-ray spectrometric measurement. The significance of criteria that should be fulfilled in order to accept a given result as obtained by the "definitive method" is emphasized. In view of the demonstrated very good accuracy of the method, it is suggested that our values for cobalt content in those reference materials in which it was originally not certified (SRM 1570 spinach, SRM 1571 orchard leaves, SRM 1577 bovine liver, and Czechoslovak bovine liver 12-02-01) might be used as provisional certified values. PMID:7710879

  20. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drift associated with textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  1. DATA AND ANALYSES

    EPA Science Inventory

    In order to promote transparency and clarity of the analyses performed in support of EPA's Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens, the data and the analyses are now available on this web site. The data is presented in two diffe...

  2. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory. PMID:18519546

  3. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic biomolecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709
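The classical weighted harmonic averaging referenced above assigns the dielectric value on a grid edge that straddles the solute/solvent boundary as the harmonic mean of the two neighboring values, so the low-dielectric side dominates. A one-function sketch with illustrative dielectric constants (not the authors' solver):

```python
def harmonic_interface_eps(eps_in, eps_out):
    # Harmonic mean used by the classical weighted-harmonic-averaging
    # scheme for the dielectric constant at a finite-difference grid
    # edge crossing the solute/solvent interface.
    return 2.0 * eps_in * eps_out / (eps_in + eps_out)

# Protein interior (eps ~ 2) against water (eps ~ 80): the harmonic
# mean stays close to the smaller value, about 3.9 here.
eps_edge = harmonic_interface_eps(2.0, 80.0)
```

This averaged coefficient then enters the standard seven-point finite-difference stencil in place of a single nodal dielectric value.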

  4. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  5. The SILAC Fly Allows for Accurate Protein Quantification in Vivo*

    PubMed Central

    Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias

    2010-01-01

    Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996

  6. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

During recent years, signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
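A common first-order way to correct the backscattering coefficient for the change in scattering area caused by local topography is to rescale by the ratio of the sines of the local and assumed (flat-terrain) incidence angles. This sketch is a simplified stand-in for the paper's full DEM- and antenna-pattern-based procedure:

```python
import math

def correct_sigma0_db(sigma0_flat_db, theta_flat_deg, theta_local_deg):
    """First-order scattering-area correction of sigma(exp 0) in dB.

    Rescales a backscattering coefficient computed under a flat-terrain
    assumption by the ratio of the true local scattering area to the
    assumed one, expressed through the incidence-angle sines.
    """
    ratio = (math.sin(math.radians(theta_local_deg))
             / math.sin(math.radians(theta_flat_deg)))
    return sigma0_flat_db + 10.0 * math.log10(ratio)

# A slope tilted toward the radar (smaller local incidence angle)
# spreads the pulse over a larger area, lowering sigma(exp 0):
corrected = correct_sigma0_db(-10.0, 35.0, 20.0)
```

When the local incidence angle equals the assumed one, the correction vanishes, as expected.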

  7. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber, and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.

  8. Sources of Technical Variability in Quantitative LC-MS Proteomics: Human Brain Tissue Sample Analysis.

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Orton, Daniel J.; Xie, Fang; Moore, Ronald J.; Ramirez Restrepo, Manuel; Engel, Anzhelika; Lieberman, Andrew P.; Albin, Roger L.; Camp, David G.; Smith, Richard D.; Myers, Amanda J.

    2013-05-03

To design a robust quantitative proteomics study, an understanding of both the inherent heterogeneity of the biological samples being studied and the technical variability of the proteomics methods and platform is needed. Additionally, accurately identifying the technical steps associated with the largest variability would provide valuable information for the improvement and design of future processing pipelines. We present an experimental strategy that allows for a detailed examination of the variability of quantitative LC-MS proteomics measurements. By replicating analyses at different stages of processing, various technical components can be estimated and their individual contributions to technical variability can be dissected. This design can be easily adapted to other quantitative proteomics pipelines. Herein, we applied this methodology to our label-free workflow for the processing of human brain tissue. For this application, the pipeline was divided into four critical components: tissue dissection and homogenization (extraction); protein denaturation followed by trypsin digestion and SPE clean-up (digestion); short-term run-to-run instrumental response fluctuation (instrumental variance); and long-term drift of the quantitative response of the LC-MS/MS platform over the 2-week period of continuous analysis (instrumental stability). From this analysis, we found the following contributions to variability: extraction (72%) >> instrumental variance (16%) > instrumental stability (8.4%) > digestion (3.1%). Furthermore, the stability of the platform and its suitability for discovery proteomics studies are demonstrated.
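In a nested-replicate design like the one above, each processing stage yields a variance-component estimate, and the reported percentages follow from normalizing those components to their total. A sketch with made-up variance values (not the study's data):

```python
def percent_contributions(variances):
    # Convert per-stage variance-component estimates into percentage
    # contributions to the total technical variability.
    total = sum(variances.values())
    return {stage: 100.0 * v / total for stage, v in variances.items()}

# Hypothetical variance-component estimates (arbitrary units):
parts = percent_contributions({
    "extraction": 7.2,
    "instrumental variance": 1.6,
    "instrumental stability": 0.84,
    "digestion": 0.31,
})
```

Ranking the resulting percentages identifies the stage that dominates technical variability and is therefore the best target for pipeline improvement.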

  9. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  11. SNS shielding analyses overview

    SciTech Connect

    Popova, Irina; Gallmeier, Franz; Iverson, Erik B; Lu, Wei; Remec, Igor

    2015-01-01

This paper gives an overview of ongoing shielding analyses for the Spallation Neutron Source. Presently, most of the shielding work is concentrated on the beam lines and instrument enclosures, to prepare for commissioning, safe operation, and adequate radiation background in the future. Work is also ongoing for the accelerator facility. This includes radiation-protection analyses for radiation monitor placement, designing shielding for additional facilities to test accelerator structures, redesigning some parts of the facility, and designing facilities for component testing of the main accelerator structure. Neutronics analyses are required as well to support spent-structure management, including waste characterisation analyses, choice of a proper transport/storage package, and shielding enhancement for the package if required.

  12. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  13. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  14. Reference Gene Selection for Quantitative Real-time PCR Normalization in Quercus suber

    PubMed Central

    Marum, Liliana; Miguel, Andreia; Ricardo, Cândido P.; Miguel, Célia

    2012-01-01

    The use of reverse transcription quantitative PCR technology to assess gene expression levels requires an accurate normalization of data in order to avoid misinterpretation of experimental results and erroneous analyses. Despite being the focus of several transcriptomics projects, oaks, and particularly cork oak (Quercus suber), have not been investigated regarding the identification of reference genes suitable for the normalization of real-time quantitative PCR data. In this study, ten candidate reference genes (Act, CACs, EF-1α, GAPDH, His3, PsaH, Sand, PP2A, ß-Tub and Ubq) were evaluated to determine the most stable internal reference for quantitative PCR normalization in cork oak. The transcript abundance of these genes was analysed in several tissues of cork oak, including leaves, reproduction cork, and periderm from branches at different developmental stages (1-, 2-, and 3-year old) or collected in different dates (active growth period versus dormancy). The three statistical methods (geNorm, NormFinder, and CV method) used in the evaluation of the most suitable combination of reference genes identified Act and CACs as the most stable candidates when all the samples were analysed together, while ß-Tub and PsaH showed the lowest expression stability. However, when different tissues, developmental stages, and collection dates were analysed separately, the reference genes exhibited some variation in their expression levels. In this study, and for the first time, we have identified and validated reference genes in cork oak that can be used for quantification of target gene expression in different tissues and experimental conditions and will be useful as a starting point for gene expression studies in other oaks. PMID:22529976
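The CV method cited above ranks candidate reference genes by their coefficient of variation (standard deviation divided by mean expression) across samples, with lower values indicating greater stability. A sketch with hypothetical expression values:

```python
import statistics

def cv_stability(expression):
    """Rank candidate reference genes by coefficient of variation.

    CV = sd / mean of a gene's expression values across samples;
    a lower CV means more stable expression, i.e. a better candidate
    for quantitative PCR normalization.
    """
    cvs = {gene: statistics.stdev(vals) / statistics.mean(vals)
           for gene, vals in expression.items()}
    return sorted(cvs, key=cvs.get)

# Hypothetical expression values across four samples:
ranking = cv_stability({
    "Act":   [10.1, 9.8, 10.0, 10.2],
    "ß-Tub": [5.0, 9.0, 2.0, 12.0],
})
# ranking[0] is the more stably expressed candidate.
```

In practice the CV ranking would be compared against geNorm and NormFinder results, as the study does, before settling on a reference-gene pair.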

  15. Quantitative Spectroscopy of Deneb

    NASA Astrophysics Data System (ADS)

    Schiller, Florian; Przybilla, N.

    We use the visually brightest A-type supergiant Deneb (A2 Ia) as a benchmark for testing a spectroscopic analysis technique developed for quantitative studies of BA-type supergiants. Our NLTE spectrum synthesis technique allows us to derive stellar parameters and elemental abundances with unprecedented accuracy. The study is based on a high-resolution and high-S/N spectrum obtained with the Echelle spectrograph FOCES on the Calar Alto 2.2 m telescope. Practically all inconsistencies reported in earlier studies are resolved. A self-consistent view of Deneb is thus obtained, allowing us to discuss its evolutionary state in detail by comparison with the most recent generation of evolution models for massive stars. The basic atmospheric parameters Teff = 8525 ± 75 K and log g = 1.10 ± 0.05 dex (cgs) and the distance imply the following fundamental parameters for Deneb: Mspec = 17 ± 3 M⊙, L = (1.77 ± 0.29) × 10^5 L⊙ and R = 192 ± 16 R⊙. The derived He and CNO abundances indicate mixing with nuclear-processed matter. The high N/C ratio of 4.64 ± 1.39 and a N/O ratio of 0.88 ± 0.07 (mass fractions) could in principle be explained by evolutionary models with initially very rapid rotation. A mass of ~22 M⊙ is implied for the progenitor on the zero-age main sequence, i.e. it was a late O-type star. Significant mass loss has occurred, probably enhanced by pronounced centrifugal forces. The observational constraints favour a scenario for the evolution of Deneb where the effects of rotational mixing may be amplified by an interaction with a magnetic field. Analogous analyses of such highly luminous BA-type supergiants will allow for precision studies of different galaxies in the Local Group and beyond.
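The fundamental parameters quoted above are tied together by the Stefan-Boltzmann law and the surface gravity. A short check, using only the abstract's Teff, log g, and L (the physical constants and the rounding are mine), reproduces the quoted radius and spectroscopic mass:

```python
import math

SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
L_SUN = 3.828e26         # solar luminosity, W
R_SUN = 6.957e8          # solar radius, m
M_SUN = 1.989e30         # solar mass, kg

teff  = 8525.0           # K (from the abstract)
log_g = 1.10             # dex (cgs, from the abstract)
L     = 1.77e5 * L_SUN   # luminosity (from the abstract)

# Stefan-Boltzmann: L = 4 pi R^2 sigma Teff^4  ->  solve for R
R = math.sqrt(L / (4 * math.pi * SIGMA * teff**4))

# Spectroscopic mass: g = G M / R^2  (convert g from cgs to SI)
g = 10**log_g * 1e-2
M = g * R**2 / G

print(R / R_SUN, M / M_SUN)  # ~192 R_sun, ~17 M_sun
```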

  16. Pyrosequencing for Accurate Imprinted Allele Expression Analysis

    PubMed Central

    Yang, Bing; Damaschke, Nathan; Yao, Tianyu; McCormick, Johnathon; Wagner, Jennifer; Jarrard, David

    2016-01-01

    Genomic imprinting is an epigenetic mechanism that restricts gene expression to one inherited allele. Improper maintenance of imprinting has been implicated in a number of human diseases and developmental syndromes. Assays are needed that can quantify the contribution of each parental allele to a gene expression profile. We have developed a rapid, sensitive quantitative assay for the measurement of individual allelic ratios termed Pyrosequencing for Imprinted Expression (PIE). Advantages of PIE over other approaches include shorter experimental time, decreased labor, avoidance of restriction endonuclease enzymes at polymorphic sites, and prevention of heteroduplex formation, which is problematic in quantitative PCR-based methods. We demonstrate the improved sensitivity of PIE including the ability to detect differences in allelic expression down to 1%. The assay is capable of measuring genomic heterozygosity as well as imprinting in a single run. PIE is applied to determine the status of Insulin-like Growth Factor-2 (IGF2) imprinting in human and mouse tissues. PMID:25581900

  17. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  18. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within ±0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
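The first- or second-order equations described above amount to fitting a low-order polynomial to chromaticity drift versus junction temperature, which the feedback loop can then invert. A minimal sketch with invented measurements (the u' values are not the paper's data):

```python
# Hypothetical second-order compensation: chromaticity coordinate of one
# LED channel as a function of junction temperature, fitted quadratically.
import numpy as np

temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])              # deg C
u_prime = np.array([0.2200, 0.2212, 0.2230, 0.2254, 0.2284])  # measured u'

coeffs = np.polyfit(temps, u_prime, 2)   # second-order model
predict = np.poly1d(coeffs)

# A feedback controller would use predict(T_junction) to pre-correct
# the drive currents before the chromaticity error becomes visible.
print(abs(predict(45.0) - 0.2230) < 1e-4)
```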

  19. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  20. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
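The beat cue described above is the amplitude envelope that two nearly equal tones produce at their difference frequency; nulling the beat rate aligns the pitches without any spectral discrimination. A minimal sketch (the frequencies are illustrative, not the study's stimuli):

```python
# Tuning by beats: two unit sinusoids at f1 and f2 sum to a carrier whose
# amplitude envelope oscillates at |f1 - f2| (the beat frequency).
import math

def beat_frequency(f1, f2):
    return abs(f1 - f2)

f1, f2, sr = 110.0, 110.5, 8000   # Hz, Hz, samples/s
samples = [math.sin(2 * math.pi * f1 * t / sr)
           + math.sin(2 * math.pi * f2 * t / sr)
           for t in range(sr)]    # one second of the summed tones

print(beat_frequency(f1, f2))  # 0.5 Hz -- one beat every two seconds
```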

  1. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. The National Metrology Institute of Japan (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
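The closure principle mentioned above can be illustrated with a toy simulation: measuring the artifact in every rotated position makes each gear tooth visit each machine position, so averaging over the rotations separates the instrument's systematic, position-dependent error from the artifact's own pitch deviations. All numbers below are invented.

```python
# Closure (multi-position) error separation, sketched for an 8-tooth artifact.
N = 8
artifact = [0.4, -0.2, 0.1, 0.0, -0.3, 0.2, -0.1, -0.1]         # um, sums to 0
device   = [0.05, -0.02, 0.03, 0.0, -0.01, 0.02, -0.04, -0.03]  # um

def measure(r):
    """One run with the artifact rotated by r teeth: each machine
    position i sees artifact tooth (i + r) plus its own error."""
    return [artifact[(i + r) % N] + device[i] for i in range(N)]

runs = [measure(r) for r in range(N)]

# Averaging over all rotations leaves only the device error at each
# position (the artifact deviations average to their mean, zero here).
recovered_device = [sum(run[i] for run in runs) / N for i in range(N)]
recovered_artifact = [runs[0][i] - recovered_device[i] for i in range(N)]

print([round(d, 6) for d in recovered_device] == [round(d, 6) for d in device])
```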

  2. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  3. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  4. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
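The fixed-pattern registration idea reduces to locating the peak of a cross-correlation between the two images. A one-dimensional FFT sketch of that step (not the IUE pipeline itself; the random signal stands in for the fixed pattern):

```python
import numpy as np

def best_shift(reference, image):
    """Integer shift of `image` relative to `reference`, found at the
    peak of the circular cross-correlation computed via FFT."""
    corr = np.fft.ifft(np.fft.fft(image) * np.conj(np.fft.fft(reference))).real
    shift = int(np.argmax(corr))
    # Map shifts past the midpoint to negative offsets
    if shift > len(reference) // 2:
        shift -= len(reference)
    return shift

rng = np.random.default_rng(0)
pattern = rng.normal(size=256)          # stand-in for the fixed pattern
shifted = np.roll(pattern, 5)           # "raw image" displaced by 5 pixels
print(best_shift(pattern, shifted))     # 5
```

For real images the same idea is applied in two dimensions, with sub-pixel refinement around the correlation peak.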

  5. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  6. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
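In phase-shift velocimetry the velocity in each voxel is read off from the phase accumulated by coherently moving spins. A hedged sketch of that mapping, using the standard small-displacement PFG relation (the symbols and the numbers below are illustrative, not taken from the paper):

```python
# Phase-to-velocity mapping for a PFG experiment: spins moving at
# velocity v accumulate phase  phi = gamma * g * delta * Delta * v.
GAMMA = 2.675e8      # 1H gyromagnetic ratio, rad s^-1 T^-1

def velocity_from_phase(phi, g, delta, Delta):
    """phi: measured phase shift [rad]; g: gradient amplitude [T/m];
    delta: gradient pulse duration [s]; Delta: observation time [s]."""
    return phi / (GAMMA * g * delta * Delta)

# Illustrative numbers (not from the paper):
v = velocity_from_phase(phi=0.5, g=0.05, delta=2e-3, Delta=50e-3)
print(f"{v * 1e3:.2f} mm/s")
```

The paper's point is that this mapping is only accurate when the displacement distribution within each voxel is symmetric; asymmetric distributions bias the measured phase.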

  7. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  8. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  9. Quantitative autoradiography of neurochemicals

    SciTech Connect

    Rainbow, T.C.; Biegon, A.; Bleisch, W.V.

    1982-05-24

    Several new methods have been developed that apply quantitative autoradiography to neurochemistry. These methods are derived from the 2-deoxyglucose (2DG) technique of Sokoloff (1), which uses quantitative autoradiography to measure the rate of glucose utilization in brain structures. The new methods allow the measurement of the rate of cerebral protein synthesis and the levels of particular neurotransmitter receptors by quantitative autoradiography. As with the 2DG method, the new techniques can measure molecular levels in micron-sized brain structures and can be used in conjunction with computerized systems of image processing. It is possible that many neurochemical measurements could be made by computerized analysis of quantitative autoradiograms.

  10. Quantitative measures for redox signaling.

    PubMed

    Pillay, Ché S; Eagling, Beatrice D; Driscoll, Scott R E; Rohwer, Johann M

    2016-07-01

    Redox signaling is now recognized as an important regulatory mechanism for a number of cellular processes including the antioxidant response, phosphokinase signal transduction and redox metabolism. While there has been considerable progress in identifying the cellular machinery involved in redox signaling, quantitative measures of redox signals have been lacking, limiting efforts aimed at understanding and comparing redox signaling under normoxic and pathogenic conditions. Here we have outlined some of the accepted principles for redox signaling, including the description of hydrogen peroxide as a signaling molecule and the role of kinetics in conferring specificity to these signaling events. Based on these principles, we then develop a working definition for redox signaling and review a number of quantitative methods that have been employed to describe signaling in other systems. Using computational modeling and published data, we show how time- and concentration-dependent analyses, in particular, could be used to quantitatively describe redox signaling and therefore provide important insights into the functional organization of redox networks. Finally, we consider some of the key challenges with implementing these methods. PMID:27151506

  11. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥140 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
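The headline metrics above, area under the ROC curve and sensitivity/specificity at a score cut-off, can be computed directly from the two groups' scores. A minimal sketch (the scores below are invented, not the study's data):

```python
# Evaluating a diagnostic score (e.g. a RAZ-style morphology score)
# with ROC AUC and sensitivity/specificity at a chosen cutoff.
def roc_auc(pos_scores, neg_scores):
    """Probability that a random positive outscores a random negative,
    ties counting half -- the Mann-Whitney formulation of the AUC."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def sens_spec(pos_scores, neg_scores, cutoff):
    sens = sum(p >= cutoff for p in pos_scores) / len(pos_scores)
    spec = sum(n < cutoff for n in neg_scores) / len(neg_scores)
    return sens, spec

patients = [180, 150, 145, 160, 120, 155]   # illustrative scores
controls = [100, 130, 90, 145, 110, 80]

print(roc_auc(patients, controls), sens_spec(patients, controls, 140))
```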

  12. Quantitative analysis of blood vessel geometry

    NASA Astrophysics Data System (ADS)

    Fuhrman, Michael G.; Abdul-Karim, Othman; Shah, Sujal; Gilbert, Steven G.; Van Bibber, Richard

    2001-07-01

    Re-narrowing or restenosis of a human coronary artery occurs within six months in one third of balloon angioplasty procedures. Accurate and repeatable quantitative analysis of vessel shape is important to characterize the progression and type of restenosis, and to evaluate effects new therapies might have. A combination of complicated geometry and image variability, and the need for high resolution and large image size makes visual/manual analysis slow, difficult, and prone to error. The image processing and analysis described here was developed to automate feature extraction of the lumen, internal elastic lamina, neointima, external elastic lamina, and tunica adventitia and to enable an objective, quantitative definition of blood vessel geometry. The quantitative geometrical analysis enables the measurement of several features including perimeter, area, and other metrics of vessel damage. Automation of feature extraction creates a high throughput capability that enables analysis of serial sections for more accurate measurement of restenosis dimensions. Measurement results are input into a relational database where they can be statistically analyzed and compared across studies. As part of the integrated process, results are also imprinted on the images themselves to facilitate auditing of the results. The analysis is fast, repeatable and accurate while allowing the pathologist to control the measurement process.
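Perimeter and area, two of the metrics named above, follow directly from a traced boundary contour. A minimal sketch using the shoelace formula (the contour here is a unit square, purely for a sanity check):

```python
import math

def polygon_metrics(points):
    """Perimeter and (shoelace) area of a closed contour given as an
    ordered list of (x, y) vertices, e.g. a traced lumen boundary."""
    n = len(points)
    perimeter = sum(math.dist(points[i], points[(i + 1) % n])
                    for i in range(n))
    area = 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1]
                         for i in range(n)))
    return perimeter, area

# Unit square as a sanity check:
print(polygon_metrics([(0, 0), (1, 0), (1, 1), (0, 1)]))  # (4.0, 1.0)
```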

  13. Targeted quantitation of proteins by mass spectrometry.

    PubMed

    Liebler, Daniel C; Zimmerman, Lisa J

    2013-06-01

    Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement. PMID:23517332

  14. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  15. Mobile app-based quantitative scanometric analysis.

    PubMed

    Wong, Jessica X H; Liu, Frank S F; Yu, Hua-Zhong

    2014-12-16

    The feasibility of using smartphones and other mobile devices as the detection platform for quantitative scanometric assays is demonstrated. The different scanning modes (color, grayscale, black/white) and grayscale converting protocols (average, weighted average/luminosity, and software specific) have been compared in determining the optical darkness ratio (ODR) values, a conventional quantitation measure for scanometric assays. A mobile app was developed to image and analyze scanometric assays, as demonstrated by paper-printed tests and a biotin-streptavidin assay on a plastic substrate. Primarily for ODR analysis, the app has been shown to perform as well as a traditional desktop scanner, demonstrating that smartphones (and other mobile devices) promise to be a practical platform for accurate, quantitative chemical analysis and medical diagnostics. PMID:25420202
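The grayscale-conversion protocols compared above can be sketched for a single RGB pixel. Note the ODR formula below is only an illustrative stand-in (spot darkness relative to background), not necessarily the paper's exact definition:

```python
# Two of the grayscale protocols compared in the study, for one RGB pixel.
def gray_average(r, g, b):
    return (r + g + b) / 3

def gray_luminosity(r, g, b):
    # Weighted average emphasizing green, as in common luma formulas
    return 0.299 * r + 0.587 * g + 0.114 * b

def odr(spot_gray, background_gray):
    # Illustrative only: darker spot vs. background -> larger ODR
    return (background_gray - spot_gray) / background_gray

spot, background = (90, 60, 70), (240, 235, 230)
print(round(odr(gray_luminosity(*spot), gray_luminosity(*background)), 3))
```

The paper's comparison boils down to whether the choice among such conversions shifts the computed ODR values between a scanner and a phone camera.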

  16. Quantitative metallography by electron backscattered diffraction.

    PubMed

    Humphreys

    1999-09-01

    Although electron backscattered diffraction (EBSD) in the scanning electron microscope is used mainly to investigate the relationship between local textures and microstructures, the technique has now developed to the stage where it requires serious consideration as a tool for routine quantitative characterization of microstructures. This paper examines the application of EBSD to the characterization of phase distributions, grain and subgrain structures and also textures. Comparisons are made with the standard methods of quantitative metallography and it is shown that in many cases EBSD can produce more accurate and detailed measurements than the standard methods and that the data may sometimes be obtained more rapidly. The factors which currently limit the use of EBSD for quantitative microstructural characterization, including the speed of data acquisition and the angular and spatial resolutions, are discussed, and future developments are considered. PMID:10460682

  17. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    SciTech Connect

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and
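The glassy-failure metric named above, maximum hydrostatic tension, is simply the largest mean stress tr(σ)/3 over the candidate failure zone. A minimal sketch with invented stress states:

```python
# Hydrostatic tension as a cavitation-driving failure metric.
import numpy as np

def hydrostatic_tension(stress):
    """Hydrostatic (mean) stress of a 3x3 Cauchy stress tensor;
    positive values indicate tension."""
    return np.trace(stress) / 3.0

# Illustrative stress states at two integration points (MPa, made up):
corner = np.array([[80.0,  5.0,  0.0],
                   [ 5.0, 60.0,  0.0],
                   [ 0.0,  0.0, 40.0]])
bulk = np.array([[20.0,   0.0, 0.0],
                 [ 0.0, -10.0, 0.0],
                 [ 0.0,   0.0, 5.0]])

# The metric is the maximum over the zone of likely failure initiation:
print(max(hydrostatic_tension(s) for s in (corner, bulk)))  # 60.0
```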

  18. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  19. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immersed suddenly in the saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One is a widely available commercial K-type industrial thermometer. The temperature it indicated was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylindrical housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken on the axis of the solid cylindrical element (housing) using the inverse space marching method. Transient temperature measurements of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple first- or second-order inertia correction. Comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the thermometer's low inertia and the fast space marching method applied to solve the inverse heat conduction problem.
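The first-order inertia correction mentioned above recovers the fluid temperature from the lagging sensor reading as T_fluid ≈ T_meas + τ·dT_meas/dt. A minimal sketch on synthetic data (the time constant and temperatures are assumed, not the paper's):

```python
import numpy as np

tau = 5.0    # assumed sensor time constant, s
dt = 0.01
t = np.arange(0.0, 30.0, dt)

# Synthetic experiment: sensor starts at 20 C, fluid is boiling
# water at 100 C; a first-order sensor responds exponentially.
t_meas = 100.0 - 80.0 * np.exp(-t / tau)

# First-order inverse model: T_fluid = T_meas + tau * dT_meas/dt
t_fluid = t_meas + tau * np.gradient(t_meas, dt)
```

For an exactly first-order sensor this correction is exact; real probes need the second-order or inverse space-marching treatment the paper develops.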

  20. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
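The median function used to simplify the monotonicity constraint also gives a compact form of the classic minmod slope limiter, via the identity minmod(x, y) = median(0, x, y). The sketch below illustrates that device on assumed cell data, not Huynh's full constraint:

```python
import numpy as np

def median3(a, b, c):
    """Componentwise median of three arrays: sum minus min minus max."""
    return a + b + c - np.minimum(np.minimum(a, b), c) - np.maximum(np.maximum(a, b), c)

def minmod(x, y):
    """minmod(x, y) = median(0, x, y): zero if the signs differ,
    otherwise the argument of smaller magnitude."""
    return median3(np.zeros_like(x), x, y)

# Limited slopes for a piecewise-linear reconstruction of cell data u:
u = np.array([0.0, 0.0, 1.0, 4.0, 4.0])
left = np.diff(u)[:-1]        # u_i - u_{i-1}
right = np.diff(u)[1:]        # u_{i+1} - u_i
slopes = minmod(left, right)  # vanishes at extrema, preserving monotonicity
```

Writing the limiter through the median function removes the sign-testing conditionals a direct minmod implementation would need.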

  1. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  2. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  3. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf°(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  4. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  5. Accurate Sparse-Projection Image Reconstruction via Nonlocal TV Regularization

    PubMed Central

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Suffering from theoretical imperfections, total variation produces a blocky effect on smooth regions and blurs edges. To overcome this problem, in this paper, we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information. PMID:24592168
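For reference, the (local) discrete total variation that the nonlocal version generalizes can be written in a few lines. This is the standard anisotropic form, not the authors' nonlocal norm:

```python
import numpy as np

def tv_anisotropic(img):
    """Anisotropic discrete TV: sum of |forward differences| along each
    axis. Constant regions cost 0, which is the source of the blocky
    ("staircase") effect on smooth regions that nonlocal TV is designed
    to avoid."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

flat = np.full((4, 4), 7.0)               # smooth region: TV = 0
step = np.zeros((4, 4)); step[:, 2:] = 1.0  # one unit edge across 4 rows
```

Nonlocal TV replaces these nearest-neighbour differences with weighted differences over patch-similar pixels, so repeated structure is not penalized as if it were noise.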

  6. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  7. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
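A continuous wavelet transform of the kind described can be sketched with a Morlet wavelet in plain NumPy. The sampling rate, centre frequency ω0 = 6, and test frequencies are illustrative assumptions:

```python
import numpy as np

def morlet_cwt_power(sig, dt, scale, w0=6.0):
    """Mean |CWT|^2 of `sig` at one scale, using a complex Morlet
    wavelet psi(t) = exp(i*w0*t - t^2/2) sampled on +-4 scale widths."""
    t_w = np.arange(-4.0 * scale, 4.0 * scale, dt)
    psi = np.exp(1j * w0 * t_w / scale - 0.5 * (t_w / scale) ** 2) / np.sqrt(scale)
    coef = np.convolve(sig, np.conj(psi[::-1]), mode="same") * dt
    return np.mean(np.abs(coef) ** 2)

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
sig = np.sin(2 * np.pi * 10.0 * t)   # 10 Hz test tone

# The scale corresponding to frequency f is roughly w0 / (2*pi*f);
# the response should peak at the scale matched to the tone.
powers = [morlet_cwt_power(sig, dt, 6.0 / (2 * np.pi * f)) for f in (5.0, 10.0, 20.0)]
```

Scanning the scale instead of fixing it, and sliding the wavelet in time, yields the time-frequency map used to analyse nonstationary signals.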

  8. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  9. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    NASA Astrophysics Data System (ADS)

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-02-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements.

  10. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. PMID:24524947

  11. Information Omitted From Analyses.

    PubMed

    2015-08-01

    In the Original Article titled “Higher-Order Genetic and Environmental Structure of Prevalent Forms of Child and Adolescent Psychopathology” published in the February 2011 issue of JAMA Psychiatry (then Archives of General Psychiatry) (2011;68[2]:181-189), there were 2 errors. Although the article stated that the dimensions of psychopathology were measured using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder, all dimensional scores used in the reported analyses were actually based on parent reports of symptoms; youth reports were not used. In addition, whereas the article stated that each symptom dimension was residualized on age, sex, age-squared, and age by sex, the dimensions actually were only residualized on age, sex, and age-squared. All analyses were repeated using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder; these dimensional scores were residualized on age, age-squared, sex, sex by age, and sex by age-squared. The results of the new analyses were qualitatively the same as those reported in the article, with no substantial changes in conclusions. The only notable small difference was that major depression and generalized anxiety disorder dimensions had small but significant loadings on the internalizing factor in addition to their substantial loadings on the general factor in the analyses of both genetic and non-shared covariances in the selected models in the new analyses. Corrections were made to the

  12. A male-specific quantitative trait locus on 1p21 controlling human stature

    PubMed Central

    Sammalisto, S; Hiekkalinna, T; Suviolahti, E; Sood, K; Metzidis, A; Pajukanta, P; Lilja, H; Soro-Paavonen, A; Taskinen, M; Tuomi, T; Almgren, P; Orho-Melander, M; Groop, L; Peltonen, L; Perola, M

    2005-01-01

    Background: Many genome-wide scans aimed at complex traits have been statistically underpowered due to small sample size. Combining data from several genome-wide screens with comparable quantitative phenotype data should improve statistical power for the localisation of genomic regions contributing to these traits. Objective: To perform a genome-wide screen for loci affecting adult stature by combined analysis of four previously performed genome-wide scans. Methods: We developed a web based computer tool, Cartographer, for combining genetic marker maps which positions genetic markers accurately using the July 2003 release of the human genome sequence and the deCODE genetic map. Using Cartographer, we combined the primary genotype data from four genome-wide scans and performed variance components (VC) linkage analyses for human stature on the pooled dataset of 1417 individuals from 277 families and performed VC analyses for males and females separately. Results: We found significant linkage to stature on 1p21 (multipoint LOD score 4.25) and suggestive linkages on 9p24 and 18q21 (multipoint LOD scores 2.57 and 2.39, respectively) in males-only analyses. We also found suggestive linkage to 4q35 and 22q13 (multipoint LOD scores 2.18 and 2.85, respectively) when we analysed both females and males and to 13q12 (multipoint LOD score 2.66) in females-only analyses. Conclusions: We strengthened the evidence for linkage to previously reported quantitative trait loci (QTL) for stature and also found significant evidence of a novel male-specific QTL on 1p21. Further investigation of several interesting candidate genes in this region will help towards characterisation of this first sex-specific locus affecting human stature. PMID:15827092

  13. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  14. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  15. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity, where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184

  16. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  17. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  18. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  19. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
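The "standard interpretation method" for steady-state tests reduces, per phase, to Darcy's law solved for the relative permeability, k_r = q·μ·L / (k·A·ΔP). A sketch with made-up core and flow values, not the paper's cores:

```python
def relative_permeability(q, mu, length, k_abs, area, dp):
    """Effective relative permeability of one phase from a steady-state
    coreflood, via Darcy's law (SI units throughout).

    q       volumetric rate of the phase, m^3/s
    mu      phase viscosity, Pa*s
    length  core length, m
    k_abs   absolute permeability, m^2
    area    cross-sectional area, m^2
    dp      pressure drop across the core, Pa
    """
    return q * mu * length / (k_abs * area * dp)

# Illustrative numbers only: a 10 cm core of 100 mD (1e-13 m^2) rock.
kr = relative_permeability(q=1e-6, mu=1e-3, length=0.1,
                           k_abs=1e-13, area=1e-3, dp=2e6)
```

The flowrate dependence discussed above enters because the measured ΔP reflects sub-core heterogeneity and outlet capillary effects, not just the phase mobilities this formula assumes.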

  20. Comparison of STIM and particle backscattering spectrometry mass determination for quantitative microanalysis of cultured cells

    NASA Astrophysics Data System (ADS)

    Devès, G.; Ortega, R.

    2001-07-01

    In biological sample microanalysis, a mass-normalisation method is commonly used as a quantitative index of elemental concentrations determined by particle-induced X-ray emission (PIXE). The organic mass can either be determined using particle backscattering spectrometry (BS) or scanning transmission ion microscopy (STIM). However, the accuracy of quantitative microanalysis in samples such as cultured cells is affected by beam-induced loss of organic mass during analysis. The aim of this paper is to compare mass measurements determined by particle BS or by STIM. In order to calibrate STIM and BS analyses, we measured by both techniques the thickness of standard foils of polycarbonate (3 and 6 μm), Mylar® (4 μm), Kapton® (7.5 μm) and Nylon® (15 μm), as well as biological samples of mono-layered cultured cells. Non-damaging STIM analysis of samples before PIXE irradiation is certainly one of the most accurate ways to determine the sample mass; however, it requires careful experimental handling. On the other hand, BS performed simultaneously with PIXE is the simplest method to determine the local mass in polymer foils, but appears less accurate in the case of cultured cells.

  1. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25 year period to roughly double the volume of a 19 cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated against actual field behavior at that time. The simulations were performed using JAS3D, a three-dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  2. Accurate response surface approximations for weight equations based on structural optimization

    NASA Astrophysics Data System (ADS)

    Papila, Melih

    Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help transmit this information effectively to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers.
The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher order polynomials. In
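    The outlier-identification step described above can be sketched as follows. This is a hedged illustration only: the one-variable quadratic weight model, the 3-sigma residual threshold, and all names are assumptions for demonstration, not taken from the dissertation.

```python
# Hedged sketch: fit a quadratic response surface W(x) to noisy "optimal
# weight" samples and flag outliers by their standardized residuals.
import numpy as np

def fit_quadratic_rs(x, w):
    """Least-squares fit of w ~ b0 + b1*x + b2*x^2; returns coefficients."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    return beta

def flag_outliers(x, w, beta, k=3.0):
    """Mark samples whose residual exceeds k residual standard deviations."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    r = w - X @ beta
    return np.abs(r) > k * r.std()

x = np.linspace(0.0, 1.0, 20)
w = 5.0 + 2.0 * x + 4.0 * x**2   # smooth "true" weight trend
w[7] += 3.0                      # one high-amplitude outlier, e.g. premature convergence
beta = fit_quadratic_rs(x, w)
mask = flag_outliers(x, w, beta)
print(np.where(mask)[0])         # index of the flagged sample
```

    In the dissertation's workflow, a flagged point would then be re-optimized with different convergence settings (repaired) rather than simply discarded.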

  3. Accurate, fully-automated NMR spectral profiling for metabolomics.

    PubMed

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C; Mandal, Rupasri; Grant, Jason R; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile"-i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra, involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively-with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR in

  4. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  5. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  6. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  7. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  8. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to present an investigation analyzing the vibration of a conservative nonlinear oscillator of the form u" + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation to sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
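    As a simpler related illustration (not the paper's algebraic-equation method), a conservative oscillator can be integrated numerically and checked by energy conservation, a standard accuracy test for such solutions. The Duffing-type case u'' + u + u^3 = 0 and the classical RK4 integrator below are assumptions chosen for the sketch.

```python
# Integrate u'' + u + u^3 = 0 with classical RK4 and verify that the
# energy E = v^2/2 + u^2/2 + u^4/4 stays constant (conservative system).
def rk4(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

def rhs(t, y):
    u, v = y                 # y = [displacement, velocity]
    return [v, -u - u**3]

def energy(y):
    u, v = y
    return 0.5 * v * v + 0.5 * u * u + 0.25 * u**4

y, t, h = [1.0, 0.0], 0.0, 0.001
E0 = energy(y)
drift = 0.0
for _ in range(20000):       # integrate to t = 20
    y = rk4(rhs, y, t, h)
    t += h
    drift = max(drift, abs(energy(y) - E0))
print(drift)                 # energy drift; tiny for an accurate integrator
```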

  9. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
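    The core idea of accelerometer-based mount positioning can be sketched as follows: a static 3-axis MEMS accelerometer reads the gravity vector, from which tilt angles follow directly. The axis conventions and function names here are assumptions for illustration, not taken from the paper, and a real system would average many samples to suppress sensor noise.

```python
# Hedged sketch: recover pitch and roll of a static mount from a 3-axis
# accelerometer reading of gravity (components in units of g).
import math

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees; assumes x forward, y right, z up."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Level mount: gravity entirely along +z
print(tilt_from_accel(0.0, 0.0, 1.0))
```

    Subarcminute accuracy then hinges on sensor calibration and averaging, since a single consumer MEMS reading is far noisier than an arcminute.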

  10. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293
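    A common way to quantify the metacognition described above (how well confidence tracks correctness) is the area under the type-2 ROC. The rank-based estimator below is a hedged sketch of that general idea, not the authors' exact computation.

```python
# Hedged sketch: type-2 AUROC = probability that a randomly chosen correct
# trial received higher confidence than a randomly chosen incorrect trial
# (ties count as half). 0.5 = no metacognition, 1.0 = perfect.
def type2_auroc(confidence, correct):
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

conf = [4, 3, 4, 1, 2, 1]                       # per-trial confidence ratings
ok = [True, True, True, False, False, False]    # per-trial correctness
print(type2_auroc(conf, ok))                    # perfect separation here -> 1.0
```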

  11. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800.degree. to 2700.degree. C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  12. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.

  14. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  15. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
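    The layer-by-layer radiative transfer step described above can be sketched as follows. This is a simplified plane-parallel toy with made-up two-layer inputs, not the author's MWP-based pipeline: each layer's absorption contributes to the zenith opacity, and each layer emits brightness attenuated by the layers below it.

```python
# Hedged sketch: accumulate zenith opacity tau = sum(alpha_i * dz_i) and the
# emitted brightness temperature Tb = sum(T_i * (1 - e^-dtau_i) * e^-tau_below).
import math

def zenith_opacity_and_tb(alpha, temp, dz):
    """alpha: absorption per km; temp: layer temperature (K); dz: thickness (km).
    Layers are ordered from the ground upward."""
    tau_below, tb = 0.0, 0.0
    for a, T, h in zip(alpha, temp, dz):
        dtau = a * h
        tb += T * (1.0 - math.exp(-dtau)) * math.exp(-tau_below)
        tau_below += dtau
    return tau_below, tb

# Two illustrative layers: warm moist layer near the ground, colder layer above.
tau, tb = zenith_opacity_and_tb([0.01, 0.005], [280.0, 250.0], [1.0, 2.0])
print(tau, tb)
```

    In the forecasting system described above, this kind of sum runs over the 60 model layers for each hour and each of the 30 observing wavelengths, and Tb feeds directly into the Tsys estimate.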

  16. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)

  17. On Quantitative Rorschach Scales.

    ERIC Educational Resources Information Center

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  18. Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses

    ERIC Educational Resources Information Center

    Griffin, Kimberly A.; Museus, Samuel D.

    2011-01-01

    In this article, the authors discuss the utility of combining quantitative and qualitative methods in conducting intersectional analyses. First, they discuss some of the paradigmatic underpinnings of qualitative and quantitative research, and how these methods can be used in intersectional analyses. They then consider how paradigmatic pragmatism…

  19. RECENT ADVANCES IN QUANTITATIVE NEUROPROTEOMICS

    PubMed Central

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2014-01-01

    The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders including schizophrenia and drug addiction, as well as neurodegenerative diseases including Parkinson’s disease and Alzheimer’s disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). While the biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge. Nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to

  20. Recent advances in quantitative neuroproteomics.

    PubMed

    Craft, George E; Chen, Anshu; Nairn, Angus C

    2013-06-15

    The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders including schizophrenia and drug addiction, as well as neurodegenerative diseases including Parkinson's disease and Alzheimer's disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). While the biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge. Nevertheless, the quality and depth of the more recent quantitative proteomics studies is beginning to shed

  1. Quantitative receptor autoradiography

    SciTech Connect

    Boast, C.A.; Snowhill, E.W.; Altar, C.A.

    1986-01-01

    Quantitative receptor autoradiography addresses the topic of technical and scientific advances in the sphere of quantitative autoradiography. The volume opens with an overview of the field from a historical and critical perspective. Following is a detailed discussion of in vitro data obtained from a variety of neurotransmitter systems. The next section explores applications of autoradiography, and the final two chapters consider experimental models. Methodological considerations are emphasized, including the use of computers for image analysis.

  2. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, both routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
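    The threshold mechanism described above can be sketched in a toy simulation. Everything here is an assumption for illustration (load used as a proxy for travel time, the 0.9/0.1 choice probabilities, the delay length); it is not the paper's model, but it reproduces the qualitative claim that delayed feedback without a threshold drives oscillations.

```python
# Toy two-route system: drivers choose using delayed load information; within
# the boundedly rational threshold BR they are indifferent (choose 50/50).
import random

def simulate(n_drivers=1000, steps=200, delay=5, BR=2.0, seed=1):
    random.seed(seed)
    history = [(n_drivers // 2, n_drivers // 2)]   # (route-1 load, route-2 load)
    for _ in range(steps):
        t1, t2 = history[max(0, len(history) - delay)]  # delayed feedback
        if abs(t1 - t2) <= BR:
            p = 0.5                      # indifferent inside the threshold
        else:
            p = 0.9 if t1 < t2 else 0.1  # most drivers pick the (stale) best route
        n1 = sum(1 for _ in range(n_drivers) if random.random() < p)
        history.append((n1, n_drivers - n1))
    loads = [n1 for n1, _ in history[-50:]]
    return max(loads) - min(loads)       # oscillation amplitude of route-1 load

# No threshold -> large oscillations; generous threshold -> near-equilibrium.
print(simulate(BR=0.0), simulate(BR=200.0))
```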

  3. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  4. Accurate quantification of cells recovered by bronchoalveolar lavage.

    PubMed

    Saltini, C; Hance, A J; Ferrans, V J; Basset, F; Bitterman, P B; Crystal, R G

    1984-10-01

    Quantification of the differential cell count and total number of cells recovered from the lower respiratory tract by bronchoalveolar lavage is a valuable technique for evaluating the alveolitis of patients with inflammatory disorders of the lower respiratory tract. The most commonly used technique for the evaluation of cells recovered by lavage has been to concentrate cells by centrifugation and then to determine total cell number using a hemocytometer and differential cell count from a Wright-Giemsa-stained cytocentrifuge preparation. However, we have noted that the percentage of small cells present in the original cell suspension recovered by lavage is greater than the percentage of lymphocytes identified on cytocentrifuge preparations. Therefore, we developed procedures for determining differential cell counts on lavage cells collected on Millipore filters and stained with hematoxylin-eosin (filter preparations) and compared the results of differential cell counts performed on filter preparations with those obtained using cytocentrifuge preparations. When cells recovered by lavage were collected on filter preparations, accurate differential cell counts were obtained, as confirmed by performing differential cell counts on cell mixtures of known composition, and by comparing differential cell counts obtained using filter preparations stained with hematoxylin-eosin with those obtained using filter preparations stained with a peroxidase cytochemical stain. The morphology of cells displayed on filter preparations was excellent, and interobserver variability in quantitating cell types recovered by lavage was less than 3%.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:6385789

  5. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  6. Rapid Accurate Identification of Bacterial and Viral Pathogens

    SciTech Connect

    Dunn, John

    2007-03-09

    The goals of this program were to develop two assays for rapid, accurate identification of pathogenic organisms at the strain level. The first assay, "Quantitative Genome Profiling" or QGP, is a real time PCR assay with a restriction enzyme-based component. Its underlying concept is that certain enzymes should cleave genomic DNA at many sites and that in some cases these cuts will interrupt the connection on the genomic DNA between flanking PCR primer pairs, thereby eliminating selected PCR amplifications. When this occurs, the appearance of the real-time PCR threshold (Ct) signal during DNA amplification is totally eliminated or, if cutting is incomplete, greatly delayed compared to an uncut control. This temporal difference in appearance of the Ct signal relative to undigested control DNA provides a rapid, high-throughput approach for DNA-based identification of different but closely related pathogens depending upon the nucleotide sequence of the target region. The second assay we developed uses the nucleotide sequence of pairs of short identifier tags (~21 bp) to identify DNA molecules. Subtle differences in linked tag pair combinations can also be used to distinguish between closely related isolates.
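    The QGP logic above can be illustrated with a toy check of whether a restriction site falls between two primer binding sites. The sequences, primers, and site below are made up, and real reverse primers are reverse complements of the template; this sketch only captures the cut-interrupts-amplicon idea, not the assay.

```python
# Toy QGP illustration: if the enzyme's recognition site lies inside the
# fwd..rev amplicon, digestion severs the template and the Ct signal is
# delayed or lost; an intact amplicon amplifies normally.
def amplifies_after_digestion(genome, fwd, rev, site):
    """Return True if the fwd..rev amplicon survives digestion at `site`."""
    start = genome.find(fwd)
    end = genome.find(rev) + len(rev)
    if start == -1 or end <= start:
        return False                      # primers do not define an amplicon
    return site not in genome[start:end]  # a cut inside kills amplification

genome = "ATGCCGAATTCTTAGGCCATGGACCTA"     # made-up template
print(amplifies_after_digestion(genome, "ATGCC", "ACCTA", "GAATTC"))  # site inside -> False
print(amplifies_after_digestion(genome, "TTAGG", "ACCTA", "GAATTC"))  # no cut between -> True
```

    In the assay, it is exactly this present/absent (or delayed) Ct pattern across several primer pairs and enzymes that fingerprints closely related strains.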

  7. Subvoxel accurate graph search using non-Euclidean graph space.

    PubMed

    Abràmoff, Michael D; Wu, Xiaodong; Lee, Kyungmoo; Tang, Li

    2014-01-01

    Graph search is attractive for the quantitative analysis of volumetric medical images, and especially for layered tissues, because it allows globally optimal solutions in low-order polynomial time. However, because nodes of graphs typically encode evenly distributed voxels of the volume with arcs connecting orthogonally sampled voxels in Euclidean space, segmentation cannot achieve greater precision than a single unit, i.e. the distance between two adjoining nodes, and partial volume effects are ignored. We generalize the graph to non-Euclidean space by allowing non-equidistant spacing between nodes, so that subvoxel accurate segmentation is achievable. Because the number of nodes and edges in the graph remains the same, running time and memory use are similar, while all the advantages of graph search, including global optimality and computational efficiency, are retained. A deformation field calculated from the volume data adaptively changes regional node density so that node density varies with the inverse of the expected cost. We validated our approach using optical coherence tomography (OCT) images of the retina and 3-D MR of the arterial wall, and achieved statistically significant increased accuracy. Our approach allows improved accuracy in volume data acquired with the same hardware, and also preserves accuracy with lower-resolution, more cost-effective image acquisition equipment. The method is not limited to any specific imaging modality and readily extensible to higher dimensions. PMID:25314272
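    The non-equidistant-node idea can be sketched with a minimal minimum-cost surface search: candidate nodes in each image column sit at arbitrary float depths rather than on an integer voxel grid, so the recovered boundary is not locked to voxel positions. This toy dynamic program over a layered DAG is an assumption-laden simplification, not the paper's algorithm, and the costs below are made up.

```python
# Hedged sketch: find a minimum-cost surface through columns of candidate
# (depth, cost) nodes, where depths are sub-voxel floats and a smoothness
# penalty couples adjacent columns.
def min_cost_surface(columns, smooth=1.0):
    """columns: list of columns, each a list of (depth, node_cost) pairs."""
    prev = [(cost, [depth]) for depth, cost in columns[0]]
    for col in columns[1:]:
        cur = []
        for depth, cost in col:
            total, path = min(
                (c + cost + smooth * abs(depth - p[-1]), p) for c, p in prev
            )
            cur.append((total, path + [depth]))
        prev = cur
    return min(prev)[1]

cols = [
    [(0.0, 5.0), (0.6, 1.0), (1.0, 5.0)],   # low cost at sub-voxel depth 0.6
    [(0.0, 5.0), (0.7, 1.0), (1.0, 5.0)],
    [(0.0, 5.0), (0.65, 1.0), (1.0, 5.0)],
]
print(min_cost_surface(cols))   # → [0.6, 0.7, 0.65]
```

    In the paper's formulation the node depths would come from a deformation field that concentrates nodes where the expected boundary cost is low, while the graph's size, and hence its runtime, stays unchanged.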

  8. Identification and Quantitation of Flavanols and Proanthocyanidins in Foods: How Good are the Datas?

    PubMed Central

    Kelm, Mark A.; Hammerstone, John F.; Schmitz, Harold H.

    2005-01-01

    Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible. PMID:15712597

  9. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  10. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    PubMed Central

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array (an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged at a defined ratio) and melting analysis for allele quantitation, to accurately quantitate the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27 in the form of the relative abundances of their transcripts, and the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters. PMID:22334570
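
    The accuracy metric quoted in the abstract (average relative error against known mixture compositions) is straightforward to reproduce. The fractions below are invented for illustration; only the metric itself comes from the abstract.

```python
import numpy as np

# Average relative error between known input fractions of DNA species and
# fractions measured by competitive amplification (numbers are made up).

def average_relative_error(known, measured):
    known = np.asarray(known, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rel_err = np.abs(measured - known) / known   # per-species relative error
    return rel_err.mean()

known = [0.50, 0.30, 0.20]          # known compositional structure
measured = [0.52, 0.285, 0.195]     # hypothetical assay readout
err = average_relative_error(known, measured)
```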

  11. Accurate simulation of optical properties in dyes.

    PubMed

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them. PMID:19113946

  12. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  13. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
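
    For reference, here is the *direct* bilateral filter that the paper accelerates, in one dimension: each sample is replaced by a normalised average of its neighbours, weighted by a Gaussian spatial kernel and a Gaussian range kernel. The O(S)-per-pixel cost is visible in the inner window loop; the paper's O(1) approximation is not reproduced here.

```python
import numpy as np

# Direct 1-D bilateral filter with Gaussian spatial and range kernels.
# Edge-preserving: samples across a large intensity jump get ~zero range weight.

def bilateral_filter_1d(signal, radius, sigma_s, sigma_r):
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))        # spatial kernel
    for i in range(len(signal)):                            # O(S) work per sample
        idx = np.clip(i + offsets, 0, len(signal) - 1)      # clamp at borders
        rng = np.exp(-(signal[idx] - signal[i])**2 / (2 * sigma_r**2))
        w = spatial * rng
        out[i] = np.sum(w * signal[idx]) / np.sum(w)        # normalised average
    return out

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
smoothed = bilateral_filter_1d(step, radius=2, sigma_s=1.0, sigma_r=0.1)
```

    With a small range sigma the step edge survives almost untouched, which is exactly the edge-preserving behaviour the abstract describes.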

  14. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate nativelike structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike the conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method achieved predicting RMSDs of unbound docked complexes with 0.4Å error margin. PMID:26335807

  15. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
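
    The link between a measured lineshape and Boltzmann's constant can be illustrated with the standard Doppler-broadening relation (this is textbook physics, not the paper's full lineshape analysis, and the numerical values below are rough figures for the Cs D1 line): the Gaussian FWHM of a thermal vapour line is dnu = nu0 * sqrt(8 k T ln 2 / (m c^2)), so a measured width yields k.

```python
import math

C = 299_792_458.0            # speed of light, m/s
K_REF = 1.380649e-23         # CODATA Boltzmann constant, J/K (for the round trip)

def doppler_fwhm(nu0, mass, temp, k=K_REF):
    """Gaussian Doppler FWHM of a line at nu0 for atoms of mass `mass` at temp T."""
    return nu0 * math.sqrt(8 * k * temp * math.log(2) / (mass * C**2))

def boltzmann_from_fwhm(dnu, nu0, mass, temp):
    """Invert the Doppler-width formula to recover k from a measured FWHM."""
    return mass * C**2 * dnu**2 / (8 * math.log(2) * temp * nu0**2)

nu0 = 335.116e12             # Cs D1 transition frequency, Hz (approximate)
mass = 132.905 * 1.66054e-27 # Cs atomic mass, kg (approximate)
temp = 296.0                 # vapour temperature, K
dnu = doppler_fwhm(nu0, mass, temp)          # synthesise a "measured" width
k_est = boltzmann_from_fwhm(dnu, nu0, mass, temp)
```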

  16. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722

  17. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.
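
    The quoted statistic is the rms scatter of the optical-minus-radio position differences in each coordinate, which is simple to compute. The numbers below are synthetic, not the survey data, and real RA differences would carry a cos(declination) factor that this sketch omits.

```python
import math

# rms scatter of per-source position differences (arcsec), one list per coordinate.

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

d_ra  = [0.12, -0.20, 0.05, -0.10]   # hypothetical optical-minus-radio offsets
d_dec = [0.18, -0.02, -0.15, 0.11]
scatter_ra, scatter_dec = rms(d_ra), rms(d_dec)
```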

  18. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10-7 cm-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  20. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kinds of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
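
    One concrete form of the "spherical constraint" mentioned above: at rest, a triaxial accelerometer's readings lie on a sphere of radius g centred on the bias vector, so the bias can be recovered by a linear least-squares sphere fit. This is a generic calibration sketch on synthetic data, not the authors' procedure; the scale factors are assumed ideal here.

```python
import numpy as np

# Sphere fit: |m - b|^2 = g^2  =>  2 m.b + (g^2 - |b|^2) = m.m,
# which is linear in b and the scalar (g^2 - |b|^2).

def fit_bias(measurements):
    m = np.asarray(measurements, dtype=float)
    A = np.hstack([2 * m, np.ones((len(m), 1))])
    y = np.sum(m * m, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    bias = sol[:3]
    radius = np.sqrt(sol[3] + bias @ bias)   # recover g from the scalar term
    return bias, radius

# Synthetic static readings: unit-g directions shifted by a known bias.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_bias = np.array([0.1, -0.05, 0.2])
readings = 9.81 * dirs + true_bias
bias, g_est = fit_bias(readings)
```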

  1. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan a blackbody source against backgrounds representing the earth-space interface for various equivalent planet temperatures.

  2. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  3. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rational for future innovations. PMID:24962141

  4. 2D map projections for visualization and quantitative analysis of 3D fluorescence micrographs

    PubMed Central

    Sendra, G. Hernán; Hoerth, Christian H.; Wunder, Christian; Lorenz, Holger

    2015-01-01

    We introduce Map3-2D, a freely available software to accurately project up to five-dimensional (5D) fluorescence microscopy image data onto full-content 2D maps. Similar to the Earth’s projection onto cartographic maps, Map3-2D unfolds surface information from a stack of images onto a single, structurally connected map. We demonstrate its applicability for visualization and quantitative analyses of spherical and uneven surfaces in fixed and dynamic live samples by using mammalian and yeast cells, and giant unilamellar vesicles. Map3-2D software is available at http://www.zmbh.uni-heidelberg.de//Central_Services/Imaging_Facility/Map3-2D.html. PMID:26208256

  5. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    PubMed

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
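
    The additive log-ratio (alr) treatment named in the abstract can be sketched directly: mud/sand/gravel fractions sum to one, so two log-ratios carry all the information, and predictions on the alr scale are mapped back to valid fractions. The regression model itself is omitted; this shows only the transform and its inverse, with gravel as the (assumed) reference part.

```python
import numpy as np

# Additive log-ratio transform for 3-part compositions and its inverse.

def alr(fractions):
    """fractions: (n, 3) array of [mud, sand, gravel] rows summing to 1."""
    f = np.asarray(fractions, dtype=float)
    return np.log(f[:, :2] / f[:, 2:3])          # gravel as reference part

def alr_inverse(z):
    """Map alr coordinates back to fractions that sum to 1."""
    e = np.exp(np.asarray(z, dtype=float))
    denom = 1.0 + e.sum(axis=1, keepdims=True)
    return np.hstack([e / denom, 1.0 / denom])

comp = np.array([[0.2, 0.5, 0.3],
                 [0.6, 0.3, 0.1]])
z = alr(comp)               # two unconstrained variables per sample
back = alr_inverse(z)       # round trip recovers the composition
```

    Modelling the two alr variables instead of the raw fractions is what lets an unconstrained regression (here, the authors' random forest) return predictions that are guaranteed to be valid compositions.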

  6. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040

  7. Quantitative analysis of flagellar proteins in Drosophila sperm tails.

    PubMed

    Mendes Maia, Teresa; Paul-Gilloteaux, Perrine; Basto, Renata

    2015-01-01

    The cilium has a well-defined structure, which can still accommodate some diversity in morphology and molecular composition to suit the functional requirements of different cell types. The sperm flagellum of the fruit fly Drosophila melanogaster is a good model for studying the genetic regulation of axoneme assembly and motility, owing to the wealth of genetic tools publicly available for this organism. In addition, the fruit fly's sperm flagellum displays quite a long axoneme (∼1.8 mm), which may facilitate both histological and biochemical analyses. Here, we present a protocol for imaging and quantitatively analyzing proteins that associate with differentiating and mature sperm flagella of the fly. As an example, we quantify tubulin polyglycylation in wild-type testes and in Bug22 mutant testes, which present defects in the deposition of this posttranslational modification. During sperm biogenesis, flagella appear tightly bundled, which makes it challenging to obtain accurate measurements of protein levels from immunostained specimens. The method we present is based on a novel semiautomated macro installed in the image-processing software ImageJ. It measures fluorescence levels in closely associated sperm tails through an exact distinction between positive and background signals, and provides background-corrected pixel intensity values that can be used directly for data analysis. PMID:25837396
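
    The core operation the abstract describes, separating positive signal from background and reporting background-corrected intensities, reduces to a few lines. This is a minimal numpy illustration, not the ImageJ macro; a fixed threshold is a simplification of the macro's signal/background distinction, and the pixel values are synthetic.

```python
import numpy as np

# Background-corrected mean intensity of above-threshold ("positive") pixels.

def corrected_intensity(pixels, threshold):
    px = np.asarray(pixels, dtype=float)
    background = px[px < threshold].mean()       # mean background level
    signal = px[px >= threshold]                 # positive (flagellar) pixels
    return (signal - background).mean()          # background-corrected mean

image = np.array([10, 12, 11, 9, 100, 110, 105])  # synthetic pixel values
value = corrected_intensity(image, threshold=50)
```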

  8. Quantitative lipopolysaccharide analysis using HPLC/MS/MS and its combination with the limulus amebocyte lysate assay.

    PubMed

    Pais de Barros, Jean-Paul; Gautier, Thomas; Sali, Wahib; Adrie, Christophe; Choubley, Hélène; Charron, Emilie; Lalande, Caroline; Le Guern, Naig; Deckert, Valérie; Monchi, Mehran; Quenot, Jean-Pierre; Lagrost, Laurent

    2015-07-01

    Quantitation of plasma lipopolysaccharides (LPSs) might be used to document Gram-negative bacterial infection. In the present work, LPS-derived 3-hydroxymyristate was extracted from plasma samples with an organic solvent, separated by reversed phase HPLC, and quantitated by MS/MS. This mass assay was combined with the limulus amebocyte lysate (LAL) bioassay to monitor neutralization of LPS activity in biological samples. The described HPLC/MS/MS method is a reliable, practical, accurate, and sensitive tool to quantitate LPS. The combination of the LAL and HPLC/MS/MS analyses provided new evidence for the intrinsic capacity of plasma lipoproteins and phospholipid transfer protein to neutralize the activity of LPS. In a subset of patients with systemic inflammatory response syndrome, with documented infection but with a negative plasma LAL test, significant amounts of LPS were measured by the HPLC/MS/MS method. Patients with the highest plasma LPS concentration were more severely ill. HPLC/MS/MS is a relevant method to quantitate endotoxin in a sample, to assess the efficacy of LPS neutralization, and to evaluate the proinflammatory potential of LPS in vivo. PMID:26023073

  9. Quantitative lipopolysaccharide analysis using HPLC/MS/MS and its combination with the limulus amebocyte lysate assay

    PubMed Central

    Pais de Barros, Jean-Paul; Gautier, Thomas; Sali, Wahib; Adrie, Christophe; Choubley, Hélène; Charron, Emilie; Lalande, Caroline; Le Guern, Naig; Deckert, Valérie; Monchi, Mehran; Quenot, Jean-Pierre; Lagrost, Laurent

    2015-01-01

    Quantitation of plasma lipopolysaccharides (LPSs) might be used to document Gram-negative bacterial infection. In the present work, LPS-derived 3-hydroxymyristate was extracted from plasma samples with an organic solvent, separated by reversed phase HPLC, and quantitated by MS/MS. This mass assay was combined with the limulus amebocyte lysate (LAL) bioassay to monitor neutralization of LPS activity in biological samples. The described HPLC/MS/MS method is a reliable, practical, accurate, and sensitive tool to quantitate LPS. The combination of the LAL and HPLC/MS/MS analyses provided new evidence for the intrinsic capacity of plasma lipoproteins and phospholipid transfer protein to neutralize the activity of LPS. In a subset of patients with systemic inflammatory response syndrome, with documented infection but with a negative plasma LAL test, significant amounts of LPS were measured by the HPLC/MS/MS method. Patients with the highest plasma LPS concentration were more severely ill. HPLC/MS/MS is a relevant method to quantitate endotoxin in a sample, to assess the efficacy of LPS neutralization, and to evaluate the proinflammatory potential of LPS in vivo. PMID:26023073

  10. Quantitative SPECT/CT: SPECT joins PET as a quantitative imaging modality.

    PubMed

    Bailey, Dale L; Willowson, Kathy P

    2014-05-01

    The introduction of combined modality single photon emission computed tomography (SPECT)/CT cameras has revived interest in quantitative SPECT. Schemes to mitigate the deleterious effects of photon attenuation and scattering in SPECT imaging have been developed over the last 30 years but have been held back by lack of ready access to data concerning the density of the body and photon transport, which we see as key to producing quantitative data. With X-ray CT data now routinely available, validations of techniques to produce quantitative SPECT reconstructions have been undertaken. While still suffering from inferior spatial resolution and sensitivity compared to positron emission tomography (PET) imaging, SPECT scans nevertheless can be produced that are as quantitative as PET scans. Routine corrections are applied for photon attenuation and scattering, resolution recovery, instrumental dead time, radioactive decay and cross-calibration to produce SPECT images in units of kBq.ml(-1). Though clinical applications of quantitative SPECT imaging are lacking due to the previous non-availability of accurately calibrated SPECT reconstructions, these are beginning to emerge as the community and industry focus on producing SPECT/CT systems that are intrinsically quantitative. PMID:24037503
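
    Two of the routine corrections listed above, radioactive-decay correction and cross-calibration into kBq/ml, can be sketched with made-up numbers. The calibration factor (counts per kBq/ml, as measured on a phantom of known activity concentration) is a hypothetical value for illustration, not from the paper.

```python
import math

T_HALF_TC99M = 6.0067 * 3600     # Tc-99m half-life in seconds (approximate)

def decay_correct(counts, elapsed_s, half_life_s):
    """Scale measured counts back to the reference time (start of acquisition)."""
    return counts * math.exp(math.log(2) * elapsed_s / half_life_s)

def to_kbq_per_ml(voxel_counts, calib_counts_per_kbq_ml):
    """Cross-calibrate reconstructed counts into activity concentration."""
    return voxel_counts / calib_counts_per_kbq_ml

corrected = decay_correct(1000.0, elapsed_s=3600.0, half_life_s=T_HALF_TC99M)
activity = to_kbq_per_ml(corrected, calib_counts_per_kbq_ml=50.0)  # kBq/ml
```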

  11. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both of individuals and of groups. Since relevant and significant psychophysiological data are a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital lobe. The first study looks at directional influences between the two components, while the second looks at inferring gender from the frontal component.
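
    SOBI separates sources by exploiting the temporal structure of the signals: it jointly diagonalizes several time-lagged covariance matrices of the whitened data. The sketch below uses synthetic two-channel data (not real EEG) and the simpler single-lag relative of SOBI (the AMUSE algorithm) to show the core idea: whiten, then diagonalize one lagged covariance.

```python
# Second-order blind separation sketch (AMUSE, a single-lag cousin of
# SOBI). Sources and mixing matrix are synthetic.
import numpy as np

t = np.arange(5000)
# two sources with different temporal structure
sources = np.vstack([np.sin(2 * np.pi * 0.05 * t),
                     np.sign(np.sin(2 * np.pi * 0.004 * t))])
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources   # "scalp channels"

# 1) whiten the mixtures (zero mean, identity covariance)
x = mixed - mixed.mean(axis=1, keepdims=True)
d, e = np.linalg.eigh(x @ x.T / x.shape[1])
z = (e @ np.diag(d ** -0.5) @ e.T) @ x

# 2) diagonalize a symmetrized time-lagged covariance (SOBI proper
#    would jointly diagonalize many lags; one lag suffices here)
lag = 2
c = z[:, lag:] @ z[:, :-lag].T / (z.shape[1] - lag)
_, v = np.linalg.eigh((c + c.T) / 2)
recovered = v.T @ z   # estimated sources, up to order/sign/scale
```

    Each row of `recovered` should match one of the original sources up to permutation, sign and scale.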

  12. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.

  13. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    farmers carried out quantitative visual observations, all independently of each other. All observers assessed five sites, having a sand, peat or clay soil. For almost all quantitative visual observations the spread of observed values was low (coefficient of variation < 1.0), except for the number of biopores and gley mottles. Furthermore, farmers' observed mean values were significantly higher than soil scientists' mean values for soil structure, amount of gley mottles and compaction. This study showed that VSA could be a valuable tool to assess soil quality. Subjectivity, due to the background of the observer, might influence the outcome of visual assessment of some soil properties. In countries where soil analyses can easily be carried out, VSA might be a good complement to available soil chemical analyses, and in countries where it is not feasible to carry out soil analyses, VSA might be a good starting point for assessing soil quality.
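
    The spread criterion used above is the coefficient of variation, CV = standard deviation / mean. A small sketch with hypothetical observer data, a bounded 1-5 structure score versus an open-ended biopore count, shows why counts tend to produce the larger CVs:

```python
# Coefficient of variation across observers for one site.
# Scores and counts below are hypothetical illustration values.
import statistics

def coefficient_of_variation(values):
    """CV of a set of observations; low CV means observers agree."""
    return statistics.stdev(values) / statistics.mean(values)

structure_scores = [3, 4, 3, 4, 3, 3, 4]   # e.g. 1-5 visual scores
biopore_counts = [2, 9, 4, 15, 1, 7, 12]   # counts vary much more

cv_structure = coefficient_of_variation(structure_scores)
cv_biopores = coefficient_of_variation(biopore_counts)
```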

  14. Population variability complicates the accurate detection of climate change responses.

    PubMed

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. PMID:26725404
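
    The core hazard, a genuinely stable population that appears to have declined because the two sampling bouts landed on different points of its natural fluctuation, can be illustrated with a toy simulation. The lognormal fluctuation model and the 20% "decline" threshold are illustrative choices, not the paper's empirically based gradient.

```python
# Toy simulation: probability that a truly stable population shows an
# apparent >20% decline between two sampling bouts, as a function of
# interannual population variability (PV).
import random, math

def apparent_decline_probability(pv_sigma, n_trials=20000, seed=1):
    rng = random.Random(seed)
    declines = 0
    for _ in range(n_trials):
        # pre- and post-CC abundances drawn around the SAME true mean
        pre = math.exp(rng.gauss(0.0, pv_sigma))
        post = math.exp(rng.gauss(0.0, pv_sigma))
        if post < 0.8 * pre:
            declines += 1
    return declines / n_trials

low_pv = apparent_decline_probability(0.1)   # low year-to-year variability
high_pv = apparent_decline_probability(1.0)  # high year-to-year variability
```

    With high PV, nearly half of all pre/post comparisons show a spurious large decline even though nothing changed.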

  15. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored stability of multi-finger cyclical accurate force production by analysis of responses to small perturbations applied to one of the fingers and by inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations that restored the initial conditions, motor equivalent deviations dominated. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes in neural commands that do not affect salient performance variables, even during actions intended to correct those salient variables. Consistency between the analyses of motor equivalence and the variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
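
    The decomposition described above can be sketched for forces alone: with total force F equal to the sum of the four finger forces, the Jacobian is J = [1, 1, 1, 1], and any deviation vector splits into a motor-equivalent part (in the null space of J, leaving F unchanged) and a non-motor-equivalent part. The deviation values below are hypothetical.

```python
# Motor-equivalent (ME) vs non-motor-equivalent (nME) decomposition
# of a four-finger force deviation with respect to total force.
import numpy as np

j = np.ones((1, 4))                            # total force = j @ finger_forces
deviation = np.array([0.8, -0.5, 0.3, -0.2])   # hypothetical change after a perturbation (N)

p_nme = j.T @ j / (j @ j.T)    # projector onto the range of j.T
nme = p_nme @ deviation        # component that changes total force
me = deviation - nme           # ME component: total force unchanged

total_force_change = float(j @ deviation)      # carried entirely by nme
```

    Large `me` relative to `nme` is the signature of motor equivalence: the commands changed a lot while the salient variable (total force) barely moved.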

  16. Scallops skeletons as tools for accurate proxy calibration

    NASA Astrophysics Data System (ADS)

    Lorrain, A.; Paulet, Y.-M.; Chauvaud, L.; Dunbar, R.; Mucciarone, D.; Pécheyran, C.; Amouroux, D.; Fontugne, M.

    2003-04-01

    Bivalve skeletons can yield excellent geochemical proxies, but general calibration of those proxies rests on an approximate time basis because growth rhythms are poorly understood. In this context, the Great scallop, Pecten maximus, appears to be a powerful tool, as a daily growth deposit has been clearly identified for this species (Chauvaud et al., 1998; Lorrain et al., 2000), allowing accurate environmental calibration. Indeed, using this species, a date can be assigned to each growth increment, and as a consequence environmental parameters can be closely compared (at a daily scale) to observed chemical and structural shell variations. This daily record provides an unequivocal basis to calibrate proxies. Isotopic (δ13C and δ15N) and trace element analyses (LA-ICP-MS) have been performed on several individuals and different years, depending on the analysed parameter. Seawater parameters measured one meter above the sea bottom were compared to chemical variations in the calcitic shell. This comparison showed that even with a daily basis for data interpretation, calibration is still a challenge. Inter-individual variations are found, and correlations are not always reproducible from one year to the next. The first explanation could be an inaccurate appreciation of the proximate environment of the animal; notably, the water-sediment interface could best represent the environment of Pecten maximus. Secondly, physiological parameters could be invoked to explain those discrepancies. In particular, calcification takes place in the extrapallial fluid, whose composition might be very different from the external environment. Accurate calibration of chemical proxies should consider biological aspects to gain better insight into the processes controlling the incorporation of those chemical elements. Characterisation of the isotopic and trace element composition of the extrapallial fluid and hemolymph could greatly help our understanding of chemical shell variations.

  17. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
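
    A common, simple criterion for deciding when a planet should become the primary body is the classical sphere of influence; note the paper proposes its own region defined in phase space, so the formula below is shown only for orientation, with illustrative Earth values.

```python
# Classical sphere-of-influence switch for the primary body:
# r_SOI = a * (m_planet / m_sun)**(2/5).
def soi_radius_km(a_km, m_planet_over_m_sun):
    return a_km * m_planet_over_m_sun ** 0.4

def primary_body(r_to_planet_km, a_km, mass_ratio):
    """Choose the body used as primary in the regularized formulation."""
    if r_to_planet_km < soi_radius_km(a_km, mass_ratio):
        return "planet"
    return "sun"

# Earth: a ~ 1.496e8 km, m/M ~ 3.003e-6  ->  r_SOI ~ 9.2e5 km
earth_soi = soi_radius_km(1.496e8, 3.003e-6)
```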

  18. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film. PMID:26689962
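
    One way to implement the per-channel correction the authors call for is to characterize the lateral response with a uniformly exposed strip, fit it with a smooth model, and divide it out. The red-channel readings below are synthetic illustration values, not the paper's measured data.

```python
# Sketch: lateral scan effect (LSE) correction for one color channel.
import numpy as np

lateral_mm = np.linspace(-150, 150, 7)
# signal from a uniformly irradiated strip (red channel), dipping laterally
measured = np.array([0.88, 0.94, 0.98, 1.00, 0.98, 0.93, 0.87])

coeffs = np.polyfit(lateral_mm, measured, deg=2)   # smooth parabolic LSE model

def corrected(signal, x_mm):
    """Divide out the fitted lateral response at position x_mm."""
    return signal / np.polyval(coeffs, x_mm)

flat = corrected(measured, lateral_mm)   # ~1.0 everywhere after correction
```

    A real implementation would repeat the fit per color channel and per dose level, since the abstract notes the LSE magnitude depends on both.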

  19. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
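
    The coherence idea, accepting a flow only when independent methods agree, can be sketched as a simple acceptance rule. The 10% tolerance and the microtesla values below are illustrative choices, not the paper's actual criteria or data.

```python
# Sketch: accept a flow's paleointensity only when the per-method
# estimates are mutually coherent, then report their mean.
def coherent_mean(estimates_ut, rel_tol=0.10):
    """Mean of the estimates if every estimate lies within rel_tol of
    their mean; otherwise None (flow rejected)."""
    mean = sum(estimates_ut) / len(estimates_ut)
    if len(estimates_ut) >= 2 and all(
            abs(e - mean) <= rel_tol * mean for e in estimates_ut):
        return mean
    return None

# hypothetical Thellier / multispecimen / pseudo-Thellier results (uT)
accepted = coherent_mean([36.2, 34.8, 37.1])
rejected = coherent_mean([36.2, 52.0])   # methods disagree -> reject
```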

  20. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
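
    The arithmetic behind the quoted 0.1 mag accuracy is the distance modulus relation, mu = m - M, with d = 10^((mu + 5)/5) pc. The apparent magnitude and the TRGB absolute magnitude below are illustrative values (roughly the I-band TRGB), not the program's measurements.

```python
# Sketch: distance modulus -> distance, and what a 0.1 mag modulus
# uncertainty means as a relative distance error.
def distance_mpc(apparent_mag, absolute_mag):
    mu = apparent_mag - absolute_mag          # distance modulus
    return 10 ** ((mu + 5) / 5) / 1e6         # pc -> Mpc

d = distance_mpc(apparent_mag=25.45, absolute_mag=-4.05)   # mu = 29.5 -> ~8 Mpc

# a +/-0.1 mag error in mu corresponds to ~5% in distance
rel_err = 10 ** (0.1 / 5) - 1
```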

  1. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  3. NOx analyser interference from alkenes

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10% depending upon the alkene present and conditions used. Such interferences may be of substantial importance

  4. Quantitative Hydrocarbon Surface Analysis

    NASA Technical Reports Server (NTRS)

    Douglas, Vonnie M.

    2000-01-01

    The elimination of ozone depleting substances, such as carbon tetrachloride, has resulted in the use of new analytical techniques for cleanliness verification and contamination sampling. The last remaining application at Rocketdyne which required a replacement technique was the quantitative analysis of hydrocarbons by infrared spectrometry. This application, which previously utilized carbon tetrachloride, was successfully modified using the SOC-400, a compact portable FTIR manufactured by Surface Optics Corporation. This instrument can quantitatively measure and identify hydrocarbons from solvent flush of hardware as well as directly analyze the surface of metallic components without the use of ozone depleting chemicals. Several sampling accessories are utilized to perform analysis for various applications.
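
    The quantitation step behind such an FTIR assay is a linear (Beer-Lambert) calibration from absorbance to concentration. The calibration points and wavenumber below are hypothetical illustration values, not Rocketdyne's actual calibration.

```python
# Sketch: Beer-Lambert quantitation of hydrocarbons from FTIR absorbance.
cal_conc = [5.0, 10.0, 20.0, 40.0]      # mg/L hydrocarbon standards (hypothetical)
cal_abs = [0.051, 0.102, 0.198, 0.405]  # absorbance near the C-H stretch band

# least-squares slope through the origin: A = k * c
k = sum(c * a for c, a in zip(cal_conc, cal_abs)) / sum(c * c for c in cal_conc)

def concentration(absorbance):
    """Invert the linear calibration for an unknown sample."""
    return absorbance / k

conc_unknown = concentration(0.150)   # mg/L
```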

  5. Quantitative photoacoustic tomography

    PubMed Central

    Yuan, Zhen; Jiang, Huabei

    2009-01-01

    In this paper, several algorithms that allow for quantitative photoacoustic reconstruction of tissue optical, acoustic and physiological properties are described in a finite-element method based framework. These quantitative reconstruction algorithms are compared, and the merits and limitations associated with these methods are discussed. In addition, a multispectral approach is presented for concurrent reconstructions of multiple parameters including deoxyhaemoglobin, oxyhaemoglobin and water concentrations as well as acoustic speed. Simulation and in vivo experiments are used to demonstrate the effectiveness of the reconstruction algorithms presented. PMID:19581254
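
    The multispectral idea, recovering chromophore concentrations from absorption coefficients reconstructed at several wavelengths, reduces at each location to a small linear inversion. The extinction values and concentrations below are illustrative, not tabulated values from the paper.

```python
# Sketch: multispectral unmixing of chromophore concentrations by
# least squares, given per-wavelength absorption.
import numpy as np

# rows: wavelengths; columns: [HbO2, Hb] extinction (arbitrary units)
E = np.array([[2.77, 1.80],
              [1.10, 3.23],
              [1.96, 1.96]])
true_c = np.array([0.6, 0.4])          # assumed concentrations
mu_a = E @ true_c                      # "reconstructed" absorption spectrum

# recover the concentrations from the spectrum
c_hat, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
```

    With more wavelengths than chromophores the system is overdetermined, which is what makes concurrent recovery of several parameters possible.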

  6. Mass spectrometry-based protein identification with accurate statistical significance assignment

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2015-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. Results: We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. Availability and implementation: The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25362092

  7. How accurate are Scottish cancer registration data?

    PubMed Central

    Brewster, D.; Crichton, J.; Muir, C.

    1994-01-01

    In order to assess the accuracy of Scottish cancer registration data, a random sample of 2,200 registrations, attributed to the year 1990, was generated. Relevant medical records were available for review in 2,021 (92%) cases. Registration details were reabstracted from available records and compared with data in the registry. Discrepancies in identifying items of data (surname, forename, sex and date of birth) were found in 3.5% of cases. Most were trivial and would not disturb record linkage. Discrepancy rates of 7.1% in post code of residence at the time of diagnosis (excluding differences arising through boundary changes), 11.0% in anniversary date (excluding differences of 6 weeks or less), 7.7% in histological verification status, 5.4% in ICD-9 site codes (the first three digits) and 14.5% in ICD-O morphology codes (excluding 'inferred' morphology codes) were recorded. Overall, serious discrepancies were judged to have occurred in 2.8% of cases. In many respects, therefore, Scottish cancer registration data show a high level of accuracy that compares favourably to the reported accuracy of the few other cancer registries undertaking such analyses. PMID:7947104

  8. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  9. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications [1,2,3]. For example, to analyze exoplanets, atmospheric models have been developed, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. This large amount of data can thus be obtained only from reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  10. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    SciTech Connect

    Bevill, Aaron M; Bledsoe, Keith C

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
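
    The chi-squared interval idea can be sketched generically: scan candidate holdup masses, evaluate the goodness-of-fit statistic against the forward-model prediction, and keep every mass whose chi-squared stays within a threshold of the minimum. The function below is an illustrative profile-scan sketch, not the imager's actual algorithm; the quadratic chi-squared surface is a made-up stand-in for a real forward-model fit.

```python
import numpy as np

def chi2_profile_interval(masses, chi2_values, delta=1.0):
    """Return the (lo, hi) mass range where chi2 <= min(chi2) + delta.

    delta = 1.0 gives an approximate 68% interval for one fitted parameter.
    """
    masses = np.asarray(masses, dtype=float)
    chi2_values = np.asarray(chi2_values, dtype=float)
    inside = chi2_values <= chi2_values.min() + delta
    return masses[inside].min(), masses[inside].max()

# Toy chi-squared surface: minimum at 5.0 kg with a 1-sigma half-width of 1.5 kg
masses = np.linspace(0.0, 10.0, 1001)
chi2 = 3.0 + ((masses - 5.0) / 1.5) ** 2
lo, hi = chi2_profile_interval(masses, chi2)  # -> roughly (3.5, 6.5)
```

    A key advantage of this construction over geometry-based error propagation is that it needs only the forward model itself, which is why it removes the geometry approximations mentioned above.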

  11. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  12. Quantitative Decision Making.

    ERIC Educational Resources Information Center

    Baldwin, Grover H.

    The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…

  13. Quantitative Simulation Games

    NASA Astrophysics Data System (ADS)

    Černý, Pavol; Henzinger, Thomas A.; Radhakrishna, Arjun

    While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of "fit" or "desirability." In this article, we explore several ways in which the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the "robustness" of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the "seriousness" of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.

  14. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  15. Indian Ocean analyses

    NASA Technical Reports Server (NTRS)

    Meyers, Gary

    1992-01-01

    The background and goals of Indian Ocean thermal sampling are discussed from the perspective of a national project which has research goals relevant to variation of climate in Australia. The critical areas of SST variation are identified. The first goal of thermal sampling at this stage is to develop a climatology of thermal structure in the areas and a description of the annual variation of major currents. The sampling strategy is reviewed. Dense XBT sampling is required to achieve accurate, monthly maps of isotherm-depth because of the high level of noise in the measurements caused by aliasing of small scale variation. In the Indian Ocean ship routes dictate where adequate sampling can be achieved. An efficient sampling rate on available routes is determined based on objective analysis. The statistical structure required for objective analysis is described and compared at 95 locations in the tropical Pacific and 107 in the tropical Indian Oceans. XBT data management and quality control methods at CSIRO are reviewed. Results on the mean and annual variation of temperature and baroclinic structure in the South Equatorial Current and Pacific/Indian Ocean Throughflow are presented for the region between northwest Australia and Java-Timor. The mean relative geostrophic transport (0/400 db) of Throughflow is approximately 5 × 10^6 m^3/sec. A nearly equal volume transport is associated with the reference velocity at 400 db. The Throughflow feeds the South Equatorial Current, which has maximum westward flow in August/September, at the end of the southeasterly Monsoon season. A strong semiannual oscillation in the South Java Current is documented. The results are in good agreement with the Semtner and Chervin (1988) ocean general circulation model.
The talk concludes with comments on data inadequacies (insufficient coverage, timeliness) particular to the Indian Ocean and suggestions on the future role that can be played by Data Centers, particularly with regard to quality

  16. Quantitative Phase Retrieval in Transmission Electron Microscopy

    NASA Astrophysics Data System (ADS)

    McLeod, Robert Alexander

    Phase retrieval in the transmission electron microscope offers the unique potential to collect quantitative data regarding the electric and magnetic properties of materials at the nanoscale. Substantial progress in the field of quantitative phase imaging was made by improvements to the technique of off-axis electron holography. In this thesis, several breakthroughs have been achieved that improve the quantitative analysis of phase retrieval. An accurate means of measuring the electron wavefront coherence in two dimensions was developed and practical applications were demonstrated. The detector modulation-transfer function (MTF) was assessed by slanted-edge, noise, and novel holographic techniques. It was shown that the traditional slanted-edge technique underestimates the MTF. In addition, progress was made in dark and gain reference normalization of images, and it was shown that incomplete read-out is a concern for slow-scan CCD detectors. Last, the phase error due to electron shot noise was reduced by the technique of summation of hologram series. The phase error, which limits the finest electric and magnetic phenomena that can be investigated, was reduced by over 900% with no loss of spatial resolution. Quantitative agreement between the experimental root-mean-square phase error and the analytical prediction of phase error was achieved.
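
    The benefit of summing a hologram series can be illustrated with a toy shot-noise model: averaging N independently noisy phase maps reduces the RMS phase error roughly as 1/sqrt(N). The simulation below is a hedged illustration of that scaling only; the thesis's actual acquisition and summation pipeline (registration, reconstruction) is not modeled, and the noise level is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(42)
n_holograms, n_pixels = 25, 10_000
sigma = 0.5  # assumed shot-noise phase std of a single reconstruction (rad)

# Phase maps reconstructed from each hologram; the true phase is zero
# everywhere, so any nonzero value is pure shot-noise error.
series = sigma * rng.standard_normal((n_holograms, n_pixels))

rms_single = np.sqrt(np.mean(series[0] ** 2))            # ~ sigma
rms_summed = np.sqrt(np.mean(series.mean(axis=0) ** 2))  # ~ sigma / sqrt(25)
```

    With 25 holograms the RMS error drops by about a factor of five, the 1/sqrt(N) behavior that makes series summation attractive when a single exposure is shot-noise limited.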

  17. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    SciTech Connect

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
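
    A minimal sketch of lesion-frequency quantitation, assuming lesions are Poisson-distributed among fragments: the fraction of fragments carrying zero lesions gives the mean lesion count per fragment via P(0) = exp(-mu). This zero-class shortcut is a simplified stand-in for the number-average length analysis the authors describe, and the example numbers are hypothetical.

```python
import math

def lesions_per_fragment(fraction_undamaged):
    """Mean lesions per fragment from the Poisson zero class:
    P(0) = exp(-mu)  =>  mu = -ln(P0)."""
    return -math.log(fraction_undamaged)

def lesion_frequency_per_mb(fraction_undamaged, fragment_length_kb):
    """Convert the per-fragment mean to lesions per megabase."""
    return lesions_per_fragment(fraction_undamaged) / (fragment_length_kb / 1000.0)

# Hypothetical example: 60% of 50-kb fragments remain full length after
# damage-specific cleavage, i.e. carry no lesions
freq = lesion_frequency_per_mb(0.60, 50.0)  # ~10.2 lesions/Mb
```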

  18. Quantitative Detection of Spiroplasma Citri by Real Time PCR

    Technology Transfer Automated Retrieval System (TEKTRAN)

    There is a need to develop an accurate and rapid method to detect Spiroplasma citri, the causal agent of citrus stubborn disease for use in epidemiology studies. Quantitative real-time PCR was developed for detection of S. citri. Two sets of primers based on sequences from the P58 putative adhesin ...

  19. Quantitative transverse flow measurement using OCT speckle decorrelation analysis

    PubMed Central

    Liu, Xuan; Huang, Yong; Ramella-Roman, Jessica C.; Mathews, Scott A.; Kang, Jin U.

    2014-01-01

    We propose an inter-A-scan speckle decorrelation-based method that can quantitatively assess blood flow normal to the direction of the OCT imaging beam. To validate this method, we performed a systematic study using both phantom and in vivo animal models. Results show that our speckle analysis method can accurately extract transverse flow speed with high spatial and temporal resolution. PMID:23455305
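
    The core statistic can be sketched as follows: correlate adjacent A-scan intensity profiles and report the complement of the correlation, which grows with transverse motion. This is an illustrative sketch only; the calibration that maps decorrelation to absolute flow speed is specific to the authors' system and is not reproduced here.

```python
import numpy as np

def interascan_decorrelation(ascans):
    """Mean decorrelation (1 - Pearson r) between adjacent A-scan intensity
    profiles; `ascans` has shape (n_ascans, n_depth_samples)."""
    a = np.asarray(ascans, dtype=float)
    rs = [np.corrcoef(a[i], a[i + 1])[0, 1] for i in range(len(a) - 1)]
    return 1.0 - float(np.mean(rs))

rng = np.random.default_rng(0)
static = np.tile(rng.random(256), (10, 1))   # no flow: identical A-scans
moving = rng.random((10, 256))               # fast flow: independent speckle
d_static = interascan_decorrelation(static)  # ~0
d_moving = interascan_decorrelation(moving)  # ~1
```

    Intermediate flow speeds produce decorrelation values between these extremes, which is what makes the statistic usable as a quantitative speed estimate after calibration.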

  20. Quantify Resonance Inspection with Finite Element-Based Modal Analyses

    SciTech Connect

    Lai, Canhai; Sun, Xin; Dasch, Cameron; Harmon, George; Jones, Martin

    2011-06-01

    Resonance inspection uses the natural acoustic resonances of a part to identify anomalous parts. Modern instrumentation can measure the many resonant frequencies rapidly and accurately. Sophisticated sorting algorithms trained on sets of good and anomalous parts can rapidly and reliably inspect and sort parts. This paper aims at using finite-element-based modal analysis to put resonance inspection on a more quantitative basis. A production-level automotive steering knuckle is used as the example part for our study. First, the resonance frequency spectra for the knuckle are measured with two different experimental techniques. Next, scanning laser vibrometry is used to determine the mode shape corresponding to each resonance. The material properties including anisotropy are next measured to high accuracy using resonance spectroscopy on cuboids cut from the part. Then, a finite element model (FEM) of the knuckle is generated by meshing the actual part geometry obtained with computed tomography (CT). The resonance frequencies and mode shapes are next predicted with a natural frequency extraction analysis after an extensive mesh size sensitivity study. The good comparison between the predicted and the experimentally measured resonance spectra indicates that finite-element-based modal analyses have the potential to be a powerful tool in shortening the training process and improving the accuracy of the resonance inspection process for a complex, production-level part. The finite-element-based analysis can also provide a means to computationally test the sensitivity of the frequencies to various possible defects such as porosity or oxide inclusions, especially in the high-stress regions that the part will experience in service.
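
    At the core of such a modal analysis is the generalized eigenproblem K phi = omega^2 M phi assembled from the FEM stiffness and mass matrices. The sketch below solves a two-degree-of-freedom toy system (not the knuckle model) to show how natural frequencies and mode shapes fall out of the eigen-decomposition.

```python
import numpy as np

# Toy 2-DOF system (wall-spring-mass-spring-mass-spring-wall), standing in
# for the large K and M matrices a FEM mesh of the part would produce.
m, k = 1.0, 100.0
M = np.diag([m, m])                       # mass matrix
K = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])             # stiffness matrix

# Solve K phi = omega^2 M phi via the symmetric reduction with M^(-1/2)
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
omega2, vecs = np.linalg.eigh(M_inv_sqrt @ K @ M_inv_sqrt)
freqs_hz = np.sqrt(omega2) / (2.0 * np.pi)  # natural frequencies, ascending
mode_shapes = M_inv_sqrt @ vecs             # physical mode shapes
```

    The sensitivity studies mentioned in the abstract amount to perturbing entries of K (e.g. locally reduced stiffness from porosity) and re-solving this eigenproblem to see which frequencies shift.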

  1. Quantify Resonance Inspection with Finite Element-Based Modal Analyses

    SciTech Connect

    Sun, Xin; Lai, Canhai; Dasch, Cameron

    2010-11-10

    Resonance inspection uses the natural acoustic resonances of a part to identify anomalous parts. Modern instrumentation can measure the many resonant frequencies rapidly and accurately. Sophisticated sorting algorithms trained on sets of good and anomalous parts can rapidly and reliably inspect and sort parts. This paper aims at using finite-element-based modal analysis to put resonance inspection on a more quantitative basis. A production-level automotive steering knuckle is used as the example part for our study. First, the resonance frequency spectra for the knuckle are measured with two different experimental techniques. Next, scanning laser vibrometry is used to determine the mode shape corresponding to each resonance. The material properties including anisotropy are next measured to high accuracy using resonance spectroscopy on cuboids cut from the part. Then, a finite element model (FEM) of the knuckle is generated by meshing the actual part geometry obtained with computed tomography (CT). The resonance frequencies and mode shapes are next predicted with a natural frequency extraction analysis after an extensive mesh size sensitivity study. The good comparison between the predicted and the experimentally measured resonance spectra indicates that finite-element-based modal analyses have the potential to be a powerful tool in shortening the training process and improving the accuracy of the resonance inspection process for a complex, production-level part. The finite-element-based analysis can also provide a means to computationally test the sensitivity of the frequencies to various possible defects such as porosity or oxide inclusions, especially in the high-stress regions that the part will experience in service.

  2. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution

    PubMed Central

    Nielsen, Rasmus

    2015-01-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model that ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  3. An electrochemical calibration unit for hydrogen analysers.

    PubMed

    Merzlikin, Sergiy V; Mingers, Andrea M; Kurz, Daniel; Hassel, Achim Walter

    2014-07-01

    Determination of hydrogen in solids such as high-strength steels or other metals in the ppb or ppm range requires hot extraction or melt extraction. Calibration of commercially available hydrogen analysers is performed either with certified reference materials (CRMs), which often have limited availability and reliability, or by gas dosing, for which the determined value depends significantly on atmospheric pressure and the construction of the gas dosing valve. The sharp and sudden appearance of very high gas concentrations from gas dosing is very different from real effusion transients and is therefore another source of errors. To overcome these limitations, an electrochemical calibration method for hydrogen analysers was developed and employed in this work. Exactly quantifiable, faradaic amounts of hydrogen can be produced in an electrochemical reaction and detected by the hydrogen analyser. The amount of hydrogen is exactly known from the charge transferred in the reaction, following Faraday's law, and the current-time program determines the apparent hydrogen effusion transient. Arbitrary effusion transient shaping becomes possible to fully comply with real samples. Evolution time and current were varied to determine a quantitative relationship. The device was used to produce either diprotium (H2) or dideuterium (D2) from the corresponding electrolytes. The functional principle is electrochemical in nature, so automation is straightforward and can be implemented at an affordable price of 1-5% of the hydrogen analyser's price. PMID:24840442
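
    The calibration quantity follows directly from Faraday's law: a charge Q = I·t generates n = Q/(zF) moles of H2, with z = 2 electrons per molecule. A minimal sketch with illustrative current and time values:

```python
FARADAY = 96485.332  # Faraday constant, C/mol

def hydrogen_mass_ng(current_a, time_s, molar_mass_g_per_mol=2.016, z=2):
    """Mass of H2 (in ng) generated by charge Q = I*t, via n = Q / (z*F)."""
    moles = (current_a * time_s) / (z * FARADAY)
    return moles * molar_mass_g_per_mol * 1e9

# 1 mA applied for 10 s transfers 0.01 C -> about 104.5 ng of H2
mass_ng = hydrogen_mass_ng(1e-3, 10.0)
```

    Because the delivered charge is the integral of an arbitrary current program, the same relation lets the calibration unit shape the apparent effusion transient while keeping the total hydrogen amount exactly known.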

  4. FAME: Software for analysing rock microstructures

    NASA Astrophysics Data System (ADS)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistic and plotting tools allow a graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME is demonstrated on quartz and deuterium ice samples.

  5. Leadership and Culture-Building in Schools: Quantitative and Qualitative Understandings.

    ERIC Educational Resources Information Center

    Sashkin, Marshall; Sashkin, Molly G.

    Understanding effective school leadership as a function of culture building through quantitative and qualitative analyses is the purpose of this paper. The two-part quantitative phase of the research focused on statistical measures of culture and leadership behavior directed toward culture building in the school. The first quantitative part…

  6. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGESBeta

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides the means to test that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  7. Extremely accurate sequential verification of RELAP5-3D

    SciTech Connect

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides the means to test that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  8. Waste Stream Analyses for Nuclear Fuel Cycles

    SciTech Connect

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  9. Impact of reconstruction parameters on quantitative I-131 SPECT.

    PubMed

    van Gils, C A J; Beijst, C; van Rooij, R; de Jong, H W A M

    2016-07-21

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR
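
    For reference, the standard TEW estimate interpolates the scatter under the photopeak from two narrow flanking energy windows. The sketch below uses illustrative window widths and counts, not the I-131 acquisition settings of this study:

```python
def tew_primary_counts(c_main, c_lower, c_upper, w_main, w_lower, w_upper):
    """Triple-energy-window (TEW) scatter correction: estimate the scatter in
    the photopeak window by trapezoidal interpolation from two narrow flanking
    windows, S = (C_lower/w_lower + C_upper/w_upper) * w_main / 2, and
    subtract it from the photopeak counts."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0
    return max(c_main - scatter, 0.0), scatter

# Illustrative numbers only: 60-keV photopeak window, 6-keV flanking windows
primary, scatter = tew_primary_counts(
    c_main=10_000, c_lower=600, c_upper=300,
    w_main=60.0, w_lower=6.0, w_upper=6.0)
```

    The per-pixel simplicity of this estimate explains both its appeal and its noise sensitivity, which is one reason the Monte Carlo-based correction compared above can outperform it.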

  10. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated

  11. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.

  12. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  13. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments.

    PubMed

    Eter, Wael A; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, (111)In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of (111)In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  14. Energy & Climate: Getting Quantitative

    NASA Astrophysics Data System (ADS)

    Wolfson, Richard

    2011-11-01

    A noted environmentalist claims that buying an SUV instead of a regular car is energetically equivalent to leaving your refrigerator door open for seven years. A fossil-fuel apologist argues that solar energy is a pie-in-the-sky dream promulgated by naïve environmentalists, because there's nowhere near enough solar energy to meet humankind's energy demand. A group advocating shutdown of the Vermont Yankee nuclear plant claims that 70% of its electrical energy is lost in transmission lines. Around the world, thousands agitate for climate action under the numerical banner "350." Neither the environmentalist, the fossil-fuel apologist, the antinuclear activists, nor most of those marching under the "350" banner can back up their assertions with quantitative arguments. Yet questions about energy and its environmental impacts almost always require quantitative answers. Physics can help! This poster gives some cogent examples, based on the newly published 2nd edition of the author's textbook Energy, Environment, and Climate.

  15. Berkeley Quantitative Genome Browser

    SciTech Connect

    Hechmer, Aaron

    2008-02-29

    The Berkeley Quantitative Genome Browser provides graphical browsing functionality for genomic data organized, at a minimum, by sequence and position. While supporting the annotation browsing features typical of many other genomic browsers, additional emphasis is placed on viewing and utilizing quantitative data. Data may be read from GFF, SGR, FASTA or any column-delimited format. Once the data has been read into the browser's buffer, it may be searched, filtered, or subjected to mathematical transformation. The browser also supplies some graphical design manipulation functionality geared towards preparing figures for presentations or publication. A plug-in mechanism enables development outside the core functionality that adds more advanced or esoteric analysis capabilities. BBrowse's development and distribution is open-source and has been built to run on Linux, OSX and MS Windows operating systems.
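
    The read/filter/transform pipeline described above can be sketched for SGR-style records. This is an illustrative reimplementation, not BBrowse's actual code; the parsing rule and the log2 transform are assumptions:

```python
import math

def read_sgr(lines):
    """Parse SGR-style records: sequence, position, value (whitespace-delimited)."""
    records = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blanks and comments
        seq, pos, val = line.split()[:3]
        records.append((seq, int(pos), float(val)))
    return records

def transform(records, fn):
    """Apply a mathematical transformation to the quantitative column."""
    return [(seq, pos, fn(val)) for seq, pos, val in records]

data = read_sgr(["chr1\t100\t8.0", "chr1\t200\t2.0", "chr2\t50\t0.5"])
# Filter out low values, then log2-transform the remainder:
filtered = [r for r in data if r[2] >= 1.0]
logged = transform(filtered, math.log2)
```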

  16. Berkeley Quantitative Genome Browser

    2008-02-29

    The Berkeley Quantitative Genome Browser provides graphical browsing functionality for genomic data organized, at a minimum, by sequence and position. While supporting the annotation browsing features typical of many other genomic browsers, additional emphasis is placed on viewing and utilizing quantitative data. Data may be read from GFF, SGR, FASTA or any column-delimited format. Once the data has been read into the browser's buffer, it may be searched, filtered, or subjected to mathematical transformation. The browser also supplies some graphical design manipulation functionality geared towards preparing figures for presentations or publication. A plug-in mechanism enables development outside the core functionality that adds more advanced or esoteric analysis capabilities. BBrowse's development and distribution is open-source and has been built to run on Linux, OSX and MS Windows operating systems.

  17. DEMOGRAPHY AND VIABILITY ANALYSES OF A DIAMONDBACK TERRAPIN POPULATION

    EPA Science Inventory

    The diamondback terrapin Malaclemys terrapin is a long-lived species with special management requirements, but quantitative analyses to support management are lacking. I analyzed mark-recapture data and constructed an age-classified matrix population model to determine the status...

  18. Guidelines for Meta-Analyses of Counseling Psychology Research

    ERIC Educational Resources Information Center

    Quintana, Stephen M.; Minami, Takuya

    2006-01-01

    This article conceptually describes the steps in conducting quantitative meta-analyses of counseling psychology research with minimal reliance on statistical formulas. The authors identify sources that describe necessary statistical formula for various meta-analytic calculations and describe recent developments in meta-analytic techniques. The…
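
    Meta-analytic pooling of the kind such guidelines describe reduces, in the simplest case, to an inverse-variance weighted mean. A minimal fixed-effect sketch, not taken from the article's own formulas:

```python
def fixed_effect_meta(effects, variances):
    """Pool study effect sizes with inverse-variance weights (fixed-effect
    model). Returns the pooled effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Two equally precise studies: the pooled effect is their mean and the
# pooled variance is half of each study's variance.
effect, var = fixed_effect_meta([0.2, 0.4], [0.04, 0.04])  # (0.3, 0.02)
```

A random-effects model would additionally estimate between-study heterogeneity before weighting.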

  19. Note on the chromatographic analyses of marine polyunsaturated fatty acids

    USGS Publications Warehouse

    Schultz, D.M.; Quinn, J.G.

    1977-01-01

    Gas-liquid chromatography was used to study the effects of saponification/methylation and thin-layer chromatographic isolation on the analyses of polyunsaturated fatty acids. Using selected procedures, the qualitative and quantitative distribution of these acids in marine organisms can be determined with a high degree of accuracy. ?? 1977 Springer-Verlag.

  20. Deficiencies of Reporting in Meta-Analyses and Some Remedies

    ERIC Educational Resources Information Center

    Harwell, Michael; Maeda, Yukiko

    2008-01-01

    There is general agreement that meta-analysis is an important tool for synthesizing study results in quantitative educational research. Yet, a shared feature of many meta-analyses is a failure to report sufficient information for readers to fully judge the reported findings, such as the populations to which generalizations are to be made,…

  1. Primary enzyme quantitation

    DOEpatents

    Saunders, G.C.

    1982-03-04

    The disclosure relates to the quantitation of a primary enzyme concentration by utilizing a substrate for the primary enzyme labeled with a second enzyme, which serves as an indicator enzyme. Enzyme catalysis of the substrate occurs and results in release of the indicator enzyme in an amount directly proportional to the amount of primary enzyme present. By quantifying the free indicator enzyme, one determines the amount of primary enzyme present.
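
    The proportionality at the heart of the patent can be expressed as a two-step calculation: calibrate the indicator signal against known amounts of primary enzyme, then invert it for an unknown. A minimal sketch with invented calibration numbers:

```python
def slope_through_origin(known_amounts, indicator_signals):
    """Least-squares fit of signal = k * amount, reflecting the directly
    proportional response the patent describes."""
    num = sum(a * s for a, s in zip(known_amounts, indicator_signals))
    den = sum(a * a for a in known_amounts)
    return num / den

def primary_enzyme_amount(signal, k):
    """Invert the proportional response to recover the primary enzyme amount."""
    return signal / k

# Invented calibration standards: 1, 2 and 4 units give signals 3, 6 and 12.
k = slope_through_origin([1.0, 2.0, 4.0], [3.0, 6.0, 12.0])  # k = 3.0
unknown = primary_enzyme_amount(7.5, k)                      # 2.5 units
```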

  2. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamic simulations using high performance computing. JenPep (http://www.jenner.ar.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934

  3. Use of EBSD data in mesoscale numerical analyses

    SciTech Connect

    Becker, R; Wiland, H

    2000-03-30

    and FEM studies will provide impetus for further development of microstructure models and theories of microstructure evolution. Early studies connecting EBSD data to detailed finite element models used manual measurements to define initial orientations for the simulations. In one study, manual measurements of the deformed structure were also obtained for comparison with the model predictions. More recent work has taken advantage of automated data collection on deformed specimens as a means of collecting detailed and spatially correlated data for FEM model validation. Although it will not be discussed here, EBSD data can also be incorporated in FEM analyses in a less direct manner that is suitable for simulations where the element size is much larger than the grain size. The purpose of such models is to account for the effects of evolving material anisotropy in macro-scale simulations. In these analyses, a polycrystal plasticity model (e.g., a Taylor model or a self-consistent model), or a yield surface constructed from a polycrystal plasticity model, is used to determine the constitutive response of each element. The initial orientations used in the polycrystal plasticity model can be obtained from EBSD analyses or by fitting distributions of discrete orientations to x-ray data. The use of EBSD data is advantageous in that it is easier to account for spatial gradients of orientation distribution within a part. Another area in which EBSD data is having a great impact is on recrystallization modeling. EBSD techniques can be used to collect data for quantitative microstructural analysis (Humphreys, 1998). This data can be used to infer growth kinetics of specific orientations, and this information can be synthesized into more accurate grain growth or recrystallization models (Vogel et al., 1996). A second role which EBSD techniques may play in recrystallization modeling is in determining initial structures for the models. A realistic starting structure is vital for

  4. Quantitative analysis of myocardial tissue with digital autofluorescence microscopy

    PubMed Central

    Jensen, Thomas; Holten-Rossing, Henrik; Svendsen, Ida M H; Jacobsen, Christina; Vainer, Ben

    2016-01-01

    Background: The opportunity for automated histological analysis offered by whole slide scanners implies an ever-increasing importance of digital pathology. To go beyond the importance of conventional pathology, however, digital pathology may need a basic histological starting point similar to that of hematoxylin and eosin staining in conventional pathology. This study presents an automated fluorescence-based microscopy approach providing highly detailed morphological data from unstained microsections. This data may provide a basic histological starting point from which further digital analysis including staining may benefit. Methods: This study explores the inherent tissue fluorescence, also known as autofluorescence, as a means to quantitate cardiac tissue components in histological microsections. Data acquisition using a commercially available whole slide scanner and an image-based quantitation algorithm are presented. Results: It is shown that the autofluorescence intensity of unstained microsections at two different wavelengths is a suitable starting point for automated digital analysis of myocytes, fibrous tissue, lipofuscin, and the extracellular compartment. The output of the method is absolute quantitation along with accurate outlines of the above-mentioned components. The digital quantitations are verified by comparison to point grid quantitations performed on the microsections after Van Gieson staining. Conclusion: The presented method is aptly described as a prestain multicomponent quantitation and outlining tool for histological sections of cardiac tissue. The main perspective is the opportunity for combination with digital analysis of stained microsections, for which the method may provide an accurate digital framework. PMID:27141321
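
    The two-wavelength quantitation idea can be caricatured as a per-pixel classification followed by area fractions. The thresholding rule and the class assignments below are hypothetical, not the paper's algorithm:

```python
def tissue_fractions(ch1, ch2, t1, t2):
    """Classify each pixel from two autofluorescence channels and return the
    area fraction of each class. The rule (high/high = lipofuscin, high/low =
    myocyte, low/high = fibrous, low/low = background) is a made-up example."""
    counts = {"myocyte": 0, "fibrous": 0, "lipofuscin": 0, "background": 0}
    for a, b in zip(ch1, ch2):
        if a >= t1 and b >= t2:
            counts["lipofuscin"] += 1
        elif a >= t1:
            counts["myocyte"] += 1
        elif b >= t2:
            counts["fibrous"] += 1
        else:
            counts["background"] += 1
    n = len(ch1)
    return {name: c / n for name, c in counts.items()}

# One pixel of each class, so every fraction is 0.25:
fractions = tissue_fractions([10, 10, 1, 1], [10, 1, 10, 1], t1=5, t2=5)
```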

  5. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadow maps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete, mathematically grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
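
    Fourier-based integration of partial derivatives, as mentioned above, is commonly done with a Frankot-Chellappa-style least-squares solver. A sketch of that standard algorithm, not necessarily the authors' exact implementation:

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Fourier-domain least-squares integration (Frankot-Chellappa) of a
    gradient field into a surface, recovered up to an additive constant."""
    h, w = gx.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                      # avoid dividing by zero at DC
    Z = (-1j * u * Gx - 1j * v * Gy) / denom
    Z[0, 0] = 0.0                          # the mean (piston) term is lost
    return np.real(np.fft.ifft2(Z))

# Example: a surface varying sinusoidally along x is recovered exactly
# (up to a constant) from its analytic gradients.
x = np.arange(64)
z = np.tile(np.sin(2 * np.pi * 4 * x / 64), (64, 1))
gx = np.tile((2 * np.pi * 4 / 64) * np.cos(2 * np.pi * 4 * x / 64), (64, 1))
z_rec = integrate_gradients(gx, np.zeros((64, 64)))
```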

  6. A Rapid and Accurate Extraction Procedure for Analysing Free Amino Acids in Meat Samples by GC-MS

    PubMed Central

    Barroso, Miguel A.; Ruiz, Jorge; Antequera, Teresa

    2015-01-01

    This study evaluated the use of a mixer mill as the homogenization tool for the extraction of free amino acids in meat samples, with the main goal of analyzing a large number of samples in the shortest time and minimizing sample amount and solvent volume. Ground samples (0.2 g) were mixed with 1.5 mL HCl 0.1 M and homogenized in the mixer mill. The final biphasic system was separated by centrifugation. The supernatant was deproteinized, derivatized and analyzed by gas chromatography. This procedure showed a high extraction ability, especially in samples with high free amino acid content (recovery = 88.73–104.94%). It also showed low limits of detection and quantification (3.8·10⁻⁴–6.6·10⁻⁴ μg μL⁻¹ and 1.3·10⁻³–2.2·10⁻² μg μL⁻¹, respectively) for most amino acids, adequate precision (2.15–20.15% run-to-run), and a linear response for all amino acids (R² = 0.741–0.998) in the range of 1–100 µg mL⁻¹. Moreover, it takes less time and requires less sample and solvent than conventional techniques. Thus, it is a cost- and time-efficient homogenization tool for the extraction of free amino acids from meat samples, making it an adequate option for routine analysis. PMID:25873963
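
    Figures of merit like those reported above (linearity, limits of detection and quantification) follow from an ordinary least-squares calibration line. A minimal sketch using the common ICH-style 3.3σ/S and 10σ/S limits; the calibration numbers are illustrative:

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = a + b*x, returning (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

def lod_loq(sigma_blank, slope):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# A perfectly linear calibration: intercept 0, slope 2, R^2 = 1.
a, b, r2 = linear_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
lod, loq = lod_loq(sigma_blank=0.2, slope=b)  # roughly 0.33 and 1.0
```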

  7. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  8. The nature and identification of quantitative trait loci

    PubMed Central

    2007-01-01

    This white paper by eighty members of the Complex Trait Consortium presents a community’s view on the approaches and statistical analyses that are needed for the identification of genetic loci that determine quantitative traits. Quantitative trait loci (QTLs) can be identified in several ways, but is there a definitive test of whether a candidate locus actually corresponds to a specific QTL? PMID:14634638

  9. Metabolomic and lipidomic analyses of chronologically aging yeast.

    PubMed

    Richard, Vincent R; Bourque, Simon D; Titorenko, Vladimir I

    2014-01-01

    Metabolomic and lipidomic analyses of yeast cells provide comprehensive empirical datasets for unveiling mechanisms underlying complex biological processes. In this chapter, we describe detailed protocols for using such analyses to study the age-related dynamics of changes in intracellular and extracellular levels of various metabolites and membrane lipids in chronologically aging yeast. The protocols for the following high-throughput analyses are described: (1) microanalytic biochemical assays for monitoring intracellular concentrations of trehalose and glycogen; (2) gas chromatographic quantitative assessment of extracellular concentrations of ethanol and acetic acid; and (3) mass spectrometric identification and quantitation of the entire complement of cellular lipids. These protocols are applicable to the exploration of the metabolic patterns associated not only with aging but also with many other vital processes in yeast. The methodology described here complements the powerful genetic approaches available for mechanistic studies of fundamental aspects of yeast biology. PMID:25213255

  10. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles.

    PubMed

    Namin, Farhad A; Yuwen, Yu A; Liu, Liu; Panaretos, Anastasios H; Werner, Douglas H; Mayer, Theresa S

    2016-01-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements. PMID:26911709

  11. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    PubMed Central

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-01-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements. PMID:26911709

  12. Unsteady aerodynamic analyses for turbomachinery aeroelastic predictions

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Barnett, M.; Ayer, T. C.

    1994-01-01

    Applications for unsteady aerodynamic analysis in this report are: (1) aeroelastic: blade flutter and forced vibration; (2) aeroacoustic: noise generation; (3) vibration and noise control; and (4) effects of unsteadiness on performance. This requires that the numerical simulations and analytical modeling be accurate and efficient and contain realistic operating conditions and arbitrary modes of unsteady excitation. The analyses rest on the following assumptions: (1) turbulence and transition can be modeled with the Reynolds-averaged Navier-Stokes equations; (2) 'attached' flow at high Reynolds number requires thin-layer Navier-Stokes equations or inviscid/viscid interaction analyses; (3) small-amplitude unsteady excitations call for nonlinear steady and linearized unsteady analyses; and (4) the limit Re to infinity corresponds to inviscid flow. Several computer programs (LINFLO, CLT, UNSVIS, and SFLOW-IVI) are utilized for these analyses. Results and computerized grid examples are shown. This report was given during the NASA LeRC Workshop on Forced Response in Turbomachinery in August of 1993.

  13. Quantitative biomedical mass spectrometry

    NASA Astrophysics Data System (ADS)

    de Leenheer, André P.; Thienpont, Linda M.

    1992-09-01

    The scope of this contribution is an illustration of the capabilities of isotope dilution mass spectrometry (IDMS) for quantification of target substances in the biomedical field. After a brief discussion of the general principles of quantitative MS in biological samples, special attention will be paid to new technological developments or trends in IDMS from selected examples from the literature. The final section will deal with the use of IDMS for accuracy assessment in clinical chemistry. Methodological aspects considered crucial for avoiding sources of error will be discussed.
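
    The core of IDMS is recovering the analyte amount from the measured isotope-amount ratio of a sample-spike blend. A sketch of the classic single-dilution equation, neglecting the isotopic-abundance correction factors a full treatment includes; all values are invented:

```python
def idms_concentration(c_spike, m_spike, m_sample, r_spike, r_sample, r_blend):
    """Classic single isotope dilution equation: analyte content of the sample
    from the isotope-amount ratio measured in the sample+spike blend."""
    return (c_spike * (m_spike / m_sample)
            * (r_spike - r_blend) / (r_blend - r_sample))

# Invented values: enriched spike (ratio 10), natural sample (ratio 0.1),
# and a blend of 1 g spike + 2 g sample measured at ratio 1.0:
c = idms_concentration(10.0, 1.0, 2.0, 10.0, 0.1, 1.0)  # roughly 50
```

Because the result depends only on masses and ratios, not absolute signal intensities, IDMS is well suited to the accuracy-assessment role described above.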

  14. Quantitative rainbow schlieren deflectometry.

    PubMed

    Greenberg, P S; Klimek, R B; Buchele, D R

    1995-07-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment. PMID:21052205
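
    The hue-to-deflection mapping described above can be sketched as a linear calibration from hue to position on the graded filter, followed by a small-angle conversion to ray deflection. The parameter names and values are illustrative assumptions, not the authors' calibration:

```python
def deflection_from_hue(hue, hue_min, hue_max, filter_height, focal_length):
    """Map a measured hue onto the linearly graded rainbow filter, then convert
    displacement from the filter centre into a ray deflection angle using the
    small-angle approximation: angle = displacement / focal_length."""
    frac = (hue - hue_min) / (hue_max - hue_min)   # 0..1 across the filter
    y = (frac - 0.5) * filter_height               # metres from filter centre
    return y / focal_length                        # deflection in radians

# Mid-range hue means the ray passed through the filter centre: no deflection.
centre = deflection_from_hue(0.5, 0.0, 1.0, filter_height=0.01, focal_length=0.5)
# An extreme hue maps to the filter edge: 5 mm / 0.5 m = 10 mrad.
edge = deflection_from_hue(1.0, 0.0, 1.0, filter_height=0.01, focal_length=0.5)
```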

  15. Quantitative rainbow schlieren deflectometry

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Klimek, Robert B.; Buchele, Donald R.

    1995-01-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment.

  16. Accurate numerical verification of the instanton method for macroscopic quantum tunneling: Dynamics of phase slips

    SciTech Connect

    Danshita, Ippei; Polkovnikov, Anatoli

    2010-09-01

    We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.

  17. Quantitative analysis of retinal OCT.

    PubMed

    Sonka, Milan; Abràmoff, Michael D

    2016-10-01

    Clinical acceptance of 3-D OCT retinal imaging brought rapid development of quantitative 3-D analysis of retinal layers, vasculature, retinal lesions as well as facilitated new research in retinal diseases. One of the cornerstones of many such analyses is segmentation and thickness quantification of retinal layers and the choroid, with an inherently 3-D simultaneous multi-layer LOGISMOS (Layered Optimal Graph Image Segmentation for Multiple Objects and Surfaces) segmentation approach being extremely well suited for the task. Once retinal layers are segmented, regional thickness, brightness, or texture-based indices of individual layers can be easily determined and thus contribute to our understanding of retinal or optic nerve head (ONH) disease processes and can be employed for determination of disease status, treatment responses, visual function, etc. Out of many applications, examples provided in this paper focus on image-guided therapy and outcome prediction in age-related macular degeneration and on assessing visual function from retinal layer structure in glaucoma. PMID:27503080

  18. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high-temperature list of line positions and intensities for H₂³²S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm⁻¹ with rotational states up to J = 85, and include 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  19. Quantitative non-destructive testing

    NASA Technical Reports Server (NTRS)

    Welch, C. S.

    1985-01-01

    The work undertaken during this period included two primary efforts. The first is a continuation of theoretical development from the previous year of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS) system, which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second is an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort is aimed towards NDE techniques applicable to composite materials in structural applications. The theoretical development described produced several models of temperature patterns over several geometries and material types. Agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates of thickness and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.

  20. Quantitative Imaging with a Mobile Phone Microscope

    PubMed Central

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
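
    The linearity correction discussed above can be illustrated by undoing an assumed encoding gamma so that pixel values scale with irradiance. The 2.2 exponent is a common default, not a value measured in the paper; real phone pipelines require per-device calibration:

```python
def linearize(pixel, gamma=2.2, white=255):
    """Undo a display-style encoding gamma so intensities scale linearly with
    irradiance. gamma=2.2 is an assumed default, not a calibrated value."""
    return white * (pixel / white) ** gamma

# Mid-grey pixel values encode far less than half of full-scale irradiance,
# which is why uncorrected images bias quantitative brightness measurements.
mid = linearize(128)  # well below 128 after linearization
```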

  1. Evaluation of the quantitative performances of supercritical fluid chromatography: from method development to validation.

    PubMed

    Dispas, Amandine; Lebrun, Pierre; Ziemons, Eric; Marini, Roland; Rozet, Eric; Hubert, Philippe

    2014-08-01

Recently, the number of papers about SFC has increased drastically, but few have truly focused on the quantitative performance of the technique. In order to prove the potential of UHPSFC, the present work discusses the different steps of the analytical life cycle of a method: from development to validation and application. Moreover, the quantitative performance of UHPSFC was evaluated in comparison with UHPLC, which is the main technique used for quality control in the pharmaceutical industry and can therefore be considered a reference. The methods were developed using a Design Space strategy, leading to the optimization of robust methods. In this context, when the Design Space optimization shows a guarantee of quality, no further robustness study is required prior to validation. The methods were then geometrically transferred in order to reduce the analysis time. The UHPSFC and UHPLC methods were validated based on the total error approach using accuracy profiles. Although UHPLC showed better precision and sensitivity, the UHPSFC method is able to give accurate results over a dosing range larger than the 80-120% range required by the European Medicines Agency. Consequently, UHPSFC results are valid and could be used for the control of the active substance in a finished pharmaceutical product. Finally, the validated UHPSFC method was used to analyse real samples and gave results similar to those of the reference method (UHPLC). PMID:24513349
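The total-error validation can be sketched per concentration level: compute the relative bias of replicate back-calculated concentrations and check that a tolerance interval stays inside the acceptance limits. A simplified illustration that substitutes a normal quantile for the formal β-expectation interval; the replicate values and the ±5% limit are hypothetical:

```python
from statistics import mean, stdev, NormalDist

def accuracy_profile_level(measured, nominal, beta=0.95, limit=0.05):
    """Simplified accuracy-profile check for one concentration level.

    measured: replicate back-calculated concentrations at this level.
    nominal:  the true (spiked) concentration.
    Returns (relative bias, tolerance interval, within acceptance limits).
    """
    rel = [(m - nominal) / nominal for m in measured]   # relative errors
    bias = mean(rel)
    z = NormalDist().inv_cdf(0.5 + beta / 2.0)          # two-sided normal quantile
    half_width = z * stdev(rel)
    lo, hi = bias - half_width, bias + half_width
    return bias, (lo, hi), (lo >= -limit and hi <= limit)

bias, interval, within = accuracy_profile_level(
    [98.5, 100.2, 99.1, 100.8, 99.6], nominal=100.0)
```

A full accuracy profile repeats this at each level of the dosing range and plots the intervals against the acceptance limits.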

  2. A Quantitative System-Scale Characterization of the Metabolism of Clostridium acetobutylicum

    PubMed Central

    Yoo, Minyeong; Bestel-Corre, Gwenaelle; Croux, Christian; Riviere, Antoine; Meynial-Salles, Isabelle

    2015-01-01

    ABSTRACT Engineering industrial microorganisms for ambitious applications, for example, the production of second-generation biofuels such as butanol, is impeded by a lack of knowledge of primary metabolism and its regulation. A quantitative system-scale analysis was applied to the biofuel-producing bacterium Clostridium acetobutylicum, a microorganism used for the industrial production of solvent. An improved genome-scale model, iCac967, was first developed based on thorough biochemical characterizations of 15 key metabolic enzymes and on extensive literature analysis to acquire accurate fluxomic data. In parallel, quantitative transcriptomic and proteomic analyses were performed to assess the number of mRNA molecules per cell for all genes under acidogenic, solventogenic, and alcohologenic steady-state conditions as well as the number of cytosolic protein molecules per cell for approximately 700 genes under at least one of the three steady-state conditions. A complete fluxomic, transcriptomic, and proteomic analysis applied to different metabolic states allowed us to better understand the regulation of primary metabolism. Moreover, this analysis enabled the functional characterization of numerous enzymes involved in primary metabolism, including (i) the enzymes involved in the two different butanol pathways and their cofactor specificities, (ii) the primary hydrogenase and its redox partner, (iii) the major butyryl coenzyme A (butyryl-CoA) dehydrogenase, and (iv) the major glyceraldehyde-3-phosphate dehydrogenase. This study provides important information for further metabolic engineering of C. acetobutylicum to develop a commercial process for the production of n-butanol. PMID:26604256

  3. Validating quantitative precipitation forecast for the Flood Meteorological Office, Patna region during 2011-2014

    NASA Astrophysics Data System (ADS)

    Giri, R. K.; Panda, Jagabandhu; Rath, Sudhansu S.; Kumar, Ravindra

    2016-05-01

Issuing accurate flood warnings requires better, or at least appropriate, quantitative forecasting of precipitation. In view of this, the present study validates the quantitative precipitation forecast (QPF) issued during the southwest monsoon season for six river catchments (basins) under the flood meteorological office, Patna region. The forecast is analysed statistically by computing various skill scores for six different precipitation ranges during the years 2011-2014. The analysis of QPF validation indicates that the multi-model ensemble (MME) based forecasting is more reliable in the precipitation ranges of 1-10 and 11-25 mm. However, the reliability decreases for higher ranges of rainfall and also for the lowest range, i.e., below 1 mm. In order to test synoptic analogue method based MME forecasting of QPF during an extreme weather event, a case study of tropical cyclone Phailin is performed. It is realized that in the case of extreme events like cyclonic storms, the MME forecasting is qualitatively useful for issuing flood warnings, though it may not be reliable for the QPF itself. However, QPF may be improved using satellite and radar products.

  4. A quantitative method for estimation of volume changes in arachnoid foveae with age.

    PubMed

    Duray, Stephen M; Martel, Stacie S

    2006-03-01

Age-related changes of arachnoid foveae have been described, but objective, quantitative analyses are lacking. A new quantitative method is presented for estimation of change in total volume of arachnoid foveae with age. The pilot sample consisted of nine skulls from the Palmer Anatomy Laboratory. Arachnoid foveae were filled with sand, which was extracted using a vacuum pump. Mass was determined with an analytical balance and converted to volume. A reliability analysis was performed using intraclass correlation coefficients. The method was found to be highly reliable (intraobserver ICC = 0.9935, interobserver ICC = 0.9878). The relationship between total volume and age was then examined in a sample of 63 males of accurately known age from the Hamann-Todd collection. Linear regression analysis revealed no statistically significant relationship between total volume and age, or between foveae frequency and age (alpha = 0.05). Development of arachnoid foveae may be influenced by health factors, which could limit their usefulness in age estimation. PMID:16566755
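The mass-to-volume conversion at the heart of the method is a single division by the bulk density of the sand used. A minimal sketch; the density value and per-fovea masses are illustrative assumptions, not figures from the study:

```python
SAND_BULK_DENSITY_G_PER_ML = 1.52  # assumed bulk density of the dry sand used

def foveae_volume_ml(sand_mass_g, density=SAND_BULK_DENSITY_G_PER_ML):
    """Convert the mass of sand extracted from a fovea to a volume in mL."""
    return sand_mass_g / density

# Total volume for one skull, from hypothetical per-fovea sand masses (grams):
total_ml = sum(foveae_volume_ml(m) for m in [0.38, 0.12, 0.27])
```

In practice the density would be calibrated once by weighing a known volume of the same sand.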

  6. Accurate calculation of (31)P NMR chemical shifts in polyoxometalates.

    PubMed

    Pascual-Borràs, Magda; López, Xavier; Poblet, Josep M

    2015-04-14

We search for the best density functional theory strategy for the determination of (31)P nuclear magnetic resonance (NMR) chemical shifts, δ((31)P), in polyoxometalates. Among the variables governing the quality of the quantum modelling, we tackle herein the influence of the functional and the basis set. The spin-orbit and solvent effects were routinely included. To do so we analysed the family of structures α-[P2W18-xMxO62](n-) with M = Mo(VI), V(V) or Nb(V); [P2W17O62(M'R)](n-) with M' = Sn(IV), Ge(IV) and Ru(II) and [PW12-xMxO40](n-) with M = Pd(IV), Nb(V) and Ti(IV). The main results suggest that, to date, the best procedure for the accurate calculation of δ((31)P) in polyoxometalates is the combination of TZP/PBE//TZ2P/OPBE (for NMR//optimization step). The hybrid functionals (PBE0, B3LYP) tested herein for the NMR step, besides being more CPU-consuming, do not outperform pure GGA functionals. Although previous studies on (183)W NMR suggested that the use of very large basis sets like QZ4P were needed for geometry optimization, the present results indicate that TZ2P suffices if the functional is optimal. Moreover, scaling corrections were applied to the results, providing low mean absolute errors below 1 ppm for δ((31)P), a step forward towards confirming or predicting chemical shifts in polyoxometalates. Finally, via a simplified molecular model, we establish how the small variations in δ((31)P) arise from energy changes in the occupied and virtual orbitals of the PO4 group. PMID:25738630

  7. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
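The verification statistics named above, mean error (bias) for continuous data and hit rate / false alarm ratio from a 2x2 contingency table, can be sketched as follows; the sample forecast and observation values are hypothetical:

```python
def contingency_scores(forecast, observed, threshold):
    """Hit rate and false alarm ratio from a 2x2 contingency table.

    forecast, observed: paired precipitation amounts for the same days.
    An 'event' is precipitation >= threshold.
    """
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fe, oe = f >= threshold, o >= threshold
        if fe and oe:
            hits += 1
        elif not fe and oe:
            misses += 1
        elif fe and not oe:
            false_alarms += 1
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return hit_rate, far

def mean_error(forecast, observed):
    """Bias: average of forecast minus observed."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

fcst = [0.0, 5.0, 12.0, 0.0, 3.0]   # made-up daily values (mm)
obs  = [0.0, 8.0, 10.0, 2.0, 0.0]
hr, far = contingency_scores(fcst, obs, threshold=1.0)
```

Repeating the calculation at several thresholds gives the per-threshold predictability comparison the students discuss.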

  8. Anatomical Brain Images Alone Can Accurately Diagnose Chronic Neuropsychiatric Illnesses

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S.

    2012-01-01

    Objective Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. Methods We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. Results In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Conclusions Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these
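Leave-one-out cross-validation, one of the schemes used above to assess diagnostic accuracy, can be sketched with a deliberately simple 1-nearest-neighbour classifier standing in for the paper's semi-supervised algorithm; the feature vectors and labels below are made up:

```python
def leave_one_out_accuracy(features, labels):
    """Leave-one-out cross-validation of a 1-nearest-neighbour classifier.

    features: one numeric vector per subject (e.g. regional morphology measures).
    labels:   diagnosis for each subject. Each subject is classified by the
    closest *other* subject, so no subject helps classify itself.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    correct = 0
    for i, x in enumerate(features):
        j = min((k for k in range(len(features)) if k != i),
                key=lambda k: dist(x, features[k]))
        correct += labels[j] == labels[i]
    return correct / len(features)
```

Split-half validation works the same way in spirit: train the grouping on one half of the subjects and score it on the held-out half.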

  9. Sample normalization methods in quantitative metabolomics.

    PubMed

    Wu, Yiman; Li, Liang

    2016-01-22

To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. PMID:26763302
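One of the simplest normalization schemes of the kind such reviews discuss is total-sum (total-intensity) normalization, which removes between-sample differences in total amount before metabolites are compared. A minimal sketch; the sample names and intensities are hypothetical:

```python
def total_sum_normalize(samples):
    """Scale each sample so its metabolite intensities sum to 1.

    samples: dict of sample name -> dict of metabolite -> raw intensity.
    After scaling, individual metabolites can be compared across samples
    regardless of differences in total sample amount.
    """
    normalized = {}
    for name, intensities in samples.items():
        total = sum(intensities.values())
        normalized[name] = {m: v / total for m, v in intensities.items()}
    return normalized

# A sample and its 2x dilution normalize to identical profiles:
norm = total_sum_normalize({"a": {"x": 10.0, "y": 30.0},
                            "b": {"x": 5.0, "y": 15.0}})
```

More sophisticated approaches (e.g. normalization to a stable reference metabolite or to measured sample concentration) follow the same divide-by-a-scale-factor pattern.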

  10. NMR quantitation: influence of RF inhomogeneity

    PubMed Central

    Mo, Huaping; Harwood, John; Raftery, Daniel

    2016-01-01

The NMR peak integral is ideally linearly dependent on the sine of the excitation angle (θ), which has provided unsurpassed flexibility in quantitative NMR by allowing the use of a signal of any concentration as the internal concentration reference. Controlling the excitation angle is particularly critical for solvent proton concentration referencing to minimize the negative impact of radiation damping, and to reduce the risk of receiver gain compression. In practice, due to the influence of RF inhomogeneity for any given probe, the observed peak integral is not exactly proportional to sin θ. To evaluate the impact quantitatively, we introduce an RF inhomogeneity factor I(θ) as a function of the nominal pulse excitation angle and propose a simple calibration procedure. Alternatively, I(θ) can be calculated from the probe’s RF profile, which can be readily obtained as a gradient image of an aqueous sample. Our results show that without consideration of I(θ), even for a probe with good RF homogeneity, up to 5% error can be introduced due to different excitation pulse angles used for the analyte and the reference. Hence, a simple calibration of I(θ) can eliminate such errors and allow an accurate description of the observed NMR signal’s dependence on the excitation angle in quantitative analysis. PMID:21919056
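The correction implied by the factor I(θ) amounts to dividing the observed integral by I(θ)·sin θ, so integrals acquired at different excitation angles become comparable. A toy sketch with a hypothetical linear inhomogeneity model (not the paper's calibration):

```python
import math

def corrected_integral(observed_integral, theta_deg, inhomogeneity):
    """Remove the excitation-angle dependence from an observed peak integral.

    inhomogeneity: a callable I(theta_deg) calibrated for the probe; an ideal
    probe has I(theta) == 1 for all angles.
    """
    theta = math.radians(theta_deg)
    return observed_integral / (math.sin(theta) * inhomogeneity(theta_deg))

# Hypothetical calibration: integrals measured at 30 and 90 degrees on the
# same sample should agree once corrected.
I = lambda deg: 1.0 - 0.0005 * deg          # toy linear RF-inhomogeneity model
obs_90 = 1.0 * math.sin(math.radians(90)) * I(90)
obs_30 = 1.0 * math.sin(math.radians(30)) * I(30)
```

Without the I(θ) term, the 30° and 90° corrections would disagree by roughly the few-percent error the abstract describes.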

  11. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

Hyperspectral imaging is a non-destructive optical analysis technique that can, for instance, be used to obtain information from cultural heritage objects that is unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and to study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). Because a wavelength-tunable narrow-bandwidth light source is used, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms.
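Extracting a mean spectral reflectance curve from a user-defined region of interest reduces to a per-band average over the ROI pixels. A minimal sketch; the nested-list data layout and values are assumptions for illustration:

```python
def mean_roi_spectrum(cube, roi):
    """Mean spectral reflectance curve over a region of interest.

    cube: cube[row][col] is the list of calibrated reflectances, one per
          wavelength band.
    roi:  iterable of (row, col) pixel coordinates defining the region.
    """
    pixels = [cube[r][c] for r, c in roi]
    n_bands = len(pixels[0])
    return [sum(p[b] for p in pixels) / len(pixels) for b in range(n_bands)]

# Tiny 2x2 cube with 3 bands; the ROI picks the diagonal pixels:
cube = [[[0.1, 0.2, 0.3], [0.3, 0.4, 0.5]],
        [[0.5, 0.6, 0.7], [0.7, 0.8, 0.9]]]
spectrum = mean_roi_spectrum(cube, [(0, 0), (1, 1)])
```

Per-band standard deviations over the same ROI would give the accompanying statistical information.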

  12. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
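The far-field calculation described above is a one-line formula. A minimal sketch that assumes both enhancements are background-subtracted and expressed in the same mixing-ratio units, and that omits the molar-mass conversion between the two gases for simplicity (the numeric values are hypothetical):

```python
def methane_flux(ch4_enhancement, tracer_enhancement, tracer_release_rate):
    """Far-field tracer dilution estimate of the methane emission rate.

    ch4_enhancement, tracer_enhancement: downwind concentration rises above
    background, in the same units (e.g. ppb). tracer_release_rate: known
    release rate of the tracer; the result is in the same rate units.
    """
    return (ch4_enhancement / tracer_enhancement) * tracer_release_rate

# Hypothetical plume transect: methane enhanced 45 ppb, tracer 9 ppb,
# tracer released at 2.0 (arbitrary rate units):
flux = methane_flux(ch4_enhancement=45.0, tracer_enhancement=9.0,
                    tracer_release_rate=2.0)
```

In a real deployment the ratio is usually taken from the integrated plume areas of many downwind transects rather than a single pair of readings.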

  13. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance: an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicate that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
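Linear spectral unmixing, in its simplest two-endmember form, has a closed-form least-squares solution for the abundance of each material in a pixel. A sketch under that simplification (the endmember spectra below are made up, and real unmixing uses many bands and endmembers):

```python
def unmix_two_endmembers(spectrum, e1, e2):
    """Least-squares abundance of endmember e1 in the linear mixing model
    spectrum ≈ a*e1 + (1-a)*e2, clamped to the physical range [0, 1].
    """
    diff = [x - y for x, y in zip(e1, e2)]
    resid = [s - y for s, y in zip(spectrum, e2)]
    a = sum(d * r for d, r in zip(diff, resid)) / sum(d * d for d in diff)
    return min(1.0, max(0.0, a))

# Made-up 3-band reflectance endmembers and a 30/70 mixture of them:
e1 = [0.2, 0.6, 0.4]
e2 = [0.8, 0.2, 0.1]
s = [0.3 * x + 0.7 * y for x, y in zip(e1, e2)]
abundance = unmix_two_endmembers(s, e1, e2)
```

Because abundances depend on absolute reflectance levels, the quality of the atmospheric calibration feeds directly into the recovered values.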

  14. Semi-quantitative prediction of a multiple API solid dosage form with a combination of vibrational spectroscopy methods.

    PubMed

    Hertrampf, A; Sousa, R M; Menezes, J C; Herdling, T

    2016-05-30

Quality control (QC) in the pharmaceutical industry is a key activity in ensuring medicines have the required quality, safety and efficacy for their intended use. QC departments at pharmaceutical companies are responsible for all release testing of final products but also all incoming raw materials. Near-infrared spectroscopy (NIRS) and Raman spectroscopy are important techniques for fast and accurate identification and qualification of pharmaceutical samples. Tablets containing two different active pharmaceutical ingredients (API) [bisoprolol, hydrochlorothiazide] in different commercially available dosages were analysed using Raman and NIR spectroscopy. The goal was to define multivariate models based on each vibrational spectroscopy to discriminate between different dosages (identity) and predict their dosage (semi-quantitative). Furthermore, the combination of spectroscopic techniques was investigated. To this end, two different multiblock techniques based on PLS were applied: multiblock PLS (MB-PLS) and sequential-orthogonalised PLS (SO-PLS). NIRS showed better results than Raman spectroscopy for both identification and quantitation. The multiblock techniques investigated showed that each spectroscopy contains information not present or captured with the other spectroscopic technique, thus demonstrating that there is a potential benefit in their combined use for both identification and quantitation purposes. PMID:26970593

  15. Quantitative Ultrasound Characterization of Silicon Carbide Mirrors

    NASA Astrophysics Data System (ADS)

    Portune, A. R.; Haber, R. A.

    2010-02-01

    Silicon carbide mirrors were characterized using several qualitative and quantitative nondestructive ultrasound techniques in order to determine the most efficient method for rapid performance evaluations. Ultrasound testing was performed in immersion using both phased array and single transducer systems in pulse-echo configuration. C-scan images of top and bottom surface reflected signal peak amplitudes were used to qualitatively locate and identify homogeneity variations within the mirror materials. Quantitative analysis of normalized amplitude histograms revealed significant differences in homogeneity estimations between phased array and single transducer test methods. Acoustic spectroscopy over the 10-33 MHz regime identified bulk microstructural differences between high and low amplitude regions in the samples. While ultrasound phased array performed well at rapidly locating surface and subsurface heterogeneities, it could not match the resolution and clarity of single transducer C-scan images or the insight of acoustic spectroscopy analyses.
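The normalized amplitude histograms used above for the homogeneity comparison express each amplitude bin as a fraction of all pixels, so C-scans with different pixel counts can be compared directly. A minimal sketch; the bin range and sample amplitudes are hypothetical:

```python
def normalized_histogram(amplitudes, n_bins=10, lo=0.0, hi=1.0):
    """Normalized histogram of C-scan peak amplitudes.

    Returns per-bin fractions that sum to 1, making scans of different
    sizes directly comparable.
    """
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for a in amplitudes:
        idx = min(n_bins - 1, int((a - lo) / width))  # clamp top edge into last bin
        counts[idx] += 1
    return [c / len(amplitudes) for c in counts]

hist = normalized_histogram([0.05, 0.15, 0.15, 0.95])
```

Differences between two scans' histograms (e.g. a broadened or shifted amplitude distribution) then indicate homogeneity variations in the material.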

  16. Nonspectroscopic imaging for quantitative chlorophyll sensing

    NASA Astrophysics Data System (ADS)

    Kim, Taehoon; Kim, Jeong-Im; Visbal-Onufrak, Michelle A.; Chapple, Clint; Kim, Young L.

    2016-01-01

    Nondestructive imaging of physiological changes in plants has been intensively used as an invaluable tool for visualizing heterogeneous responses to various types of abiotic and biotic stress. However, conventional approaches often have intrinsic limitations for quantitative analyses, requiring bulky and expensive optical instruments for capturing full spectral information. We report a spectrometerless (or spectrometer-free) reflectance imaging method that allows for nondestructive and quantitative chlorophyll imaging in individual leaves in situ in a handheld device format. The combination of a handheld-type imaging system and a hyperspectral reconstruction algorithm from an RGB camera offers simple instrumentation and operation while avoiding the use of an imaging spectrograph or tunable color filter. This platform could potentially be integrated into a compact, inexpensive, and portable system, while being of great value in high-throughput phenotyping facilities and laboratory settings.

  17. Arctic Stratospheric Temperature In The Winters 1999/2000 and 2000/2001: A Quantitative Assessment and Microphysical Implications

    NASA Astrophysics Data System (ADS)

    Buss, S.; Wernli, H.; Peter, T.; Kivi, R.; Bui, T. P.; Kleinböhl, A.; Schiller, C.

Stratospheric winter temperatures play a key role in the chain of microphysical and chemical processes that lead to the formation of polar stratospheric clouds (PSCs), chlorine activation and eventually to stratospheric ozone depletion. Here the temperature conditions during the Arctic winters 1999/2000 and 2000/2001 are quantitatively investigated using observed profiles of water vapour and nitric acid, temperatures from high-resolution radiosondes and aircraft observations, global ECMWF and UKMO analyses, and mesoscale model simulations over Scandinavia and Greenland. The ECMWF model resolves parts of the gravity wave activity and generally agrees well with the observations. However, for the very cold temperatures near the ice frost point the ECMWF analyses have a warm bias of 1-6 K compared to radiosondes. For the mesoscale model HRM, this bias is generally reduced due to a more accurate representation of gravity waves. Quantitative estimates of the impact of the mesoscale temperature perturbations indicate that over Scandinavia and Greenland the wave-induced stratospheric cooling (as simulated by the HRM) only moderately affects the estimated chlorine activation and homogeneous NAT particle formation, but strongly enhances the potential for ice formation.

  18. Project analysis and integration economic analyses summary

    NASA Technical Reports Server (NTRS)

    Macomber, H. L.

    1986-01-01

An economic-analysis summary was presented for the manufacture of crystalline-silicon modules involving silicon ingot/sheet growth, slicing, cell manufacture, and module assembly. Economic analyses provided: useful quantitative aspects for complex decision-making to the Flat-plate Solar Array (FSA) Project; yardsticks for design and performance to industry; and demonstration of how to evaluate and understand the worth of research and development both to JPL and other government agencies and programs. It was concluded that future research and development funds for photovoltaics must be provided by the Federal Government because the solar industry today does not reap enough profits from its present-day sales of photovoltaic equipment.

  19. BWR core melt progression phenomena: Experimental analyses

    SciTech Connect

    Ott, L.J.

    1992-06-01

In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA exreactor fuel-damage test facility. Additional integral data will be obtained from a new CORA BWR test, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of exreactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. The Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than has previously been available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of posttest analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models with materials interaction, relocation and blockage models are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  1. A Proteomic Study of the HUPO Plasma Proteome Project's Pilot Samples using an Accurate Mass and Time Tag Strategy

    SciTech Connect

    Adkins, Joshua N.; Monroe, Matthew E.; Auberry, Kenneth J.; Shen, Yufeng; Jacobs, Jon M.; Camp, David G.; Vitzthum, Frank; Rodland, Karin D.; Zangar, Richard C.; Smith, Richard D.; Pounds, Joel G.

    2005-08-01

    Characterization of the human blood plasma proteome is critical to the discovery of routinely useful clinical biomarkers. We used an Accurate Mass and Time (AMT) tag strategy with high-resolution mass accuracy capillary liquid chromatography Fourier-Transform Ion Cyclotron Resonance Mass Spectrometry (cLC-FTICR MS) to perform a global proteomic analysis of pilot study samples as part of the HUPO Plasma Proteome Project. HUPO reference serum and citrated plasma samples from African Americans, Asian Americans, and Caucasian Americans were analyzed, in addition to a Pacific Northwest National Laboratory reference serum and plasma. The AMT tag strategy allowed us to leverage two previously published “shotgun” proteomics experiments to perform global analyses on these samples in triplicate in less than 4 days total analysis time. A total of 722 (22% with multiple peptide identifications) International Protein Index (IPI) redundant proteins, or 377 protein families by ProteinProphet, were identified over the 6 individual HUPO serum and plasma samples. The samples yielded a similar number of identified redundant proteins in the plasma samples (average 446 +/-23) as found in the serum samples (average 440+/-20). These proteins were identified by an average of 956+/-35 unique peptides in plasma and 930+/-11 unique peptides in serum. In addition to this high-throughput analysis, the AMT tag approach was used with a Z-score normalization to compare relative protein abundances. This analysis highlighted both known differences in serum and citrated plasma, such as fibrinogens, and reproducible differences in peptide abundances from proteins such as soluble activin receptor-like kinase 7b and glycoprotein m6b. The AMT tag strategy not only improved our sample throughput but also provided a basis for estimated quantitation.
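
    The Z-score comparison described above can be sketched as follows: each run's peptide abundances are standardized so that runs with different overall signal levels become comparable, and per-peptide differences in mean Z-score flag serum/plasma abundance changes. All values below are invented for illustration; this is not the paper's pipeline.

```python
import statistics as st

# Rows: peptides; columns: four runs (two serum, then two plasma).
# All abundances are invented for illustration.
runs = [
    [1200.0, 1150.0, 2100.0, 2250.0],  # a plasma-elevated peptide (fibrinogen-like)
    [800.0, 820.0, 790.0, 810.0],      # a peptide with stable abundance
    [300.0, 310.0, 305.0, 295.0],
]

def zscores(col):
    """Standardize one run (column) to mean 0, sample stdev 1."""
    mu, sd = st.mean(col), st.stdev(col)
    return [(v - mu) / sd for v in col]

ncols = len(runs[0])
zcols = [zscores([row[j] for row in runs]) for j in range(ncols)]

# Mean Z-score difference (plasma minus serum) per peptide highlights
# relative abundance differences between the two sample types.
delta = [(zcols[2][i] + zcols[3][i]) / 2 - (zcols[0][i] + zcols[1][i]) / 2
         for i in range(len(runs))]
```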

  2. Spectral analyses of solar-like stars

    NASA Astrophysics Data System (ADS)

    Doyle, Amanda P.

    2015-03-01

    Accurate stellar parameters are important not just to understand the stars themselves, but also for understanding the planets that orbit them. Despite the availability of high-quality spectra, there are still many uncertainties in stellar spectroscopy. In this thesis, the finer details of spectroscopic analyses are discussed and critically evaluated, with a focus on improving the stellar parameters. Using high-resolution, high signal-to-noise HARPS spectra, accurate parameters were determined for 22 WASP stars. It is shown that there is a limit to the accuracy of stellar parameters that can be achieved, despite using high-S/N spectra. It is also found that the selection of spectral lines used and the accuracy of atomic data are crucial, and different line lists can result in different parameter values. Different spectral analysis methods often give vastly different results, even for the same spectrum of the same star. Here it is shown that many of these discrepancies can be explained by the choice of lines used and by the various assumptions made. This will enable a more reliable homogeneous study of solar-like stars in the future. The Rossiter-McLaughlin effect observed for transiting exoplanets often requires prior knowledge of the projected rotational velocity (vsini). This is usually provided via spectroscopy; however, this method has uncertainties, as spectral lines are also broadened by photospheric velocity fields known as "macroturbulence". Using rotational splitting frequencies for 28 Kepler stars provided via asteroseismology, accurate vsini values have been determined. By inferring the macroturbulence for these stars, it was possible to obtain a new calibration between macroturbulence, effective temperature, and surface gravity. Therefore macroturbulence, and thus vsini, can now be determined with confidence for stars that do not have asteroseismic data available.
New spectroscopic vsini values were then determined for the WASP planet host
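
    The asteroseismic route to vsini sketched above combines a rotational splitting frequency with an independently known stellar radius and inclination. A minimal sketch, assuming rigid rotation and that the splitting equals the surface rotation frequency; the numbers are illustrative, not values from the thesis:

```python
import math

R_SUN_KM = 6.957e5  # solar radius in km

def vsini_from_splitting(split_uHz, radius_rsun, incl_deg):
    """Projected rotational velocity (km/s) from a rotational splitting
    frequency (microHz), stellar radius (solar radii) and inclination
    (degrees), assuming rigid rotation."""
    nu_rot = split_uHz * 1e-6                               # rotation frequency, Hz
    v_eq = 2.0 * math.pi * radius_rsun * R_SUN_KM * nu_rot  # equatorial velocity, km/s
    return v_eq * math.sin(math.radians(incl_deg))

# Illustrative star: 1.2 solar radii, 0.5 microHz splitting, 60-degree inclination
v = vsini_from_splitting(0.5, 1.2, 60.0)
```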

  3. Quantitative velocity modulation spectroscopy.

    PubMed

    Hodges, James N; McCall, Benjamin J

    2016-05-14

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined. PMID:27179476

  4. Quantitative velocity modulation spectroscopy

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; McCall, Benjamin J.

    2016-05-01

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined.
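
    As a toy illustration of how a discharge field strength and a Doppler measurement combine to give an ion mobility (K = drift velocity / field), consider the first-order Doppler shift of a mid-infrared transition. This is a simplified sketch with invented numbers, not the lineshape analysis performed in the paper:

```python
C_CM_S = 2.998e10  # speed of light, cm/s

def ion_mobility(doppler_shift_cm1, nu0_cm1, field_v_per_cm):
    """Mobility K = v_d / E (cm^2 V^-1 s^-1), with the drift velocity
    v_d inferred from the first-order Doppler shift of a line at nu0."""
    if nu0_cm1 <= 0 or field_v_per_cm <= 0:
        raise ValueError("line position and field must be positive")
    v_drift = C_CM_S * doppler_shift_cm1 / nu0_cm1  # cm/s
    return v_drift / field_v_per_cm

# e.g. a 0.002 cm^-1 shift on a line near 2726 cm^-1 in a 30 V/cm column
K = ion_mobility(0.002, 2726.0, 30.0)
```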

  5. Accuracy of finite element analyses of CT scans in predictions of vertebral failure patterns under axial compression and anterior flexion.

    PubMed

    Jackman, Timothy M; DelMonaco, Alex M; Morgan, Elise F

    2016-01-25

    Finite element (FE) models built from quantitative computed tomography (QCT) scans can provide patient-specific estimates of bone strength and fracture risk in the spine. While prior studies demonstrate accurate QCT-based FE predictions of vertebral stiffness and strength, the accuracy of the predicted failure patterns, i.e., the locations where failure occurs within the vertebra and the way in which the vertebra deforms as failure progresses, is less clear. This study used digital volume correlation (DVC) analyses of time-lapse micro-computed tomography (μCT) images acquired during mechanical testing (compression and anterior flexion) of thoracic spine segments (T7-T9, n=28) to measure displacements occurring throughout the T8 vertebral body at the ultimate point. These displacements were compared to those simulated by QCT-based FE analyses of T8. We hypothesized that the FE predictions would be more accurate when the boundary conditions are based on measurements of pressure distributions within intervertebral discs with a similar level of disc degeneration vs. boundary conditions representing rigid platens. The FE simulations captured some of the general, qualitative features of the failure patterns; however, displacement errors ranged from 12% to 279%. Contrary to our hypothesis, no differences in displacement errors were found when using boundary conditions representing measurements of disc pressure vs. rigid platens. The smallest displacement errors were obtained using boundary conditions that were measured directly by DVC at the T8 endplates. These findings indicate that further work is needed to develop methods of identifying physiological loading conditions for the vertebral body, for the purpose of achieving robust, patient-specific FE analyses of failure mechanisms. PMID:26792288
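
    A percent displacement error of the kind quoted above can be formed by comparing FE-predicted and DVC-measured displacement vectors at matched points. The metric and numbers below are a hypothetical sketch, not the study's actual error definition:

```python
import math

def pct_displacement_error(u_fe, u_dvc):
    """Mean magnitude of the FE-minus-measured displacement difference,
    as a percent of the mean measured displacement magnitude. One
    simple way to score agreement with DVC data; the study's exact
    error metric may differ."""
    diff = sum(math.dist(a, b) for a, b in zip(u_fe, u_dvc)) / len(u_dvc)
    ref = sum(math.hypot(*u) for u in u_dvc) / len(u_dvc)
    return 100.0 * diff / ref

# Synthetic axial displacements (mm) at four points; here the FE model
# underestimates every measured value by 20%.
u_dvc = [(0.0, 0.0, -0.30), (0.0, 0.0, -0.25),
         (0.0, 0.0, -0.35), (0.0, 0.0, -0.30)]
u_fe = [tuple(0.8 * c for c in u) for u in u_dvc]
err = pct_displacement_error(u_fe, u_dvc)  # 20% by construction
```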

  6. Validating carbonation parameters of alkaline solid wastes via integrated thermal analyses: Principles and applications.

    PubMed

    Pan, Shu-Yuan; Chang, E-E; Kim, Hyunook; Chen, Yi-Hung; Chiang, Pen-Chi

    2016-04-15

    Accelerated carbonation of alkaline solid wastes is an attractive method for CO2 capture and utilization. However, the evaluation criteria for CaCO3 content in solid wastes and the way thermal analysis profiles are interpreted were found to be quite different among the literature. In this investigation, an integrated thermal analysis for determining carbonation parameters in basic oxygen furnace slag (BOFS) was proposed based on thermogravimetric (TG), derivative thermogravimetric (DTG), and differential scanning calorimetry (DSC) analyses. A modified method of TG-DTG interpretation was proposed that considers the consecutive weight loss of the sample between 200 and 900°C, because the decomposition of various hydrated compounds caused variances in estimates made using conventional methods of TG interpretation. Different quantities of reference CaCO3 standards, carbonated BOFS samples, and synthetic CaCO3/BOFS mixtures were prepared for evaluating the data quality of the modified TG-DTG interpretation, in terms of precision and accuracy. The quantitative results of the modified TG-DTG method were also validated by DSC analysis. In addition, to confirm the TG-DTG results, evolved gas analysis was performed by mass spectrometry and Fourier transform infrared spectroscopy to detect the gaseous compounds released during heating. Furthermore, the decomposition kinetics and thermodynamics of CaCO3 in BOFS were evaluated using the Arrhenius and Kissinger equations. The proposed integrated thermal analysis for determining CaCO3 content in alkaline wastes was precise and accurate, thereby enabling effective assessment of the CO2 capture capacity of alkaline wastes for mineral carbonation. PMID:26785217
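
    The Kissinger analysis mentioned above extracts an activation energy from how the DTG peak temperature Tp shifts with heating rate beta: ln(beta/Tp^2) versus 1/Tp is linear with slope -Ea/R. A sketch with invented peak temperatures, not the paper's data:

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical DTG peak temperatures Tp (K) for CaCO3 decomposition at
# several heating rates beta (K/min); values invented for illustration.
beta = [5.0, 10.0, 20.0, 40.0]
Tp = [950.0, 978.0, 1008.0, 1040.0]

# Kissinger plot: ln(beta / Tp^2) versus 1/Tp, least-squares slope = -Ea/R.
xs = [1.0 / t for t in Tp]
ys = [math.log(b / t ** 2) for b, t in zip(beta, Tp)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
Ea_kJ = -slope * R_GAS / 1000.0  # activation energy, kJ/mol
```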

  7. Dissolved methane profiles in marine sediments observed in situ differ greatly from analyses of recovered cores

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Brewer, P. G.; Hester, K.; Ussler, W.; Walz, P. M.; Peltzer, E. T.; Ripmeester, J.

    2009-12-01

    The flux of dissolved methane through continental margin sediments is of importance in marine geochemistry due to its role in massive hydrate formation with enigmatic climate consequences, and for the huge and complex microbial assemblage it supports. Yet the actual dissolved methane concentration driving this flux is poorly known, since strong degassing during sample recovery from depth is commonplace. Thus, pore water analyses from high-CH4 environments typically show values clustered around the one-atmosphere equilibrium value of 1-2 mM, erasing the original pore water profile and frustrating model calculations. We show that accurate measurement of pore water profiles of dissolved CH4, SO4, and H2S can be made rapidly in situ using a Raman-based probe. While Raman spectra were formerly believed to yield only qualitative data, we show that quantitative data may be readily obtained by using a peak-area ratio technique relative to known H2O bands and a form of Beer’s Law. Results from Hydrate Ridge, Oregon clearly show coherent profiles of all three species in this high-flux environment, and while in situ Raman and conventional analyses of SO4 in recovered cores agree well, very large differences in CH4 are found. The in situ CH4 results show up to 35 mM in the upper 30 cm of seafloor sediments and are inversely correlated with SO4. This is below the methane hydrate saturation value, yet disturbing the sediments clearly released hydrate fragments, suggesting that true saturation values may exist only in the hydrate molecular boundary layer, and that lower values may typically characterize the bulk pore fluid of hydrate-hosting sediments. The in situ Raman measurement protocols developed take only a few minutes. Profiles obtained in situ showed minimal fluorescence, while pore water samples from recovered cores quickly developed strong fluorescence, making laboratory analyses using Raman spectroscopy challenging and raising questions over the reaction sequence responsible for
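
    The peak-area-ratio quantitation described above reduces to ratioing the CH4 band area against a known H2O band and applying one calibration constant fixed by standards of known concentration. The constant below is an invented placeholder, not a calibrated value:

```python
def methane_mM(area_ch4, area_h2o, k_cal=55.0):
    """Dissolved CH4 (mM) from Raman band areas: the CH4 band area is
    ratioed to a known H2O band, and a single calibration constant
    (here an invented placeholder; in practice fixed by laboratory
    standards of known CH4 content) converts the ratio to
    concentration, a Beer's-Law-like linear response."""
    if area_h2o <= 0:
        raise ValueError("reference H2O band area must be positive")
    return k_cal * (area_ch4 / area_h2o)

# An area ratio of 0.64 maps to 35.2 mM with this placeholder constant.
c = methane_mM(0.64, 1.0)
```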

  8. Quantitative approach of speleothems fluorescence

    NASA Astrophysics Data System (ADS)

    Quiers, Marine; Perrette, Yves; Poulenard, Jérôme; Chalmin, Emilie; Revol, Morgane

    2014-05-01

    In this study, we propose a framework to quantitatively interpret the fluorescence of speleothem organic matter (OM) by way of a bank of water-extracted organic matter. Due to its efficiency in describing dissolved organic matter (DOM) characteristics, fluorescence has been used to determine DOM signatures in natural systems, water circulations, OM transfer from soils, OM evolution in soils, and, recently, DOM changes in engineered treatment systems. Fluorescence has also been used in speleothem studies, mainly as a growth indicator. Only a few studies interpret it as an environmental proxy. Indeed, the fluorescence of OM provides information on the type of organic molecules trapped in speleothems and their evolution. But the most direct information given by fluorescence is the variation of OM quantity. Actually, an increase in fluorescence intensity is generally related to an increase in OM quantity, but may also be induced by calcite optical effects or qualitative changes of the OM. However, the analytical techniques used in water environments cannot be used for speleothem samples. In this study we propose a frame to interpret quantitatively the fluorescence signal of speleothems. Three different stalagmite samples from the French northern Prealps were used. To allow quantification of the fluorescence signal, we need to measure both the fluorescence and the quantity of organic matter on the same sample. The OM of the speleothems was extracted by an acid digestion method and analysed with a spectrofluorimeter. However, it was not possible to quantify the OM directly, as the extraction solvent was a highly concentrated acid. To solve this problem, a calibration using soil extracts was realised. Soils were chosen in order to represent the diversity of OM present in the environment above the caves. Attention was focused on soil and vegetation types, and land use. Organic material was water extracted from soils and its fluorescence was also measured.
Total organic carbon was performed on the

  9. Quantitative evolutionary design

    PubMed Central

    Diamond, Jared

    2002-01-01

    The field of quantitative evolutionary design uses evolutionary reasoning (in terms of natural selection and ultimate causation) to understand the magnitudes of biological reserve capacities, i.e. excesses of capacities over natural loads. Ratios of capacities to loads, defined as safety factors, fall in the range 1.2-10 for most engineered and biological components, even though engineered safety factors are specified intentionally by humans while biological safety factors arise through natural selection. Familiar examples of engineered safety factors include those of buildings, bridges and elevators (lifts), while biological examples include factors of bones and other structural elements, of enzymes and transporters, and of organ metabolic performances. Safety factors serve to minimize the overlap zone (resulting in performance failure) between the low tail of capacity distributions and the high tail of load distributions. Safety factors increase with coefficients of variation of load and capacity, with capacity deterioration with time, and with cost of failure, and decrease with costs of initial construction, maintenance, operation, and opportunity. Adaptive regulation of many biological systems involves capacity increases with increasing load; several quantitative examples suggest sublinear increases, such that safety factors decrease towards 1.0. Unsolved questions include safety factors of series systems, parallel or branched pathways, elements with multiple functions, enzyme reaction chains, and equilibrium enzymes. The modest sizes of safety factors imply the existence of costs that penalize excess capacities. Those costs are likely to involve wasted energy or space for large or expensive components, but opportunity costs of wasted space at the molecular level for minor components. PMID:12122135
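
    The overlap-zone argument above has a compact quantitative form: for independent, normally distributed load and capacity, the failure probability is P(capacity < load) = Phi(-(mu_C - mu_L) / sqrt(sd_L^2 + sd_C^2)). A sketch with invented means and spreads showing how shrinking the safety factor from 2.0 to 1.2 inflates the failure rate:

```python
import math

def failure_probability(mu_load, sd_load, mu_cap, sd_cap):
    """P(capacity < load) for independent normal load and capacity:
    the probability mass in the overlap zone discussed above."""
    z = (mu_cap - mu_load) / math.hypot(sd_load, sd_cap)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - Phi(z)

# Invented means and spreads: safety factor 2.0 versus 1.2 with
# comparable coefficients of variation of load and capacity.
p_sf20 = failure_probability(100.0, 15.0, 200.0, 30.0)
p_sf12 = failure_probability(100.0, 15.0, 120.0, 18.0)
```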

  10. Quantitative Radiological Diagnosis Of The Temporomandibular Joint

    NASA Astrophysics Data System (ADS)

    Jordan, Steven L.; Heffez, Leslie B.

    1989-05-01

    Recent impressive technological advances in imaging techniques for the human temporomandibular (tm) joint, and in enabling geometric algorithms, have outpaced diagnostic analyses. The authors present a basis for systematic quantitative diagnoses that exploit the imaging advancements. A reference line, coordinate system, and transformations are described that are appropriate for tomography of the tm joint. These yield radiographic measurements (disk displacement) and observations (beaking of radiopaque dye and disk shape) that refine diagnostic classifications of anterior displacement of the condylar disk. The relevance of these techniques has been clinically confirmed. Additional geometric invariants and procedures are proposed for future clinical verification.
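
    A reference-line coordinate system of the general kind described can be built by projecting a landmark onto axes aligned with the reference line; a displacement then splits into along-line and off-line components. This generic 2-D construction is a hypothetical sketch, not the authors' specific transformation:

```python
import math

def line_coordinates(ref_p, ref_q, point):
    """Coordinates of `point` in a frame whose origin is ref_p and whose
    x-axis runs along the reference line ref_p -> ref_q: returns the
    (along-line, off-line) components of the displacement."""
    px, py = ref_q[0] - ref_p[0], ref_q[1] - ref_p[1]
    length = math.hypot(px, py)
    ex = (px / length, py / length)  # unit vector along the reference line
    ey = (-ex[1], ex[0])             # unit normal (90-degree rotation)
    dx, dy = point[0] - ref_p[0], point[1] - ref_p[1]
    return dx * ex[0] + dy * ex[1], dx * ey[0] + dy * ey[1]

# Landmark 3 units along and 2 units off a horizontal reference line.
along, off = line_coordinates((0.0, 0.0), (10.0, 0.0), (3.0, 2.0))
```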

  11. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  12. Quantitative MRI techniques of cartilage composition

    PubMed Central

    Matzat, Stephen J.; van Tiel, Jasper; Gold, Garry E.

    2013-01-01

    Due to aging populations and increasing rates of obesity in the developed world, the prevalence of osteoarthritis (OA) is continually increasing. Decreasing the societal and patient burden of this disease motivates research in prevention, early detection of OA, and novel treatment strategies against OA. One key facet of this effort is the need to track the degradation of tissues within joints, especially cartilage. Currently, conventional imaging techniques provide accurate means to detect morphological deterioration of cartilage in the later stages of OA, but these methods are not sensitive to the subtle biochemical changes during early disease stages. Novel quantitative techniques with magnetic resonance imaging (MRI) provide direct and indirect assessments of cartilage composition, and thus allow for earlier detection and tracking of OA. This review describes the most prominent quantitative MRI techniques to date—dGEMRIC, T2 mapping, T1rho mapping, and sodium imaging. Other, less-validated methods for quantifying cartilage composition are also described—Ultrashort echo time (UTE), gagCEST, and diffusion-weighted imaging (DWI). For each technique, this article discusses the proposed biochemical correlates, as well its advantages and limitations for clinical and research use. The article concludes with a detailed discussion of how the field of quantitative MRI has progressed to provide information regarding two specific patient populations through clinical research—patients with anterior cruciate ligament rupture and patients with impingement in the hip. While quantitative imaging techniques continue to rapidly evolve, specific challenges for each technique as well as challenges to clinical applications remain. PMID:23833729
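
    T2 mapping, one of the techniques reviewed above, typically fits a mono-exponential decay S(TE) = S0 * exp(-TE/T2) to multi-echo signals at each voxel. A minimal sketch on noise-free synthetic data (log-linearize, then least-squares for the slope):

```python
import math

# Synthetic multi-echo signals S(TE) = S0 * exp(-TE / T2); echo times
# in ms, noise-free for clarity.
TE = [10.0, 20.0, 40.0, 60.0, 80.0]
T2_true, S0_true = 35.0, 1000.0
S = [S0_true * math.exp(-te / T2_true) for te in TE]

# Log-linearize (ln S = ln S0 - TE/T2) and least-squares fit the slope.
logS = [math.log(s) for s in S]
n = len(TE)
mx, my = sum(TE) / n, sum(logS) / n
slope = (sum((t - mx) * (y - my) for t, y in zip(TE, logS))
         / sum((t - mx) ** 2 for t in TE))
T2_fit = -1.0 / slope  # recovers the 35 ms input on clean data
```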

  13. Using GPS To Teach More Than Accurate Positions.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Guth, Peter L.

    2002-01-01

    Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope" assumptions, and…

  14. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  15. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  16. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  17. Quantitative Photoacoustic Image Reconstruction using Fluence Dependent Chromophores

    PubMed Central

    Cox, B.T.; Laufer, J.G.; Beard, P.C.

    2010-01-01

    In biomedical photoacoustic imaging the images are proportional to the absorbed optical energy density, and not the optical absorption, which makes it difficult to obtain a quantitatively accurate image showing the concentration of a particular absorbing chromophore from photoacoustic measurements alone. Here it is shown that the spatially varying concentration of a chromophore whose absorption becomes zero above a threshold light fluence can be estimated from photoacoustic images obtained at increasing illumination strengths. This technique provides an alternative to model-based multiwavelength approaches to quantitative photoacoustic imaging, and a new approach to photoacoustic molecular and functional imaging. PMID:21258458

  18. Electronic imaging systems for quantitative electrophoresis of DNA

    SciTech Connect

    Sutherland, J.C.

    1989-01-01

    Gel electrophoresis is one of the most powerful and widely used methods for the separation of DNA. During the last decade, instruments have been developed that accurately quantitate in digital form the distribution of materials in a gel or on a blot prepared from a gel. In this paper, I review the various physical properties that can be used to quantitate the distribution of DNA on gels or blots and the instrumentation that has been developed to perform these tasks. The emphasis here is on DNA, but much of what is said also applies to RNA, proteins and other molecules. 36 refs.
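
    The digital quantitation described above amounts to integrating background-corrected band intensities along a lane profile. A toy sketch with a synthetic two-band profile, a flat background estimated by the median, and all numbers invented:

```python
import math

# Synthetic lane profile: flat background of 50 counts plus two
# Gaussian bands (positions, widths, and amplitudes invented).
def lane(i):
    return (50.0
            + 400.0 * math.exp(-0.5 * ((i - 30) / 3.0) ** 2)
            + 200.0 * math.exp(-0.5 * ((i - 70) / 3.0) ** 2))

profile = [lane(i) for i in range(100)]

# Median as a crude flat-background estimate, then integrate each
# background-corrected band over a window around its peak.
background = sorted(profile)[len(profile) // 2]
net = [max(p - background, 0.0) for p in profile]
band1 = sum(net[20:41])
band2 = sum(net[60:81])
ratio = band1 / band2  # band 1 holds about twice the material
```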

  19. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics is defined to quantitatively describe SAXS data in an objective manner using statistical evaluations. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
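
    One statistical evaluation of the kind described, detecting radiation damage by testing whether successive exposures agree within counting statistics, can be sketched with a reduced chi-square between frames. The data are synthetic; this illustrates the idea, not the SAXStats implementation:

```python
import math
import random

def reduced_chi2(a, b, sigma):
    """Reduced chi-square between two exposures with per-point
    uncertainty sigma in each frame; values near 1 mean the frames
    agree within counting statistics, much larger values flag a
    systematic change such as radiation damage."""
    return sum((x - y) ** 2 / (2 * sigma ** 2) for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
q = [0.01 + 0.29 * j / 199 for j in range(200)]
ideal = [math.exp(-(30.0 * qi) ** 2 / 3.0) for qi in q]  # Guinier-like curve
sigma = 0.01

frame1 = [s + rng.gauss(0, sigma) for s in ideal]
frame2 = [s + rng.gauss(0, sigma) for s in ideal]  # healthy repeat exposure
# A low-q upturn mimicking radiation-induced aggregation:
frame3 = [s * (1 + 0.5 * math.exp(-(50.0 * qi) ** 2)) + rng.gauss(0, sigma)
          for s, qi in zip(ideal, q)]

chi2_ok = reduced_chi2(frame1, frame2, sigma)
chi2_bad = reduced_chi2(frame1, frame3, sigma)
```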

  20. Automated Selected Reaction Monitoring Software for Accurate Label-Free Protein Quantification

    PubMed Central

    2012-01-01

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5–19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology. PMID:22658081
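
    The technical variability quoted above is a percent coefficient of variation of integrated peak areas across replicate injections. A minimal sketch with invented areas:

```python
import statistics as st

# Integrated peak areas for one SRM transition over four technical
# replicate injections (arbitrary units, invented for illustration).
areas = [10500.0, 11200.0, 10800.0, 11000.0]
cv_pct = 100.0 * st.stdev(areas) / st.mean(areas)  # percent CV, here under 3%
```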