Science.gov

Sample records for accurate quantitative predictions

  1. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  2. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  4. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.
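
    As a rough illustration of the preprocessing steps named above (dark-field subtraction followed by color-crosstalk compensation), the sketch below applies both corrections to a small matrix of spot intensities; the intensity values and the 2x2 crosstalk matrix are invented for illustration and are not taken from the study.

      import numpy as np

      # Illustrative spot intensities for two channels (red, green); values are assumptions.
      raw = np.array([[1200.0, 300.0],
                      [ 950.0, 820.0]])          # rows = spots, cols = (red, green)
      dark_field = np.array([50.0, 40.0])        # per-channel dark-field offset (assumed)

      # Assumed crosstalk matrix: measured = C @ true, with small channel bleed-through.
      C = np.array([[1.00, 0.05],
                    [0.08, 1.00]])

      corrected = raw - dark_field                   # dark-field subtraction
      true_signal = corrected @ np.linalg.inv(C).T   # undo the crosstalk mixing

      ratios = true_signal[:, 0] / true_signal[:, 1] # red/green ratio per spot
      print(ratios)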

  5. Quantitative proteomic analysis by accurate mass retention time pairs.

    PubMed

    Silva, Jeffrey C; Denny, Richard; Dorschel, Craig A; Gorenstein, Marc; Kass, Ignatius J; Li, Guo-Zhong; McKenna, Therese; Nold, Michael J; Richardson, Keith; Young, Phillip; Geromanos, Scott

    2005-04-01

    Current methodologies for protein quantitation include 2-dimensional gel electrophoresis techniques, metabolic labeling, and stable isotope labeling methods to name only a few. The current literature illustrates both pros and cons for each of the previously mentioned methodologies. Keeping with the teachings of William of Ockham, "with all things being equal the simplest solution tends to be correct", a simple LC/MS based methodology is presented that allows relative changes in abundance of proteins in highly complex mixtures to be determined. Utilizing a reproducible chromatographic separations system along with the high mass resolution and mass accuracy of an orthogonal time-of-flight mass spectrometer, the quantitative comparison of tens of thousands of ions emanating from identically prepared control and experimental samples can be made. Using this configuration, we can determine the change in relative abundance of a small number of ions between the two conditions solely by accurate mass and retention time. Employing standard operating procedures for both sample preparation and ESI-mass spectrometry, one typically obtains under 5 ppm mass precision and quantitative variations between 10 and 15%. The principal focus of this paper will demonstrate the quantitative aspects of the methodology and continue with a discussion of the associated, complementary qualitative capabilities.
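
    The core idea of quantitation by accurate mass and retention time pairs can be illustrated with a small matching routine: ions from two runs are paired when their m/z values agree within a ppm tolerance and their retention times agree within a time window, and the intensity ratio of each pair estimates the relative abundance change. The tolerances and ion lists below are illustrative assumptions, not the authors' software.

      def match_amrt(control, experiment, ppm_tol=5.0, rt_tol=0.5):
          """Pair ions between two LC/MS runs by accurate mass and retention time.

          control, experiment: lists of (mz, rt, intensity) tuples.
          ppm_tol: mass tolerance in parts per million (the paper reports <5 ppm precision).
          rt_tol: retention-time tolerance in minutes (assumed value for illustration).
          """
          pairs = []
          for mz_c, rt_c, int_c in control:
              for mz_e, rt_e, int_e in experiment:
                  if abs(mz_e - mz_c) / mz_c * 1e6 <= ppm_tol and abs(rt_e - rt_c) <= rt_tol:
                      pairs.append((mz_c, rt_c, int_e / int_c))  # relative abundance change
          return pairs

      # Toy example with assumed ion lists.
      control = [(523.2871, 32.1, 1.0e5), (687.3412, 45.6, 2.2e5)]
      experiment = [(523.2874, 32.2, 1.6e5), (687.3410, 45.7, 2.1e5)]
      print(match_amrt(control, experiment))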

  6. A gene expression biomarker accurately predicts estrogen ...

    EPA Pesticide Factsheets

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c

  7. You Can Accurately Predict Land Acquisition Costs.

    ERIC Educational Resources Information Center

    Garrigan, Richard

    1967-01-01

    Land acquisition costs were tested for predictability based upon the 1962 assessed valuations of privately held land acquired for campus expansion by the University of Wisconsin from 1963-1965. By correlating the land acquisition costs of 108 properties acquired during the 3 year period with--(1) the assessed value of the land, (2) the assessed…

  8. Towards more accurate vegetation mortality predictions

    DOE PAGES

    Sevanto, Sanna Annika; Xu, Chonggang

    2016-09-26

    Predicting the fate of vegetation under changing climate is one of the major challenges of the climate modeling community. Terrestrial vegetation dominates the carbon and water cycles over land areas, and dramatic changes in vegetation cover resulting from stressful environmental conditions such as drought feed directly back to local and regional climate, potentially leading to a vicious cycle in which vegetation recovery after a disturbance is delayed or impossible.

  9. A predictable and accurate technique with elastomeric impression materials.

    PubMed

    Barghi, N; Ontiveros, J C

    1999-08-01

    A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.

  10. Accurate torque-speed performance prediction for brushless dc motors

    NASA Astrophysics Data System (ADS)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Applying the BLDCM effectively, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional- to integral-HP motor sizes. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
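
    The paper's nonlinear simulation is not reproduced in the abstract; the sketch below uses only the idealized steady-state relations V = I*R + ke*omega and T = kt*I to show what a torque-speed, supply-current and efficiency prediction looks like. All motor parameters are assumed values for illustration.

      def bldc_operating_point(V, omega, kt=0.05, ke=0.05, R=0.5):
          """Idealized steady-state BLDC torque-speed relation (not the paper's
          nonlinear simulation): V = I*R + ke*omega and T = kt*I.

          V: supply voltage [V], omega: shaft speed [rad/s],
          kt: torque constant [N*m/A], ke: back-EMF constant [V*s/rad], R: resistance [ohm].
          All parameter values are illustrative assumptions.
          """
          current = (V - ke * omega) / R        # average supply current
          torque = kt * current                 # electromagnetic torque
          p_out = torque * omega
          p_in = V * current
          efficiency = p_out / p_in if p_in > 0 else 0.0
          return torque, current, efficiency

      for omega in (0.0, 100.0, 200.0, 300.0):
          print(omega, bldc_operating_point(24.0, omega))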

  11. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results

  12. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out the 3-D measurements by superimposing geometries onto both the ejecta and sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved through a modification of including height dependence of the drag coefficient into the model. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitation and implications of our accurate prediction method will be discussed.
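
    The drag-based model referred to above can be written as dv/dt = -gamma(r)*(v - w)*|v - w|, where w is the ambient solar-wind speed; the abstract states that gamma is made height-dependent but does not give the functional form. The sketch below integrates this equation out to 1 AU with an assumed power-law gamma(r), so the numbers are illustrative only.

      AU_KM = 1.496e8          # 1 astronomical unit in km
      RSUN_KM = 6.96e5         # solar radius in km

      def dbm_arrival(r0_km, v0_kms, w_kms=400.0, gamma0=0.2e-7, dt=60.0):
          """Integrate a drag-based CME propagation model out to 1 AU.

          dv/dt = -gamma(r) * (v - w) * |v - w|, where w is the solar wind speed.
          The height dependence gamma(r) = gamma0 * (r0 / r) is an assumed illustrative
          form; the paper's actual modification is not specified in the abstract.
          """
          r, v, t = r0_km, v0_kms, 0.0
          while r < AU_KM:
              gamma = gamma0 * (r0_km / r)                 # drag parameter [1/km]
              v += -gamma * (v - w_kms) * abs(v - w_kms) * dt
              r += v * dt
              t += dt
          return t / 3600.0, v                             # transit time [h], speed at 1 AU [km/s]

      hours, v_1au = dbm_arrival(r0_km=20 * RSUN_KM, v0_kms=1200.0)
      print(f"predicted arrival after {hours:.1f} h at {v_1au:.0f} km/s")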

  13. Passive samplers accurately predict PAH levels in resident crayfish.

    PubMed

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
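
    A minimal sketch of the kind of regression described above: tissue PAH concentrations are regressed on freely-dissolved water concentrations on a log scale, and predictions are evaluated as a factor-of-measured error (the paper reports predictions within a factor of about 2.4). The data points below are invented for illustration.

      import numpy as np

      # Illustrative paired measurements (ng/L in water vs ng/g in viscera); assumed values.
      c_free = np.array([0.5, 1.2, 3.4, 8.9, 20.0])
      c_crayfish = np.array([4.0, 7.5, 30.0, 60.0, 180.0])

      # Fit a log-linear model: log10(tissue) = a * log10(water) + b
      a, b = np.polyfit(np.log10(c_free), np.log10(c_crayfish), 1)
      pred = 10 ** (a * np.log10(c_free) + b)

      # Factor-of-prediction error, analogous to the paper's "within a factor of 2.4".
      factor = np.maximum(pred / c_crayfish, c_crayfish / pred)
      print(a, b, factor.mean())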

  14. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    NASA Astrophysics Data System (ADS)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.

  15. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
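
    A toy proportional-odds (ordinal regression) predictor in the spirit described above, using fragment length and the count of basic residues as features; the weights and cut-points are invented for illustration and are not Basophile's fitted parameters.

      import math

      def predict_fragment_charge(length, n_basic, precursor_charge):
          """Toy proportional-odds model for fragment charge state.

          Features: fragment length and count of basic residues (R, K, H).
          Coefficients and thresholds are illustrative assumptions, not the
          published Basophile parameters.
          """
          score = 0.05 * length + 1.2 * n_basic          # linear predictor (assumed weights)
          thresholds = [2.0, 5.0]                        # cut-points between charge 1|2 and 2|3 (assumed)
          # Cumulative logits: P(charge <= k) = sigmoid(threshold_k - score)
          p_le = [1.0 / (1.0 + math.exp(-(t - score))) for t in thresholds]
          probs = [p_le[0], p_le[1] - p_le[0], 1.0 - p_le[1]]
          charge = probs.index(max(probs)) + 1
          return min(charge, precursor_charge)           # fragment charge cannot exceed precursor

      print(predict_fragment_charge(length=12, n_basic=2, precursor_charge=3))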

  16. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  17. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    PubMed Central

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel C.; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-01-01

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification. PMID:23499924

  18. Universality and predictability in molecular quantitative genetics.

    PubMed

    Nourmohammad, Armita; Held, Torsten; Lässig, Michael

    2013-12-01

    Molecular traits, such as gene expression levels or protein binding affinities, are increasingly accessible to quantitative measurement by modern high-throughput techniques. Such traits measure molecular functions and, from an evolutionary point of view, are important as targets of natural selection. We review recent developments in evolutionary theory and experiments that are expected to become building blocks of a quantitative genetics of molecular traits. We focus on universal evolutionary characteristics: these are largely independent of a trait's genetic basis, which is often at least partially unknown. We show that universal measurements can be used to infer selection on a quantitative trait, which determines its evolutionary mode of conservation or adaptation. Furthermore, universality is closely linked to predictability of trait evolution across lineages. We argue that universal trait statistics extends over a range of cellular scales and opens new avenues of quantitative evolutionary systems biology.

  19. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  20. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients.

  1. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

    Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation and low Reynolds number correction were considered, and the influence of these corrections on flow properties was assessed by comparison with results obtained without them. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.

  2. Fast and accurate predictions of covalent bonds in chemical space

    NASA Astrophysics Data System (ADS)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (~1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
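
    A compact way to state the first-order estimate described above (the notation below is ours, assuming a linear alchemical interpolation of the Hamiltonian between an initial molecule A and a target molecule B at fixed geometry R; the derivative follows from the Hellmann-Feynman theorem):

      \begin{align*}
      H(\lambda) &= (1-\lambda)\,H_A + \lambda\,H_B, \qquad 0 \le \lambda \le 1,\\
      E_B(R) &\approx E_A(R) + \left.\frac{\partial E(\lambda)}{\partial \lambda}\right|_{\lambda=0}
              = E_A(R) + \langle \Psi_A \,|\, H_B - H_A \,|\, \Psi_A \rangle .
      \end{align*}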

  3. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-07

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  4. IRIS: Towards an Accurate and Fast Stage Weight Prediction Method

    NASA Astrophysics Data System (ADS)

    Taponier, V.; Balu, A.

    2002-01-01

    The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need heightened by the rapid evolution of space programs and by the necessity of adapting them to market demands. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of the preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later on from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these estimates can be consolidated through a specific analysis activity involving several techniques, at the cost of additional effort and time. This empirical approach therefore yields only approximate values (i.e. not necessarily accurate or consistent), introducing inaccuracy into the results and, consequently, making it difficult to rank performance across multiple options and lengthening the processing time. This is a long-standing weakness of preliminary design system studies that has received little discussion to date. It therefore appears highly desirable to have, for all the evaluation activities, a reliable, fast and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow for a pre-selection of the alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator

  5. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads

    PubMed Central

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-01-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of the RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and ability to handle indels to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping and high sensitivity and low ambiguity are achieved by using short and non-overlapping reads. Furthermore, FANSe uses hotspot score to prioritize the processing of highly possible matches and implements modified Smith-Waterman refinement with reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs which is important for quantitative processing of RNA-Seq datasets. PMID:22379138
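
    A minimal seed-and-extend sketch in the spirit of the strategy described above: the read is split into short non-overlapping seeds, each seed hit nominates a candidate position, and the whole read is then verified against the reference with a mismatch allowance. This is illustrative only; it omits FANSe's hotspot scoring, indel handling and reduced-matrix Smith-Waterman refinement.

      def seed_and_extend(read, genome, seed_len=8, max_mismatch=2):
          """Map a read using non-overlapping seeds, then verify the whole read
          at each candidate position with a mismatch allowance (illustrative only)."""
          hits = set()
          for s in range(0, len(read) - seed_len + 1, seed_len):   # non-overlapping seeds
              seed = read[s:s + seed_len]
              start = genome.find(seed)
              while start != -1:
                  cand = start - s                                  # putative read start in genome
                  if 0 <= cand <= len(genome) - len(read):
                      mismatches = sum(a != b for a, b in zip(read, genome[cand:cand + len(read)]))
                      if mismatches <= max_mismatch:
                          hits.add(cand)
                  start = genome.find(seed, start + 1)
          return sorted(hits)

      genome = "ACGTACGTTTGCAACGTAGCTAGCTTACGGA"
      print(seed_and_extend("ACGTAGCTAGCTTACG", genome))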

  6. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads.

    PubMed

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-06-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of the RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and ability to handle indels to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping and high sensitivity and low ambiguity are achieved by using short and non-overlapping reads. Furthermore, FANSe uses hotspot score to prioritize the processing of highly possible matches and implements modified Smith-Waterman refinement with reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs which is important for quantitative processing of RNA-Seq datasets.

  7. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis

    PubMed Central

    Smith, William L.; Chadwick, Sean G.; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E.; Aguin, Tina J.; Sobel, Jack D.

    2016-01-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
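
    A sketch of the modelling step described above: a logistic regression is fit on log-transformed qPCR quantities of the informative organisms and evaluated by sensitivity, specificity, PPV and NPV. The data here are simulated, so the numbers will not match the study's 92%/95%/94%/94%.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 200
      # Synthetic log10 copy numbers for the four informative organisms named in the study
      # (G. vaginalis, A. vaginae, Megasphaera types 1 and 2); simulated, not real data.
      X_neg = rng.normal(loc=[3, 2, 1, 1], scale=1.0, size=(n // 2, 4))
      X_pos = rng.normal(loc=[7, 6, 4, 3], scale=1.0, size=(n // 2, 4))
      X = np.vstack([X_neg, X_pos])
      y = np.array([0] * (n // 2) + [1] * (n // 2))       # 1 = BV by Amsel/Nugent

      model = LogisticRegression().fit(X, y)
      pred = model.predict(X)

      tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
      fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp),
            "PPV", tp / (tp + fp), "NPV", tn / (tn + fn))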

  8. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    PubMed Central

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-01-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229

  9. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions.

    PubMed

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-07-07

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.

  10. Rapid and Highly Accurate Prediction of Poor Loop Diuretic Natriuretic Response in Patients With Heart Failure

    PubMed Central

    Testani, Jeffrey M.; Hanberg, Jennifer S.; Cheng, Susan; Rao, Veena; Onyebeke, Chukwuma; Laur, Olga; Kula, Alexander; Chen, Michael; Wilson, F. Perry; Darlington, Andrew; Bellumkonda, Lavanya; Jacoby, Daniel; Tang, W. H. Wilson; Parikh, Chirag R.

    2015-01-01

    Background Removal of excess sodium and fluid is a primary therapeutic objective in acute decompensated heart failure (ADHF) and commonly monitored with fluid balance and weight loss. However, these parameters are frequently inaccurate or not collected and require a delay of several hours after diuretic administration before they are available. Accessible tools for rapid and accurate prediction of diuretic response are needed. Methods and Results Based on well-established renal physiologic principles an equation was derived to predict net sodium output using a spot urine sample obtained one or two hours following loop diuretic administration. This equation was then prospectively validated in 50 ADHF patients using meticulously obtained timed 6-hour urine collections to quantitate loop diuretic induced cumulative sodium output. Poor natriuretic response was defined as a cumulative sodium output of <50 mmol, a threshold that would result in a positive sodium balance with twice-daily diuretic dosing. Following a median dose of 3 mg (2–4 mg) of intravenous bumetanide, 40% of the population had a poor natriuretic response. The correlation between measured and predicted sodium output was excellent (r=0.91, p<0.0001). Poor natriuretic response could be accurately predicted with the sodium prediction equation (AUC=0.95, 95% CI 0.89–1.0, p<0.0001). Clinically recorded net fluid output had a weaker correlation (r=0.66, p<0.001) and lesser ability to predict poor natriuretic response (AUC=0.76, 95% CI 0.63–0.89, p=0.002). Conclusions In patients being treated for ADHF, poor natriuretic response can be predicted soon after diuretic administration with excellent accuracy using a spot urine sample. PMID:26721915
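
    The abstract does not reproduce the authors' equation; the sketch below shows one generic physiology-based construction of the same kind, scaling the spot urine sodium-to-creatinine ratio by an assumed creatinine excretion rate (roughly 20 mg/kg/day is a common rule of thumb) to estimate cumulative sodium output over the collection window.

      def predicted_sodium_output(urine_na_mmol_l, urine_cr_mg_dl, weight_kg,
                                  cr_excretion_mg_per_kg_day=20.0, hours=6.0):
          """Estimate cumulative sodium output over a collection window from a spot urine sample.

          A generic physiology-based sketch, not the authors' published equation
          (which the abstract does not reproduce): the spot sodium-to-creatinine
          ratio is scaled by an assumed creatinine excretion rate.
          """
          cr_excreted_mg = cr_excretion_mg_per_kg_day * weight_kg * hours / 24.0
          na_per_mg_cr = urine_na_mmol_l / (urine_cr_mg_dl * 10.0)   # mmol Na per mg creatinine
          return na_per_mg_cr * cr_excreted_mg                       # mmol Na over the window

      # A value below ~50 mmol would flag a poor natriuretic response per the paper's threshold.
      print(predicted_sodium_output(urine_na_mmol_l=75.0, urine_cr_mg_dl=60.0, weight_kg=85))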

  11. Prediction of Preoperative Anxiety in Children: Who is Most Accurate?

    PubMed Central

    MacLaren, Jill E.; Thompson, Caitlin; Weinberg, Megan; Fortier, Michelle A.; Morrison, Debra E.; Perret, Danielle; Kain, Zeev N.

    2009-01-01

    Background In this investigation, we sought to assess the ability of pediatric attending anesthesiologists, resident anesthesiologists and mothers to predict anxiety during induction of anesthesia in 2 to 16-year-old children (n=125). Methods Anesthesiologists and mothers provided predictions using a visual analog scale, and children's anxiety was assessed using a valid behavior observation tool, the Modified Yale Preoperative Anxiety Scale (mYPAS). All mothers were present during anesthetic induction and no child received sedative premedication. Correlational analyses were conducted. Results A total of 125 children aged 2 to 16 years, their mothers, and their attending pediatric anesthesiologists and resident anesthesiologists were studied. Correlational analyses revealed significant associations between attending predictions and child anxiety at induction (rs = 0.38, p<0.001). Resident anesthesiologist and mother predictions were not significantly related to children's anxiety during induction (rs = 0.01 and 0.001, respectively). In terms of accuracy of prediction, 47.2% of predictions made by attending anesthesiologists were within one standard deviation of the observed anxiety exhibited by the child, and 70.4% of predictions were within 2 standard deviations. Conclusions We conclude that attending anesthesiologists who practice in pediatric settings are better than mothers in predicting the anxiety of children during induction of anesthesia. While this finding has significant clinical implications, it is unclear if it can be extended to attending anesthesiologists whose practice is not mostly pediatric anesthesia. PMID:19448201

  12. Is Three-Dimensional Soft Tissue Prediction by Software Accurate?

    PubMed

    Nam, Ki-Uk; Hong, Jongrak

    2015-11-01

    The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients, who had undergone orthognathic surgery, were obtained and analyzed using the Simplant Pro software. The program generated a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data and an actual soft tissue image (B) was generated. Finally, the 2 images (A and B) were superimposed and the differences between them were analyzed. Results were grouped into 2 classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In vector values, prediction of the right-left side had a left-sided tendency, the superior-inferior had a superior tendency, and the anterior-posterior showed an anterior tendency. As a result, with this program, predicted landmark positions tended to lie more to the left, more anterior, and more superior than the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in terms of anterior-posterior predictions.

  13. Fast and accurate automatic structure prediction with HHpred.

    PubMed

    Hildebrand, Andrea; Remmert, Michael; Biegert, Andreas; Söding, Johannes

    2009-01-01

    Automated protein structure prediction is becoming a mainstream tool for biological research. This has been fueled by steady improvements of publicly available automated servers over the last decade, in particular their ability to build good homology models for an increasing number of targets by reliably detecting and aligning more and more remotely homologous templates. Here, we describe the three fully automated versions of the HHpred server that participated in the community-wide blind protein structure prediction competition CASP8. What makes HHpred unique is the combination of usability, short response times (typically under 15 min) and a model accuracy that is competitive with those of the best servers in CASP8.

  14. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions were not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration.

  15. Towards Accurate Ab Initio Predictions of the Spectrum of Methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born- Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  16. Accurate Theoretical Prediction of the Properties of Energetic Materials

    DTIC Science & Technology

    2007-11-02

    calculations (e.g. Cheetah). 8. Sensitivity. The structure prediction and lattice potential work will serve as a platform to examine impact/shock...nitromethane molecules. (In an extension of the present work, we will freeze the internal coordinates of the molecules and assess the extent to which the

  17. Learning regulatory programs that accurately predict differential expression with MEDUSA.

    PubMed

    Kundaje, Anshul; Lianoglou, Steve; Li, Xuejing; Quigley, David; Arias, Marta; Wiggins, Chris H; Zhang, Li; Leslie, Christina

    2007-12-01

    Inferring gene regulatory networks from high-throughput genomic data is one of the central problems in computational biology. In this paper, we describe a predictive modeling approach for studying regulatory networks, based on a machine learning algorithm called MEDUSA. MEDUSA integrates promoter sequence, mRNA expression, and transcription factor occupancy data to learn gene regulatory programs that predict the differential expression of target genes. Instead of using clustering or correlation of expression profiles to infer regulatory relationships, MEDUSA determines condition-specific regulators and discovers regulatory motifs that mediate the regulation of target genes. In this way, MEDUSA meaningfully models biological mechanisms of transcriptional regulation. MEDUSA solves the problem of predicting the differential (up/down) expression of target genes by using boosting, a technique from statistical learning, which helps to avoid overfitting as the algorithm searches through the high-dimensional space of potential regulators and sequence motifs. Experimental results demonstrate that MEDUSA achieves high prediction accuracy on held-out experiments (test data), that is, data not seen in training. We also present context-specific analysis of MEDUSA regulatory programs for DNA damage and hypoxia, demonstrating that MEDUSA identifies key regulators and motifs in these processes. A central challenge in the field is the difficulty of validating reverse-engineered networks in the absence of a gold standard. Our approach of learning regulatory programs provides at least a partial solution for the problem: MEDUSA's prediction accuracy on held-out data gives a concrete and statistically sound way to validate how well the algorithm performs. With MEDUSA, statistical validation becomes a prerequisite for hypothesis generation and network building rather than a secondary consideration.
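
    MEDUSA itself learns motif-based regulatory programs with boosting; the sketch below uses a generic gradient-boosting classifier on synthetic promoter/regulator features as a stand-in, mainly to show the held-out (test-set) validation that the abstract emphasizes. All feature and label values are simulated.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n_genes, n_features = 500, 40
      # Synthetic features: promoter motif counts plus regulator expression/occupancy signals.
      X = rng.normal(size=(n_genes, n_features))
      # Synthetic label: up (1) vs down (0) differential expression driven by a few features.
      y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + rng.normal(scale=0.5, size=n_genes) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = GradientBoostingClassifier().fit(X_tr, y_tr)      # boosting stand-in for MEDUSA
      print("held-out accuracy:", clf.score(X_te, y_te))      # validation on data not seen in training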

  18. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    PubMed Central

    Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias

    2016-01-01

    Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516

  19. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction.

    PubMed

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H

    2017-01-09

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.
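
    The downstream modeling step, predicting expression from gene-level TF affinity scores, can be sketched with a regularized linear model; the affinity matrix below is random placeholder data rather than TEPIC output, and the Pearson-correlation assessment is merely illustrative.

      # Sketch: regularized linear model of log expression on per-gene TF affinities.
      # The affinity matrix is a random placeholder, not TEPIC output.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.model_selection import train_test_split
      from scipy.stats import pearsonr

      rng = np.random.default_rng(1)
      n_genes, n_tfs = 1000, 50
      affinity = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_tfs))  # TEPIC-like scores
      true_w = np.zeros(n_tfs)
      true_w[:5] = [1.5, -0.8, 0.6, 1.0, -1.2]          # toy ground-truth TF effects
      log_expr = affinity @ true_w + rng.normal(0, 0.5, n_genes)

      X_tr, X_te, y_tr, y_te = train_test_split(affinity, log_expr,
                                                test_size=0.2, random_state=1)
      model = LassoCV(cv=5).fit(X_tr, y_tr)
      r, _ = pearsonr(y_te, model.predict(X_te))
      print("test-set Pearson r:", round(r, 3))
      print("non-zero TF coefficients:", int(np.sum(model.coef_ != 0)))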

  20. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction

    PubMed Central

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K.; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G.; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H.

    2017-01-01

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. PMID:27899623

  1. How Accurately Can We Predict Eclipses for Algol? (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2016-06-01

    (Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?

  2. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    NASA Astrophysics Data System (ADS)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-01

    Accurate quantitation of intracellular pH (pHi) is of great importance for revealing cellular activities and providing early warning of disease. A series of fluorescence-based nano-bioprobes composed of different nanoparticles and/or dye pairs have already been developed for pHi sensing. To date, the biological autofluorescence background under UV-Vis excitation and the severe photobleaching of dyes have been the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonance energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi-responsive and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may introduce relatively large uncertainty in the results. Owing to efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, with a response of 3.56 per unit change in pHi over the range 3.0–7.0 and a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
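
    A minimal sketch of the self-ratiometric readout, with invented calibration numbers: the 475 nm / 645 nm intensity ratio is fit against buffer pH and the fit is inverted to estimate an unknown intracellular pH; a real probe may require a nonlinear (e.g. sigmoidal) calibration.

      # Illustrative ratiometric calibration (numbers invented, not from the paper):
      # fit the 475 nm / 645 nm emission ratio against buffer pH, then invert the fit.
      import numpy as np

      pH_buffers = np.array([3.0, 4.0, 5.0, 6.0, 7.0])          # calibration buffers
      ratio_475_645 = np.array([0.95, 0.80, 0.62, 0.45, 0.28])  # hypothetical readings

      # Simple linear calibration ratio = a*pH + b; a real probe may need a sigmoid.
      a, b = np.polyfit(pH_buffers, ratio_475_645, deg=1)

      def estimate_pH(measured_ratio):
          """Invert the linear calibration to recover pH from a measured ratio."""
          return (measured_ratio - b) / a

      print("slope per pH unit:", round(a, 3))
      print("estimated pHi for a ratio of 0.50:", round(estimate_pH(0.50), 2))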

  3. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    PubMed Central

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-01-01

    Accurate quantitation of intracellular pH (pHi) is of great importance for revealing cellular activities and providing early warning of disease. A series of fluorescence-based nano-bioprobes composed of different nanoparticles and/or dye pairs have already been developed for pHi sensing. To date, the biological autofluorescence background under UV-Vis excitation and the severe photobleaching of dyes have been the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonance energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi-responsive and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may introduce relatively large uncertainty in the results. Owing to efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, with a response of 3.56 per unit change in pHi over the range 3.0–7.0 and a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems. PMID:27934889

  4. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  5. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded their use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement in analytical performance. However, as quantification relies on ion trapping rather than an ion beam, further refinement of the technique can be expected.

  6. Predictive rendering for accurate material perception: modeling and rendering fabrics

    NASA Astrophysics Data System (ADS)

    Bala, Kavita

    2012-03-01

    In computer graphics, rendering algorithms are used to simulate the appearance of objects and materials in a wide range of applications. Designers and manufacturers rely entirely on these rendered images to previsualize scenes and products before manufacturing them. They need to differentiate between different types of fabrics, paint finishes, plastics, and metals, often with subtle differences, for example, between silk and nylon, or Formica and wood. Thus, these applications need predictive algorithms that can produce high-fidelity images that enable such subtle material discrimination.

  7. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our ongoing, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  8. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV.
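
    The modeling step can be sketched as a logistic regression on log-transformed qPCR quantities of the informative organisms, with sensitivity, specificity, PPV, and NPV computed on held-out specimens; the quantities and labels below are simulated placeholders, not the study data.

      # Sketch: logistic regression on log10 qPCR quantities of G. vaginalis,
      # A. vaginae and Megasphaera phylotypes 1/2; data are simulated placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(2)
      n = 385
      X = rng.normal(loc=[3.0, 2.5, 2.0, 2.0], scale=1.5, size=(n, 4))  # log10 copies
      bv = (X.sum(axis=1) + rng.normal(0, 1.5, n)) > 10                 # toy "true" status

      X_tr, X_te, y_tr, y_te = train_test_split(X, bv, test_size=0.3, random_state=2)
      clf = LogisticRegression().fit(X_tr, y_tr)

      tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
      print("sensitivity:", round(tp / (tp + fn), 2))
      print("specificity:", round(tn / (tn + fp), 2))
      print("PPV:", round(tp / (tp + fp), 2), "NPV:", round(tn / (tn + fn), 2))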

  9. Objective criteria accurately predict amputation following lower extremity trauma.

    PubMed

    Johansen, K; Daines, M; Howey, T; Helfet, D; Hansen, S T

    1990-05-01

    MESS (Mangled Extremity Severity Score) is a simple rating scale for lower extremity trauma, based on skeletal/soft-tissue damage, limb ischemia, shock, and age. Retrospective analysis of severe lower extremity injuries in 25 trauma victims demonstrated a significant difference between MESS values for 17 limbs ultimately salvaged (mean, 4.88 +/- 0.27) and nine requiring amputation (mean, 9.11 +/- 0.51) (p < 0.01). A prospective trial of MESS in lower extremity injuries managed at two trauma centers again demonstrated a significant difference between MESS values of 14 salvaged (mean, 4.00 +/- 0.28) and 12 doomed (mean, 8.83 +/- 0.53) limbs (p < 0.01). In both the retrospective survey and the prospective trial, a MESS value ≥ 7 predicted amputation with 100% accuracy. MESS may be useful in selecting trauma victims whose irretrievably injured lower extremities warrant primary amputation.

  10. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    DTIC Science & Technology

    2011-09-30

    … incorporate extremely fast but accurate light calculations into coupled physical-biological-optical ocean ecosystem models as used for operational three-dimensional ecosystem predictions. Improvements in light calculations lead to improvements in predictions of chlorophyll concentrations and other …

  11. Generating highly accurate prediction hypotheses through collaborative ensemble learning

    PubMed Central

    Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco

    2017-01-01

    Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination. PMID:28304378
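
    A plain bagged-boosting baseline, without the collaboration between constituent learners that the paper introduces, can be assembled from off-the-shelf components; the sketch below bags AdaBoost ensembles on a synthetic dataset purely for illustration.

      # Plain bagged-boosting baseline (NOT the collaborative scheme of the paper):
      # each bag trains its own AdaBoost ensemble and predictions are aggregated.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=1000, n_features=20, random_state=3)

      boosted = AdaBoostClassifier(n_estimators=50, random_state=3)
      bagged_boosting = BaggingClassifier(boosted, n_estimators=10, random_state=3)

      scores = cross_val_score(bagged_boosting, X, y, cv=5)
      print("mean CV accuracy:", round(scores.mean(), 3))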

  12. Accurate predictions for the production of vaporized water

    SciTech Connect

    Morin, E.; Montel, F.

    1995-12-31

    The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of water between the hydrocarbon phases and the aqueous phase in the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.

  13. Generating highly accurate prediction hypotheses through collaborative ensemble learning

    NASA Astrophysics Data System (ADS)

    Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco

    2017-03-01

    Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.

  14. Change in BMI Accurately Predicted by Social Exposure to Acquaintances

    PubMed Central

    Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends. PMID
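
    The select-then-fit workflow, LASSO for variable selection followed by an ordinary least-squares model summarized by R² and AIC, can be sketched as follows; the feature names and data are hypothetical stand-ins, not the study's sensor or survey variables.

      # Sketch of the select-then-fit workflow: LASSO picks predictors of BMI change,
      # then an OLS model on the selected variables is summarized by R^2 and AIC.
      # Feature names and data are hypothetical, not the study's dataset.
      import numpy as np
      import statsmodels.api as sm
      from sklearn.linear_model import LassoCV
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(4)
      n = 42                                   # cohort size reported in the study
      features = ["acquaintance_exposure", "close_friend_exposure", "gender",
                  "healthy_diet", "stress_management", "baseline_bmi"]
      X = rng.normal(size=(n, len(features)))
      dBMI = 0.9 * X[:, 0] + 0.4 * X[:, 3] + rng.normal(0, 0.5, n)   # toy outcome

      X_std = StandardScaler().fit_transform(X)
      lasso = LassoCV(cv=5).fit(X_std, dBMI)
      selected = [f for f, c in zip(features, lasso.coef_) if c != 0]
      print("selected predictors:", selected)

      cols = [features.index(f) for f in selected]
      ols = sm.OLS(dBMI, sm.add_constant(X_std[:, cols])).fit()
      print("R^2:", round(ols.rsquared, 2), " AIC:", round(ols.aic, 1))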

  15. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  16. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  17. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool that does not require the same reference standard as the analyte. Measurement parameters have been discussed in detail and high-resolution balances have been used for sample preparation. However, the high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools and many analysts may find those balances difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample was weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.

  18. Quantitatively predictable control of Drosophila transcriptional enhancers in vivo with engineered transcription factors.

    PubMed

    Crocker, Justin; Ilsley, Garth R; Stern, David L

    2016-03-01

    Genes are regulated by transcription factors that bind to regions of genomic DNA called enhancers. Considerable effort is focused on identifying transcription factor binding sites, with the goal of predicting gene expression from DNA sequence. Despite this effort, general, predictive models of enhancer function are currently lacking. Here we combine quantitative models of enhancer function with manipulations using engineered transcription factors to examine the extent to which enhancer function can be controlled in a quantitatively predictable manner. Our models, which incorporate few free parameters, can accurately predict the contributions of ectopic transcription factor inputs. These models allow the predictable 'tuning' of enhancers, providing a framework for the quantitative control of enhancers with engineered transcription factors.

  19. Quantitative prediction of stresses during thermoset cure

    SciTech Connect

    Adolf, D.; Chambers, B.; Burchett, S.

    1996-07-01

    Two thin-walled Al tubes were filled with epoxy, which was cured isothermally; one tube was instrumented with strain gauges and the other with thermocouples. Finite element codes were used. Predicted and measured centerline hoop strains are shown; predictions and measurements agree. The approach is being applied to encapsulated components.

  20. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
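
    The frequency-domain complex least-squares idea can be sketched by fitting a single complex Lorentzian line to a synthetic spectrum; a real qNMR fit would additionally handle many overlapping lines, phases, baseline, and acquisition-delay parameters as described above.

      # Minimal frequency-domain least-squares fit of one complex Lorentzian line.
      import numpy as np
      from scipy.optimize import least_squares

      f = np.linspace(-50.0, 50.0, 2001)                 # frequency axis (Hz)

      def lorentzian(freq, amp, f0, lw, phase):
          """Complex Lorentzian: amplitude, centre frequency, linewidth, phase."""
          return amp * np.exp(1j * phase) / (lw + 1j * (freq - f0))

      true_params = (10.0, 5.0, 2.0, 0.3)
      rng = np.random.default_rng(5)
      spectrum = (lorentzian(f, *true_params)
                  + rng.normal(0, 0.05, f.size) + 1j * rng.normal(0, 0.05, f.size))

      def residuals(p):
          diff = spectrum - lorentzian(f, *p)
          return np.concatenate([diff.real, diff.imag])  # fit real and imaginary parts

      fit = least_squares(residuals, x0=[5.0, 0.0, 1.0, 0.0])
      print("fitted (amp, f0, lw, phase):", np.round(fit.x, 3))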

  1. SILAC-Based Quantitative Strategies for Accurate Histone Posttranslational Modification Profiling Across Multiple Biological Samples.

    PubMed

    Cuomo, Alessandro; Soldi, Monica; Bonaldi, Tiziana

    2017-01-01

    Histone posttranslational modifications (hPTMs) play a key role in regulating chromatin dynamics and fine-tuning DNA-based processes. Mass spectrometry (MS) has emerged as a versatile technology for the analysis of histones, contributing to the dissection of hPTMs, with special strength in the identification of novel marks and in the assessment of modification cross talk. Stable isotope labeling by amino acids in cell culture (SILAC), when adapted to histones, permits the accurate quantification of PTM changes among distinct functional states; however, its application has been mainly confined to actively dividing cell lines. A spike-in strategy based on SILAC can be used to overcome this limitation and profile hPTMs across multiple samples. We describe here the adaptation of SILAC to the analysis of histones, in both standard and spike-in setups. We also illustrate its coupling to an implemented "shotgun" workflow, by which heavy arginine-labeled histone peptides, produced upon Arg-C digestion, are qualitatively and quantitatively analyzed in an LC-MS/MS system that combines ultrahigh-pressure liquid chromatography (UHPLC) with a new-generation high-resolution Orbitrap instrument.

  2. Accurate detection and quantitation of heteroplasmic mitochondrial point mutations by pyrosequencing.

    PubMed

    White, Helen E; Durston, Victoria J; Seller, Anneke; Fratter, Carl; Harvey, John F; Cross, Nicholas C P

    2005-01-01

    Disease-causing mutations in mitochondrial DNA (mtDNA) are typically heteroplasmic and therefore interpretation of genetic tests for mitochondrial disorders can be problematic. Detection of low level heteroplasmy is technically demanding and it is often difficult to discriminate between the absence of a mutation or the failure of a technique to detect the mutation in a particular tissue. The reliable measurement of heteroplasmy in different tissues may help identify individuals who are at risk of developing specific complications and allow improved prognostic advice for patients and family members. We have evaluated Pyrosequencing technology for the detection and estimation of heteroplasmy for six mitochondrial point mutations associated with the following diseases: Leber's hereditary optic neuropathy (LHON), G3460A, G11778A, and T14484C; mitochondrial encephalopathy with lactic acidosis and stroke-like episodes (MELAS), A3243G; myoclonus epilepsy with ragged red fibers (MERRF), A8344G; and neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP)/Leigh syndrome, T8993G/C. Results obtained from the Pyrosequencing assays for 50 patients with presumptive mitochondrial disease were compared to those obtained using the commonly used diagnostic technique of polymerase chain reaction (PCR) and restriction enzyme digestion. The Pyrosequencing assays provided accurate genotyping and quantitative determination of mutational load with a sensitivity and specificity of 100%. The MELAS A3243G mutation was detected reliably at a level of 1% heteroplasmy. We conclude that Pyrosequencing is a rapid and robust method for detecting heteroplasmic mitochondrial point mutations.

  3. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications [1,2,3]. For example, to analyze exoplanets, atmospheric models have been developed, creating a need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data can be interpreted only through reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  4. Possibility of quantitative prediction of cavitation erosion without model test

    SciTech Connect

    Kato, Hiroharu; Konno, Akihisa; Maeda, Masatsugu; Yamaguchi, Hajime

    1996-09-01

    A scenario for quantitative prediction of cavitation erosion was proposed. The key value is the impact force/pressure spectrum on a solid surface caused by cavitation bubble collapse. As the first step of prediction, the authors constructed the scenario from an estimation of the cavity generation rate to the prediction of impact force spectrum, including the estimations of collapsing cavity number and impact pressure. The prediction was compared with measurements of impact force spectra on a partially cavitating hydrofoil. A good quantitative agreement was obtained between the prediction and the experiment. However, the present method predicted a larger effect of main flow velocity than that observed. The present scenario is promising as a method of predicting erosion without using a model test.

  5. Alignment of Lyapunov Vectors: A Quantitative Criterion to Predict Catastrophes?

    PubMed Central

    Beims, Marcus W.; Gallas, Jason A. C.

    2016-01-01

    We argue that the alignment of Lyapunov vectors provides a quantitative criterion to predict catastrophes, i.e. the imminence of large-amplitude events in chaotic time-series of observables generated by sets of ordinary differential equations. Explicit predictions are reported for a Rössler oscillator and for a semiconductor laser with optoelectronic feedback. PMID:27845435

  6. Alignment of Lyapunov Vectors: A Quantitative Criterion to Predict Catastrophes?

    NASA Astrophysics Data System (ADS)

    Beims, Marcus W.; Gallas, Jason A. C.

    2016-11-01

    We argue that the alignment of Lyapunov vectors provides a quantitative criterion to predict catastrophes, i.e. the imminence of large-amplitude events in chaotic time-series of observables generated by sets of ordinary differential equations. Explicit predictions are reported for a Rössler oscillator and for a semiconductor laser with optoelectronic feedback.

  7. Accurate Prediction of One-Dimensional Protein Structure Features Using SPINE-X.

    PubMed

    Faraggi, Eshel; Kloczkowski, Andrzej

    2017-01-01

    Accurate prediction of protein secondary structure and other one-dimensional structure features is essential for accurate sequence alignment, three-dimensional structure modeling, and function prediction. SPINE-X is a software package to predict secondary structure as well as accessible surface area and dihedral angles ϕ and ψ. For secondary structure, SPINE-X achieves an accuracy of between 81% and 84% depending on the dataset and choice of tests. The Pearson correlation coefficient for accessible surface area prediction is 0.75, and the mean absolute errors for the ϕ and ψ dihedral angles are 20° and 33°, respectively. The source code and Linux executables for SPINE-X are available from Research and Information Systems at http://mamiris.com.

  8. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation.

    PubMed

    Vincent, Mark A; Hillier, Ian H

    2014-08-25

    The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.

  9. Accurately predicting copper interconnect topographies in foundry design for manufacturability flows

    NASA Astrophysics Data System (ADS)

    Lu, Daniel; Fan, Zhong; Tak, Ki Duk; Chang, Li-Fu; Zou, Elain; Jiang, Jenny; Yang, Josh; Zhuang, Linda; Chen, Kuang Han; Hurat, Philippe; Ding, Hua

    2011-04-01

    This paper presents a model-based Chemical Mechanical Polishing (CMP) Design for Manufacturability (DFM) methodology that includes accurate prediction of post-CMP copper interconnect topographies at advanced process technology nodes. Through extensive model calibration and validation, the CMP process model accurately predicts post-CMP dimensions, such as erosion, dishing, and copper thickness, with excellent correlation to silicon measurements. This methodology provides an efficient DFM flow to detect and fix physical manufacturing hotspots related to copper pooling and Depth of Focus (DOF) failures at both block- and full-chip-level designs. Moreover, the predicted thickness output is used in CMP-aware RC extraction and timing analysis flows for a better understanding of performance yield and timing impact. In addition, the CMP model can be applied to the verification of model-based dummy fill flows.

  10. Cas9-chromatin binding information enables more accurate CRISPR off-target prediction

    PubMed Central

    Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar

    2015-01-01

    The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remains a significant goal. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that solely use DNA sequence information, CROP-IT integrates whole-genome-level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding as well as cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets that enables improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770

  11. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases experiencing symptomatic crises.

  12. An effective method for accurate prediction of the first hyperpolarizability of alkalides.

    PubMed

    Wang, Jia-Nan; Xu, Hong-Liang; Sun, Shi-Ling; Gao, Ting; Li, Hong-Zhi; Li, Hui; Su, Zhong-Min

    2012-01-15

    The choice of theoretical calculation method for nonlinear optical (NLO) properties is a key factor in designing excellent NLO materials, yet it remains difficult to obtain accurate NLO properties of large-scale molecules. In the present work, an effective intelligent computing method, called the extreme learning machine neural network (ELM-NN), is proposed to accurately predict the first hyperpolarizability (β(0)) of alkalides from low-accuracy first hyperpolarizability values. Compared with a neural network (NN) and a genetic algorithm neural network (GANN), the root-mean-square deviations of the values predicted by ELM-NN, GANN, and NN from their MP2 counterparts are 0.02, 0.08, and 0.17 a.u., respectively. This suggests that the values predicted by ELM-NN are more accurate than those calculated by the NN and GANN methods. Another strength of ELM-NN is its ability to reach high-accuracy values at a lower computational cost: the computing time of MP2 is 2.4-4 times that of ELM-NN. Thus, the proposed method is a potentially powerful tool in computational chemistry, and it may predict β(0) of large-scale molecules that are difficult to treat with high-accuracy theoretical methods because of the dramatically increasing computational cost.
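
    The extreme learning machine component can be sketched generically as a single hidden layer with fixed random weights whose output weights are solved in closed form by least squares; this is a minimal illustration of ELM, not the paper's ELM-NN pipeline for first hyperpolarizabilities, and the data are synthetic.

      # Generic extreme learning machine (ELM) regressor: fixed random hidden layer,
      # output weights solved in closed form by least squares. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(6)

      def elm_fit(X, y, n_hidden=100):
          W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
          b = rng.normal(size=n_hidden)                  # random biases (fixed)
          H = np.tanh(X @ W + b)                         # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Toy regression data standing in for low- vs high-accuracy molecular properties.
      X = rng.normal(size=(200, 5))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

      W, b, beta = elm_fit(X[:150], y[:150])
      pred = elm_predict(X[150:], W, b, beta)
      print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[150:]) ** 2))), 3))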

  13. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathology tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  14. Hash: a Program to Accurately Predict Protein Hα Shifts from Neighboring Backbone Shifts

    PubMed Central

    Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall

    2012-01-01

    Chemical shifts provide not only peak identities for analyzing NMR data, but also an important source of conformational information for studying protein structures. Current structural studies requiring Hα chemical shifts suffer from the following limitations. (1) For large proteins, the Hα chemical shifts can be difficult to assign using conventional NMR triple-resonance experiments, mainly due to the fast transverse relaxation rate of Cα that restricts the signal sensitivity. (2) Previous chemical shift prediction approaches either require homologous models with high sequence similarity or rely heavily on accurate backbone and side-chain structural coordinates. When neither sequence homologues nor structural coordinates are available, we must resort to other information to predict Hα chemical shifts. Predicting accurate Hα chemical shifts using other obtainable information, such as the chemical shifts of nearby backbone atoms (i.e., adjacent atoms in the sequence), can remedy the above dilemmas and hence advance NMR-based structural studies of proteins. By specifically exploiting the dependencies on chemical shifts of nearby backbone atoms, we propose a novel machine learning algorithm, called Hash, to predict Hα chemical shifts. Hash combines a new fragment-based chemical shift search approach with a non-parametric regression model, called the generalized additive model, to effectively solve the prediction problem. We demonstrate that the chemical shifts of nearby backbone atoms provide a reliable source of information for predicting accurate Hα chemical shifts. Our testing results on different possible combinations of input data indicate that Hash has a wide range of potential NMR applications in structural and biological studies of proteins. PMID:23242797
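
    The additive-model idea can be approximated with per-feature spline bases feeding a linear model, a rough stand-in for the generalized additive model used by Hash; the predictors and target below are simulated, whereas the real inputs would be chemical shifts of nearby backbone atoms.

      # Approximate an additive model with per-feature spline bases plus a linear
      # model (a rough stand-in for a generalized additive model). Simulated data.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import SplineTransformer
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(9)
      n = 500
      X = rng.normal(size=(n, 6))        # stand-ins for shifts of nearby backbone atoms
      y = (4.3 + 0.1 * np.tanh(X[:, 0]) - 0.05 * X[:, 1] ** 2 + 0.08 * X[:, 2]
           + rng.normal(0, 0.02, n))     # toy nonlinear additive target

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=9)
      gam_like = make_pipeline(SplineTransformer(n_knots=6, degree=3), Ridge(alpha=1.0))
      gam_like.fit(X_tr, y_tr)

      err = np.abs(gam_like.predict(X_te) - y_te)
      print("mean absolute error (toy data):", round(float(err.mean()), 4))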

  15. Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter

    PubMed Central

    Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.

    2016-01-01

    Summary Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887

  16. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
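
    The surrogate-modeling step can be sketched as a polynomial regression trained on a design-of-experiments table of CFD results, mapping process parameters to a stratified temperature; the parameter ranges and response below are placeholders, not the paper's CFD data.

      # Sketch: polynomial-regression surrogate trained on a design-of-experiments
      # table of CFD results. Parameter ranges and response are placeholders.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n_runs = 60
      # DoE inputs: initial steel temperature (K), refractory heat content (arb. units),
      # slag thickness (m), holding time (min).
      X = np.column_stack([
          rng.uniform(1830, 1900, n_runs),
          rng.uniform(0.5, 1.5, n_runs),
          rng.uniform(0.05, 0.20, n_runs),
          rng.uniform(10, 90, n_runs),
      ])
      # Toy stand-in for the CFD-computed temperature at a reference height.
      T_cfd = (X[:, 0] + 5 * X[:, 1] - 40 * X[:, 2] - 0.25 * X[:, 3]
               + rng.normal(0, 1.0, n_runs))

      surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate.fit(X[:48], T_cfd[:48])

      pred = surrogate.predict(X[48:])
      within_5K = np.mean(np.abs(pred - T_cfd[48:]) <= 5.0)
      print("fraction of held-out runs within +/-5 K:", round(float(within_5K), 2))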

  17. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2016-12-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.

  18. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on the distribution and productivity of forest trees, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to predict tree bud break and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperatures results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay

  19. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations

    NASA Astrophysics Data System (ADS)

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-01

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  20. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  1. Bicluster Sampled Coherence Metric (BSCM) provides an accurate environmental context for phenotype predictions

    PubMed Central

    2015-01-01

    Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition and cluster size specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257

  2. Quantitative Prediction of Individual Psychopathology in Trauma Survivors Using Resting-State fMRI

    PubMed Central

    Gong, Qiyong; Li, Lingjiang; Du, Mingying; Pettersson-Yeo, William; Crossley, Nicolas; Yang, Xun; Li, Jing; Huang, Xiaoqi; Mechelli, Andrea

    2014-01-01

    Neuroimaging techniques hold the promise that they may one day aid the clinical assessment of individual psychiatric patients. However, the vast majority of studies published so far have been based on average differences between groups. This study employed a multivariate approach to examine the potential of resting-state functional magnetic resonance imaging (MRI) data for making accurate predictions about psychopathology in survivors of the 2008 Sichuan earthquake at an individual level. Resting-state functional MRI data was acquired for 121 survivors of the 2008 Sichuan earthquake each of whom was assessed for symptoms of post-traumatic stress disorder (PTSD) using the 17-item PTSD Checklist (PCL). Using a multivariate analytical method known as relevance vector regression (RVR), we examined the relationship between resting-state functional MRI data and symptom scores. We found that the use of RVR allowed quantitative prediction of clinical scores with statistically significant accuracy (correlation=0.32, P=0.006; mean squared error=176.88, P=0.001). Accurate prediction was based on functional activation in a number of prefrontal, parietal, and occipital regions. This is the first evidence that neuroimaging techniques may inform the clinical assessment of trauma-exposed individuals by providing an accurate and objective quantitative estimation of psychopathology. Furthermore, the significant contribution of parietal and occipital regions to such estimation challenges the traditional view of PTSD as a disorder specific to the fronto-limbic network. PMID:24064470
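
    The sketch below illustrates the prediction set-up described in the abstract: a sparse Bayesian regression mapping resting-state features to continuous PCL symptom scores, evaluated by the correlation between observed and cross-validated predictions. Relevance vector regression is not available in scikit-learn, so ARDRegression is used here as a loosely related stand-in, and the feature matrix is a random placeholder rather than real fMRI data.

```python
# Illustrative multivariate prediction of symptom scores from imaging features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(121, 200))          # 121 survivors x resting-state features (placeholder)
y = 30 + X[:, :5] @ rng.normal(size=5) + rng.normal(0, 5, 121)   # PCL-like scores

pred = cross_val_predict(ARDRegression(), X, y, cv=5)
r, p = pearsonr(y, pred)
print("correlation = %.2f (p = %.3g), MSE = %.1f" % (r, p, np.mean((y - pred) ** 2)))
```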

  3. Quantitative prediction of individual psychopathology in trauma survivors using resting-state FMRI.

    PubMed

    Gong, Qiyong; Li, Lingjiang; Du, Mingying; Pettersson-Yeo, William; Crossley, Nicolas; Yang, Xun; Li, Jing; Huang, Xiaoqi; Mechelli, Andrea

    2014-02-01

    Neuroimaging techniques hold the promise that they may one day aid the clinical assessment of individual psychiatric patients. However, the vast majority of studies published so far have been based on average differences between groups. This study employed a multivariate approach to examine the potential of resting-state functional magnetic resonance imaging (MRI) data for making accurate predictions about psychopathology in survivors of the 2008 Sichuan earthquake at an individual level. Resting-state functional MRI data was acquired for 121 survivors of the 2008 Sichuan earthquake each of whom was assessed for symptoms of post-traumatic stress disorder (PTSD) using the 17-item PTSD Checklist (PCL). Using a multivariate analytical method known as relevance vector regression (RVR), we examined the relationship between resting-state functional MRI data and symptom scores. We found that the use of RVR allowed quantitative prediction of clinical scores with statistically significant accuracy (correlation=0.32, P=0.006; mean squared error=176.88, P=0.001). Accurate prediction was based on functional activation in a number of prefrontal, parietal, and occipital regions. This is the first evidence that neuroimaging techniques may inform the clinical assessment of trauma-exposed individuals by providing an accurate and objective quantitative estimation of psychopathology. Furthermore, the significant contribution of parietal and occipital regions to such estimation challenges the traditional view of PTSD as a disorder specific to the fronto-limbic network.

  4. Highly Accurate Structure-Based Prediction of HIV-1 Coreceptor Usage Suggests Intermolecular Interactions Driving Tropism.

    PubMed

    Kieslich, Chris A; Tamamis, Phanourios; Guzman, Yannis A; Onel, Melis; Floudas, Christodoulos A

    2016-01-01

    HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/.
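
    As a hedged illustration of the classification set-up (not the authors' trained CRUSH model), the snippet below evaluates a nonlinear support vector machine on interaction-energy features with 10-fold cross-validated ROC AUC. The feature values are random placeholders for the computed centroid-centroid energies.

```python
# Sketch of SVM-based coreceptor-usage classification on interaction-energy features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(2455, 40))     # placeholder V3-loop:coreceptor interaction energies
y = rng.integers(0, 2, size=2455)   # 0 = CCR5-using (R5), 1 = CXCR4-using (X4)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print("10-fold ROC AUC: %.3f" % auc.mean())
```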

  5. Accurate similarity index based on activity and connectivity of node for link prediction

    NASA Astrophysics Data System (ADS)

    Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun

    2015-05-01

    Recent years have witnessed an increase in available network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve a better predicting result. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
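
    The toy score below follows the spirit of the abstract, combining an endpoint "activity" term with a "connectivity" term over the common neighborhood; it is not the exact SAC formula, which is defined in the paper, and the weighting used here is purely illustrative.

```python
# SAC-like similarity sketch for link prediction on an undirected graph.
import networkx as nx

def sac_like_score(G, u, v):
    common = set(G[u]) & set(G[v])
    if not common:
        return 0.0
    # Endpoint "activity": mean degree of the two candidate endpoints.
    activity = (G.degree(u) + G.degree(v)) / 2.0
    # "Connectivity" to the shared neighborhood: resource-allocation style sum,
    # which down-weights hub-like common neighbors.
    connectivity = sum(1.0 / G.degree(w) for w in common)
    return activity * connectivity

G = nx.karate_club_graph()
print(sac_like_score(G, 0, 33))   # score a candidate (non-)edge
```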

  6. Relevance of MTF and NPS in quantitative CT: towards developing a predictable model of quantitative performance

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Richard, Samuel; Samei, Ehsan

    2012-03-01

    The quantification of lung nodule volume based on CT images provides valuable information for disease diagnosis and staging. However, the precision of the quantification is protocol, system, and technique dependent and needs to be evaluated for each specific case. To efficiently investigate the quantitative precision and find an optimal operating point, it is important to develop a predictive model based on basic system parameters. In this study, a Fourier-based metric, the estimability index (e') was proposed as such a predictor, and validated across a variety of imaging conditions. To first obtain the ground truth of quantitative precision, an anthropomorphic chest phantom with synthetic spherical nodules was imaged on a 64 slice CT scanner across a range of protocols (five exposure levels and two reconstruction algorithms). The volumes of nodules were quantified from the images using clinical software, with the precision of the quantification calculated for each protocol. To predict the precision, e' was calculated for each protocol based on several Fourier-based figures of merit, which modeled the characteristics of the quantitation task and the imaging conditions (resolution, noise, etc.) of a particular protocol. Results showed a strong correlation (R2=0.92) between the measured and predicted precision across all protocols, indicating e' as an effective predictor of the quantitative precision. This study provides a useful framework for quantification-oriented optimization of CT protocols.

  7. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts.

    PubMed

    Stephanou, Pavlos S; Mavrantzas, Vlasis G

    2014-06-07

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.

  8. How accurate is the Kubelka-Munk theory of diffuse reflection? A quantitative answer

    NASA Astrophysics Data System (ADS)

    Joseph, Richard I.; Thomas, Michael E.

    2012-10-01

    The (heuristic) Kubelka-Munk theory of diffuse reflectance and transmittance of a film on a substrate, which is widely used because it gives simple analytic results, is compared to the rigorous radiative transfer model of Chandrasekhar. The rigorous model has to be numerically solved, thus is less intuitive. The Kubelka-Munk theory uses an absorption coefficient and scatter coefficient as inputs, similar to the rigorous model of Chandrasekhar. The relationship between these two sets of coefficients is addressed. It is shown that the Kubelka-Munk theory is remarkably accurate if one uses the proper albedo parameter.
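
    For reference, the classical Kubelka-Munk two-flux results discussed in this abstract are easy to evaluate directly; the snippet below computes the diffuse reflectance of a scattering/absorbing film of thickness X on a substrate of reflectance Rg, along with the remission function. The numerical inputs are arbitrary illustrative values.

```python
# Kubelka-Munk two-flux reflectance of a film on a substrate.
import numpy as np

def km_reflectance(K, S, X, Rg):
    """K, S: K-M absorption and scattering coefficients; X: thickness; Rg: substrate reflectance."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * X)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

def km_remission(R_inf):
    """Remission function F(R_inf) = (1 - R_inf)^2 / (2 R_inf) = K/S for an opaque layer."""
    return (1.0 - R_inf) ** 2 / (2.0 * R_inf)

print(km_reflectance(K=0.1, S=5.0, X=0.2, Rg=0.3))   # film on a grey substrate
print(km_remission(0.6))                              # ~ K/S
```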

  9. Prediction of Accurate Thermochemistry of Medium and Large Sized Radicals Using Connectivity-Based Hierarchy (CBH).

    PubMed

    Sengupta, Arkajyoti; Raghavachari, Krishnan

    2014-10-14

    Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, systematic error-canceling thermochemical hierarchies of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals containing diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium sized radicals for which the accuracy of available heats of formation is less well known.

  10. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  11. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
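
    The "simple learned weighted sum" linking hypothesis described above can be illustrated with an ordinary least-squares decoder from a population of IT firing rates to a behavioral performance score per image. The array shapes and data below are hypothetical placeholders, not the recorded neural or psychophysical data.

```python
# Illustrative linear decoder: weighted sum of firing rates -> behavioral performance.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(200, 168)).astype(float)   # 200 images x 168 IT sites
human_dprime = rates @ rng.normal(size=168) * 0.01 + rng.normal(size=200) * 0.1

decoder = LinearRegression()
r2 = cross_val_score(decoder, rates, human_dprime, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())
```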

  12. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    PubMed

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng mL⁻¹ for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng mL⁻¹ and 21-1000 ng mL⁻¹, respectively.

  13. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling.

    PubMed

    Boers, Stefan A; Hays, John P; Jansen, Ruud

    2017-04-05

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison.
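
    The sketch below illustrates the calibrator-based absolute quantification idea: OTU read counts are scaled by the recovery of a spiked-in internal calibrator of known copy number, and background estimated from a negative extraction control is subtracted. The exact correction used in the paper may differ; the formula and numbers here are an illustrative assumption.

```python
# Hedged sketch of internal-calibrator absolute abundance correction.
def absolute_abundance(otu_reads, calibrator_reads, calibrator_copies_spiked,
                       control_copies=None):
    """otu_reads: dict OTU -> read count; control_copies: dict OTU -> background copies."""
    copies_per_read = calibrator_copies_spiked / calibrator_reads
    copies = {otu: reads * copies_per_read for otu, reads in otu_reads.items()}
    if control_copies:
        copies = {otu: max(c - control_copies.get(otu, 0.0), 0.0)
                  for otu, c in copies.items()}
    return copies

sample = {"OTU_1": 12000, "OTU_2": 300}
print(absolute_abundance(sample, calibrator_reads=1500,
                         calibrator_copies_spiked=1.0e4,
                         control_copies={"OTU_2": 50.0}))
```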

  14. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling

    PubMed Central

    Boers, Stefan A.; Hays, John P.; Jansen, Ruud

    2017-01-01

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789

  15. Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction

    NASA Astrophysics Data System (ADS)

    Junkin, Gary

    2013-04-01

    With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.

  16. Biomarkers are used to predict quantitative metabolite concentration profiles in human red blood cells

    PubMed Central

    Palsson, Bernhard O.

    2017-01-01

    Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. The ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements. PMID:28264007
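
    A minimal illustration of the workflow described above is a multi-output regression from the five biomarkers to the remaining metabolite concentrations, scored with the symmetric mean absolute percentage error (SMAPE) quoted in the abstract. The data shapes and values below are synthetic placeholders, not the RBC measurements.

```python
# Illustrative biomarker -> metabolite profile regression with SMAPE scoring.
import numpy as np
from sklearn.linear_model import LinearRegression

def smape(y_true, y_pred):
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) /
                           (np.abs(y_true) + np.abs(y_pred)))

rng = np.random.default_rng(1)
X = rng.lognormal(size=(60, 5))          # 60 storage time points x 5 biomarkers (placeholder)
Y = X @ rng.normal(size=(5, 91)) + 10    # 91 measured metabolites (placeholder)

model = LinearRegression().fit(X[:40], Y[:40])
print("SMAPE on held-out samples: %.1f%%" % smape(Y[40:], model.predict(X[40:])))
```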

  17. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space.

    PubMed

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O Anatole; Müller, Klaus-Robert; Tkatchenko, Alexandre

    2015-06-18

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
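
    The sketch below builds a Bag-of-Bonds style descriptor (sorted, zero-padded Coulomb-matrix entries collected per bond type) of the kind referenced in the abstract; the bag lengths, hyperparameters, and toy molecule are illustrative assumptions rather than the published settings.

```python
# Bag-of-Bonds style molecular representation (illustrative sketch).
import numpy as np
from itertools import combinations

def bag_of_bonds(charges, coords, bond_types, max_per_bag=20):
    bags = {bt: [] for bt in bond_types}
    for i, j in combinations(range(len(charges)), 2):
        key = tuple(sorted((charges[i], charges[j])))
        if key in bags:
            r = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
            bags[key].append(charges[i] * charges[j] / r)   # Coulomb-matrix entry
    vec = []
    for bt in bond_types:                                    # fixed bag order
        entries = sorted(bags[bt], reverse=True)[:max_per_bag]
        vec.extend(entries + [0.0] * (max_per_bag - len(entries)))  # zero-pad
    return np.array(vec)

# Toy molecule: nuclear charges and coordinates (angstrom) for a water-like geometry.
water = ([8, 1, 1], [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)])
bond_types = [(1, 1), (1, 8)]            # H-H and H-O bags
print(bag_of_bonds(*water, bond_types).shape)
# Given descriptors X and energies y for a training set, a kernel model such as
# sklearn.kernel_ridge.KernelRidge(kernel="laplacian", alpha=1e-8, gamma=1e-4)
# could then be fit to learn atomization energies.
```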

  18. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  19. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    SciTech Connect

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  20. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
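
    A highly simplified, distance-only illustration of the idea is shown below: an adjacent same-strand gene pair is scored by the log-likelihood ratio of its intergenic distance under "same operon" versus "not same operon" distance models. The exponential models and their means are illustrative assumptions; the published method also uses comparative-genomic features and self-trains on each genome.

```python
# Distance-only operon-pair scoring sketch (illustrative, not the published method).
import math

def operon_log_odds(distance_bp, same_mean=20.0, diff_mean=150.0):
    # Exponential distance models with illustrative means (bp); a positive
    # score favours the pair being co-transcribed.
    d = max(distance_bp, 0)
    log_p_same = -math.log(same_mean) - d / same_mean
    log_p_diff = -math.log(diff_mean) - d / diff_mean
    return log_p_same - log_p_diff

for d in (5, 50, 300):
    print(d, "bp ->", round(operon_log_odds(d), 2))
```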

  1. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH-60A data, DNW test data, and HART II test data.

  2. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    PubMed

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions.
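
    One simple way to make the NDVI/temperature idea concrete is a count regression of weekly severe-allergy visits on the two environmental predictors, as sketched below. The paper's actual modelling approach is not reproduced here, and the data are synthetic placeholders.

```python
# Illustrative Poisson regression of weekly hospital visits on NDVI and temperature.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
ndvi = rng.uniform(0.2, 0.8, size=104)                      # weekly NDVI over two years
temp = 18 + 8 * np.sin(np.linspace(0, 4 * np.pi, 104))      # weekly mean temperature (deg C)
visits = rng.poisson(np.exp(0.5 + 2.0 * ndvi - 0.03 * temp))  # synthetic visit counts

X = np.column_stack([ndvi, temp])
model = PoissonRegressor().fit(X[:80], visits[:80])         # train on the first period
print("predicted weekly visits:", model.predict(X[80:85]).round(1))
```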

  3. Microstructure-Dependent Gas Adsorption: Accurate Predictions of Methane Uptake in Nanoporous Carbons

    SciTech Connect

    Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R

    2014-01-01

    We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.

  4. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  5. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase to fit measurements and to avoid areas of fluctuation in the time domain. The SFAM training process uses fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process uses real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability to learn nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is also proposed. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.

  6. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    PubMed Central

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-01-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264

  7. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    SciTech Connect

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  8. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  9. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲ 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  10. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.

  11. The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum

    SciTech Connect

    Jorgensen, S.

    2016-03-02

    Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to try to predict the behavior. To achieve accurate results, it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].
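
    For orientation, the Johnson-Cook flow-stress form referenced above is reproduced below as a quick sketch. The parameter values are generic placeholders, not the optimized tantalum parameters from the cited work.

```python
# Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(strain-rate ratio))(1 - T*^m).
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A=300e6, B=500e6, n=0.3, C=0.02, m=1.0,
                        eps_dot_ref=1.0, T_ref=293.0, T_melt=3290.0):
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot_ref, 1e-12))
    T_star = (T - T_ref) / (T_melt - T_ref)          # homologous temperature
    thermal_term = 1.0 - max(T_star, 0.0) ** m
    return strain_term * rate_term * thermal_term    # flow stress in Pa

print(johnson_cook_stress(eps_p=0.1, eps_dot=1e3, T=600.0) / 1e6, "MPa")
```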

  12. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, N.; Schaffenroth, V.; Nieva, M. F.; Butler, K.

    2016-10-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields that drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV-spectra of OB-type stars that were facilitated by application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true, by bringing observed and model spectra into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for a wide range of applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab-initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be the focus in the era of the upcoming extremely large telescopes.

  13. Quantitative AOP-based predictions for two aromatase ...

    EPA Pesticide Factsheets

    The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts, this includes defining quantitative linkages between the molecular initiating event (MIE) and subsequent key events (KEs) within an AOP. One AOP for which strong, quantitative linkages have been established is aromatase inhibition leading to reproductive dysfunction in fish. A series of computational models have been linked to develop a quantitative AOP (Q-AOP). A measure of aromatase inhibition is used as the model input to estimate circulating plasma estradiol (E2) concentration and the resultant circulating plasma vitellogenin (VTG) concentration. To evaluate model predictions, two aromatase inhibitors, letrozole and epoxiconazole, were selected based upon their relative aromatase inhibition potency in US EPA ToxCast assays. Reproductively mature female fathead minnows (Pimephales promelas) were exposed to varying concentrations of either letrozole (0.5, 7.5, 25, 75, 250 µg/L) or epoxiconazole (8, 25, 80, 250, 800 µg/L) in 24-h flow-through exposures. One additional consideration for model predictions was bioaccumulation of the exposure chemicals and the resultant circulating plasma concentration. To address this, plasma from exposed minnows was extracted by supported liquid extraction (SLE) and concentrations of letrozole or epoxiconazole determined by LC-MS/MS. Plasma bioaccumulation factors (BAFplasma)

  14. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at tens of thousands of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  15. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    PubMed

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible.

  16. Quantitation of Insulin-Like Growth Factor 1 in Serum by Liquid Chromatography High Resolution Accurate-Mass Mass Spectrometry.

    PubMed

    Ketha, Hemamalini; Singh, Ravinder J

    2016-01-01

    Insulin-like growth factor 1 (IGF-1) is a 70-amino-acid peptide hormone which acts as the principal mediator of the effects of growth hormone (GH). Due to a wide variability in the circulating concentration of GH, IGF-1 quantitation is the first step in the diagnosis of GH excess or deficiency. The majority (>95%) of IGF-1 circulates as a ternary complex along with its principal binding protein, insulin-like growth factor binding protein 3 (IGFBP-3), and the acid-labile subunit. The assay design approach for IGF-1 quantitation has to include a step to dissociate IGF-1 from its ternary complex. Several commercial assays employ a buffer containing acidified ethanol to achieve this. Despite several modifications, commercially available immunoassays have been shown to have challenges with interference from IGFBP-3. Additionally, inter-method comparison between IGF-1 immunoassays has been shown to be suboptimal. Mass spectrometry has been utilized for quantitation of IGF-1. In this chapter, a liquid chromatography high-resolution accurate-mass mass spectrometry (LC-HRAMS) based method for IGF-1 quantitation is described.

  17. Accurate prediction of human drug toxicity: a major challenge in drug development.

    PubMed

    Li, Albert P

    2004-11-01

    Over the past decades, a number of drugs have been withdrawn or have required special labeling due to adverse effects observed post-marketing. Species differences in drug toxicity in preclinical safety tests and the lack of sensitive biomarkers and nonrepresentative patient population in clinical trials are probable reasons for the failures in predicting human drug toxicity. It is proposed that toxicology should evolve from an empirical practice to an investigative discipline. Accurate prediction of human drug toxicity requires resources and time to be spent in clearly defining key toxic pathways and corresponding risk factors, which hopefully, will be compensated by the benefits of a lower percentage of clinical failure due to toxicity and a decreased frequency of market withdrawal due to unacceptable adverse drug effects.

  18. Universal structural parameter to quantitatively predict metallic glass properties

    NASA Astrophysics Data System (ADS)

    Ding, Jun; Cheng, Yong-Qiang; Sheng, Howard; Asta, Mark; Ritchie, Robert O.; Ma, Evan

    2016-12-01

    Quantitatively correlating the amorphous structure in metallic glasses (MGs) with their physical properties has been a long-sought goal. Here we introduce `flexibility volume' as a universal indicator, to bridge the structural state the MG is in with its properties, on both atomic and macroscopic levels. The flexibility volume combines static atomic volume with dynamics information via atomic vibrations that probe local configurational space and interaction between neighbouring atoms. We demonstrate that flexibility volume is a physically appropriate parameter that can quantitatively predict the shear modulus, which is at the heart of many key properties of MGs. Moreover, the new parameter correlates strongly with atomic packing topology, and also with the activation energy for thermally activated relaxation and the propensity for stress-driven shear transformations. These correlations are expected to be robust across a very wide range of MG compositions, processing conditions and length scales.

  19. Universal structural parameter to quantitatively predict metallic glass properties

    PubMed Central

    Ding, Jun; Cheng, Yong-Qiang; Sheng, Howard; Asta, Mark; Ritchie, Robert O.; Ma, Evan

    2016-01-01

    Quantitatively correlating the amorphous structure in metallic glasses (MGs) with their physical properties has been a long-sought goal. Here we introduce ‘flexibility volume' as a universal indicator, to bridge the structural state the MG is in with its properties, on both atomic and macroscopic levels. The flexibility volume combines static atomic volume with dynamics information via atomic vibrations that probe local configurational space and interaction between neighbouring atoms. We demonstrate that flexibility volume is a physically appropriate parameter that can quantitatively predict the shear modulus, which is at the heart of many key properties of MGs. Moreover, the new parameter correlates strongly with atomic packing topology, and also with the activation energy for thermally activated relaxation and the propensity for stress-driven shear transformations. These correlations are expected to be robust across a very wide range of MG compositions, processing conditions and length scales. PMID:27941922

  20. Quantitative imaging features to predict cancer status in lung nodules

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Balagurunathan, Yoganand; Atwater, Thomas; Antic, Sanja; Li, Qian; Walker, Ronald; Smith, Gary T.; Massion, Pierre P.; Schabath, Matthew B.; Gillies, Robert J.

    2016-03-01

    Background: We propose a systematic methodology to quantify incidentally identified lung nodules based on radiological traits scored on a point scale; this quantitative trait classification model was then used to predict cancer status. Materials and Methods: We used low dose computed tomography (LDCT) images from 102 patients for this study, and 24 semantic traits were systematically scored from each image. We built a machine learning classifier in a cross-validation setting to find the imaging features that best differentiate malignant from benign lung nodules. Results: The best feature triplet to discriminate malignancy was based on long axis, concavity, and lymphadenopathy, with an average AUC of 0.897 (accuracy of 76.8%, sensitivity of 64.3%, specificity of 90%). A similar semantic triplet optimized on sensitivity/specificity (Youden's J index) included long axis, vascular convergence, and lymphadenopathy, with an average AUC of 0.875 (accuracy of 81.7%, sensitivity of 76.2%, specificity of 95%). Conclusions: Quantitative radiological image traits can differentiate malignant from benign lung nodules. These semantic features, along with size measurement, enhance the prediction accuracy.
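
    A minimal sketch of the kind of cross-validated semantic-trait classifier described above; the feature names, synthetic scores, and labels are illustrative stand-ins, not the study's data.

```python
# Minimal sketch: cross-validated classification of lung nodules from
# semantic trait scores (illustrative feature names and synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 102  # number of patients in the study above

# Hypothetical point-scale scores for the reported feature triplet:
# long axis, concavity, lymphadenopathy.
X = rng.integers(1, 6, size=(n, 3)).astype(float)
y = rng.integers(0, 2, size=n)  # 0 = benign, 1 = malignant (synthetic labels)

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```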

  1. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    SciTech Connect

    Pourmoghaddas, Amir Wells, R. Glenn

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE

  2. Accurate prediction of the response of freshwater fish to a mixture of estrogenic chemicals.

    PubMed

    Brian, Jayne V; Harris, Catherine A; Scholze, Martin; Backhaus, Thomas; Booy, Petra; Lamoree, Marja; Pojana, Giulio; Jonkers, Niels; Runnalls, Tamsin; Bonfà, Angela; Marcomini, Antonio; Sumpter, John P

    2005-06-01

    Existing environmental risk assessment procedures are limited in their ability to evaluate the combined effects of chemical mixtures. We investigated the implications of this by analyzing the combined effects of a multicomponent mixture of five estrogenic chemicals, using vitellogenin induction in male fathead minnows as an end point. The mixture consisted of estradiol, ethynylestradiol, nonylphenol, octylphenol, and bisphenol A. We determined concentration-response curves for each of the chemicals individually. The chemicals were then combined at equipotent concentrations and the mixture tested using a fixed-ratio design. The effects of the mixture were compared with those predicted by the model of concentration addition using biomathematical methods, which revealed no deviation between the observed and predicted effects of the mixture. These findings demonstrate that estrogenic chemicals have the capacity to act together in an additive manner and that their combined effects can be accurately predicted by concentration addition. We also explored the potential for mixture effects at low concentrations by exposing the fish to each chemical at one-fifth of its median effective concentration (EC50). Individually, the chemicals did not induce a significant response, although their combined effects were consistent with the predictions of concentration addition. This demonstrates the potential for estrogenic chemicals to act additively at environmentally relevant concentrations. These findings highlight the potential for existing environmental risk assessment procedures to underestimate the hazard posed by mixtures of chemicals that act via a similar mode of action, thereby leading to erroneous conclusions of absence of risk.
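
    A minimal sketch of the concentration-addition calculation named above: for mixture fractions p_i and individual effect concentrations EC_i, the predicted mixture effect concentration is EC_mix = 1 / Σ(p_i / EC_i). The EC50 values below are illustrative placeholders, not measurements from the study.

```python
# Minimal sketch of the concentration-addition prediction: for mixture
# fractions p_i and individual effect concentrations EC_i, the mixture
# effect concentration is EC_mix = 1 / sum(p_i / EC_i).
# EC50 values below are illustrative placeholders, not the study's data.
ec50 = {
    "estradiol": 1.0e-3,        # arbitrary units
    "ethynylestradiol": 2.0e-4,
    "nonylphenol": 5.0e1,
    "octylphenol": 1.2e2,
    "bisphenol A": 3.0e2,
}

# Equipotent mixture: each component's fraction is proportional to its EC50.
total = sum(ec50.values())
fractions = {name: value / total for name, value in ec50.items()}

ec50_mix = 1.0 / sum(p / ec50[name] for name, p in fractions.items())
print(f"predicted mixture EC50 (concentration addition): {ec50_mix:.3g}")
```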

  3. Accurate Prediction of the Response of Freshwater Fish to a Mixture of Estrogenic Chemicals

    PubMed Central

    Brian, Jayne V.; Harris, Catherine A.; Scholze, Martin; Backhaus, Thomas; Booy, Petra; Lamoree, Marja; Pojana, Giulio; Jonkers, Niels; Runnalls, Tamsin; Bonfà, Angela; Marcomini, Antonio; Sumpter, John P.

    2005-01-01

    Existing environmental risk assessment procedures are limited in their ability to evaluate the combined effects of chemical mixtures. We investigated the implications of this by analyzing the combined effects of a multicomponent mixture of five estrogenic chemicals using vitellogenin induction in male fathead minnows as an end point. The mixture consisted of estradiol, ethynylestradiol, nonylphenol, octylphenol, and bisphenol A. We determined concentration–response curves for each of the chemicals individually. The chemicals were then combined at equipotent concentrations and the mixture tested using a fixed-ratio design. The effects of the mixture were compared with those predicted by the model of concentration addition using biomathematical methods, which revealed that there was no deviation between the observed and predicted effects of the mixture. These findings demonstrate that estrogenic chemicals have the capacity to act together in an additive manner and that their combined effects can be accurately predicted by concentration addition. We also explored the potential for mixture effects at low concentrations by exposing the fish to each chemical at one-fifth of its median effective concentration (EC50). Individually, the chemicals did not induce a significant response, although their combined effects were consistent with the predictions of concentration addition. This demonstrates the potential for estrogenic chemicals to act additively at environmentally relevant concentrations. These findings highlight the potential for existing environmental risk assessment procedures to underestimate the hazard posed by mixtures of chemicals that act via a similar mode of action, thereby leading to erroneous conclusions of absence of risk. PMID:15929895

  4. IDSite: An accurate approach to predict P450-mediated drug metabolism

    PubMed Central

    Li, Jianing; Schneebeli, Severin T.; Bylund, Joseph; Farid, Ramy; Friesner, Richard A.

    2011-01-01

    Accurate prediction of drug metabolism is crucial for drug design. Since the metabolism of a large majority of drugs involves P450 enzymes, we herein describe a computational approach, IDSite, to predict P450-mediated drug metabolism. To model induced-fit effects, IDSite samples the conformational space with flexible docking in Glide, followed by two refinement stages using the Protein Local Optimization Program (PLOP). Sites of metabolism (SOMs) are predicted according to a physical-based score that evaluates the potential of atoms to react with the catalytic iron center. As a preliminary test, we present in this paper the prediction of hydroxylation and O-dealkylation sites mediated by CYP2D6 using two different models: a physical-based simulation model, and a modification of this model in which a small number of parameters are fit to a training set. Without fitting any parameters to experimental data, the Physical IDSite scoring recovers 83% of the experimental observations for 56 compounds with a very low false positive rate. With only 4 fitted parameters, the Fitted IDSite was trained on a subset of 36 compounds and successfully applied to the other 20 compounds, recovering 94% of the experimental observations with high sensitivity and specificity for both sets. PMID:22247702

  5. Accurate Prediction of the Dynamical Changes within the Second PDZ Domain of PTP1e

    PubMed Central

    Cilia, Elisa; Vuister, Geerten W.; Lenaerts, Tom

    2012-01-01

    Experimental NMR relaxation studies have shown that peptide binding induces dynamical changes at the side-chain level throughout the second PDZ domain of PTP1e, identifying as such the collection of residues involved in long-range communication. Even though different computational approaches have identified subsets of residues that were qualitatively comparable, no quantitative analysis of the accuracy of these predictions had thus far been performed. Here, we show that our information-theoretical method produces quantitatively better results with respect to the experimental data than some of these earlier methods. Moreover, it provides a global network perspective on the effect experienced by the different residues involved in the process. We also show that these predictions are consistent within both the human and mouse variants of this domain. Together, these results improve the understanding of intra-protein communication and allostery in PDZ domains, underlining at the same time the necessity of producing similar data sets for further validation of these kinds of methods. PMID:23209399

  6. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    NASA Astrophysics Data System (ADS)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT), and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiation of critical and non-critical defects is more challenging, complex, and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Until actinic inspection becomes available, techniques are needed to determine defect criticality from images captured using non-actinic inspection. High resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, performing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate, and economical mechanism is desired that can predict defect printability on wafer quickly from images captured using a high resolution inspection machine. Predicting defect printability from such images is challenging because the high resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not correlate directly with their actual impact on wafer. Our automated defect simulation tool predicts

  7. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    PubMed Central

    Li, Zhen; Zhang, Renyu

    2017-01-01

    Motivation Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact

  8. A hierarchical approach to accurate predictions of macroscopic thermodynamic behavior from quantum mechanics and molecular simulations

    NASA Astrophysics Data System (ADS)

    Garrison, Stephen L.

    2005-07-01

    The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding-states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations with this initial potential. The initial simulation results also indicate that only a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density

  9. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    NASA Astrophysics Data System (ADS)

    Shvab, I.; Sadus, Richard J.

    2013-11-01

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  10. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water.

    PubMed

    Shvab, I; Sadus, Richard J

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  11. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    SciTech Connect

    Shvab, I.; Sadus, Richard J.

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  12. Persistent Homology for The Quantitative Prediction of Fullerene Stability

    PubMed Central

    Xia, Kelin; Feng, Xin; Tong, Yiying; Wei, Guo Wei

    2014-01-01

    Persistent homology is a relatively new tool often used for qualitative analysis of intrinsic topological features in images and data originating from scientific and engineering applications. In this paper, we report novel quantitative predictions of the energy and stability of fullerene molecules, the first attempt to employ persistent homology in this context. The ground-state structures of a series of small fullerene molecules are first investigated with the standard Vietoris-Rips complex. We decipher all the barcodes, including both short-lived local bars and long-lived global bars arising from topological invariants, and associate them with fullerene structural details. By using accumulated bar lengths, we build quantitative models to correlate local and global Betti-2 bars, respectively, with the heat of formation and total curvature energies of fullerenes. It is found that the heat of formation energy is related to the local hexagonal cavities of small fullerenes, while the total curvature energies of fullerene isomers are associated with their sphericities, which are measured by the lengths of their long-lived Betti-2 bars. Excellent correlation coefficients (>0.94) between persistent homology predictions and those of quantum or curvature analysis have been observed. A correlation matrix based filtration is introduced to further verify our findings. PMID:25523342
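
    A minimal sketch of the accumulated-bar-length descriptor, assuming the `ripser` Python package for persistent homology; the point cloud is a random placeholder rather than actual fullerene coordinates.

```python
# Minimal sketch of an accumulated Betti-2 bar length, assuming the `ripser`
# package is available; the point cloud below is a random placeholder, not
# actual fullerene atomic coordinates.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)
points = rng.normal(size=(60, 3))  # stand-in for C60 atom positions

diagrams = ripser(points, maxdim=2)["dgms"]
h2 = diagrams[2]  # Betti-2 (void) bars as (birth, death) pairs

finite = h2[np.isfinite(h2[:, 1])]
accumulated_length = np.sum(finite[:, 1] - finite[:, 0])
print(f"accumulated Betti-2 bar length: {accumulated_length:.4f}")
```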

  13. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable placing the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as the EUT.
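
    A minimal sketch of the distance power-law idea: fit B(r) = A·r^(-n) to near-field magnetometer readings and then extrapolate to a farther observation point. The measurement values and distances below are illustrative, not from the study.

```python
# Minimal sketch: fit a distance power law B(r) = A * r**(-n) to near-field
# magnetometer readings and extrapolate to a farther observation point.
# Measurement values are illustrative, not from the study above.
import numpy as np

r = np.array([0.10, 0.15, 0.20, 0.30])     # magnetometer distances [m]
b = np.array([820.0, 250.0, 105.0, 31.0])  # measured AC field amplitudes [nT]

# Linear fit in log-log space: log b = log A - n * log r
slope, log_a = np.polyfit(np.log(r), np.log(b), 1)
n = -slope

r_pred = 1.0  # prediction distance [m]
b_pred = np.exp(log_a) * r_pred ** (-n)
print(f"fitted exponent n = {n:.2f}, predicted field at {r_pred} m: {b_pred:.2f} nT")
```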

  14. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software, as it is very accurate and computationally inexpensive. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method for rotating flows. The demonstrated capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  15. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  16. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
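
    A minimal sketch of a spectrally weighted solar reflectance, R = ∫ r(λ) I(λ) dλ / ∫ I(λ) dλ, evaluated under two different irradiance weightings; the spectra are crude illustrations, not the ASTM E891 or AM1GH tables.

```python
# Minimal sketch of a spectrally weighted solar reflectance,
# R = integral(r * I) / integral(I), for two different irradiance weightings.
# The spectra below are crude illustrations, not ASTM E891 or AM1GH tables.
import numpy as np

wl = np.linspace(300, 2500, 500)  # wavelength [nm]

# A "cool colored" surface: modest visible reflectance, high NIR reflectance.
r_surface = np.where(wl < 700, 0.25, 0.80)

# Two toy irradiance weightings: one richer in near-infrared (beam-normal-like),
# one closer to global horizontal sunlight.
i_beam_normal = np.exp(-((wl - 1100) / 700.0) ** 2)
i_global_horiz = np.exp(-((wl - 800) / 600.0) ** 2)

def weighted_reflectance(r, i, wl):
    return np.trapz(r * i, wl) / np.trapz(i, wl)

print("NIR-rich weighting:    R =", round(weighted_reflectance(r_surface, i_beam_normal, wl), 3))
print("global-like weighting: R =", round(weighted_reflectance(r_surface, i_global_horiz, wl), 3))
```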

  17. Accurate prediction of wall shear stress in a stented artery: newtonian versus non-newtonian models.

    PubMed

    Mejia, Juan; Mongrain, Rosaire; Bertrand, Olivier F

    2011-07-01

    A significant amount of evidence linking wall shear stress to neointimal hyperplasia has been reported in the literature. As a result, numerical and experimental models have been created to study the influence of stent design on wall shear stress. Traditionally, blood has been assumed to behave as a Newtonian fluid, but recently that assumption has been challenged. The use of a linear model, however, can reduce computational cost and allow the use of Newtonian fluids (e.g., glycerine and water) instead of a blood analog fluid in an experimental setup. It is therefore of interest whether a linear model can be used to accurately predict the wall shear stress caused by a non-Newtonian fluid such as blood within a stented arterial segment. The present work compares the resulting wall shear stress obtained using two linear and one nonlinear model under the same flow waveform. All numerical models are fully three-dimensional, transient, and incorporate a realistic stent geometry. It is shown that traditional linear models (based on blood's lowest viscosity limit, 3.5 mPa·s) underestimate the wall shear stress within a stented arterial segment, which can lead to an overestimation of the risk of restenosis. The second linear model, which uses a characteristic viscosity (based on an average strain rate, 4.7 mPa·s), results in higher wall shear stress levels, but these are still substantially below those of the nonlinear model. It is therefore shown that nonlinear models result in more accurate predictions of wall shear stress within a stented arterial segment.
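
    A minimal sketch comparing wall shear stress τ = μ(γ̇)·γ̇ for a constant-viscosity model against a shear-thinning Carreau model. The Carreau form and its parameters are a common literature choice for blood, used here only for illustration; the abstract does not state which nonlinear model the study used.

```python
# Minimal sketch comparing wall shear stress tau = mu * gamma_dot for a
# constant-viscosity (Newtonian) model against a shear-thinning Carreau model.
# The Carreau model and its parameters are a common literature choice for
# blood, used purely for illustration; the study's nonlinear model may differ.
import numpy as np

gamma_dot = np.logspace(0, 3, 5)  # wall shear rate [1/s]

mu_newtonian = 3.5e-3  # Pa.s, blood's high-shear (lowest) viscosity limit

def carreau_viscosity(g, mu0=0.056, mu_inf=3.5e-3, lam=3.313, n=0.3568):
    # Carreau model: mu = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * g) ** 2) ** ((n - 1.0) / 2.0)

for g in gamma_dot:
    tau_newt = mu_newtonian * g
    tau_carr = carreau_viscosity(g) * g
    print(f"gamma_dot = {g:8.1f} 1/s  WSS Newtonian = {tau_newt:.4f} Pa  Carreau = {tau_carr:.4f} Pa")
```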

  18. Point-of-care cardiac troponin test accurately predicts heat stroke severity in rats.

    PubMed

    Audet, Gerald N; Quinn, Carrie M; Leon, Lisa R

    2015-11-15

    Heat stroke (HS) remains a significant public health concern. Despite the substantial threat posed by HS, there is still no field or clinical test of HS severity. We suggested previously that circulating cardiac troponin I (cTnI) could serve as a robust biomarker of HS severity after heating. In the present study, we hypothesized that a cTnI point-of-care (ctPOC) test could be used to predict severity and organ damage at the onset of HS. Conscious male Fischer 344 rats (n = 16), continuously monitored for heart rate (HR), blood pressure (BP), and core temperature (Tc) by radiotelemetry, were heated to a maximum Tc (Tc,Max) of 41.9 ± 0.1°C and recovered undisturbed for 24 h at an ambient temperature of 20°C. Blood samples were taken at Tc,Max and 24 h after heat via submandibular bleed and analyzed with the ctPOC test. POC cTnI band intensity was ranked using a simple four-point scale by two blinded observers and compared with cTnI levels measured by a clinical blood analyzer. Blood was also analyzed for biomarkers of systemic organ damage. HS severity, as previously defined using HR, BP, and the recovery Tc profile during heat exposure, correlated strongly with cTnI (R² = 0.69) at Tc,Max. POC cTnI band intensity ranking accurately predicted cTnI levels (R² = 0.64) and HS severity (R² = 0.83). Five markers of systemic organ damage also correlated with ctPOC score (albumin, alanine aminotransferase, blood urea nitrogen, cholesterol, and total bilirubin; R² > 0.4). This suggests that cTnI POC tests can accurately determine HS severity and could serve as simple, portable, cost-effective HS field tests.

  19. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs obtained by experimental approaches cover only a small fraction of the whole PPI networks, and those approaches hold inherent disadvantages, such as being time-consuming, expensive, and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel feature extraction method that mixes physicochemical and evolutionary-based information, for predicting PPIs with our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly consist in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics; a powerful and robust DVM classifier is then employed. To the best of our knowledge, this is the first time that a DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research. PMID:27571061

  20. Quantitative prediction of reduction in large pipe setting round process

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Zhan, Peipei; Ma, Rui; Zhai, Ruixue

    2013-07-01

    During the setting round process, the quality of pipe products is ensured mainly by the operator's experience, so it is necessary to study the process and obtain its spring-back law. The setting round process shapes an oval-section pipe into a circular section, so it is difficult to provide a quantitative analysis of its spring-back because of the curvature inequality of the pipe section's neutral layer. However, the spring-back law of the circle-oval process can be easily predicted. An experimental method is first used to establish the equivalence between the setting round process and the circle-oval process, so that the setting round process can be converted into the circle-oval process. There are two difficulties in the theoretical analysis of the circle-oval process: the elastic-plastic bending problem of a curved beam, and the statically indeterminate problem. A quantitative analytic method for the circle-oval process is presented by combining the spring-back law of a plane curved beam with the element-dividing idea of the finite element method. The ovality after unloading versus the relative reduction is plotted with analytical and experimental results, which show fair agreement. Finally, a method for quantitative prediction of the reduction in large pipe setting round is given based on the established equivalence and the analytical results. Five pipes requiring setting round were used in a verification experiment. The results indicate that, in the experimental range, the residual ovalities are all under 0.35% after a single setting round operation using the theoretically predicted reductions, which is much less than the 1% requirement of the pipe standard. The established theoretical analysis can therefore correct pipe ovality with sufficient accuracy, providing theoretical guidance for plant use.

  1. Anomalous Transport in Carbonate Rock - Predictions and Quantitative Measures

    NASA Astrophysics Data System (ADS)

    Bijeljic, B.; Blunt, M. J.

    2014-12-01

    Solute transport in the rock subsurface is important in a number of applications such as contaminant hydrology, carbon storage, and enhanced oil recovery. Carbonate rocks contain most of the world's oil reserves and potentially hold significant storage capacity for carbon dioxide. The pore structure of carbonate rock introduces additional complexity in the form of bimodal pore size distributions, which leads to complex anomalous transport behavior and poses a significant challenge for accurate predictions. We present a new modeling concept that simulates flow and transport on micro-CT images containing information on the inter- and intra-grain pore space of carbonate rock. Navier-Stokes equations are solved for flow in the image voxels comprising the pore space, streamline-based simulation is used to account for advection, and diffusion is superimposed by random walk. First, the model is validated against experimental NMR measurements in a dual-porosity beadpack. The model predictions are then made for a number of carbonate rock images, which are classified in terms of the heterogeneity of the inter- and intra-grain pore space, heterogeneity in the flow field, and the mass transfer characteristics of the porous media. Finally, we demonstrate the predictive capabilities of the model through an analysis that includes a number of probability density function (PDF) measures of non-Fickian transport on the micro-CT images.
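
    A minimal sketch of the advection-plus-random-walk transport step described above, reduced to one dimension; the velocity field, diffusion coefficient, and particle count are placeholders rather than image-derived values.

```python
# Minimal sketch of particle transport by advection plus random-walk diffusion
# in a heterogeneous 1-D velocity field; all parameters are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 5000, 200, 0.1
diffusion = 1e-3  # molecular diffusion coefficient (arbitrary units)

x = np.zeros(n_particles)
velocity = lambda pos: 1.0 + 0.5 * np.sin(2 * np.pi * pos)  # heterogeneous flow field

for _ in range(n_steps):
    x += velocity(x) * dt                                            # advection along "streamlines"
    x += np.sqrt(2 * diffusion * dt) * rng.normal(size=n_particles)  # random-walk diffusion

# The histogram of displacements approximates the probability density
# function (PDF) used to characterize non-Fickian transport.
hist, edges = np.histogram(x, bins=50, density=True)
print("mean displacement:", round(x.mean(), 3), " variance:", round(x.var(), 3))
print("PDF peak value:", round(hist.max(), 3))
```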

  2. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most prominent machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  3. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • The algorithm is straightforward and intuitive, yet also fast, accurate, and robust.
    • It relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • It was tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.
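
    A minimal sketch of the spectral-unmixing step such a methodology relies on: a measured mixture spectrum is fitted as a non-negative combination of pure-component spectra by least squares. The Gaussian "spectra" are synthetic placeholders, and non-negative least squares is used here as a generic stand-in for the paper's error-minimization scheme.

```python
# Minimal sketch of quantitative FTIR analysis by fitting a mixture spectrum
# as a non-negative combination of pure-component spectra (least squares).
# The Gaussian "spectra" below are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

wn = np.linspace(800, 1800, 400)  # wavenumber axis [1/cm]

def band(center, width=25.0):
    return np.exp(-((wn - center) / width) ** 2)

# Pure-component reference spectra (columns of the design matrix).
components = np.column_stack([band(1050), band(1250), band(1650)])
true_fractions = np.array([0.6, 0.3, 0.1])

mixture = components @ true_fractions + 0.01 * np.random.default_rng(2).normal(size=wn.size)

coeffs, residual = nnls(components, mixture)
print("estimated fractions:", np.round(coeffs / coeffs.sum(), 3))
```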

  4. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
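
    A minimal sketch of the linear-nonlinear idea described above: project each multi-electrode stimulus onto a low-dimensional subspace estimated by PCA, then map the projection to a spiking probability through a logistic nonlinearity. Stimuli, responses, and the receptive-field weights are synthetic.

```python
# Minimal sketch of a linear-nonlinear response model for multi-electrode
# stimulation: PCA on spike-triggered stimuli gives a low-dimensional subspace
# (the electrical receptive field), and a logistic nonlinearity maps the
# projections onto spiking probability. All data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_electrodes = 500, 20

stimuli = rng.normal(size=(n_trials, n_electrodes))   # current per electrode
erf_true = np.exp(-np.arange(n_electrodes) / 3.0)     # toy ground-truth receptive field
p_spike = 1.0 / (1.0 + np.exp(-(stimuli @ erf_true - 1.0)))
spikes = rng.random(n_trials) < p_spike               # observed spike / no spike

pca = PCA(n_components=2).fit(stimuli[spikes])        # spike-triggered subspace
z = pca.transform(stimuli)                            # projections of all stimuli
model = LogisticRegression().fit(z, spikes)           # fitted nonlinearity

print("predicted spike probability, first 5 stimuli:",
      np.round(model.predict_proba(z)[:5, 1], 3))
```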

  5. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    SciTech Connect

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  6. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  7. Fast and accurate pressure-drop prediction in straightened atherosclerotic coronary arteries.

    PubMed

    Schrauwen, Jelle T C; Koeze, Dion J; Wentzel, Jolanda J; van de Vosse, Frans N; van der Steen, Anton F W; Gijsen, Frank J H

    2015-01-01

    Atherosclerotic disease progression in coronary arteries is influenced by wall shear stress. To compute patient-specific wall shear stress, computational fluid dynamics (CFD) is required. In this study we propose a method for computing the pressure-drop in regions proximal and distal to a plaque, which can serve as a boundary condition in CFD. As a first step towards exploring the proposed method we investigated ten straightened coronary arteries. First, the flow fields were calculated with CFD and velocity profiles were fitted on the results. Second, the Navier-Stokes equation was simplified and solved with the fitted velocity profiles to obtain a pressure-drop estimate (Δp_1). Next, Δp_1 was compared to the pressure-drop from CFD (Δp_CFD) as a validation step. Finally, the velocity profiles, and thus the pressure-drop, were predicted based on geometry and flow, resulting in Δp_geom. We found that Δp_1 adequately estimated Δp_CFD with velocity profiles that have one free parameter β. This β was successfully related to geometry and flow, resulting in an excellent agreement between Δp_CFD and Δp_geom: 3.9 ± 4.9% difference at Re = 150. We showed that this method can quickly and accurately predict the pressure-drop on the basis of geometry and flow in straightened coronary arteries that are mildly diseased.

  8. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
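
    A minimal sketch of a blade-element momentum (BEM) fixed-point iteration for a single annulus, the kind of solver the extracted rotor polars would feed; the airfoil polar, geometry, and operating point are placeholders, not data from the paper.

```python
# Minimal sketch of a blade-element momentum (BEM) iteration for one annulus.
# The lift/drag "polar" is a flat-plate-like placeholder; in practice it would
# be replaced by tabulated 2D airfoil coefficients or the 3D rotor polars.
import numpy as np

B, R, r, chord = 3, 50.0, 30.0, 2.5        # blades, rotor radius, local radius, chord [m]
U, omega, twist = 8.0, 1.0, np.radians(4)  # wind speed [m/s], rotor speed [rad/s], local twist
sigma = B * chord / (2 * np.pi * r)        # local solidity

def polar(alpha):
    # Placeholder airfoil coefficients; swap in tabulated 2D or 3D polars.
    cl = 2 * np.pi * alpha
    cd = 0.01 + 0.02 * alpha ** 2
    return cl, cd

a, a_prime = 0.3, 0.0
for _ in range(100):
    phi = np.arctan2((1 - a) * U, (1 + a_prime) * omega * r)  # inflow angle
    alpha = phi - twist
    cl, cd = polar(alpha)
    cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal force coefficient
    ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential force coefficient
    a_new = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
    ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
    if abs(a_new - a) < 1e-6 and abs(ap_new - a_prime) < 1e-6:
        break
    a, a_prime = a_new, ap_new

print(f"converged induction factors: a = {a:.4f}, a' = {a_prime:.4f}")
```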

  9. Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning

    PubMed Central

    Abraham, Gad; Tye-Din, Jason A.; Bhalala, Oneil G.; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2014-01-01

    Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases. PMID:24550740
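
    A minimal sketch of a genomic risk score built by jointly modeling all SNPs with a penalized logistic regression and scoring it by cross-validated AUC; the genotype matrix and labels are random placeholders, and the L1 penalty is an assumption rather than the authors' exact method.

```python
# Minimal sketch of a genomic risk score: all SNPs are modeled jointly with a
# penalized (L1) logistic regression and evaluated by cross-validated AUC.
# The genotype matrix is random; real data would be 0/1/2 allele counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_snps = 300, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # allele counts
y = rng.integers(0, 2, size=n_samples)                          # case/control labels

grs = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
auc = cross_val_score(grs, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```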

  10. Energy expenditure during level human walking: seeking a simple and accurate predictive solution.

    PubMed

    Ludlow, Lindsay W; Weyand, Peter G

    2016-03-01

    Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg⁻¹·min⁻¹, respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝V^1.0), exponential (∝V^2.0), and exponential/height (∝V^2.0/Ht). We found that the proportion of variance (R²) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg⁻¹·min⁻¹). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V²/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg⁻¹·min⁻¹).
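
    A worked example of the generalized equation identified above, V̇o2 total = V̇o2 rest + 3.85 + 5.97·V²/Ht; the resting rate and subject height used below are illustrative inputs.

```python
# Worked example of the generalized walking-metabolism equation above:
# VO2_total = VO2_rest + 3.85 + 5.97 * V**2 / Ht  (ml O2 kg^-1 min^-1).
# The resting rate and subject height below are illustrative inputs.
def walking_vo2(speed_m_s, height_m, vo2_rest=3.5):
    """Predicted gross walking metabolic rate in ml O2 per kg per min."""
    return vo2_rest + 3.85 + 5.97 * speed_m_s ** 2 / height_m

for v in (0.8, 1.3, 1.8):
    print(f"speed {v} m/s, height 1.75 m: {walking_vo2(v, 1.75):.1f} ml O2/kg/min")
```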

  11. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  12. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.

  13. Renal Cortical Lactate Dehydrogenase: A Useful, Accurate, Quantitative Marker of In Vivo Tubular Injury and Acute Renal Failure

    PubMed Central

    Zager, Richard A.; Johnson, Ali C. M.; Becker, Kirsten

    2013-01-01

    Studies of experimental acute kidney injury (AKI) are critically dependent on having precise methods for assessing the extent of tubular cell death. However, the most widely used techniques either provide indirect assessments (e.g., BUN, creatinine), suffer from the need for semi-quantitative grading (renal histology), or reflect the status of residual viable, not the number of lost, renal tubular cells (e.g., NGAL content). Lactate dehydrogenase (LDH) release is a highly reliable test for assessing degrees of in vitro cell death. However, its utility as an in vivo AKI marker has not been defined. Towards this end, CD-1 mice were subjected to graded renal ischemia (0, 15, 22, 30, 40, or 60 min) or to nephrotoxic (glycerol; maleate) AKI. Sham operated mice, or mice with AKI in the absence of acute tubular necrosis (ureteral obstruction; endotoxemia), served as negative controls. Renal cortical LDH or NGAL levels were assayed 2 or 24 hrs later. Ischemic, glycerol, and maleate-induced AKI were each associated with striking, steep, inverse correlations (r = −0.89) between renal injury severity and renal LDH content. With severe AKI, >65% LDH declines were observed. Corresponding prompt plasma and urinary LDH increases were observed. These observations, coupled with the maintenance of normal cortical LDH mRNA levels, indicated that renal LDH efflux, not decreased LDH synthesis, caused the falling cortical LDH levels. Renal LDH content was well maintained with sham surgery, ureteral obstruction or endotoxemic AKI. In contrast to LDH, renal cortical NGAL levels did not correlate with AKI severity. In sum, the above results indicate that renal cortical LDH assay is a highly accurate quantitative technique for gauging the extent of experimental acute ischemic and toxic renal injury. That it avoids the limitations of more traditional AKI markers implies great potential utility in experimental studies that require precise quantitation of tubule cell death. PMID:23825563

  14. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    SciTech Connect

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-07-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to compare surface doses calculated by a clinically used treatment planning system with those measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool for assessing surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.

  15. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
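
    A hedged sketch of the kind of SVM workflow described above is given below. The four sequence descriptors and the toy labels are illustrative stand-ins for the paper's 35-property descriptor space and AMT-database training data; only the overall pattern (descriptors in, proteotypic/non-proteotypic classification out) follows the abstract.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Kyte-Doolittle hydropathy values, used here as one illustrative descriptor
        KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
              'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
              'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
              'Y': -1.3, 'V': 4.2}

        def descriptors(peptide):
            """Length, mean hydropathy, net charge proxy, polar fraction."""
            n = len(peptide)
            hydropathy = sum(KD[a] for a in peptide) / n
            charge = sum(peptide.count(a) for a in 'KR') - sum(peptide.count(a) for a in 'DE')
            polar = sum(peptide.count(a) for a in 'STNQ') / n
            return [n, hydropathy, charge, polar]

        # Hypothetical training peptides; 1 = observed by MS (proteotypic), 0 = not
        peptides = ['ACDEFGHIK', 'LLLVVVIIIK', 'DDEEDDEEK', 'STNQSTNQR']
        observed = [1, 1, 0, 0]

        X = np.array([descriptors(p) for p in peptides])
        y = np.array(observed)

        model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
        model.fit(X, y)
        print(model.predict(X))   # predicted proteotypic class per peptide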

  16. Structure-based constitutive model can accurately predict planar biaxial properties of aortic wall tissue.

    PubMed

    Polzer, S; Gasser, T C; Novak, K; Man, V; Tichy, M; Skacel, P; Bursa, J

    2015-03-01

    Structure-based constitutive models might help in exploring mechanisms by which arterial wall histology is linked to wall mechanics. This study aims to validate a recently proposed structure-based constitutive model. Specifically, the model's ability to predict mechanical biaxial response of porcine aortic tissue with predefined collagen structure was tested. Histological slices from porcine thoracic aorta wall (n=9) were automatically processed to quantify the collagen fiber organization, and mechanical testing identified the non-linear properties of the wall samples (n=18) over a wide range of biaxial stretches. Histological and mechanical experimental data were used to identify the model parameters of a recently proposed multi-scale constitutive description for arterial layers. The model predictive capability was tested with respect to interpolation and extrapolation. Collagen in the media was predominantly aligned in circumferential direction (planar von Mises distribution with concentration parameter bM=1.03 ± 0.23), and its coherence decreased gradually from the luminal to the abluminal tissue layers (inner media, b=1.54 ± 0.40; outer media, b=0.72 ± 0.20). In contrast, the collagen in the adventitia was aligned almost isotropically (bA=0.27 ± 0.11), and no features, such as families of coherent fibers, were identified. The applied constitutive model captured the aorta biaxial properties accurately (coefficient of determination R(2)=0.95 ± 0.03) over the entire range of biaxial deformations and with physically meaningful model parameters. Good predictive properties, well outside the parameter identification space, were observed (R(2)=0.92 ± 0.04). Multi-scale constitutive models equipped with realistic micro-histological data can predict macroscopic non-linear aorta wall properties. Collagen largely defines already low strain properties of media, which explains the origin of wall anisotropy seen at this strain level. The structure and mechanical

  17. Prediction of the confidence interval of quantitative trait Loci location.

    PubMed

    Visscher, Peter M; Goddard, Mike E

    2004-07-01

    In 1997, Darvasi and Soller presented empirical predictions of the confidence interval of quantitative trait loci (QTL) location for dense marker maps in experimental crosses. They showed from simulation results for backcross and F2 populations from inbred lines that the 95% confidence interval was a simple function of sample size and the effect of the QTL. In this study, we derive by theory simple equations that can be used to predict any confidence interval and show that for the 95% confidence interval, they are in good agreement with the empirical results given by Darvasi and Soller. A general form of the confidence interval is given that also applies to other population structures (e.g., collections of sib pairs). Furthermore, the expected shape of the likelihood-ratio-test around the true QTL location is derived, which is shown to be extremely leptokurtic. It is shown that this shape explains why confidence intervals from the Log of Odds (LOD) drop-off method and bootstrap results frequently differ for real data sets.

  18. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    NASA Astrophysics Data System (ADS)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
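
    The automated matching step lends itself to a short sketch: shift a computed spectrum by a rigid empirical offset and keep the library entry with the smallest residual. The residual definition, the grid handling, and the ±30 nm search window are assumptions for illustration, not the authors' exact procedure.

        import numpy as np

        def best_match(theory_wl, theory_int, library, shifts=np.arange(-30.0, 31.0, 1.0)):
            """Pick the experimental spectrum minimizing the sum-of-squares residual
            against a DFT-predicted spectrum, allowing a rigid wavelength offset
            (a stand-in for the empirical correction of the systematic DFT shift).
            `library` maps names to (wavelength, intensity) arrays."""
            best, best_err = None, np.inf
            for name, (wl, inten) in library.items():
                for shift in shifts:
                    pred = np.interp(wl, theory_wl + shift, theory_int, left=0.0, right=0.0)
                    if pred.max() == 0.0:
                        continue
                    err = np.sum((pred / pred.max() - inten / inten.max()) ** 2)
                    if err < best_err:
                        best, best_err = name, err
            return best, best_err

        # Tiny synthetic demo: library entry 'A' is the theory spectrum red-shifted by 15 nm
        wl = np.linspace(350.0, 500.0, 301)
        theory = np.exp(-0.5 * ((wl - 400.0) / 3.0) ** 2)
        library = {'A': (wl, np.exp(-0.5 * ((wl - 415.0) / 3.0) ** 2)),
                   'B': (wl, np.exp(-0.5 * ((wl - 470.0) / 3.0) ** 2))}
        print(best_match(wl, theory, library))   # expected to identify 'A'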

  19. New consensus definition for acute kidney injury accurately predicts 30-day mortality in cirrhosis with infection

    PubMed Central

    Wong, Florence; O’Leary, Jacqueline G; Reddy, K Rajender; Patton, Heather; Kamath, Patrick S; Fallon, Michael B; Garcia-Tsao, Guadalupe; Subramanian, Ram M.; Malik, Raza; Maliakkal, Benedict; Thacker, Leroy R; Bajaj, Jasmohan S

    2015-01-01

    Background & Aims A consensus conference proposed that cirrhosis-associated acute kidney injury (AKI) be defined as an increase in serum creatinine by >50% from the stable baseline value in <6 months or by ≥0.3 mg/dL in <48 hrs. We prospectively evaluated the ability of these criteria to predict mortality within 30 days among hospitalized patients with cirrhosis and infection. Methods 337 patients with cirrhosis who were admitted with an infection or developed one in hospital (56% men; 56±10 y old; model for end-stage liver disease score, 20±8) were followed. We compared data on 30-day mortality, hospital length-of-stay, and organ failure between patients with and without AKI. Results 166 (49%) developed AKI during hospitalization, based on the consensus criteria. Patients who developed AKI had higher admission Child-Pugh scores (11.0±2.1 vs 9.6±2.1; P<.0001) and MELD scores (23±8 vs 17±7; P<.0001), and lower mean arterial pressure (81±16 mmHg vs 85±15 mmHg; P<.01), than those who did not. Also higher amongst patients with AKI were mortality in ≤30 days (34% vs 7%), intensive care unit transfer (46% vs 20%), ventilation requirement (27% vs 6%), and shock (31% vs 8%); AKI patients also had longer hospital stays (17.8±19.8 days vs 13.3±31.8 days) (all P<.001). 56% of AKI episodes were transient, 28% persistent, and 16% resulted in dialysis. Mortality was 80% among those without renal recovery, higher than in patients with partial (40%) or complete recovery (15%) or in AKI-free patients (7%; P<.0001). Conclusions 30-day mortality is 10-fold higher among infected hospitalized cirrhotic patients with irreversible AKI than among those without AKI. The consensus definition of AKI accurately predicts 30-day mortality, length of hospital stay, and organ failure. PMID:23999172
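
    The consensus definition quoted in the abstract reduces to two threshold checks, sketched below; the function and argument names are illustrative, and real use would also need the "stable baseline" judgement the definition presumes.

        def meets_aki_criteria(creatinine_now, baseline_6mo=None, creatinine_48h_ago=None):
            """Cirrhosis-associated AKI per the consensus definition quoted above:
            serum creatinine rise of >50% from a stable baseline within 6 months,
            OR a rise of >=0.3 mg/dL within 48 hours. All values in mg/dL."""
            rise_from_baseline = (baseline_6mo is not None
                                  and creatinine_now > 1.5 * baseline_6mo)
            acute_rise = (creatinine_48h_ago is not None
                          and creatinine_now - creatinine_48h_ago >= 0.3)
            return rise_from_baseline or acute_rise

        print(meets_aki_criteria(1.6, baseline_6mo=1.0))          # True: >50% rise
        print(meets_aki_criteria(1.2, creatinine_48h_ago=1.0))    # False: 0.2 mg/dL rise
        print(meets_aki_criteria(1.35, creatinine_48h_ago=1.0))   # True: >=0.3 mg/dL rise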

  20. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  1. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selectively chosen linear, quadratic, and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
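
    Richardson extrapolation, used above to manufacture a higher-order reference solution, is easy to illustrate on a simpler second-order approximation; the central-difference derivative below is a stand-in for the finite element solutions in the study.

        import numpy as np

        def central_diff(f, x, h):
            """Second-order accurate first derivative."""
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def richardson(f, x, h, p=2):
            """Combine approximations at step h and h/2 to cancel the leading
            O(h^p) error term: (2**p * A(h/2) - A(h)) / (2**p - 1)."""
            return (2 ** p * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / (2 ** p - 1)

        x, h = 1.0, 0.1
        exact = np.cos(x)
        print(abs(central_diff(np.sin, x, h) - exact))   # ~ O(h^2) truncation error
        print(abs(richardson(np.sin, x, h) - exact))     # markedly smaller error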

  2. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    NASA Astrophysics Data System (ADS)

    Cataneo, Matteo; Rapetti, David; Lombriser, Lucas; Li, Baojiu

    2016-12-01

    We refine the mass- and environment-dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method also allows it to be applied to other scalar-tensor theories characterized by a mass- and environment-dependent spherical collapse.

  3. Accurate prediction of band gaps and optical properties of HfO2

    NASA Astrophysics Data System (ADS)

    Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka

    2016-10-01

    We report on the optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full-potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and the local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline hafnia, and a simulated annealing-based model of amorphous hafnia, were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand and to a whole electronic structure (density of electronic states) on the other hand were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.

  4. Molecular Dynamics in Mixed Solvents Reveals Protein-Ligand Interactions, Improves Docking, and Allows Accurate Binding Free Energy Predictions.

    PubMed

    Arcon, Juan Pablo; Defelipe, Lucas A; Modenutti, Carlos P; López, Elias D; Alvarez-Garcia, Daniel; Barril, Xavier; Turjanski, Adrián G; Martí, Marcelo A

    2017-03-31

    One of the most important biological processes at the molecular level is the formation of protein-ligand complexes. Therefore, determining their structure and underlying key interactions is of paramount relevance and has direct applications in drug development. Because of its low cost relative to its experimental sibling, molecular dynamics (MD) simulations in the presence of different solvent probes mimicking specific types of interactions have been increasingly used to analyze protein binding sites and reveal protein-ligand interaction hot spots. However, a systematic comparison of different probes and their real predictive power from a quantitative and thermodynamic point of view is still missing. In the present work, we have performed MD simulations of 18 different proteins in pure water as well as water mixtures of ethanol, acetamide, acetonitrile and methylammonium acetate, leading to a total of 5.4 μs simulation time. For each system, we determined the corresponding solvent sites, defined as space regions adjacent to the protein surface where the probability of finding a probe atom is higher than that in the bulk solvent. Finally, we compared the identified solvent sites with 121 different protein-ligand complexes and used them to perform molecular docking and ligand binding free energy estimates. Our results show that combining solely water and ethanol sites allows sampling over 70% of all possible protein-ligand interactions, especially those that coincide with ligand-based pharmacophoric points. Most important, we also show how the solvent sites can be used to significantly improve ligand docking in terms of both accuracy and precision, and that accurate predictions of ligand binding free energies, along with relative ranking of ligand affinity, can be performed.

  5. Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC

    NASA Technical Reports Server (NTRS)

    Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.

    2001-01-01

    The National Combustion Code (NCC) was used to calculate the steady-state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is considered rotationally periodic every ninety degrees. The final computational grid used was approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, but only four ninths of the geometry was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare well with the LDV data, both qualitatively and quantitatively, but differences exist downstream. The comparison is encouraging, and shows that NCC can be used for future injector design studies. Recommendations are given for improving the accuracy with which the NCC predicts turbulent, three-dimensional, recirculating flow fields.

  6. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better

  7. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    NASA Astrophysics Data System (ADS)

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2013-11-01

    Microgravity-induced bone loss represents a critical health problem in astronauts; it occurs particularly in the weight-supporting skeleton and leads to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimate of bone's structural integrity.
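
    The "linear combination of both BUA and UV" reported above is an ordinary two-predictor regression; the sketch below shows the form with hypothetical numbers, since the study's per-sample data are not given in the abstract.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical per-sample values; the study measured 60 trabecular cubes
        bua = np.array([18.0, 22.5, 25.1, 30.4, 35.2, 40.8])             # attenuation (illustrative units)
        uv = np.array([1580.0, 1610.0, 1640.0, 1675.0, 1700.0, 1730.0])  # velocity, m/s (illustrative)
        bvtv = np.array([0.12, 0.15, 0.18, 0.22, 0.26, 0.30])            # uCT bone volume fraction

        X = np.column_stack([bua, uv])              # BV/TV ~ b0 + b1*BUA + b2*UV
        model = LinearRegression().fit(X, bvtv)
        print("R^2 =", round(model.score(X, bvtv), 3))
        print("predicted BV/TV:", model.predict([[28.0, 1660.0]]))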

  8. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    PubMed Central

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2012-01-01

    Microgravity-induced bone loss represents a critical health problem in astronauts; it occurs particularly in the weight-supporting skeleton and leads to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone’s mechanical strength and structural parameters, i.e., bulk Young’s modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young’s modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimate of bone’s structural integrity. PMID:23976803

  9. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound.

    PubMed

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2013-11-01

    Microgravity-induced bone loss represents a critical health problem in astronauts; it occurs particularly in the weight-supporting skeleton and leads to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R(2)=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R(2)=0.67) and BV/TV (R(2)=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R(2)=0.92 for BV/TV and R(2)=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimate of bone's structural integrity.

  10. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.

  11. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
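
    The abstract does not spell out the copy-number arithmetic, so the sketch below uses a generic relative-quantification (2^-ΔΔCq) calculation, one common way to call a heterozygous deletion of a target such as PRKCZ or SKI against a reference amplicon and a normal control; it is illustrative rather than the authors' exact pipeline.

        def relative_copy_number(cq_target, cq_reference, cq_target_ctrl, cq_reference_ctrl):
            """Relative copy number of a target locus versus a normal calibrator
            sample via the 2^-ddCq method (assumes ~100% PCR efficiency).
            A value near 0.5 suggests a heterozygous deletion; near 1.0, two copies."""
            d_cq_sample = cq_target - cq_reference
            d_cq_control = cq_target_ctrl - cq_reference_ctrl
            return 2.0 ** -(d_cq_sample - d_cq_control)

        # Hypothetical Cq values: the patient's target amplifies ~1 cycle later than the control's
        print(round(relative_copy_number(26.1, 24.0, 25.1, 24.0), 2))   # ~0.5 -> consistent with deletion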

  12. Raoult’s law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments

    PubMed Central

    Bowler, Michael G.

    2017-01-01

    The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111–114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult’s law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult’s law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult’s law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample. PMID:28381983

  13. Raoult's law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments.

    PubMed

    Bowler, Michael G; Bowler, David R; Bowler, Matthew W

    2017-04-01

    The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111-114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult's law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult's law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult's law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample.
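
    Raoult's law as restated above translates directly into a one-line estimate of the equilibrium RH over an aqueous solution; the sketch below applies the ideal, small-solute form (the article notes that corrections are needed when solute and solvent molecules differ greatly in size, as for polymers), and the glycerol example is purely illustrative.

        def equilibrium_rh(solute_grams, solute_molar_mass, water_grams=100.0):
            """Equilibrium relative humidity (%) above an ideal aqueous solution:
            Raoult's law gives p/p0 = mole fraction of water."""
            n_water = water_grams / 18.015
            n_solute = solute_grams / solute_molar_mass
            return 100.0 * n_water / (n_water + n_solute)

        # Example: 30 g glycerol (92.09 g/mol) per 100 g water
        print(round(equilibrium_rh(30.0, 92.09), 1))   # ~94.5% RH in the ideal-solution limit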

  14. How Accurate Are the Anthropometry Equations in Iranian Military Men in Predicting Body Composition?

    PubMed Central

    Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza

    2015-01-01

    Background: Body composition varies with lifestyle (i.e., caloric intake and expenditure). Therefore, it is wise to record military personnel's body composition periodically and encourage those who abide by the regulations. Different methods have been introduced for body composition assessment: invasive and non-invasive. Amongst them, the Jackson and Pollock equation is most popular. Objectives: The recommended anthropometric prediction equations for assessing men's body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23-52 years (mean age, 35.5 years) were recruited and evaluated in the present study (average height, 173.9 cm and weight, 81.5 kg). The body-fat percentages of subjects were assessed both by anthropometry and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean ± standard deviation body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three equations of Jackson and Pollock was very close to that of body fat obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site equation analysis method is simpler and faster than other methods. PMID:26715964
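
    For reference, the commonly published form of the Jackson-Pollock 3-site equation for men (chest, abdomen and thigh skinfolds) with the Siri conversion is sketched below; the coefficients are quoted from general literature, not from this paper, which fits its own modified version against DEXA.

        def body_fat_jp3_men(chest_mm, abdomen_mm, thigh_mm, age_years):
            """Percent body fat: Jackson-Pollock 3-site body density (men) followed by
            the Siri equation. Coefficients are the commonly cited published values,
            not the modified equation proposed in the study."""
            s = chest_mm + abdomen_mm + thigh_mm
            density = 1.10938 - 0.0008267 * s + 0.0000016 * s ** 2 - 0.0002574 * age_years
            return 495.0 / density - 450.0

        print(round(body_fat_jp3_men(12.0, 20.0, 15.0, 35), 1))   # ~15% for this example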

  15. Quantitative and Morphological Measures May Predict Growth and Mortality During Prenatal Growth in Japanese Quails

    PubMed Central

    Arora, Kashmiri L.; Vatsalya, Vatsalya

    2014-01-01

    Growth pattern and mortality rate during the embryonic phase of avian species are difficult to recognize and predict. Determination of such measures and associated events may enhance our understanding of characteristics involved in the growth and hatching process. Furthermore, some quantitative measures could validate morphological determinants during the embryonic phase and predict the course of normal growth and alterations. Our aim was to characterize quantitative growth of embryos and to establish baseline embryonic standards for use in comparative and pathological research during the prenatal life of Japanese quail. Day 10 was a landmark timeline for initiation of extensive anatomical changes in growth and transformation. Wet and dry weights were positively correlated with each other and inversely correlated with water content (p = 0.05). Following d10, the water content decreased progressively, whereas, dry and wet weights increased with increasing age. Velocity of growth in wet and dry weights was evident starting d6, spiked at d11 and d15 and then declined before hatching on d16. Organic and inorganic contents of embryos were positively associated with age. Progressive increase in the organic to inorganic ratio with age was evident after d5, spiked on d9, d13 and d16. Accurate determinations of prenatal growth processes could serve as valuable tools in identifying morphological developments and characterization of prenatal growth and mortality, thus enhancing the reproductive efficiency of the breeding colony and the postnatal robustness of the offspring. PMID:25285101

  16. Combinatorial modeling of chromatin features quantitatively predicts DNA replication timing in Drosophila.

    PubMed

    Comoglio, Federico; Paro, Renato

    2014-01-01

    In metazoans, each cell type follows a characteristic, spatio-temporally regulated DNA replication program. Histone modifications (HMs) and chromatin binding proteins (CBPs) are fundamental for a faithful progression and completion of this process. However, no individual HM is strictly indispensable for origin function, suggesting that HMs may act combinatorially in analogy to the histone code hypothesis for transcriptional regulation. In contrast to gene expression however, the relationship between combinations of chromatin features and DNA replication timing has not yet been demonstrated. Here, by exploiting a comprehensive data collection consisting of 95 CBPs and HMs we investigated their combinatorial potential for the prediction of DNA replication timing in Drosophila using quantitative statistical models. We found that while combinations of CBPs exhibit moderate predictive power for replication timing, pairwise interactions between HMs lead to accurate predictions genome-wide that can be locally further improved by CBPs. Independent feature importance and model analyses led us to derive a simplified, biologically interpretable model of the relationship between chromatin landscape and replication timing reaching 80% of the full model accuracy using six model terms. Finally, we show that pairwise combinations of HMs are able to predict differential DNA replication timing across different cell types. All in all, our work provides support to the existence of combinatorial HM patterns for DNA replication and reveal cell-type independent key elements thereof, whose experimental investigation might contribute to elucidate the regulatory mode of this fundamental cellular process.
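
    A minimal sketch of a model with pairwise interaction terms among chromatin-mark signals is shown below; the feature matrix and response are synthetic placeholders standing in for the 95 HM/CBP profiles and measured replication timing, and the plain linear model is only one possible choice of quantitative statistical model.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_bins, n_marks = 500, 6                  # genomic bins x histone marks (placeholders)
        X = rng.normal(size=(n_bins, n_marks))    # stand-in for HM enrichment signals
        # Synthetic replication timing driven by one main effect and one HM-HM interaction
        y = 1.5 * X[:, 0] - 2.0 * X[:, 1] * X[:, 2] + rng.normal(scale=0.3, size=n_bins)

        pairwise = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
        X_pair = pairwise.fit_transform(X)        # main effects plus all pairwise products

        X_tr, X_te, y_tr, y_te = train_test_split(X_pair, y, random_state=0)
        model = LinearRegression().fit(X_tr, y_tr)
        print("held-out R^2:", round(model.score(X_te, y_te), 2))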

  17. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation.

  18. Investigation and prediction of protein precipitation by polyethylene glycol using quantitative structure-activity relationship models.

    PubMed

    Hämmerling, Frank; Ladd Effio, Christopher; Andris, Sebastian; Kittelmann, Jörg; Hubbuch, Jürgen

    2017-01-10

    Precipitation of proteins is considered to be an effective purification method for proteins and has proven its potential to replace costly chromatography processes. Besides salts and polyelectrolytes, polymers, such as polyethylene glycol (PEG), are commonly used for precipitation applications under mild conditions. Process development, however, for protein precipitation steps still is based mainly on heuristic approaches and high-throughput experimentation due to a lack of understanding of the underlying mechanisms. In this work we apply quantitative structure-activity relationships (QSARs) to model two parameters, the discontinuity point m* and the β-value, that describe the complete precipitation curve of a protein under defined conditions. The generated QSAR models are sensitive to the protein type, pH, and ionic strength. It was found that the discontinuity point m* is mainly dependent on protein molecular structure properties and electrostatic surface properties, whereas the β-value is influenced by the variance in electrostatics and hydrophobicity on the protein surface. The models for m* and the β-value exhibit a good correlation between observed and predicted data with a coefficient of determination of R(2)≥0.90 and, hence, are able to accurately predict precipitation curves for proteins. The predictive capabilities were demonstrated for a set of combinations of protein type, pH, and ionic strength not included in the generation of the models and good agreement between predicted and experimental data was achieved.

  19. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
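
    A hedged sketch of the stochastic gradient boosting step named above follows; the descriptors and ΔΔG values are synthetic placeholders rather than the ELASPIC feature set, and the scikit-learn regressor (with subsampling to make the boosting stochastic) stands in for the SGB-DT implementation used by the authors.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_mutations, n_features = 400, 10                 # placeholders for energy terms,
        X = rng.normal(size=(n_mutations, n_features))    # conservation scores, structural details...
        ddg = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=n_mutations)  # synthetic target

        model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05,
                                          subsample=0.7, random_state=0)  # subsample < 1 => stochastic
        scores = cross_val_score(model, X, ddg, cv=5, scoring='r2')
        print("cross-validated R^2:", round(scores.mean(), 2))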

  20. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes.

  1. Development of quantitative structure property relationships for predicting the melting point of energetic materials.

    PubMed

    Morrill, Jason A; Byrd, Edward F C

    2015-11-01

    The accurate prediction of the melting temperature of organic compounds is a significant problem that has eluded researchers for many years. The most common approach used to develop predictive models entails the derivation of quantitative structure-property relationships (QSPRs), which are multivariate linear relationships between calculated quantities that are descriptors of molecular or electronic features and a property of interest. In this report, the derivation of QSPRs to predict the melting temperatures of energetic materials, based on descriptors calculated using the AM1 semiempirical quantum mechanical method, is described. In total, the melting points and experimental crystal structures of 148 energetic materials were analyzed. Principal components analysis was performed in order to assess the relative importance and roles of the descriptors in our QSPR models. Also described are the results of k-means cluster analysis, performed in order to identify natural groupings within our study set of structures. The QSPR models resulting from these analyses gave training set R(2) values of 0.6085 (RMSE = ± 15.7 °C) and 0.7468 (RMSE = ± 13.2 °C). The test sets for these clusters had R(2) values of 0.9428 (RMSE = ± 7.0 °C) and 0.8974 (RMSE = ± 8.8 °C), respectively. These models are among the best melting point QSPRs yet published for energetic materials.
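
    A compact sketch of the workflow outlined above (cluster the compounds, then fit one multivariate linear QSPR per cluster) is given below; the descriptor matrix and melting points are random placeholders rather than AM1-derived values, and two clusters are assumed only to mirror the two reported models.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n_compounds, n_descriptors = 148, 8                 # placeholders (148 mirrors the study size)
        X = rng.normal(size=(n_compounds, n_descriptors))   # stand-ins for AM1-derived descriptors
        t_melt = 150.0 + 20.0 * X[:, 0] - 10.0 * X[:, 1] + rng.normal(scale=12.0, size=n_compounds)

        Xs = StandardScaler().fit_transform(X)
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)

        for c in (0, 1):                                    # one multivariate linear QSPR per cluster
            mask = clusters == c
            model = LinearRegression().fit(Xs[mask], t_melt[mask])
            rmse = float(np.sqrt(np.mean((model.predict(Xs[mask]) - t_melt[mask]) ** 2)))
            print(f"cluster {c}: n={mask.sum()}, R^2={model.score(Xs[mask], t_melt[mask]):.2f}, RMSE={rmse:.1f} C")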

  2. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for a sophisticated power forecasts. Consequently, a well-timed energy trading on the stock market, and electrical grid stability can be maintained. The German Weather Service (DWD) currently is involved with two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP-model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the (NWP) wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  3. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    PubMed Central

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires testing for hot-spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild-type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele-specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases, and all had a mutated/wild-type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific, with greater accuracy and positive and negative predictive values than Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective and can easily be adapted to detect hot-spot mutations in other oncogenes. PMID:22558339

  4. Predicting in vivo glioma growth with the reaction diffusion equation constrained by quantitative magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Hormuth, David A., II; Weis, Jared A.; Barnes, Stephanie L.; Miga, Michael I.; Rericha, Erin C.; Quaranta, Vito; Yankeelov, Thomas E.

    2015-07-01

    Reaction-diffusion models have been widely used to model glioma growth. However, it has not been shown how accurately this model can predict future tumor status using model parameters (i.e., tumor cell diffusion and proliferation) estimated from quantitative in vivo imaging data. To this end, we used in silico studies to develop the methods needed to accurately estimate tumor specific reaction-diffusion model parameters, and then tested the accuracy with which these parameters can predict future growth. The analogous study was then performed in a murine model of glioma growth. The parameter estimation approach was tested using an in silico tumor ‘grown’ for ten days as dictated by the reaction-diffusion equation. Parameters were estimated from early time points and used to predict subsequent growth. Prediction accuracy was assessed at global (total volume and Dice value) and local (concordance correlation coefficient, CCC) levels. Guided by the in silico study, rats (n = 9) with C6 gliomas, imaged with diffusion weighted magnetic resonance imaging, were used to evaluate the model’s accuracy for predicting in vivo tumor growth. The in silico study resulted in low global (tumor volume error <8.8%, Dice >0.92) and local (CCC values >0.80) level errors for predictions up to six days into the future. The in vivo study showed higher global (tumor volume error >11.7%, Dice <0.81) and higher local (CCC <0.33) level errors over the same time period. The in silico study shows that model parameters can be accurately estimated and used to accurately predict future tumor growth at both the global and local scale. However, the poor predictive accuracy in the experimental study suggests the reaction-diffusion equation is an incomplete description of in vivo C6 glioma biology and may require further modeling of intra-tumor interactions including segmentation of (for example) proliferative and necrotic regions.
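    For reference, the reaction-diffusion model of glioma growth referred to in this record is most commonly written with a logistic proliferation term; the form below is a standard textbook statement under that assumption (the abstract does not spell out the authors' exact parameterization):

```latex
\frac{\partial N(\mathbf{x},t)}{\partial t}
  = \nabla \cdot \bigl( D \,\nabla N(\mathbf{x},t) \bigr)
  + k \, N(\mathbf{x},t)\left( 1 - \frac{N(\mathbf{x},t)}{\theta} \right)
```

    Here N is the tumor cell number per voxel (estimated from diffusion-weighted MRI), D the cell diffusion coefficient, k the proliferation rate, and θ the carrying capacity; D and k are the tumor-specific parameters estimated from the early imaging time points and then used to forecast subsequent growth.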

  5. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
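    As a rough illustration of the approach described above, the sketch below trains a regressor to map plan-derived leaf features (position, speed, direction of travel) to the delivered-minus-planned position. The synthetic data, the feature set, and the choice of a random forest are illustrative assumptions; the record does not specify the authors' learner or features in full.

```python
# Sketch: regress MLC positional error from plan-derived leaf features.
# Features and the random-forest choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(-100, 100, n),   # planned leaf position (mm)
    rng.uniform(0, 25, n),       # leaf speed (mm/s)
    rng.choice([-1, 1], n),      # moving toward (+1) or away (-1) from isocenter
])
# Synthetic target: delivered minus planned position (mm), for demonstration only.
y = 0.02 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```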

  6. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) has been proposed as a valid computing method to successfully predict nonlinear optical properties (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, first, we follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), which is freely accessible to the public at the web site http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research.

  7. Sensor data fusion for accurate cloud presence prediction using Dempster-Shafer evidence theory.

    PubMed

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
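    Dempster's rule of combination, the core of the fusion step described above, can be sketched for two sensors whose evidence is expressed as mass functions over the frame {cloud, clear}; the mass values below are illustrative assumptions, not the paper's actual basic probability assignments.

```python
# Sketch of Dempster's rule of combination for two radiation sensors.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions defined on frozensets of the frame."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to contradictory hypotheses
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
EITHER = CLOUD | CLEAR                   # total ignorance of a sensor

sensor_a = {CLOUD: 0.6, CLEAR: 0.1, EITHER: 0.3}   # illustrative masses
sensor_b = {CLOUD: 0.5, CLEAR: 0.2, EITHER: 0.3}
print(combine(sensor_a, sensor_b))
```

    Mass assigned to the full frame represents a sensor's ignorance; the normalization by (1 - conflict) redistributes mass placed on contradictory hypotheses, which is what lets the fused belief outperform either sensor alone.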

  8. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    PubMed

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy.

  9. NESmapper: accurate prediction of leucine-rich nuclear export signals using activity-based profiles.

    PubMed

    Kosugi, Shunichi; Yanagawa, Hiroshi; Terauchi, Ryohei; Tabata, Satoshi

    2014-09-01

    The nuclear export of proteins is regulated largely through the exportin/CRM1 pathway, which involves the specific recognition of leucine-rich nuclear export signals (NESs) in the cargo proteins, and modulates nuclear-cytoplasmic protein shuttling by antagonizing the nuclear import activity mediated by importins and the nuclear import signal (NLS). Although the prediction of NESs can help to define proteins that undergo regulated nuclear export, current methods of predicting NESs, including computational tools and consensus-sequence-based searches, have limited accuracy, especially in terms of their specificity. We found that each residue within an NES largely contributes independently and additively to the entire nuclear export activity. We created activity-based profiles of all classes of NESs with a comprehensive mutational analysis in mammalian cells. The profiles highlight a number of specific activity-affecting residues not only at the conserved hydrophobic positions but also in the linker and flanking regions. We then developed a computational tool, NESmapper, to predict NESs by using profiles that had been further optimized by training and combining the amino acid properties of the NES-flanking regions. This tool successfully reduced the considerable number of false positives, and the overall prediction accuracy was higher than that of other methods, including NESsential and Wregex. This profile-based prediction strategy is a reliable way to identify functional protein motifs. NESmapper is available at http://sourceforge.net/projects/nesmapper.

  10. Multi-omics integration accurately predicts cellular state in unexplored conditions for Escherichia coli

    PubMed Central

    Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias

    2016-01-01

    A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404

  11. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    SciTech Connect

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of

  12. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; ...

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  13. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-04

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and
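    For context, the computed reduction potentials, pKa values, and ligand binding constants reported in these three records follow from standard thermodynamic relations between calculated solution-phase reaction free energies and the observables. The generic relations are given below as an assumption about the underlying bookkeeping; the specific thermodynamic cycles and reference values used by the authors are not stated in the abstracts.

```latex
E^{\circ} = -\frac{\Delta G_{\mathrm{red}}}{nF} - E^{\mathrm{abs}}_{\mathrm{SHE}},
\qquad
\mathrm{p}K_{\mathrm{a}} = \frac{\Delta G_{\mathrm{deprot}}}{RT\ln 10},
\qquad
\log K_{\mathrm{on/off}} = -\frac{\Delta G_{\mathrm{bind}}}{RT\ln 10}
```

    Here the ΔG values are solution-phase free energy changes from DFT plus continuum solvation, n is the number of electrons transferred, F the Faraday constant, and E^abs_SHE the absolute potential of the standard hydrogen electrode.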

  14. Prediction of photosensitivity of 1,4-dihydropyridine antihypertensives by quantitative structure-property relationship.

    PubMed

    Ioele, Giuseppina; De Luca, Michele; Oliverio, Filomena; Ragno, Gaetano

    2009-10-15

    A quantitative structure-property relationship (QSPR) model, correlating light sensitivity with theoretical molecular descriptors, was developed for a set of 1,4-dihydropyridine calcium channel antagonist drugs. These compounds are characterized by a high tendency to degrade when exposed to light, furnishing in most cases a related oxidation product from aromatization of the dihydropyridinic ring. Photodegradation was forced by exposing the drugs to a Xenon lamp, in accordance with the ICH international rules, and degradation kinetics was monitored by spectrophotometry. The photodegradation rates, combined with a series of descriptors related to the chemical structures, were processed by Partial Least Squares (PLS) multivariate analysis. An accurate selection of the variables best fitting the PLS model was performed. Two descriptors related to the substituent information on both the dihydropyridinic and benzenic rings and four molecular descriptors were selected. The QSPR model was fully cross-validated and then optimized with an external set of novel 1,4-dihydropyridine drugs, obtaining very satisfactory statistical results. The good agreement between predicted and measured photodegradation rates (R(2)=0.8727) demonstrated the high accuracy of the QSPR model in predicting the photosensitivity of the drugs belonging to this class. The model was finally proposed as an effective tool to design new congeneric molecules characterized by high photostability.
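    A minimal sketch of the kind of PLS-based QSPR fit described above is shown below. The descriptor matrix and regression coefficients are synthetic stand-ins, since the six descriptors actually selected by the authors are only summarized in the abstract.

```python
# Sketch: PLS regression of photodegradation rate on molecular descriptors,
# with leave-group-out cross-validation. Data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))                       # 20 drugs x 6 descriptors
k_photo = X @ np.array([0.8, -0.4, 0.3, 0.1, 0.0, 0.2]) + rng.normal(0, 0.1, 20)

pls = PLSRegression(n_components=2)
k_pred = cross_val_predict(pls, X, k_photo, cv=5).ravel()
r2 = 1 - np.sum((k_photo - k_pred) ** 2) / np.sum((k_photo - k_photo.mean()) ** 2)
print(f"cross-validated R^2 = {r2:.3f}")
```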

  15. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  16. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    PubMed Central

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113

  17. Accurate ab initio prediction of NMR chemical shifts of nucleic acids and nucleic acids/protein complexes

    PubMed Central

    Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.

    2014-01-01

    NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135

  18. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  19. Dynamics of Flexible MLI-type Debris for Accurate Orbit Prediction

    DTIC Science & Technology

    2014-09-01

    [Report documentation excerpt; no abstract available.] Subject terms: EOARD, orbital debris, HAMR objects, multi-layered insulation, orbital dynamics, orbit predictions, orbital propagation. Figure captions reference NASA Orbital Debris Program Office imagery of orbital debris, including impact damage from a paint fleck during the STS-7 mission.

  20. Hippocampus neuronal metabolic gene expression outperforms whole tissue data in accurately predicting Alzheimer's disease progression.

    PubMed

    Stempler, Shiri; Waldman, Yedael Y; Wolf, Lior; Ruppin, Eytan

    2012-09-01

    Numerous metabolic alterations are associated with the impairment of brain cells in Alzheimer's disease (AD). Here we use gene expression microarrays of both whole hippocampus tissue and hippocampal neurons of AD patients to investigate the ability of metabolic gene expression to predict AD progression and its cognitive decline. We find that the prediction accuracy of different AD stages is markedly higher when using neuronal expression data (0.9) than when using whole tissue expression (0.76). Furthermore, the metabolic genes' expression is shown to be as effective in predicting AD severity as the entire gene list. Remarkably, a regression model from hippocampal metabolic gene expression leads to a marked correlation of 0.57 with the Mini-Mental State Examination cognitive score. Notably, the expression of top predictive neuronal genes in AD is significantly higher than that of other metabolic genes in the brains of healthy subjects. Altogether, the analyses point to a subset of metabolic genes that is strongly associated with normal brain functioning and whose disruption plays a major role in AD.

  1. Predicting repeat self-harm in children--how accurate can we expect to be?

    PubMed

    Chitsabesan, Prathiba; Harrington, Richard; Harrington, Valerie; Tomenson, Barbara

    2003-01-01

    The main objective of the study was to find which variables predict repetition of deliberate self-harm in children. The study is based on a group of children who took part in a randomized control trial investigating the effects of a home-based family intervention for children who had deliberately poisoned themselves. These children had a range of baseline and outcome measures collected on two occasions (two and six months follow-up). Outcome data were collected from 149 (92 %) of the initial 162 children over the six months. Twenty-three children made a further deliberate self-harm attempt within the follow-up period. A number of variables at baseline were found to be significantly associated with repeat self-harm. Parental mental health and a history of previous attempts were the strongest predictors. A model of prediction of further deliberate self-harm combining these significant individual variables produced a high positive predictive value (86 %) but had low sensitivity (28 %). Predicting repeat self-harm in children is difficult, even with a comprehensive series of assessments over multiple time points, and we need to adapt services with this in mind. We propose a model of service provision which takes these findings into account.

  2. Multireference correlation consistent composite approach [MR-ccCA]: toward accurate prediction of the energetics of excited and transition state chemistry.

    PubMed

    Oyedepo, Gbenga A; Wilson, Angela K

    2010-08-26

    The correlation consistent Composite Approach, ccCA [Deyonker, N. J.; Cundari, T. R.; Wilson, A. K. J. Chem. Phys. 2006, 124, 114104], has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost, as compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and enthalpies of formation for the ground states for a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for application to study novel chemical systems with significant nondynamical correlation effects.

  3. Accurate prediction of the optical rotation and NMR properties for highly flexible chiral natural products.

    PubMed

    Hashmi, Muhammad Ali; Andreassend, Sarah K; Keyzers, Robert A; Lein, Matthias

    2016-09-21

    Despite advances in electronic structure theory the theoretical prediction of spectroscopic properties remains a computational challenge. This is especially true for natural products that exhibit very large conformational freedom and hence need to be sampled over many different accessible conformations. We report a strategy, which is able to predict NMR chemical shifts and more elusive properties like the optical rotation with great precision, through step-wise incremental increases of the conformational degrees of freedom. The application of this method is demonstrated for 3-epi-xestoaminol C, a chiral natural compound with a long, linear alkyl chain of 14 carbon atoms. Experimental NMR and [α]D values are reported to validate the results of the density functional theory calculations.

  4. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
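    The random-sampling idea described above can be sketched as a simple streaming pass over a FASTA file, keeping each record with probability of roughly 0.01. The file names and the per-record Bernoulli sampling are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: down-sample ~1% of FASTA records from a large reference database
# (e.g., UniRef100) to build a reduced database for PSSM generation.
import random

def sample_fasta(path_in, path_out, fraction=0.01, seed=0):
    rng = random.Random(seed)
    keep = False
    with open(path_in) as fin, open(path_out, "w") as fout:
        for line in fin:
            if line.startswith(">"):       # decide once per record header
                keep = rng.random() < fraction
            if keep:
                fout.write(line)

# Hypothetical usage:
# sample_fasta("uniref100.fasta", "uniref100_1pct.fasta")
```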

  5. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.

    PubMed

    Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L

    2015-06-30

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives.
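    For reference, N4SID identifies a discrete-time linear state-space model in innovation form; a generic statement of the model structure is given below as an assumption (the model order and the exact signal definitions used by the authors are not given in the abstract):

```latex
x_{k+1} = A\,x_k + B\,u_k + K\,e_k,
\qquad
y_k = C\,x_k + D\,u_k + e_k
```

    Here u_k would be the monitored hemodynamic inputs, y_k the predicted symptomatic output, x_k a latent state, and e_k the innovation; the matrices A, B, C, D and K are estimated per patient from the ambulatory recordings.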

  6. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were predicted to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association of the surface morphology of a peptide–MHC complex with its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  7. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-07

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive model of analyte retention time is presented. This algorithm is an alternative to traditional methods because it transforms the calculation into root-finding problems within defined intervals. The proposed approach allows for retention time (tR) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
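    A hedged sketch of the root-finding idea is shown below: the retention time is located as the root of the analyte's fractional-migration equation inside a bracketing interval, rather than by stepping the integration forward in time. The retention-factor model k(T) = exp(a + b/T), the linear temperature ramp, and the constant hold-up time are illustrative assumptions, not the authors' exact parameterization.

```python
# Sketch: GC retention time by bracketed root finding on the migration integral.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, b = -10.0, 4000.0          # illustrative thermodynamic parameters of the analyte
T0, ramp = 320.0, 0.25        # initial temperature (K) and heating rate (K/s)
t_M = 60.0                    # column hold-up time (s), assumed constant

def k(T):                     # retention factor at temperature T
    return np.exp(a + b / T)

def migration(t):             # fractional migration of the analyte at time t
    val, _ = quad(lambda tp: 1.0 / (t_M * (1.0 + k(T0 + ramp * tp))), 0.0, t)
    return val

# Retention time is the root of migration(t) - 1 = 0 within a bracketing interval.
t_R = brentq(lambda t: migration(t) - 1.0, 1.0, 5000.0)
print(f"predicted retention time: {t_R:.1f} s")
```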

  8. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.

  9. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  10. Accurate prediction of drug-induced liver injury using stem cell-derived populations.

    PubMed

    Szkolnicka, Dagmara; Farnworth, Sarah L; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P; Flint, Oliver; Hay, David C

    2014-02-01

    Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies.

  11. Narcissism and childhood recollections: a quantitative test of psychoanalytic predictions.

    PubMed

    Otway, Lorna J; Vignoles, Vivian L

    2006-01-01

    Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.

  12. Computing organic stereoselectivity - from concepts to quantitative calculations and predictions.

    PubMed

    Peng, Qian; Duarte, Fernanda; Paton, Robert S

    2016-11-07

    Advances in theory and processing power have established computation as a valuable interpretative and predictive tool in the discovery of new asymmetric catalysts. This tutorial review outlines the theory and practice of modeling stereoselective reactions. Recent examples illustrate how an understanding of the fundamental principles and the application of state-of-the-art computational methods may be used to gain mechanistic insight into organic and organometallic reactions. We highlight the emerging potential of this computational tool-box in providing meaningful predictions for the rational design of asymmetric catalysts. We present an accessible account of the field to encourage future synergy between computation and experiment.

  13. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    NASA Astrophysics Data System (ADS)

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-02-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
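    A minimal form of the chemical kinetic model described above tracks the probability P_F that the domain is folded while the ribosome dwells at each codon; the two-state formulation below is an assumption consistent with the abstract (the authors' full model may include additional states or tunnel-emergence criteria):

```latex
\frac{dP_{F}(t)}{dt} = k_{F}\,\bigl(1 - P_{F}(t)\bigr) - k_{U}\,P_{F}(t),
\qquad
\tau_{i} = \frac{1}{\omega_{i}}
```

    Here k_F and k_U are the domain's bulk folding and unfolding rates, and the equation is integrated over the dwell time τ_i set by the translation rate ω_i of each codon i once the domain has emerged from the exit tunnel; slower codons lengthen the time available for co-translational folding, which is how synonymous substitutions can switch a domain from post- to co-translational folding.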

  14. Can tritiated water-dilution space accurately predict total body water in chukar partridges

    SciTech Connect

    Crum, B.G.; Williams, J.B.; Nagy, K.A.

    1985-11-01

    Total body water (TBW) volumes determined from the dilution space of injected tritiated water have consistently overestimated actual water volumes (determined by desiccation to constant mass) in reptiles and mammals, but results for birds are controversial. We investigated potential errors in both the dilution method and the desiccation method in an attempt to resolve this controversy. Tritiated water dilution yielded an accurate measurement of water mass in vitro. However, in vivo, this method yielded a 4.6% overestimate of the amount of water (3.1% of live body mass) in chukar partridges, apparently largely because of loss of tritium from body water to sites of dissociable hydrogens on body solids. An additional source of overestimation (approximately 2% of body mass) was loss of tritium to the solids in blood samples during distillation of blood to obtain pure water for tritium analysis. Measuring tritium activity in plasma samples avoided this problem but required measurement of, and correction for, the dry matter content in plasma. Desiccation to constant mass by lyophilization or oven-drying also overestimated the amount of water actually in the bodies of chukar partridges by 1.4% of body mass, because these values included water adsorbed onto the outside of feathers. When desiccating defeathered carcasses, oven-drying at 70 degrees C yielded TBW values identical to those obtained from lyophilization, but TBW was overestimated (0.5% of body mass) by drying at 100 degrees C due to loss of organic substances as well as water.

  15. Does preoperative cross-sectional imaging accurately predict main duct involvement in intraductal papillary mucinous neoplasm?

    PubMed

    Barron, M R; Roch, A M; Waters, J A; Parikh, J A; DeWitt, J M; Al-Haddad, M A; Ceppa, E P; House, M G; Zyromski, N J; Nakeeb, A; Pitt, H A; Schmidt, C Max

    2014-03-01

    Main pancreatic duct (MPD) involvement is a well-demonstrated risk factor for malignancy in intraductal papillary mucinous neoplasm (IPMN). Preoperative radiographic determination of IPMN type is heavily relied upon in oncologic risk stratification. We hypothesized that radiographic assessment of MPD involvement in IPMN is an accurate predictor of pathological MPD involvement. Data regarding all patients undergoing resection for IPMN at a single academic institution between 1992 and 2012 were gathered prospectively. Retrospective analysis of imaging and pathologic data was undertaken. Preoperative classification of IPMN type was based on cross-sectional imaging (MRI/magnetic resonance cholangiopancreatography (MRCP) and/or CT). Three hundred sixty-two patients underwent resection for IPMN. Of these, 334 had complete data for analysis. Of 164 suspected branch duct (BD) IPMN, 34 (20.7%) demonstrated MPD involvement on final pathology. Of 170 patients with suspicion of MPD involvement, 50 (29.4%) demonstrated no MPD involvement. Of 34 patients with suspected BD-IPMN who were found to have MPD involvement on pathology, 10 (29.4%) had invasive carcinoma. Alternatively, 2/50 (4%) of the patients with suspected MPD involvement who ultimately had isolated BD-IPMN demonstrated invasive carcinoma. Preoperative radiographic IPMN type did not correlate with final pathology in 25% of the patients. In addition, risk of invasive carcinoma correlates with pathologic presence of MPD involvement.

  16. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach

    PubMed Central

    Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng

    2015-01-01

    The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined as fragments located at the ends of an ordered region where it connects to a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as a feature set to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is determined to be ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established publicly available disordered region predictors. The results also indicated that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958

  17. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
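    As an example of the a posteriori size-extensivity corrections mentioned above, the (multireference) Davidson correction estimates the contribution of missing higher excitations from the weight of the reference configurations. The abstract does not name the specific corrections used, so the expression below is given only as the most common form of such a correction:

```latex
E_{\mathrm{MRCI+Q}} \;\approx\; E_{\mathrm{MRCISD}}
  + \bigl(1 - c_{0}^{2}\bigr)\,\bigl(E_{\mathrm{MRCISD}} - E_{\mathrm{ref}}\bigr)
```

    Here c_0^2 is the total weight of the reference configurations in the normalized MRCI wavefunction; the corrected, complete-basis-set-extrapolated energies of the parent molecule and its fragments then yield the bond dissociation energy as their difference.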

  18. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  19. PREDICTING TOXICOLOGICAL ENDPOINTS OF CHEMICALS USING QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIPS (QSARS)

    EPA Science Inventory

    Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...

  20. Computational methods toward accurate RNA structure prediction using coarse-grained and all-atom models.

    PubMed

    Krokhotin, Andrey; Dokholyan, Nikolay V

    2015-01-01

    Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring on timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential, reliant on data obtained from hydroxyl probing experiments, that guides RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures of up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computational and experimental methods.

  1. Neural network and SVM classifiers accurately predict lipid binding proteins, irrespective of sequence homology.

    PubMed

    Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil

    2014-09-07

    Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function, which results in the low accuracy of sequence homology based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network classifiers were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that the SVM outperforms the neural network. The SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) accuracy, on average, for classification of different LBP classes. Increasing the number and the range of extracted protein features as well as optimization of the SVM parameters significantly increased the efficiency of LBP class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool for the detection of LBP classes. The proposed approach has the potential to integrate and improve the common sequence alignment based methods.

  2. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER- patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  3. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the period 2003-2008, was employed to develop models to predict columnar ozone (O3) in the study area. A combined method, multiple regression coupled with principal component analysis (PCA) modelling, was used to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, and positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with increases in AST, SSKT, AT, and CO and with decreases in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using the eight independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both seasons was SSKT. A simplified sketch of this combined PCA/regression workflow is given below.
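
    In the sketch, high-loading predictors are selected from a PCA of (synthetic) atmospheric parameters and fed to a multiple linear regression; unrotated PCA loadings stand in for the varimax-rotated loadings used in the study, and all variable names and values are placeholders.

    ```python
    # Hedged sketch of PCA-assisted predictor selection followed by multiple
    # regression, loosely following the workflow described in the abstract.
    # X stands for the eight AIRS parameters and y for columnar ozone; both are
    # synthetic placeholders, and unrotated PCA loadings replace the
    # varimax-rotated loadings used in the study.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                       # placeholder atmospheric parameters
    y = X[:, [0, 2, 4]].sum(axis=1) + rng.normal(scale=0.1, size=200)

    pca = PCA(n_components=3).fit(X)
    loadings = np.abs(pca.components_)                  # |loading| of each variable on each PC
    selected = np.unique(np.argmax(loadings, axis=1))   # highest-loading variable per PC

    model = LinearRegression().fit(X[:, selected], y)
    print("selected predictors:", selected, "R^2:", round(model.score(X[:, selected], y), 3))
    ```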

  4. How Accurate Is the Prediction of Maximal Oxygen Uptake with Treadmill Testing?

    PubMed Central

    Wicks, John R.; Oldridge, Neil B.

    2016-01-01

    Background Cardiorespiratory fitness measured by treadmill testing has prognostic significance in determining mortality with cardiovascular and other chronic disease states. A recently developed method for estimating maximal oxygen uptake (VO2peak), the heart rate index (HRI), depends only on heart rate (HR); its accuracy was tested against oxygen uptake (VO2) either measured or predicted from conventional treadmill parameters (speed, incline, protocol time). Methods The HRI equation, METs = 6 x HRI - 5, where HRI = maximal HR/resting HR, provides a surrogate measure of VO2peak. Forty large-scale treadmill studies were identified through a systematic search using MEDLINE, Google Scholar and Web of Science in which VO2peak was either measured (TM-VO2meas; n = 20) or predicted (TM-VO2pred; n = 20) based on treadmill parameters. All studies were required to have reported group mean data of both resting and maximal HRs for determination of HR index-derived oxygen uptake (HRI-VO2). Results The 20 studies with measured VO2 (TM-VO2meas) involved 11,477 participants (median 337), while the 20 studies with predicted VO2 (TM-VO2pred) totalled 105,044 participants (median 3,736). A difference of only 0.4% was seen between mean (±SD) VO2peak for TM-VO2meas and HRI-VO2 (6.51±2.25 METs and 6.54±2.28, respectively; p = 0.84). In contrast, there was a highly significant 21.1% difference between mean (±SD) TM-VO2pred and HRI-VO2 (8.12±1.85 METs and 6.71±1.92, respectively; p<0.001). Conclusion Although mean TM-VO2meas and HRI-VO2 were almost identical, mean TM-VO2pred was more than 20% greater than mean HRI-VO2.
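
    The HRI relationship reduces to a one-line calculation; the sketch below (not the paper's code, and assuming the standard 1 MET = 3.5 ml O2/kg/min conversion, which the abstract does not state) shows how a surrogate VO2peak could be computed from resting and maximal heart rate.

    ```python
    # Illustrative sketch (not the paper's code): a surrogate VO2peak from resting
    # and maximal heart rate via the HRI equation quoted above, METs = 6 x HRI - 5,
    # combined with the standard conversion 1 MET ~= 3.5 ml O2 / kg / min (the
    # conversion factor is an assumption, not stated in the abstract).

    def hri_vo2peak(hr_rest, hr_max):
        """Return (METs, VO2peak in ml/kg/min) from resting and maximal heart rate."""
        hri = hr_max / hr_rest        # heart rate index
        mets = 6.0 * hri - 5.0        # HRI equation from the abstract
        return mets, mets * 3.5       # 1 MET ~= 3.5 ml O2/kg/min

    # Example: resting HR 70 bpm, maximal HR 160 bpm
    mets, vo2 = hri_vo2peak(70.0, 160.0)
    print(f"METs = {mets:.2f}, VO2peak = {vo2:.1f} ml/kg/min")
    ```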

  5. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.

  6. Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.

    PubMed

    Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej

    2017-01-01

    A fast accessible surface area (ASA) predictor is presented. In this new approach, no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.

  7. Sequence features accurately predict genome-wide MeCP2 binding in vivo

    PubMed Central

    Rube, H. Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H.; Hess, John F.; LaSalle, Janine M.; Song, Jun S.; Gong, Qizhi

    2016-01-01

    Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915

  8. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may therefore provide a simulation approach with improved clinical utility, although further validity testing across a range of therapeutic footwear types is required.

  9. Development of a method to accurately calculate the Dpb and quickly predict the strength of a chemical bond

    NASA Astrophysics Data System (ADS)

    Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi

    2013-02-01

    A new approach to characterize and measure bond strength has been developed. First, we propose a method to accurately calculate the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond in situ, denoted by Dpb. Then, a direct method to quickly evaluate bond strength is established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb values of C-H and N-H bonds in DNA base pairs are obtained for the first time. All results show that C7-H of A-T and C8-H of G-C are relatively weak bonds and thus the likely sites of injury in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate the Dpb of various sizable molecules in situ quickly and accurately; (ii) the work demonstrates the feasibility of quickly predicting bond strength in macromolecules.

  10. Fast and accurate prediction for aerodynamic forces and moments acting on satellites flying in Low-Earth Orbit

    NASA Astrophysics Data System (ADS)

    Jin, Xuhon; Huang, Fei; Hu, Pengju; Cheng, Xiaoli

    2016-11-01

    A fundamental prerequisite for satellites operating in Low Earth Orbit (LEO) is the availability of fast and accurate predictions of the non-gravitational aerodynamic forces that arise in the free molecular flow regime characteristic of LEO. However, conventional computational methods such as the analytical integral method and the direct simulation Monte Carlo (DSMC) technique either fail to deal with flow shadowing and multiple reflections or are computationally expensive. This work develops a general computer program for the accurate calculation of aerodynamic forces in the free molecular flow regime using the test particle Monte Carlo (TPMC) method, and the non-gravitational aerodynamic forces acting on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite are calculated for different freestream conditions and gas-surface interaction models.

  11. Quantitative thickness prediction of tectonically deformed coal using Extreme Learning Machine and Principal Component Analysis: a case study

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li

    2017-04-01

    The thickness of tectonically deformed coal (TDC) is positively correlated with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method that combines an extreme learning machine (ELM) algorithm, principal component analysis (PCA), and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross-validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area, using the PCA attributes as the inputs. Comparing the predicted results, the trained ELM model with two selected PCA attributes yields better prediction results than the other combinations of attributes. Finally, the ELM model trained on real seismic data has a different number of hidden nodes (10) from the model trained on synthetic seismic data. In summary, it is feasible to use an ELM model to predict TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function, and the number of hidden nodes in the ELM model should be selected and tested carefully for each individual application. A minimal sketch of an ELM regressor on PCA-reduced attributes follows.
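
    The sketch uses random, fixed input weights and a least-squares solve for the output weights, which is the defining trait of an ELM; the seismic attributes and thickness values are synthetic placeholders, not the study's data.

    ```python
    # Minimal sketch of an extreme learning machine (ELM) regressor driven by
    # PCA-reduced attributes, in the spirit of the workflow above.
    import numpy as np
    from sklearn.decomposition import PCA

    def elm_fit(X, y, n_hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (kept fixed)
        b = rng.normal(size=n_hidden)                 # random biases (kept fixed)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer outputs
        beta = np.linalg.pinv(H) @ y                  # output weights by least squares
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    rng = np.random.default_rng(1)
    attrs = rng.normal(size=(250, 10))                # placeholder near-well seismic attributes
    thickness = rng.uniform(0.0, 5.0, size=250)       # placeholder TDC thickness
    pcs = PCA(n_components=2).fit_transform(attrs)    # two PCA attributes, as in the abstract
    W, b, beta = elm_fit(pcs, thickness, n_hidden=20)
    predicted = elm_predict(pcs, W, b, beta)
    ```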

  12. Simplified risk score models accurately predict the risk of major in-hospital complications following percutaneous coronary intervention.

    PubMed

    Resnic, F S; Ohno-Machado, L; Selwyn, A; Simon, D I; Popma, J J

    2001-07-01

    The objectives of this analysis were to develop and validate simplified risk score models for predicting the risk of major in-hospital complications after percutaneous coronary intervention (PCI) in the era of widespread stenting and use of glycoprotein IIb/IIIa antagonists. We then sought to compare the performance of these simplified models with those of full logistic regression and neural network models. From January 1, 1997 to December 31, 1999, data were collected on 4,264 consecutive interventional procedures at a single center. Risk score models were derived from multiple logistic regression models using the first 2,804 cases and then validated on the final 1,460 cases. The area under the receiver operating characteristic (ROC) curve for the risk score model that predicted death was 0.86 compared with 0.85 for the multiple logistic model and 0.83 for the neural network model (validation set). For the combined end points of death, myocardial infarction, or bypass surgery, the corresponding areas under the ROC curves were 0.74, 0.78, and 0.81, respectively. Previously identified risk factors were confirmed in this analysis. The use of stents was associated with a decreased risk of in-hospital complications. Thus, risk score models can accurately predict the risk of major in-hospital complications after PCI. Their discriminatory power is comparable to those of logistic models and neural network models. Accurate bedside risk stratification may be achieved with these simple models.
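
    For background, a simplified risk score of the kind compared above is commonly built by converting logistic regression coefficients into integer points and then checking discrimination with the ROC curve; the sketch below uses synthetic data and hypothetical risk factors, not the study's registry.

    ```python
    # Illustrative sketch of deriving an integer risk score from a logistic model
    # and comparing discrimination via ROC AUC. Data and risk factors are
    # synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(2804, 5)).astype(float)       # five binary risk factors
    logit = X @ np.array([1.2, 0.8, 0.5, 0.3, 1.5]) - 3.0
    y = (rng.random(2804) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # simulated complications

    model = LogisticRegression().fit(X, y)
    points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
    score = X @ points                                          # simplified integer risk score

    print("AUC, full logistic model:", round(roc_auc_score(y, model.decision_function(X)), 3))
    print("AUC, integer risk score :", round(roc_auc_score(y, score), 3))
    ```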

  13. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    NASA Astrophysics Data System (ADS)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. These data are used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.

  14. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.

  15. Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes

    PubMed Central

    Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T.; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G.; Qin, Jun; Chen, Rui

    2016-01-01

    Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414

  16. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
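
    For orientation, the Lorentz-Lorenz relation underlying this model can be inverted for the refractive index once a polarizability and a number density are available; the short sketch below uses the SI form of the equation with purely illustrative values.

    ```python
    # Hedged sketch: refractive index from the Lorentz-Lorenz equation,
    #   (n^2 - 1) / (n^2 + 2) = N * alpha / (3 * eps0)   (SI form),
    # given a number density N and a (mean) polarizability alpha. The numbers
    # below are illustrative placeholders, not values from the study.
    import math

    EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

    def lorentz_lorenz_n(number_density, polarizability):
        """number_density in m^-3, polarizability in C m^2 / V; returns n."""
        ll = number_density * polarizability / (3.0 * EPS0)
        return math.sqrt((1.0 + 2.0 * ll) / (1.0 - ll))

    # Hypothetical repeat unit: alpha = 1.5e-39 C m^2/V at 4e27 units per m^3
    print(round(lorentz_lorenz_n(4e27, 1.5e-39), 3))
    ```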

  17. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    PubMed

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  18. Searching for Computational Strategies to Accurately Predict pKas of Large Phenolic Derivatives.

    PubMed

    Rebollar-Zepeda, Aida Mariana; Campos-Hernández, Tania; Ramírez-Silva, María Teresa; Rojas-Hernández, Alberto; Galano, Annia

    2011-08-09

    Twenty-two reaction schemes have been tested within the cluster-continuum model, including up to seven explicit water molecules. They have been used in conjunction with nine different methods within density functional theory and with second-order Møller-Plesset perturbation theory. The quality of the pKa predictions was found to be strongly dependent on the chosen scheme, while only moderately influenced by the method of calculation. We recommend the E1 reaction scheme [HA + OH(-) (3H2O) ↔ A(-) (H2O) + 3H2O], since it yields mean unsigned errors (MUE) lower than 1 pKa unit for most of the tested functionals. The best pKa values obtained from this reaction scheme are those involving calculations with the PBE0 (MUE = 0.77), TPSS (MUE = 0.82), BHandHLYP (MUE = 0.82), and B3LYP (MUE = 0.86) functionals. Compared with the proton exchange method, which also gives very small MUEs, this scheme has the additional advantage of being independent of experimental data. It should be kept in mind, however, that these recommendations are valid within the cluster-continuum model, using the polarizable continuum model in conjunction with the united atom Hartree-Fock cavity and the strategy based on thermodynamic cycles. Changes in any of these aspects of the methodology may lead to different outcomes.

  19. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    NASA Astrophysics Data System (ADS)

    Lachut, M.; Bennett, J.

    2016-09-01

    The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity for non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach accounts for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations are performed that systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.

  20. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  1. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries. PMID:26520735

  2. Prediction of Prostate Cancer Recurrence Using Quantitative Phase Imaging

    NASA Astrophysics Data System (ADS)

    Sridharan, Shamira; Macias, Virgilia; Tangella, Krishnarao; Kajdacsy-Balla, André; Popescu, Gabriel

    2015-05-01

    The risk of biochemical recurrence of prostate cancer among individuals who undergo radical prostatectomy for treatment is around 25%. Current clinical methods often fail at successfully predicting recurrence among patients at intermediate risk for recurrence. We used a label-free method, spatial light interference microscopy, to perform localized measurements of light scattering in prostatectomy tissue microarrays. We show, for the first time to our knowledge, that anisotropy of light scattering in the stroma immediately adjoining cancerous glands can be used to identify patients at higher risk for recurrence. The data show that a lower value of anisotropy corresponds to a higher risk for recurrence, meaning that the stroma adjoining the glands of recurrent patients is more fractionated than in non-recurrent patients. Our method outperformed the widely accepted clinical tool CAPRA-S in the cases we interrogated, irrespective of Gleason grade, prostate-specific antigen (PSA) levels and pathological tumor-node-metastasis (pTNM) stage. These results suggest that QPI shows promise in assisting pathologists to improve prediction of prostate cancer recurrence.

  3. Quantitative assessment of protein function prediction from metagenomics shotgun sequences.

    PubMed

    Harrington, E D; Singh, A H; Doerks, T; Letunic, I; von Mering, C; Jensen, L J; Raes, J; Bork, P

    2007-08-28

    To assess the potential of protein function prediction in environmental genomics data, we analyzed shotgun sequences from four diverse and complex habitats. Using homology searches as well as customized gene neighborhood methods that incorporate intergenic and evolutionary distances, we inferred specific functions for 76% of the 1.4 million predicted ORFs in these samples (83% when nonspecific functions are considered). Surprisingly, these fractions are only slightly smaller than the corresponding ones in completely sequenced genomes (83% and 86%, respectively, by using the same methodology) and considerably higher than previously thought. For as many as 75,448 ORFs (5% of the total), only neighborhood methods can assign functions, illustrated here by a previously undescribed gene associated with the well characterized heme biosynthesis operon and a potential transcription factor that might regulate a coupling between fatty acid biosynthesis and degradation. Our results further suggest that, although functions can be inferred for most proteins on earth, many functions remain to be discovered in numerous small, rare protein families.

  4. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC, which allows the incorporation of complex local inelastic constitutive models, MAC/GMC finds its most important application in metal matrix composites, such as the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  5. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    SciTech Connect

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  6. Quantitative morphometry of glomerulonephritis with crescents. Diagnostic and predictive value.

    PubMed

    Elfenbein, I B; Baluarte, H J; Cubillos-Rojas, M; Gruskin, A B; Coté, M; Cornfeld, D

    1975-01-01

    Histologic patterns in the glomerular tufts in "Glomerulonephritis with many crescents" take three main forms: (1) compression and sclerosis of glomeruli, (2) necrotizing glomerulitis, and (3) proliferation with or without exudation. In the third group, histologic differentiation between patients with poststreptococcal glomerulonephritis with many crescents (AGN) and those with nonstreptococcal rapidly progressive glomerulonephritis (RPGN) may be impossible. In a retrospective study, quantitative morphometry of glomeruli effectively separated three patients with AGN from two patients with RPGN after the usual histologic and electron microscopic observations had failed. Parameters studied were areas of tufts and crescents and total number of cells and granulocytes in tufts and crescents. Surface areas of tufts and crescents were separately determined by photographing glomeruli, projecting and tracing outlines of tufts and crescents, and cutting out and weighing the tracings. The cell density of glomerular tufts (cells per 1000 μm² of area) was significantly greater in AGN than in RPGN when either total cell densities (17.64 ± 0.41 versus 13.63 ± 0.30) or total cells minus granulocytes (16.39 ± 0.50 versus 12.99 ± 0.52) were compared. The cell density in the tufts was 120 and 70 per cent greater than controls in AGN and RPGN, respectively. Exudation of inflammatory cells is contributory but not the major cause of hypercellularity in AGN. Follow-up studies with biopsies showed marked resolution in two of three patients with AGN, with normal blood urea nitrogen levels and focal scarring in the third, whereas the two patients with RPGN had either extensive scarring and reduced renal function or required chronic hemodialysis.

  7. Neurodegenerative diseases: quantitative predictions of protein-RNA interactions.

    PubMed

    Cirillo, Davide; Agostini, Federico; Klus, Petr; Marchese, Domenica; Rodriguez, Silvia; Bolognesi, Benedetta; Tartaglia, Gian Gaetano

    2013-02-01

    Increasing evidence indicates that RNA plays an active role in a number of neurodegenerative diseases. We recently introduced a theoretical framework, catRAPID, to predict the binding ability of protein and RNA molecules. Here, we use catRAPID to investigate ribonucleoprotein interactions linked to inherited intellectual disability, amyotrophic lateral sclerosis, Creutzfeldt-Jakob, Alzheimer's, and Parkinson's diseases. We specifically focus on (1) RNA interactions with fragile X mental retardation protein FMRP; (2) protein sequestration caused by CGG repeats; (3) noncoding transcripts regulated by TAR DNA-binding protein 43 TDP-43; (4) autogenous regulation of TDP-43 and FMRP; (5) iron-mediated expression of amyloid precursor protein APP and α-synuclein; (6) interactions between prions and RNA aptamers. Our results are in striking agreement with experimental evidence and provide new insights into processes associated with neuronal function and misfunction.

  8. Does a more precise chemical description of protein-ligand complexes lead to more accurate prediction of binding affinity?

    PubMed

    Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L

    2014-03-24

    Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
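
    To make the machine-learning scoring-function idea concrete, the sketch below trains a random-forest regressor on hypothetical intermolecular descriptors (element-pair contact counts in the spirit of RF-Score-style features); the data, descriptor choice and model settings are illustrative assumptions, not those of the paper.

    ```python
    # Illustrative sketch of a machine-learning scoring function: a random-forest
    # regressor mapping simple intermolecular descriptors (e.g. element-pair
    # contact counts) to binding affinity. All data below are synthetic
    # placeholders, not the benchmark set used in the study.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_complexes, n_descriptors = 500, 36              # e.g. 36 element-pair contact counts
    X = rng.poisson(lam=5.0, size=(n_complexes, n_descriptors)).astype(float)
    pK = 0.05 * X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=n_complexes)

    X_tr, X_te, y_tr, y_te = train_test_split(X, pK, random_state=0)
    scoring_function = RandomForestRegressor(n_estimators=500, random_state=0)
    scoring_function.fit(X_tr, y_tr)
    print("held-out R^2:", round(scoring_function.score(X_te, y_te), 3))
    ```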

  9. Easy-to-use, general, and accurate multi-Kinect calibration and its application to gait monitoring for fall prediction.

    PubMed

    Staranowicz, Aaron N; Ray, Christopher; Mariottini, Gian-Luca

    2015-01-01

    Falls are the most common causes of unintentional injury and death in older adults. Many clinics, hospitals, and health-care providers are urgently seeking accurate, low-cost, and easy-to-use technology to predict falls before they happen, e.g., by monitoring the human walking pattern (or "gait"). Despite the wide popularity of Microsoft's Kinect and the plethora of solutions for gait monitoring, no strategy has been proposed to date to allow non-expert users to calibrate the cameras, which is essential to accurately fuse the body motion observed by each camera into a single frame of reference. In this paper, we present a novel multi-Kinect calibration algorithm that has advanced features when compared to existing methods: 1) it is easy to use, 2) it can be used in any generic Kinect arrangement, and 3) it provides accurate calibration. Extensive real-world experiments have been conducted to validate our algorithm and to compare its performance against other multi-Kinect calibration approaches, especially to show the improved estimate of gait parameters. Finally, a MATLAB Toolbox has been made publicly available for the entire research community.

  10. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race

    PubMed Central

    Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping

    2012-01-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788

  11. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
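
    A minimal fitting sketch for such a Weibull-type saccharification curve is shown below; the time-course data and the fitted λ are synthetic placeholders, not the study's 96 data sets.

    ```python
    # Hedged sketch: fitting the Weibull-type saccharification curve
    #   Y(t) = Y_max * (1 - exp(-(t / lam)**n))
    # to a glucose-yield time course and reading off the characteristic time lam.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_yield(t, y_max, lam, n):
        return y_max * (1.0 - np.exp(-(t / lam) ** n))

    t = np.array([2.0, 4.0, 8.0, 12.0, 24.0, 48.0, 72.0])       # hours
    y = np.array([5.0, 12.0, 25.0, 35.0, 55.0, 70.0, 74.0])     # % conversion (illustrative)

    popt, _ = curve_fit(weibull_yield, t, y, p0=[80.0, 20.0, 1.0])
    y_max, lam, n = popt
    print(f"Y_max = {y_max:.1f} %, characteristic time lambda = {lam:.1f} h, n = {n:.2f}")
    ```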

  12. Shrinking the Psoriasis Assessment Gap: Early Gene-Expression Profiling Accurately Predicts Response to Long-Term Treatment.

    PubMed

    Correa da Rosa, Joel; Kim, Jaehwan; Tian, Suyan; Tomalin, Lewis E; Krueger, James G; Suárez-Fariñas, Mayte

    2017-02-01

    There is an "assessment gap" between the moment a patient's response to treatment is biologically determined and when a response can actually be determined clinically. Patients' biochemical profiles are a major determinant of clinical outcome for a given treatment. It is therefore feasible that molecular-level patient information could be used to decrease the assessment gap. Thanks to clinically accessible biopsy samples, high-quality molecular data for psoriasis patients are widely available. Psoriasis is therefore an excellent disease for testing the prospect of predicting treatment outcome from molecular data. Our study shows that gene-expression profiles of psoriasis skin lesions, taken in the first 4 weeks of treatment, can be used to accurately predict (>80% area under the receiver operating characteristic curve) the clinical endpoint at 12 weeks. This could decrease the psoriasis assessment gap by 2 months. We present two distinct prediction modes: a universal predictor, aimed at forecasting the efficacy of untested drugs, and specific predictors aimed at forecasting clinical response to treatment with four specific drugs: etanercept, ustekinumab, adalimumab, and methotrexate. We also develop two forms of prediction: one from detailed, platform-specific data and one from platform-independent, pathway-based data. We show that key biomarkers are associated with responses to drugs and doses and thus provide insight into the biology of pathogenesis reversion.

  13. Quantitative polymerase chain reaction analysis of DNA from noninvasive samples for accurate microsatellite genotyping of wild chimpanzees (Pan troglodytes verus).

    PubMed

    Morin, P A; Chambers, K E; Boesch, C; Vigilant, L

    2001-07-01

    Noninvasive samples are useful for molecular genetic analyses of wild animal populations. However, the low DNA content of such samples makes DNA amplification difficult, and there is the potential for erroneous results when one of two alleles at heterozygous microsatellite loci fails to be amplified. In this study we describe an assay designed to measure the amount of amplifiable nuclear DNA in low DNA concentration extracts from noninvasive samples. We describe the range of DNA amounts obtained from chimpanzee faeces and shed hair samples and formulate a new efficient approach for accurate microsatellite genotyping. Prescreening of extracts for DNA quantity is recommended for sorting of samples for likely success and reliability. Repetition of results remains extensive for analysis of microsatellite amplifications beginning from low starting amounts of DNA, but is reduced for those with higher DNA content.

  14. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    NASA Astrophysics Data System (ADS)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  15. Absolute Measurements of Macrophage Migration Inhibitory Factor and Interleukin-1-β mRNA Levels Accurately Predict Treatment Response in Depressed Patients

    PubMed Central

    Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M.

    2016-01-01

    Background: Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have had been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Methods: Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA values cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network by the Search Tool for the Retrieval of Interacting Genes/Proteins. Results: We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. Conclusion: We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more
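
    As an illustration of the cutoff-finding step described above, the sketch below applies linear discriminant analysis to a single synthetic absolute-mRNA measurement; the copy numbers, group sizes and the resulting threshold are placeholders, not the cutoffs reported in the study.

    ```python
    # Illustrative sketch: a responder / non-responder cutoff for one absolute
    # mRNA measurement via linear discriminant analysis (synthetic data only).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    mif = np.concatenate([rng.normal(40.0, 8.0, 60),      # responders: lower copy numbers
                          rng.normal(70.0, 10.0, 40)])    # non-responders: higher copy numbers
    responder = np.array([1] * 60 + [0] * 40)

    lda = LinearDiscriminantAnalysis().fit(mif.reshape(-1, 1), responder)
    # With a single feature, the decision boundary is the value where P(responder) = 0.5.
    grid = np.linspace(mif.min(), mif.max(), 10000).reshape(-1, 1)
    cutoff = grid[np.argmin(np.abs(lda.predict_proba(grid)[:, 1] - 0.5)), 0]
    print(f"estimated cutoff: {cutoff:.1f} copies (synthetic example)")
    ```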

  16. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait

    PubMed Central

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y.; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A.; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies and findings were found on simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  17. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging

    NASA Astrophysics Data System (ADS)

    Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to hexadecupole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.
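
    Kriging is, in essence, Gaussian-process regression, so the idea above can be sketched with scikit-learn's GaussianProcessRegressor. The feature set, kernel choice, and data below are invented stand-ins for the nuclear-coordinate inputs and multipole-moment outputs described in the abstract, not the authors' models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: 6 geometric features per configuration -> one atomic moment (a.u.)
rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(60, 6))
y_train = np.sin(X_train).sum(axis=1) + 0.01 * rng.normal(size=60)  # stand-in "moment"

# Anisotropic RBF kernel: one length scale per input feature (a common kriging choice).
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(6))
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_test = rng.uniform(-1.0, 1.0, size=(10, 6))
moment_pred, moment_std = gpr.predict(X_test, return_std=True)  # kriging mean and uncertainty
print(moment_pred[:3], moment_std[:3])
```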

  18. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to hexadecupole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1).

  19. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    PubMed Central

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592

  20. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218

  1. A novel fibrosis index comprising a non-cholesterol sterol accurately predicts HCV-related liver cirrhosis.

    PubMed

    Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan

    2014-01-01

    Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferential. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis and absorption and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (Platelet count (×10⁹/L) × (-0.018)) + (Prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
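
    The published NoLI coefficients quoted above translate directly into a small scoring function; the sketch below applies them to a hypothetical patient and converts the log-odds to a probability with the logistic function. The patient values are invented purely for illustration.

```python
import math

def nordic_liver_index(age, bmi, d7_lathosterol, platelets, inr):
    """Log-odds of cirrhosis from the NoLI model, using the coefficients quoted above.

    age in years, bmi in kg/m^2, D7-lathosterol in ug/100 mg cholesterol,
    platelets in 10^9/L, inr = prothrombin-INR.
    """
    return (-12.17 + 0.11 * age + 0.23 * bmi
            - 0.013 * d7_lathosterol - 0.018 * platelets + 3.69 * inr)

# Hypothetical patient: convert log-odds to a predicted probability of cirrhosis.
lo = nordic_liver_index(age=55, bmi=27.0, d7_lathosterol=80.0, platelets=140.0, inr=1.2)
p_cirrhosis = 1.0 / (1.0 + math.exp(-lo))
print(f"log-odds = {lo:.2f}, predicted probability of cirrhosis = {p_cirrhosis:.2f}")
```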

  2. Cholera Modeling: Challenges to Quantitative Analysis and Predicting the Impact of Interventions

    PubMed Central

    Grad, Yonatan H.; Miller, Joel C.; Lipsitch, Marc

    2012-01-01

    Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy-makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction. PMID:22659546

  3. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  4. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David

    2017-03-01

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  5. Qualitative and Quantitative Protein Complex Prediction Through Proteome-Wide Simulations.

    PubMed

    Rizzetto, Simone; Priami, Corrado; Csikász-Nagy, Attila

    2015-10-01

    Despite recent progress in proteomics most protein complexes are still unknown. Identification of these complexes will help us understand cellular regulatory mechanisms and support development of new drugs. Therefore it is really important to establish detailed information about the composition and the abundance of protein complexes but existing algorithms can only give qualitative predictions. Herein, we propose a new approach based on stochastic simulations of protein complex formation that integrates multi-source data--such as protein abundances, domain-domain interactions and functional annotations--to predict alternative forms of protein complexes together with their abundances. This method, called SiComPre (Simulation based Complex Prediction), achieves better qualitative prediction of yeast and human protein complexes than existing methods and is the first to predict protein complex abundances. Furthermore, we show that SiComPre can be used to predict complexome changes upon drug treatment with the example of bortezomib. SiComPre is the first method to produce quantitative predictions on the abundance of molecular complexes while performing the best qualitative predictions. With new data on tissue specific protein complexes becoming available SiComPre will be able to predict qualitative and quantitative differences in the complexome in various tissue types and under various conditions.

  6. Optimization of human dose prediction by using quantitative and translational pharmacology in drug discovery.

    PubMed

    Bueters, Tjerk; Gibson, Christopher; Visser, Sandra A G

    2015-01-01

    In this perspective article, we explain how quantitative and translational pharmacology, when well-implemented, is believed to lead to improved clinical candidates and drug targets that are differentiated from current treatment options. Quantitative and translational pharmacology aims to build and continuously improve the quantitative relationship between drug exposure, target engagement, efficacy, safety and its interspecies relationship at every phase of drug discovery. Drug hunters should consider and apply these concepts to develop compounds with a higher probability of interrogating the clinical biological hypothesis. We offer different approaches to set an initial effective concentration or pharmacokinetic-pharmacodynamic target in man and to predict human pharmacokinetics that determine together the predicted human dose and dose schedule. All concepts are illustrated with ample literature examples.

  7. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  8. A High Resolution/Accurate Mass (HRAM) Data-Dependent MS3 Neutral Loss Screening, Classification, and Relative Quantitation Methodology for Carbonyl Compounds in Saliva

    NASA Astrophysics Data System (ADS)

    Dator, Romel; Carrà, Andrea; Maertens, Laura; Guidolin, Valeria; Villalta, Peter W.; Balbo, Silvia

    2016-10-01

    Reactive carbonyl compounds (RCCs) are ubiquitous in the environment and are generated endogenously as a result of various physiological and pathological processes. These compounds can react with biological molecules inducing deleterious processes believed to be at the basis of their toxic effects. Several of these compounds are implicated in neurotoxic processes, aging disorders, and cancer. Therefore, a method characterizing exposures to these chemicals will provide insights into how they may influence overall health and contribute to disease pathogenesis. Here, we have developed a high resolution accurate mass (HRAM) screening strategy allowing simultaneous identification and relative quantitation of DNPH-derivatized carbonyls in human biological fluids. The screening strategy involves the diagnostic neutral loss of hydroxyl radical triggering MS3 fragmentation, which is only observed in positive ionization mode of DNPH-derivatized carbonyls. Unique fragmentation pathways were used to develop a classification scheme for characterizing known and unanticipated/unknown carbonyl compounds present in saliva. Furthermore, a relative quantitation strategy was implemented to assess variations in the levels of carbonyl compounds before and after exposure using deuterated d3-DNPH. This relative quantitation method was tested on human samples before and after exposure to specific amounts of alcohol. The nano-electrospray ionization (nano-ESI) in positive mode afforded excellent sensitivity with detection limits on-column at high-attomole levels. To the best of our knowledge, this is the first report of a method using HRAM neutral loss screening of carbonyl compounds. In addition, the method allows simultaneous characterization and relative quantitation of DNPH-derivatized compounds using nano-ESI in positive mode.

  9. Accurate prediction of the electronic properties of low-dimensional graphene derivatives using a screened hybrid density functional.

    PubMed

    Barone, Veronica; Hod, Oded; Peralta, Juan E; Scuseria, Gustavo E

    2011-04-19

    Over the last several years, low-dimensional graphene derivatives, such as carbon nanotubes and graphene nanoribbons, have played a central role in the pursuit of a plausible carbon-based nanotechnology. Their electronic properties can be either metallic or semiconducting depending purely on morphology, but predicting their electronic behavior has proven challenging. The combination of experimental efforts with modeling of these nanometer-scale structures has been instrumental in gaining insight into their physical and chemical properties and the processes involved at these scales. Particularly, approximations based on density functional theory have emerged as a successful computational tool for predicting the electronic structure of these materials. In this Account, we review our efforts in modeling graphitic nanostructures from first principles with hybrid density functionals, namely the Heyd-Scuseria-Ernzerhof (HSE) screened exchange hybrid and the hybrid meta-generalized functional of Tao, Perdew, Staroverov, and Scuseria (TPSSh). These functionals provide a powerful tool for quantitatively studying structure-property relations and the effects of external perturbations such as chemical substitutions, electric and magnetic fields, and mechanical deformations on the electronic and magnetic properties of these low-dimensional carbon materials. We show how HSE and TPSSh successfully predict the electronic properties of these materials, providing a good description of their band structure and density of states, their work function, and their magnetic ordering in the cases in which magnetism arises. Moreover, these approximations are capable of successfully predicting optical transitions (first and higher order) in both metallic and semiconducting single-walled carbon nanotubes of various chiralities and diameters with impressive accuracy. This versatility includes the correct prediction of the trigonal warping splitting in metallic nanotubes. The results predicted

  10. Accurate ab initio predictions of ionization energies and heats of formation for the 2-propyl, phenyl, and benzyl radicals

    NASA Astrophysics Data System (ADS)

    Lau, K.-C.; Ng, C. Y.

    2006-01-01

    The ionization energies (IEs) for the 2-propyl (2-C3H7), phenyl (C6H5), and benzyl (C6H5CH2) radicals have been calculated by the wave-function-based ab initio CCSD(T)/CBS approach, which involves the approximation to the complete basis set (CBS) limit at the coupled cluster level with single and double excitations plus quasiperturbative triple excitation [CCSD(T)]. The zero-point vibrational energy correction, the core-valence electronic correction, and the scalar relativistic effect correction have also been made in these calculations. Although a precise IE value for the 2-C3H7 radical has not been directly determined before due to the poor Franck-Condon factor for the photoionization transition at the ionization threshold, the experimental value deduced indirectly using other known energetic data is found to be in good accord with the present CCSD(T)/CBS prediction. The comparison between the predicted value through the focal-point analysis and the highly precise experimental value for the IE(C6H5CH2) determined in the previous pulsed field ionization photoelectron (PFI-PE) study shows that the CCSD(T)/CBS method is capable of providing an accurate IE prediction for C6H5CH2, achieving an error limit of 35 meV. The benchmarking of the CCSD(T)/CBS IE(C6H5CH2) prediction suggests that the CCSD(T)/CBS IE(C6H5) prediction obtained here has a similar accuracy of 35 meV. Taking into account this error limit for the CCSD(T)/CBS prediction and the experimental uncertainty, the CCSD(T)/CBS IE(C6H5) value is also consistent with the IE(C6H5) reported in the previous HeI photoelectron measurement. Furthermore, the present study provides support for the conclusion that the CCSD(T)/CBS approach with high-level energy corrections can be used to provide reliable IE predictions for C3-C7 hydrocarbon radicals with an uncertainty of ±35 meV. Employing the atomization scheme, we have also computed the 0 K (298 K) heats of formation in kJ/mol at the CCSD(T)/CBS level for 2-C3H7

  11. High IFIT1 expression predicts improved clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

    PubMed

    Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong

    2016-06-01

    Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

  12. The Current and Future Use of Ridge Regression for Prediction in Quantitative Genetics

    PubMed Central

    de Vlaming, Ronald; Groenen, Patrick J. F.

    2015-01-01

    In recent years, there has been a considerable amount of research on the use of regularization methods for inference and prediction in quantitative genetics. Such research mostly focuses on selection of markers and shrinkage of their effects. In this review paper, the use of ridge regression for prediction in quantitative genetics using single-nucleotide polymorphism data is discussed. In particular, we consider (i) the theoretical foundations of ridge regression, (ii) its link to commonly used methods in animal breeding, (iii) the computational feasibility, and (iv) the scope for constructing prediction models with nonlinear effects (e.g., dominance and epistasis). Based on a simulation study we gauge the current and future potential of ridge regression for prediction of human traits using genome-wide SNP data. We conclude that, for outcomes with a relatively simple genetic architecture, given current sample sizes in most cohorts (i.e., N < 10,000) the predictive accuracy of ridge regression is slightly higher than the classical genome-wide association study approach of repeated simple regression (i.e., one regression per SNP). However, both capture only a small proportion of the heritability. Nevertheless, we find evidence that for large-scale initiatives, such as biobanks, sample sizes can be achieved where ridge regression compared to the classical approach improves predictive accuracy substantially. PMID:26273586
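
    A minimal sketch of genomic prediction with ridge regression, in the spirit of the review above, is shown below using scikit-learn; the simulated SNP matrix, effect sizes, and penalty value are illustrative assumptions, not results from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical SNP data: 0/1/2 minor-allele counts for n individuals and p markers (p >> n).
rng = np.random.default_rng(1)
n, p = 2000, 5000
X = rng.integers(0, 3, size=(n, p)).astype(float)
true_effects = np.zeros(p)
true_effects[:50] = rng.normal(scale=0.2, size=50)      # a simple genetic architecture
y = X @ true_effects + rng.normal(scale=1.0, size=n)    # phenotype = genetic value + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Ridge shrinks all marker effects toward zero instead of selecting markers,
# which is what makes it usable when there are far more SNPs than individuals.
model = Ridge(alpha=1000.0).fit(X_tr, y_tr)
print("predictive R^2 on held-out individuals:", round(model.score(X_te, y_te), 3))
```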

  13. The Current and Future Use of Ridge Regression for Prediction in Quantitative Genetics.

    PubMed

    de Vlaming, Ronald; Groenen, Patrick J F

    2015-01-01

    In recent years, there has been a considerable amount of research on the use of regularization methods for inference and prediction in quantitative genetics. Such research mostly focuses on selection of markers and shrinkage of their effects. In this review paper, the use of ridge regression for prediction in quantitative genetics using single-nucleotide polymorphism data is discussed. In particular, we consider (i) the theoretical foundations of ridge regression, (ii) its link to commonly used methods in animal breeding, (iii) the computational feasibility, and (iv) the scope for constructing prediction models with nonlinear effects (e.g., dominance and epistasis). Based on a simulation study we gauge the current and future potential of ridge regression for prediction of human traits using genome-wide SNP data. We conclude that, for outcomes with a relatively simple genetic architecture, given current sample sizes in most cohorts (i.e., N < 10,000) the predictive accuracy of ridge regression is slightly higher than the classical genome-wide association study approach of repeated simple regression (i.e., one regression per SNP). However, both capture only a small proportion of the heritability. Nevertheless, we find evidence that for large-scale initiatives, such as biobanks, sample sizes can be achieved where ridge regression compared to the classical approach improves predictive accuracy substantially.

  14. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities

    PubMed Central

    Chu, Felicia W.; vanMarle, Kristy; Geary, David C.

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted

  15. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities.

    PubMed

    Chu, Felicia W; vanMarle, Kristy; Geary, David C

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted

  16. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)

  17. Quantitative prediction of type II solar radio emission from the Sun to 1 AU

    NASA Astrophysics Data System (ADS)

    Schmidt, J. M.; Cairns, Iver H.

    2016-01-01

    Coronal mass ejections (CMEs) are frequently associated with shocks and type II solar radio bursts. Despite involving fundamental plasma physics and being the archetype for collective radio emission from shocks, type II bursts have resisted detailed explanation for over 60 years. Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst from ≈4 MHz to 30 kHz (harmonic), including an intensification when the CME-driven shock reached STEREO A. We demonstrate the first accurate and quantitative simulation of a type II burst from the high corona (near 11 solar radii) to 1 AU for this event with a combination of a data-driven three-dimensional magnetohydrodynamic simulation for the CME and plasma background and an analytic quantitative kinetic model for the radio emission.

  18. Evaluation of the Quantitative Prediction of a Trend Reversal on the Japanese Stock Market in 1999

    NASA Astrophysics Data System (ADS)

    Johansen, Anders; Sornette, Didier

    In January 1999, the authors published a quantitative prediction that the Nikkei index should recover from its 14-year low in January 1999 and reach ~20 500 a year later. The purpose of the present paper is to evaluate the performance of this specific prediction as well as the underlying model: the forecast, performed at a time when the Nikkei was at its lowest (as we can now judge in hindsight), has correctly captured the change of trend as well as the quantitative evolution of the Nikkei index since its inception. As the change of trend from sluggish to recovery was estimated quite unlikely by many observers at that time, a Bayesian analysis shows that a skeptical (resp. neutral) Bayesian sees prior belief in our model amplified into a posterior belief 19 times larger (resp. reach the 95% level).

  19. Predicting Next Year's Resources--Short-Term Enrollment Forecasting for Accurate Budget Planning. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    Salley, Charles D.

    Accurate enrollment forecasts are a prerequisite for reliable budget projections. This is because tuition payments make up a significant portion of a university's revenue, and anticipated revenue is the immediate constraint on current operating expenditures. Accurate forecasts are even more critical to revenue projections when a university's…

  20. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
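
    For context, the relative Mt/N measure discussed above is typically computed from the threshold cycles of one mitochondrial and one single-copy nuclear amplicon. The sketch below shows the generic ΔCt form of that calculation with hypothetical Ct values; it is the textbook relation, not necessarily the authors' exact protocol.

```python
def mt_copy_number_per_cell(ct_mito, ct_nuclear, mito_efficiency=2.0, nuc_efficiency=2.0):
    """Relative mitochondrial DNA content (Mt/N) from qPCR threshold cycles.

    Assumes a mitochondrial amplicon from a region with no nuclear pseudogene and a
    single-copy nuclear amplicon, as the paper recommends. With perfect (2-fold)
    amplification efficiency this reduces to Mt/N = 2**(Ct_nuclear - Ct_mito);
    the result is often doubled to express copies per diploid nuclear genome.
    """
    ratio = (nuc_efficiency ** ct_nuclear) / (mito_efficiency ** ct_mito)
    return 2 * ratio

# Hypothetical Ct values measured on the same template dilution:
print(mt_copy_number_per_cell(ct_mito=18.2, ct_nuclear=27.9))
```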

  1. Accurate, quantitative assays for the hydrolysis of soluble type I, II, and III ³H-acetylated collagens by bacterial and tissue collagenases

    SciTech Connect

    Mallya, S.K.; Mookhtiar, K.A.; Van Wart, H.E.

    1986-11-01

    Accurate and quantitative assays for the hydrolysis of soluble ³H-acetylated rat tendon type I, bovine cartilage type II, and human amnion type III collagens by both bacterial and tissue collagenases have been developed. The assays are carried out at any temperature in the 1-30 °C range in a single reaction tube and the progress of the reaction is monitored by withdrawing aliquots as a function of time, quenching with 1,10-phenanthroline, and quantitation of the concentration of hydrolysis fragments. The latter is achieved by selective denaturation of these fragments by incubation under conditions described in the previous paper of this issue. The assays give percentages of hydrolysis of all three collagen types by neutrophil collagenase that agree well with the results of gel electrophoresis experiments. The initial rates of hydrolysis of all three collagens are proportional to the concentration of both neutrophil or Clostridial collagenases over a 10-fold range of enzyme concentrations. All three assays can be carried out at collagen concentrations that range from 0.06 to 2 mg/ml and give linear double reciprocal plots for both tissue and bacterial collagenases that can be used to evaluate the kinetic parameters K_m and k_cat or V_max. The assay developed for the hydrolysis of rat type I collagen by neutrophil collagenase is shown to be more sensitive by at least one order of magnitude than comparable assays that use rat type I collagen fibrils or gels as substrate.

  2. Outcome prediction within twelve hours after severe traumatic brain injury by quantitative cerebral blood flow.

    PubMed

    Kaloostian, Paul; Robertson, Claudia; Gopinath, Shankar P; Stippler, Martina; King, C Christopher; Qualls, Clifford; Yonas, Howard; Nemoto, Edwin M

    2012-03-20

    We measured quantitative cortical mantle cerebral blood flow (CBF) by stable xenon computed tomography (CT) within the first 12 h after severe traumatic brain injury (TBI) to determine whether neurologic outcome can be predicted by CBF stratification early after injury. Stable xenon CT was used for quantitative measurement of CBF (mL/100 g/min) in 22 cortical mantle regions stratified as follows: low (0-8), intermediate (9-30), normal (31-70), and hyperemic (>70) in 120 patients suffering severe (Glasgow Coma Scale [GCS] score ≤8) TBI. For each of these CBF strata, percentages of total cortical mantle volume were calculated. Outcomes were assessed by Glasgow Outcome Scale (GOS) score at discharge (DC), and 1, 3, and 6 months after discharge. Quantitative cortical mantle CBF differentiated GOS 1 and GOS 2 (dead or vegetative state) from GOS 3-5 (severely disabled to good recovery; p<0.001). Receiver operating characteristic (ROC) curve analysis for percent total normal plus hyperemic flow volume (TNHV) predicting GOS 3-5 outcome at 6 months for CBF measured <6 and <12 h after injury showed ROC area under the curve (AUC) cut-scores of 0.92 and 0.77, respectively. In multivariate analysis, percent TNHV is an independent predictor of GOS 3-5, with an odds ratio of 1.460 per 10 percentage point increase, as is initial GCS score (OR=1.090). The binary version of the Marshall CT score was an independent predictor of 6-month outcome, whereas age was not. These results suggest that quantitative cerebral cortical CBF measured within the first 6 and 12 h after TBI predicts 6-month outcome, which may be useful in guiding patient care and identifying patients for randomized clinical trials. A larger multicenter randomized clinical trial is indicated.
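
    The ROC-based evaluation described above can be reproduced in outline with scikit-learn; the %TNHV values and outcomes below are hypothetical, included purely to show the mechanics rather than any data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: %TNHV (percent of cortical volume with normal + hyperemic CBF)
# and 6-month outcome (1 = GOS 3-5, 0 = GOS 1-2).
tnhv = np.array([80, 65, 90, 38, 55, 20, 75, 40, 85, 25, 60, 35])
good_outcome = np.array([1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])

auc = roc_auc_score(good_outcome, tnhv)
fpr, tpr, thresholds = roc_curve(good_outcome, tnhv)
best = np.argmax(tpr - fpr)   # Youden index: one common way to pick an operating point
print(f"AUC = {auc:.2f}, candidate %TNHV threshold = {thresholds[best]}")
```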

  3. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from the phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. No method has been known until now for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be the most sensitive and accurate method, capable of detection and quantitation of trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/L or ng/mL, and the calibration curve shows good linearity (r² = 0.9974).

  4. Population Synthesis in the Blue. IV. Accurate Model Predictions for Lick Indices and UBV Colors in Single Stellar Populations

    NASA Astrophysics Data System (ADS)

    Schiavon, Ricardo P.

    2007-07-01

    We present a new set of model predictions for 16 Lick absorption line indices from Hδ through Fe5335 and UBV colors for single stellar populations with ages ranging between 1 and 15 Gyr, [Fe/H] ranging from -1.3 to +0.3, and variable abundance ratios. The models are based on accurate stellar parameters for the Jones library stars and a new set of fitting functions describing the behavior of line indices as a function of effective temperature, surface gravity, and iron abundance. The abundances of several key elements in the library stars have been obtained from the literature in order to characterize the abundance pattern of the stellar library, thus allowing us to produce model predictions for any set of abundance ratios desired. We develop a method to estimate mean ages and abundances of iron, carbon, nitrogen, magnesium, and calcium that explores the sensitivity of the various indices modeled to those parameters. The models are compared to high-S/N data for Galactic clusters spanning the range of ages, metallicities, and abundance patterns of interest. Essentially all line indices are matched when the known cluster parameters are adopted as input. Comparing the models to high-quality data for galaxies in the nearby universe, we reproduce previous results regarding the enhancement of light elements and the spread in the mean luminosity-weighted ages of early-type galaxies. When the results from the analysis of blue and red indices are contrasted, we find good consistency in the [Fe/H] that is inferred from different Fe indices. Applying our method to estimate mean ages and abundances from stacked SDSS spectra of early-type galaxies brighter than L*, we find mean luminosity-weighted ages of the order of ~8 Gyr and iron abundances slightly below solar. Abundance ratios, [X/Fe], tend to be higher than solar and are positively correlated with galaxy luminosity. Of all elements, nitrogen is the most strongly correlated with galaxy luminosity, which seems to indicate

  5. 3D soft tissue predictions with a tetrahedral mass tensor model for a maxillofacial planning system: a quantitative validation study

    NASA Astrophysics Data System (ADS)

    Mollemans, W.; Schutyser, F.; Nadjmi, N.; Maes, F.; Suetens, P.

    2006-03-01

    In this paper we present an extensive quantitative validation on 3D facial soft tissue simulation for maxillofacial surgery planning. The study group contained 10 patients. In previous work we presented a new Mass Tensor Model to simulate the new facial appearance after maxillofacial surgery in a fast way. 10 patients were preoperatively CT-scanned and the surgical intervention was planned. 4 months after surgery, a post-operative control CT was acquired. In this study, the simulated facial outlook is compared with post-operative image data. After defining corresponding points between the predicted and actual post-operative facial skin surface, using a variant of the non-rigid TPS-RPM algorithm, distances between these correspondences are quantified and visualized in 3D. As shown, the average median distance measures only 0.60 mm and the average 90% percentile stays below 1.5 mm. We can conclude that our model clearly provides an accurate prediction of the real post-operative outcome and is therefore suitable for use in clinical practice.

  6. Predictive three-dimensional quantitative structure-activity relationship of cytochrome P450 1A2 inhibitors.

    PubMed

    Korhonen, Laura E; Rahnasto, Minna; Mähönen, Niina J; Wittekindt, Carsten; Poso, Antti; Juvonen, Risto O; Raunio, Hannu

    2005-06-02

    The purpose of this study was to determine the cytochrome P450 1A2 (CYP1A2) inhibition potencies of structurally diverse compounds to create a comprehensive three-dimensional quantitative structure-activity relationship (3D-QSAR) model of CYP1A2 inhibitors and to use this model to predict the inhibition potencies of an external set of compounds. Fifty-two compounds including naphthalene, lactone and quinoline derivatives were assayed in a 96-well plate format for CYP1A2 inhibition activity using 7-ethoxyresorufin O-dealkylation as the probe reaction. The IC50 values of the tested compounds varied from 2.3 microM to over 40,000 microM. On the basis of this data set, a comparative molecular field analysis (CoMFA) and GRID/GOLPE models were created that yielded novel structural information about the interaction between inhibitory molecules and the CYP1A2 active site. The created CoMFA model was able to accurately predict inhibitory potencies of several structurally unrelated compounds, including selective inhibitors of other cytochrome P450 forms.

  7. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  8. A quantitative review of pollination syndromes: do floral traits predict effective pollinators?

    PubMed

    Rosas-Guerrero, Víctor; Aguilar, Ramiro; Martén-Rodríguez, Silvana; Ashworth, Lorena; Lopezaraiza-Mikel, Martha; Bastida, Jesús M; Quesada, Mauricio

    2014-03-01

    The idea of pollination syndromes has been largely discussed but no formal quantitative evaluation has yet been conducted across angiosperms. We present the first systematic review of pollination syndromes that quantitatively tests whether the most effective pollinators for a species can be inferred from suites of floral traits for 417 plant species. Our results support the syndrome concept, indicating that convergent floral evolution is driven by adaptation to the most effective pollinator group. The predictability of pollination syndromes is greater in pollinator-dependent species and in plants from tropical regions. Many plant species also have secondary pollinators that generally correspond to the ancestral pollinators documented in evolutionary studies. We discuss the utility and limitations of pollination syndromes and the role of secondary pollinators to understand floral ecology and evolution.

  9. Prediction of Coronal Mass Ejections From Vector Magnetograms: Quantitative Measures as Predictors

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    We derived two quantitative measures of an active region's global nonpotentiality from the region's vector magnetogram, 1) the net current (I_N), and 2) the length of the strong-shear, strong-field main neutral line (L_SS), and used these two measures in a pilot study of the CME productivity of 4 active regions. We compared the global nonpotentiality measures to the active regions' CME productivity determined from GOES and Yohkoh/SXT observations. We found that two of the active regions were highly globally nonpotential and were CME productive, while the other two active regions had little global nonpotentiality and produced no CMEs. At the Fall 2000 AGU, we reported on an expanded study (12 active regions and 17 magnetograms) in which we evaluated four quantitative global measures of an active region's magnetic field and compared these measures with the CME productivity. The four global measures (all derived from MSFC vector magnetograms) included our two previous measures (I_N and L_SS) as well as two new ones, the total magnetic flux (Φ) (a measure of an active region's size), and the normalized twist (ᾱ = μI_N/Φ). We found that the three quantitative measures of global nonpotentiality (I_N, L_SS, ᾱ) were all well correlated (greater than 99% confidence level) with an active region's CME productivity within plus or minus 2 days of the day of the magnetogram. We will now report on our findings of how good our quantitative measures are as predictors of active-region CME productivity, using only CMEs that occurred after the magnetogram. We report the preliminary skill test of these quantitative measures as predictors. We compare the CME prediction success of our quantitative measures to the CME prediction success based on an active region's past CME productivity. We examine the cases of the handful of false positives and false negatives to look for improvements to our predictors. This work is funded by NSF through the Space

  10. Quantitative analysis of tin alloy combined with artificial neural network prediction

    SciTech Connect

    Oh, Seong Y.; Yueh, Fang-Yu; Singh, Jagdish P.

    2010-05-01

    Laser-induced breakdown spectroscopy was applied to quantitative analysis of three impurities in Sn alloy. The impurity analysis was based on the internal standard method, using the Sn I 333.062-nm line as the reference line to achieve the most reproducible results. Minor-element concentrations (Ag, Cu, Pb) in the alloy were comparatively evaluated by artificial neural networks (ANNs) and calibration curves. ANN was found to effectively predict elemental concentrations with a trend of nonlinear growth due to self-absorption. The limits of detection for Ag, Cu, and Pb in Sn alloy were determined to be 29, 197, and 213 ppm, respectively.
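
    A minimal sketch of the internal-standard-plus-ANN idea described above: line intensities are expressed as a ratio to the Sn I 333.062-nm reference line, and a small neural network maps that ratio to concentration so the self-absorption nonlinearity can be followed. The calibration points and network settings are illustrative assumptions, not the authors' data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical LIBS calibration: analyte/reference intensity ratio vs. known Ag content (ppm).
ratio = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.55, 0.80, 1.00]).reshape(-1, 1)
conc_ppm = np.array([30, 80, 180, 400, 900, 1800, 3500, 5000])

# Fit the network on log-concentration so the wide dynamic range is handled gracefully.
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh", solver="lbfgs",
                   max_iter=5000, random_state=0)
ann.fit(ratio, np.log10(conc_ppm))

# Predicted concentration for an unknown sample with intensity ratio 0.45:
print(10 ** ann.predict([[0.45]]))
```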

  11. Quantitative Prediction of Computational Quality (so the S and C Folks will Accept it)

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Luckring, James M.; Morrison, Joseph H.

    2004-01-01

    Our choice of title may seem strange but we mean each word. In this talk, we are not going to be concerned with computations made "after the fact", i.e. those for which data are available and which are being conducted for explanation and insight. Here we are interested in preventing S&C design problems by finding them through computation before data are available. For such a computation to have any credibility with those who absorb the risk, it is necessary to quantitatively PREDICT the quality of the computational results.

  12. A quantitative parameter-free prediction of simulated crystal nucleation times

    SciTech Connect

    Aga, Rachel S; Morris, James R; Hoyt, Jeffrey John; Mendelev, Mikhail I.

    2006-01-01

    We present direct comparisons between simulated crystal-nucleation times and theoretical predictions using a model of aluminum, and demonstrate that a quantitative prediction can be made. All relevant thermodynamic properties of the system are known, making the agreement of our simulation data with nucleation theories free of any adjustable parameters. The role of transient nucleation is included in the classical nucleation theory approach, and shown to be necessary to understand the observed nucleation times. The calculations provide an explanation on why nucleation is difficult to observe in simulations at moderate undercoolings. Even when the simulations are significantly larger than the critical nucleus, and when simulation times are sufficiently long, at moderate undercoolings the small concentration of critical nuclei makes the probability of the nucleation low in molecular dynamics simulations.
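
    For reference, the classical-nucleation-theory quantities invoked above take the standard textbook form sketched below: a steady-state rate J_ss controlled by the barrier ΔG*, plus the usual induction-time correction for transient nucleation. The symbols (liquid number density ρ_ℓ, Zeldovich factor Z, attachment rate j⁺, interfacial free energy γ, bulk driving force per unit volume Δg_v, induction time τ) are generic and are not the specific expressions or values used in the paper.

```latex
% Generic steady-state CNT rate and barrier (textbook form, not paper-specific):
\[
  J_{\mathrm{ss}} = \rho_{\ell}\, Z\, j^{+}
      \exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right),
  \qquad
  \Delta G^{*} = \frac{16\pi\,\gamma^{3}}{3\,|\Delta g_{v}|^{2}}.
\]
% Transient (time-dependent) nucleation is often approximated with an induction time tau:
\[
  J(t) \approx J_{\mathrm{ss}}\,\exp\!\left(-\frac{\tau}{t}\right).
\]
```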

  13. Quantitative prediction of solvation free energy in octanol of organic compounds.

    PubMed

    Delgado, Eduardo J; Jaña, Gonzalo A

    2009-03-01

    The free energy of solvation, ΔGS⁰, in octanol of organic compounds is quantitatively predicted from the molecular structure. The model, involving only three molecular descriptors, is obtained by multiple linear regression analysis from a data set of 147 compounds containing diverse organic functions, namely, halogenated and non-halogenated alkanes, alkenes, alkynes, aromatics, alcohols, aldehydes, ketones, amines, ethers and esters; covering a ΔGS⁰ range from about -50 to 0 kJ·mol⁻¹. The model predicts the free energy of solvation with a squared correlation coefficient of 0.93 and a standard deviation, 2.4 kJ·mol⁻¹, just marginally larger than the generally accepted value of experimental uncertainty. The involved molecular descriptors have definite physical meaning corresponding to the different intermolecular interactions occurring in the bulk liquid phase. The model is validated with an external set of 36 compounds not included in the training set.
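
    A model of this type is ordinary multiple linear regression on a handful of descriptors; the sketch below shows the mechanics with three placeholder descriptors and invented free-energy values, not the descriptors or data set used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training table: three molecular descriptors (d1, d2, d3) per compound
# and the "experimental" solvation free energy in octanol (kJ/mol). All values invented.
X = np.array([[1.2, 0.8, 3.1],
              [2.5, 0.1, 4.0],
              [0.7, 1.9, 2.2],
              [3.1, 0.5, 5.5],
              [1.8, 1.2, 3.8],
              [2.2, 0.9, 4.4]])
dG_solv = np.array([-12.0, -25.0, -8.5, -38.0, -20.0, -27.5])

mlr = LinearRegression().fit(X, dG_solv)
print("coefficients:", mlr.coef_, "intercept:", round(mlr.intercept_, 2))
print("R^2 on training data:", round(mlr.score(X, dG_solv), 3))
```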

  14. Prediction of intracellular storage polymers using quantitative image analysis in enhanced biological phosphorus removal systems.

    PubMed

    Mesquita, Daniela P; Leal, Cristiano; Cunha, Jorge R; Oehmen, Adrian; Amaral, A Luís; Reis, Maria A M; Ferreira, Eugénio C

    2013-04-03

    The present study focuses on predicting the concentration of intracellular storage polymers in enhanced biological phosphorus removal (EBPR) systems. For that purpose, quantitative image analysis techniques were developed for determining the intracellular concentrations of PHA (PHB and PHV) with Nile blue and glycogen with aniline blue staining. Partial least squares (PLS) regression was used to predict the standard analytical values of these polymers from the data obtained by the proposed methodology. Identification of the aerobic and anaerobic stages proved to be crucial for improving the assessment of PHA, PHB and PHV intracellular concentrations. The current Nile blue-based methodology can be seen as a feasible starting point for further enhancement. Glycogen detection based on the developed aniline blue staining methodology combined with the image analysis data proved to be a promising technique, toward the elimination of the need for analytical off-line measurements.
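
    A minimal sketch of the PLS step, assuming image-analysis descriptors as inputs and off-line polymer concentrations as targets (both simulated here purely for illustration), using scikit-learn's PLSRegression:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: 12 image-analysis features (e.g. stained-area fractions, intensity
# statistics) per sample, and the off-line analytical concentrations of PHB, PHV and
# glycogen measured on the same 40 samples.
rng = np.random.default_rng(7)
X_img = rng.normal(size=(40, 12))
Y_polymer = X_img[:, :3] @ rng.normal(size=(3, 3)) + 0.1 * rng.normal(size=(40, 3))

pls = PLSRegression(n_components=3).fit(X_img, Y_polymer)
Y_hat = pls.predict(X_img)                      # predicted PHB, PHV, glycogen
print("average R^2 across targets:", round(pls.score(X_img, Y_polymer), 3))
```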

  15. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  16. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background Normalizing through reference genes, or housekeeping genes, can make more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping gene have been carried on plants. Therefore qPCR studies on important crops such as cotton has been hampered by the lack of suitable reference genes. Results By the use of two distinct algorithms, implemented by geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and in flower verticils. The expression of GhPP2A1 and GhUBQ14 genes were the most stable across all samples and also when distinct plants organs are examined. GhACT4 and GhUBQ14 present more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal control for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene
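
    The sketch below shows one common way such stability rankings are computed: a geNorm-style M value, i.e. the mean standard deviation of the pairwise log-ratios of a candidate gene against every other candidate across samples (lower M = more stable). The gene names are taken from the abstract, but the expression values are random placeholders rather than the study's qPCR data, and the real geNorm/NormFinder procedures include additional steps.

```python
import numpy as np

# Sketch of a geNorm-style expression-stability measure for candidate
# reference genes. Input is a samples x genes matrix of relative expression
# quantities; the data here are random placeholders.
def genorm_m_values(expr):
    """expr: 2-D array (n_samples, n_genes) of relative expression values."""
    log_expr = np.log2(expr)
    n_genes = log_expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)          # lower M = more stable reference gene
    return m

rng = np.random.default_rng(1)
genes = ["GhACT4", "GhUBQ14", "GhPP2A1", "GhFBX6"]
expr = rng.lognormal(mean=0.0, sigma=[0.6, 0.2, 0.25, 0.8], size=(23, 4))
for g, m in sorted(zip(genes, genorm_m_values(expr)), key=lambda t: t[1]):
    print(f"{g}: M = {m:.2f}")
```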

  17. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in the QSRR development in the gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in the GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both the case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science.
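
    A minimal sketch of a GP-based QSRR is given below, using the third-party gplearn library as the symbolic-regression engine; the paper does not name a specific implementation, and the descriptors and retention indices here are synthetic. The point is that the evolved program encodes both the model form and its parameters.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor   # third-party GP library (pip install gplearn)
from sklearn.model_selection import train_test_split

# Sketch of a genetic-programming QSRR: evolve both the form and the
# parameters of a model mapping molecular descriptors to Kovats retention
# indices (KRI). The four descriptors and KRI values below are synthetic.
rng = np.random.default_rng(2)
X = rng.uniform(0, 5, size=(80, 4))                       # hypothetical descriptors
y = 300 + 120 * X[:, 0] + 40 * X[:, 1] * X[:, 2] + rng.normal(scale=15, size=80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=('add', 'sub', 'mul', 'div'),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X_tr, y_tr)
print("evolved model:", gp._program)          # symbolic expression found by GP
print("test R^2:", round(gp.score(X_te, y_te), 3))
```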

  18. Investigation of telomere lengths measurement by quantitative real-time PCR to predict age.

    PubMed

    Hewakapuge, Sudinna; van Oorschot, Roland A H; Lewandowski, Paul; Baindur-Hudson, Swati

    2008-09-01

Currently DNA profiling methods only compare a suspect's DNA with DNA left at the crime scene. When there is no suspect, it would be useful for the police to be able to predict what the person of interest looks like by analysing the DNA left behind at a crime scene. Determination of the age of the suspect is an important factor in creating an identikit. Human somatic cells gradually lose telomeric repeats with age. This study investigated whether a correlation between telomere length and age could be used to predict the age of an individual from their DNA. Telomere length, in buccal cells, of 167 individuals aged between 1 and 96 years was measured using real-time quantitative PCR. Telomere length decreased with age (r=-0.185, P<0.05) and the age of an individual could be roughly estimated by the formula age = (relative telomere length - 1.5)/(-0.005). The regression (R(2)) value between telomere length and age was approximately 0.04, which is too low to be of use for forensics. The causes of the large variation in telomere lengths in the population were further investigated. The age prediction accuracies were low even after dividing samples into non-related Caucasians, males and females (5%, 9% and 1%, respectively). Mean telomere lengths of eight age groups representing each decade of life showed a non-linear decrease in telomere length with age. There were variations in telomere lengths even among similarly aged individuals, both at 26 years old (n=10) and at 54 years old (n=9). Therefore, telomere length measurement by real-time quantitative PCR cannot be used to predict the age of a person, due to the presence of large inter-individual variations in telomere lengths.
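
    For clarity, the reported regression can be written as a one-line function; as the abstract stresses, its explanatory power (R² of roughly 0.04) is far too weak for forensic use, so the sketch below is purely illustrative of the published relationship.

```python
# Direct transcription of the reported regression,
# age = (relative telomere length - 1.5) / (-0.005).
# Purely illustrative: the fit is far too weak for forensic age prediction.
def predicted_age(relative_telomere_length: float) -> float:
    return (relative_telomere_length - 1.5) / (-0.005)

for t in (1.4, 1.2, 1.0):
    print(f"relative T/S = {t}: predicted age ~ {predicted_age(t):.0f} years")
```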

  19. Prediction of quantitative intrathoracic fluid volume to diagnose pulmonary oedema using LabVIEW.

    PubMed

    Urooj, Shabana; Khan, M; Ansari, A Q; Lay-Ekuakille, Aimé; Salhan, Ashok K

    2012-01-01

    Pulmonary oedema is a life-threatening disease that requires special attention in the area of research and clinical diagnosis. Computer-based techniques are rarely used to quantify the intrathoracic fluid volume (IFV) for diagnostic purposes. This paper discusses a software program developed to detect and diagnose pulmonary oedema using LabVIEW. The software runs on anthropometric dimensions and physiological parameters, mainly transthoracic electrical impedance (TEI). This technique is accurate and faster than existing manual techniques. The LabVIEW software was used to compute the parameters required to quantify IFV. An equation relating per cent control and IFV was obtained. The results of predicted TEI and measured TEI were compared with previously reported data to validate the developed program. It was found that the predicted values of TEI obtained from the computer-based technique were much closer to the measured values of TEI. Six new subjects were enrolled to measure and predict transthoracic impedance and hence to quantify IFV. A similar difference was also observed in the measured and predicted values of TEI for the new subjects.

  20. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    PubMed

    Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst most homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets.
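
    A compact sketch of the SVM-RFE-plus-SVM scheme described above is shown below using scikit-learn; the synthetic features stand in for the integrated PSSM/PROFEAT/GO vectors and the labels for structural classes, neither of which is reproduced here.

```python
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Sketch of SVM-RFE feature selection followed by SVM classification.
# Synthetic data stand in for the integrated PSSM / PROFEAT / GO features.
X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

svm = SVC(kernel="linear", C=1.0)
selector = RFE(estimator=svm, n_features_to_select=30, step=10)  # recursively drop lowest-ranked features
X_sel = selector.fit_transform(X, y)

acc = cross_val_score(SVC(kernel="linear", C=1.0), X_sel, y, cv=5).mean()
print("selected features:", selector.n_features_)
print("cross-validated accuracy:", round(acc, 3))
```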

  1. Quantitative perturbation-based analysis of gene expression predicts enhancer activity in early Drosophila embryo

    PubMed Central

    Sayal, Rupinder; Dresch, Jacqueline M; Pushel, Irina; Taylor, Benjamin R; Arnosti, David N

    2016-01-01

    Enhancers constitute one of the major components of regulatory machinery of metazoans. Although several genome-wide studies have focused on finding and locating enhancers in the genomes, the fundamental principles governing their internal architecture and cis-regulatory grammar remain elusive. Here, we describe an extensive, quantitative perturbation analysis targeting the dorsal-ventral patterning gene regulatory network (GRN) controlled by Drosophila NF-κB homolog Dorsal. To understand transcription factor interactions on enhancers, we employed an ensemble of mathematical models, testing effects of cooperativity, repression, and factor potency. Models trained on the dataset correctly predict activity of evolutionarily divergent regulatory regions, providing insights into spatial relationships between repressor and activator binding sites. Importantly, the collective predictions of sets of models were effective at novel enhancer identification and characterization. Our study demonstrates how experimental dataset and modeling can be effectively combined to provide quantitative insights into cis-regulatory information on a genome-wide scale. DOI: http://dx.doi.org/10.7554/eLife.08445.001 PMID:27152947

  2. Genotype-based quantitative prediction of drug exposure for drugs metabolized by CYP2D6.

    PubMed

    Tod, M; Goutelle, S; Gagnieu, M C

    2011-10-01

    We propose a framework to enable quantitative prediction of the impact of CYP2D6 polymorphisms on drug exposure. It relies mostly on in vivo data and uses two characteristic parameters: one for the drug and the other for the genotype. The metric of interest is the ratio of drug area under the curve (AUC) in patients with mutant genotype to the AUC in patients with wild-type genotype. Any combination of alleles, as well as duplications, may be accommodated in the framework. Estimates of the characteristic parameters were obtained by orthogonal regression for 40 drugs and five classes of genotypes, respectively, including poor, intermediate, and ultrarapid metabolizers (PMs, IMs, and UMs). The mean prediction error of AUC ratios was -0.05, and the mean prediction absolute error was 0.20. An external validation was also carried out. The model may be used to predict the variations in exposure induced by all drug-genotype combinations. An application of this model to a rare combination of alleles (*4*10) is described.
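
    The sketch below illustrates the flavour of such a two-parameter framework: one drug-specific parameter (the contribution of CYP2D6 to clearance) and one genotype-specific parameter (residual enzymatic activity) combined into a predicted AUC ratio. The functional form is a common static interaction model and is an assumption made for illustration, not necessarily the exact published equation; the numerical values are likewise hypothetical.

```python
# Two-parameter exposure prediction in the spirit of the abstract: one drug
# parameter ("cr", the fraction of clearance handled by CYP2D6) and one
# genotype parameter ("fg", residual CYP2D6 activity relative to wild type).
# The equation below is an assumed static model, not necessarily the paper's.
def auc_ratio(cr: float, fg: float) -> float:
    """Predicted AUC(mutant genotype) / AUC(wild type).

    cr: drug-specific contribution of CYP2D6 to total clearance (0..1)
    fg: genotype-specific fractional activity (0 for PM, ~0.5 for IM,
        1 for extensive metabolizers, >1 for UM duplications)
    """
    return 1.0 / (1.0 - cr * (1.0 - fg))

for genotype, fg in [("PM", 0.0), ("IM", 0.5), ("EM", 1.0), ("UM", 1.5)]:
    print(genotype, round(auc_ratio(cr=0.8, fg=fg), 2))
```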

  3. Prediction of prostate cancer recurrence using quantitative phase imaging: Validation on a general population

    NASA Astrophysics Data System (ADS)

    Sridharan, Shamira; Macias, Virgilia; Tangella, Krishnarao; Melamed, Jonathan; Dube, Emily; Kong, Max Xiangtian; Kajdacsy-Balla, André; Popescu, Gabriel

    2016-09-01

    Prediction of biochemical recurrence risk of prostate cancer following radical prostatectomy is critical for determining whether the patient would benefit from adjuvant treatments. Various nomograms exist today for identifying individuals at higher risk for recurrence; however, an optimistic under-estimation of recurrence risk is a common problem associated with these methods. We previously showed that anisotropy of light scattering measured using quantitative phase imaging, in the stromal layer adjacent to cancerous glands, is predictive of recurrence. That nested-case controlled study consisted of specimens specifically chosen such that the current prognostic methods fail. Here we report on validating the utility of optical anisotropy for prediction of prostate cancer recurrence in a general population of 192 patients, with 17% probability of recurrence. Our results show that our method can identify recurrent cases with 73% sensitivity and 72% specificity, which is comparable to that of CAPRA-S, a current state of the art method, in the same population. However, our results show that optical anisotropy outperforms CAPRA-S for patients with Gleason grades 7–10. In essence, we demonstrate that anisotropy is a better biomarker for identifying high-risk cases, while Gleason grade is better suited for selecting non-recurrence. Therefore, we propose that anisotropy and current techniques be used together to maximize prediction accuracy.

  4. Effects of genetic and environmental factors on trait network predictions from quantitative trait locus data.

    PubMed

    Remington, David L

    2009-03-01

    The use of high-throughput genomic techniques to map gene expression quantitative trait loci has spurred the development of path analysis approaches for predicting functional networks linking genes and natural trait variation. The goal of this study was to test whether potentially confounding factors, including effects of common environment and genes not included in path models, affect predictions of cause-effect relationships among traits generated by QTL path analyses. Structural equation modeling (SEM) was used to test simple QTL-trait networks under different regulatory scenarios involving direct and indirect effects. SEM identified the correct models under simple scenarios, but when common-environment effects were simulated in conjunction with direct QTL effects on traits, they were poorly distinguished from indirect effects, leading to false support for indirect models. Application of SEM to loblolly pine QTL data provided support for biologically plausible a priori hypotheses of QTL mechanisms affecting height and diameter growth. However, some biologically implausible models were also well supported. The results emphasize the need to include any available functional information, including predictions for genetic and environmental correlations, to develop plausible models if biologically useful trait network predictions are to be made.

  5. Predicting total organic halide formation from drinking water chlorination using quantitative structure-property relationships.

    PubMed

    Luilo, G B; Cabaniss, S E

    2011-10-01

Chlorinating water which contains dissolved organic matter (DOM) produces disinfection byproducts, the majority of unknown structure. Hence, the total organic halide (TOX) measurement is used as a surrogate for toxic disinfection byproducts. This work derives a robust quantitative structure-property relationship (QSPR) for predicting the TOX formation potential of model compounds. Literature data for 49 compounds were used to train the QSPR in moles of chlorine per mole of compound (Cp) (mol-Cl/mol-Cp). The resulting QSPR has four descriptors, a calibration r² of 0.72 and a standard deviation of estimation of 0.43 mol-Cl/mol-Cp. Internal and external validation indicate that the QSPR has good predictive power and low bias (<1%). Applying this QSPR to predict TOX formation by DOM surrogates - tannic acid, two model fulvic acids and two agent-based model assemblages - gave a predicted TOX range of 136-184 µg-Cl/mg-C, consistent with experimental data for DOM, which ranged from 78 to 192 µg-Cl/mg-C. However, the limited structural variation in the training data may limit QSPR applicability; studies of more sulfur-containing compounds, heterocyclic compounds and high molecular weight compounds could lead to a more widely applicable QSPR.

  6. Prediction of prostate cancer recurrence using quantitative phase imaging: Validation on a general population

    PubMed Central

    Sridharan, Shamira; Macias, Virgilia; Tangella, Krishnarao; Melamed, Jonathan; Dube, Emily; Kong, Max Xiangtian; Kajdacsy-Balla, André; Popescu, Gabriel

    2016-01-01

    Prediction of biochemical recurrence risk of prostate cancer following radical prostatectomy is critical for determining whether the patient would benefit from adjuvant treatments. Various nomograms exist today for identifying individuals at higher risk for recurrence; however, an optimistic under-estimation of recurrence risk is a common problem associated with these methods. We previously showed that anisotropy of light scattering measured using quantitative phase imaging, in the stromal layer adjacent to cancerous glands, is predictive of recurrence. That nested-case controlled study consisted of specimens specifically chosen such that the current prognostic methods fail. Here we report on validating the utility of optical anisotropy for prediction of prostate cancer recurrence in a general population of 192 patients, with 17% probability of recurrence. Our results show that our method can identify recurrent cases with 73% sensitivity and 72% specificity, which is comparable to that of CAPRA-S, a current state of the art method, in the same population. However, our results show that optical anisotropy outperforms CAPRA-S for patients with Gleason grades 7–10. In essence, we demonstrate that anisotropy is a better biomarker for identifying high-risk cases, while Gleason grade is better suited for selecting non-recurrence. Therefore, we propose that anisotropy and current techniques be used together to maximize prediction accuracy. PMID:27658807

  7. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    PubMed

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory and are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  8. A hybrid approach to advancing quantitative prediction of tissue distribution of basic drugs in human

    SciTech Connect

    Poulin, Patrick; Ekins, Sean; Theil, Frank-Peter

    2011-01-15

A general toxicity of basic drugs is related to phospholipidosis in tissues. Therefore, it is essential to predict the tissue distribution of basic drugs to facilitate an initial estimate of that toxicity. The objective of the present study was to further assess the original prediction method that consisted of using the binding to red blood cells measured in vitro for the unbound drug (RBCu) as a surrogate for tissue distribution, by correlating it to unbound tissue:plasma partition coefficients (Kpu) of several tissues, and finally to predict volume of distribution at steady-state (Vss) in humans under in vivo conditions. This correlation method demonstrated inaccurate predictions of Vss for particular basic drugs that did not follow the original correlation principle. Therefore, the novelty of this study is to provide clarity on the actual hypotheses to identify i) the impact of pharmacological mode of action on the generic correlation of RBCu-Kpu, ii) additional mechanisms of tissue distribution for the outlier drugs, iii) molecular features and properties that differentiate compounds as outliers in the original correlation analysis in order to facilitate its applicability domain alongside the properties already used so far, and finally iv) to present a novel and refined correlation method that is superior to what has been previously published for the prediction of human Vss of basic drugs. Applying a refined correlation method after identifying outliers would facilitate the prediction of more accurate distribution parameters as key inputs used in physiologically based pharmacokinetic (PBPK) and phospholipidosis models.

  9. Can the conventional sextant prostate biopsy accurately predict unilateral prostate cancer in low-risk, localized, prostate cancer?

    PubMed

    Mayes, Janice M; Mouraviev, Vladimir; Sun, Leon; Tsivian, Matvey; Madden, John F; Polascik, Thomas J

    2011-01-01

    We evaluate the reliability of routine sextant prostate biopsy to detect unilateral lesions. A total of 365 men with complete records including all clinical and pathologic variables who underwent a preoperative sextant biopsy and subsequent radical prostatectomy (RP) for clinically localized prostate cancer at our medical center between January 1996 and December 2006 were identified. When the sextant biopsy detects unilateral disease, according to RP results, the NPV is high (91%) with a low false negative rate (9%). However, the sextant biopsy has a PPV of 28% with a high false positive rate (72%). Therefore, a routine sextant prostate biopsy cannot provide reliable, accurate information about the unilaterality of tumor lesion(s).

  10. Quantitative structure-activity relationships (QSARs) for estrogen binding to the estrogen receptor: predictions across species.

    PubMed Central

    Tong, W; Perkins, R; Strelitz, R; Collantes, E R; Keenan, S; Welsh, W J; Branham, W S; Sheehan, D M

    1997-01-01

The recognition of adverse effects due to environmental endocrine disruptors in humans and wildlife has focused attention on the need for predictive tools to select the most likely estrogenic chemicals from a very large number of chemicals for subsequent screening and/or testing for potential environmental toxicity. A three-dimensional quantitative structure-activity relationship (QSAR) model using comparative molecular field analysis (CoMFA) was constructed based on relative binding affinity (RBA) data from an estrogen receptor (ER) binding assay using calf uterine cytosol. The model demonstrated significant correlation of the calculated steric and electrostatic fields with RBA and yielded predictions that agreed well with experimental values over the entire range of RBA values. Analysis of the CoMFA three-dimensional contour plots revealed a consistent picture of the structural features that are largely responsible for the observed variations in RBA. Importantly, we established a correlation between the predicted RBA values for calf ER and their actual RBA values for human ER. These findings suggest a means to begin to construct a more comprehensive estrogen knowledge base by combining RBA assay data from multiple species in 3D-QSAR based predictive models, which could then be used to screen untested chemicals for their potential to bind to the ER. Another QSAR model was developed based on classical physicochemical descriptors generated using the CODESSA (Comprehensive Descriptors for Structural and Statistical Analysis) program. The predictive ability of the CoMFA model was superior to the corresponding CODESSA model. PMID:9353176

  11. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    PubMed Central

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, r2 value, r2 (CV) value and r2 prediction value of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for external test set) of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set. PMID:22272096

  12. Ranking of hair dye substances according to predicted sensitization potency: quantitative structure-activity relationships.

    PubMed

    Søsted, H; Basketter, D A; Estrada, E; Johansen, J D; Patlewicz, G Y

    2004-01-01

    Allergic contact dermatitis following the use of hair dyes is well known. Many chemicals are used in hair dyes and it is unlikely that all cases of hair dye allergy can be diagnosed by means of patch testing with p-phenylenediamine (PPD). The objectives of this study are to identify all hair dye substances registered in Europe and to provide their tonnage data. The sensitization potential of each substance was then estimated by using a quantitative structure-activity relationship (QSAR) model and the substances were ranked according to their predicted potency. A cluster analysis was performed in order to help select a number of chemically diverse hair dye substances that could be used in subsequent clinical work. Various information sources, including the Inventory of Cosmetics Ingredients, new regulations on cosmetics, data on total use and ChemId (the Chemical Search Input website provided by the National Library of Medicine), were used in order to identify the names and structures of the hair dyes. A QSAR model, developed with the help of experimental local lymph node assay data and topological sub-structural molecular descriptors (TOPS-MODE), was used in order to predict the likely sensitization potential. Predictions for sensitization potential were made for the 229 substances that could be identified by means of a chemical structure, the majority of these hair dyes (75%) being predicted to be strong/moderate sensitizers. Only 22% were predicted to be weak sensitizers and 3% were predicted to be extremely weak or non-sensitizing. Eight of the most widely used hair dye substances were predicted to be strong/moderate sensitizers, including PPD - which is the most commonly used hair dye allergy marker in patch testing. A cluster analysis by using TOPS-MODE descriptors as inputs helped us group the hair dye substances according to their chemical similarity. This would facilitate the selection of potential substances for clinical patch testing. A patch-test series

  13. Quantitative prediction of catalepsy induced by amoxapine, cinnarizine and cyclophosphamide in mice.

    PubMed

    Nasu, R; Matsuo, H; Takanaga, H; Ohtani, H; Sawada, Y

    2000-05-01

    Parkinsonism can be a side effect of antipsychotic drugs, and has recently been reported with peripherally acting drugs such as calcium channel blockers, antiarrhythmic agents and so on. In this study, we examined the quantitative prediction of drug-induced catalepsy by amoxapine, cinnarizine and cyclophosphamide, which have been reported to induce parkinsonism. Dose-dependent catalepsy was induced by these drugs in mice. In vivo dopamine D(1), D(2) and muscarinic acetylcholine (mACh) receptor occupancies by these drugs in the striatum were also examined. The in vitro binding affinities (K(i) values) of amoxapine and cinnarizine to dopamine D(1), D(2) and mACh receptors in rat striatal synaptic membrane were 200 and 2900 nM, 58.4 and 76.4 nM and 379 and 290 nM, respectively. Cyclophosphamide did not bind to these receptors at concentrations up to 100 microM. Twenty drugs, including those mentioned above, showed a significant correlation between the observed intensity of catalepsy and the values predicted with a pharmacodynamic model (Haraguchi K, Ito K, Kotaki H, Sawada Y, Iga T. Prediction of drug-induced catalepsy based on dopamine D(1), D(2), and muscarinic acetylcholine receptor occupancies. Drug Metab Disp 1997; 25: 675-684) based on in vivo occupancy of dopamine D(1), D(2) and mACh receptors. We conclude that occupancy of dopamine D(1) and D(2) receptors contributes to catalepsy induction by amoxapine and cinnarizine.

  14. Predicting adsorption of aromatic compounds by carbon nanotubes based on quantitative structure property relationship principles

    NASA Astrophysics Data System (ADS)

    Rahimi-Nasrabadi, Mehdi; Akhoondi, Reza; Pourmortazavi, Seied Mahdi; Ahmadi, Farhad

    2015-11-01

Quantitative structure property relationship (QSPR) models were developed to predict the adsorption of aromatic compounds by carbon nanotubes (CNTs). Five descriptors chosen by combining self-organizing map and stepwise multiple linear regression (MLR) techniques were used to connect the structure of the studied chemicals with their adsorption descriptor (K∞) using linear and nonlinear modeling techniques. A correlation coefficient (R2) of 0.99 and a root-mean-square error (RMSE) of 0.29 for the multilayered perceptron neural network (MLP-NN) model are signs of the superiority of the developed nonlinear model over the MLR model, which had an R2 of 0.93 and an RMSE of 0.36. The results of a cross-validation test showed the reliability of the MLP-NN model for predicting the K∞ values of the aromatic contaminants. Molar volume and hydrogen bond accepting ability were found to be the factors most strongly influencing the adsorption of the compounds. The developed QSPR, as a neural network based model, could be used to predict the adsorption of organic compounds by CNTs.

  15. Novel quantitative pigmentation phenotyping enhances genetic association, epistasis, and prediction of human eye colour

    PubMed Central

    Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H.; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R.; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E.; Kayser, Manfred

    2017-01-01

    Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing. PMID:28240252

  16. Novel quantitative pigmentation phenotyping enhances genetic association, epistasis, and prediction of human eye colour.

    PubMed

    Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E; Kayser, Manfred

    2017-02-27

    Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing.

  17. Quantitative EEG for Predicting Upper-limb Motor Recovery in Chronic Stroke Robot-assisted Rehabilitation.

    PubMed

    Trujillo, Paula; Mastropietro, Alfonso; Scano, Alessandro; Chiavenna, Andrea; Mrakic-Sposta, Simona; Caimmi, Marco; Molteni, Franco; Rizzo, Giovanna

    2017-03-03

Stroke is a leading cause of adult disability and in many cases results in motor deficits. Despite developments in motor rehabilitation techniques, recovery of upper limb functions after stroke is limited and heterogeneous in terms of outcomes, and knowledge of important factors that may affect the outcome of the therapy is necessary to make a reasonable prediction for individual patients. In this study, we assessed the relationship between quantitative electroencephalographic (QEEG) measures and the motor outcome in chronic stroke patients who underwent a robot-assisted rehabilitation program, to evaluate the utility of QEEG indices for predicting motor recovery. For this purpose, we acquired resting-state electroencephalographic signals from which the Power Ratio Index (PRI), Delta/Alpha Ratio (DAR), and Brain Symmetry Index (BSI) were calculated. The outcome of the motor rehabilitation was evaluated using the upper-limb section of the Fugl-Meyer Assessment. We found that PRI was significantly correlated with motor recovery, suggesting that this index may provide useful information to predict the rehabilitation outcome.
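
    The indices named in the abstract are simple ratios of EEG band powers. The sketch below computes a Power Ratio Index, a Delta/Alpha Ratio, and a simplified Brain Symmetry Index from Welch power spectra; the signal is synthetic noise, and the band limits and the two-channel BSI simplification are common choices rather than the study's exact definitions.

```python
import numpy as np
from scipy.signal import welch

# Sketch of resting-state QEEG indices: Power Ratio Index (PRI), Delta/Alpha
# Ratio (DAR), and a simplified Brain Symmetry Index (BSI). Synthetic signal;
# band definitions are common choices, not necessarily those of the study.
FS = 250.0                                    # sampling rate, Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs=FS):
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    return {b: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for b, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(3)
left, right = rng.normal(size=(2, int(60 * FS)))           # 60 s of fake EEG, two channels
p = band_powers(left)

pri = (p["delta"] + p["theta"]) / (p["alpha"] + p["beta"])  # slow/fast power ratio
dar = p["delta"] / p["alpha"]

fL, pL = welch(left, fs=FS, nperseg=int(4 * FS))
fR, pR = welch(right, fs=FS, nperseg=int(4 * FS))
keep = (fL >= 1) & (fL <= 30)
bsi = np.mean(np.abs(pL[keep] - pR[keep]) / (pL[keep] + pR[keep]))  # 0 = symmetric

print(f"PRI = {pri:.2f}, DAR = {dar:.2f}, BSI = {bsi:.2f}")
```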

  18. Is Demography Destiny? Application of Machine Learning Techniques to Accurately Predict Population Health Outcomes from a Minimal Demographic Dataset

    PubMed Central

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease. PMID:25938675
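
    The validation logic described here, predicting outcomes for states never seen during training, can be mimicked with grouped cross-validation, as in the sketch below. The model choice, features, and prevalence values are all synthetic placeholders; the study's actual algorithm and American Community Survey inputs are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold
from scipy.stats import pearsonr

# Sketch of "held-out states" validation: train on some states, predict
# prevalence for the remaining states, and report the predicted-observed
# correlation. All data below are synthetic placeholders.
rng = np.random.default_rng(7)
n_states, n_years, n_feats = 50, 6, 12
states = np.repeat(np.arange(n_states), n_years)
X = rng.normal(size=(n_states * n_years, n_feats))          # socio-demographic features
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=len(states))

preds = np.empty_like(y)
for train, test in GroupKFold(n_splits=5).split(X, y, groups=states):
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])                    # states never seen in training

r, _ = pearsonr(preds, y)
print("correlation on held-out states:", round(r, 2))
```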

  19. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.

  20. A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults

    ERIC Educational Resources Information Center

    George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.

    2007-01-01

The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR sub…

  1. Accurate Prediction of Protein Functional Class From Sequence in the Mycobacterium Tuberculosis and Escherichia Coli Genomes Using Data Mining

    PubMed Central

    Karwath, Andreas; Clare, Amanda; Dehaspe, Luc

    2000-01-01

    The analysis of genomics data needs to become as automated as its generation. Here we present a novel data-mining approach to predicting protein functional class from sequence. This method is based on a combination of inductive logic programming clustering and rule learning. We demonstrate the effectiveness of this approach on the M. tuberculosis and E. coli genomes, and identify biologically interpretable rules which predict protein functional class from information only available from the sequence. These rules predict 65% of the ORFs with no assigned function in M. tuberculosis and 24% of those in E. coli, with an estimated accuracy of 60–80% (depending on the level of functional assignment). The rules are founded on a combination of detection of remote homology, convergent evolution and horizontal gene transfer. We identify rules that predict protein functional class even in the absence of detectable sequence or structural homology. These rules give insight into the evolutionary history of M. tuberculosis and E. coli. PMID:11119305

  2. Herbivore-induced plant volatiles accurately predict history of coexistence, diet breadth, and feeding mode of herbivores.

    PubMed

    Danner, Holger; Desurmont, Gaylord A; Cristescu, Simona M; van Dam, Nicole M

    2017-01-30

    Herbivore-induced plant volatiles (HIPVs) serve as specific cues to higher trophic levels. Novel, exotic herbivores entering native foodwebs may disrupt the infochemical network as a result of changes in HIPV profiles. Here, we analysed HIPV blends of native Brassica rapa plants infested with one of 10 herbivore species with different coexistence histories, diet breadths and feeding modes. Partial least squares (PLS) models were fitted to assess whether HIPV blends emitted by Dutch B. rapa differ between native and exotic herbivores, between specialists and generalists, and between piercing-sucking and chewing herbivores. These models were used to predict the status of two additional herbivores. We found that HIPV blends predicted the evolutionary history, diet breadth and feeding mode of the herbivore with an accuracy of 80% or higher. Based on the HIPVs, the PLS models reliably predicted that Trichoplusia ni and Spodoptera exigua are perceived as exotic, leaf-chewing generalists by Dutch B. rapa plants. These results indicate that there are consistent and predictable differences in HIPV blends depending on global herbivore characteristics, including coexistence history. Consequently, native organisms may be able to rapidly adapt to potentially disruptive effects of exotic herbivores on the infochemical network.
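
    A minimal sketch of a PLS discriminant analysis of volatile blends is given below: fit PLS components against a binary herbivore trait and threshold the continuous prediction. The volatile matrix, the trait labels, and the induced signal are random placeholders rather than the measured HIPV profiles of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Sketch of PLS discriminant analysis of HIPV blends: regress a binary trait
# (e.g. 0 = native, 1 = exotic herbivore) on volatile profiles and threshold
# the cross-validated prediction at 0.5. All data are synthetic placeholders.
rng = np.random.default_rng(4)
n_samples, n_volatiles = 60, 40
X = rng.lognormal(size=(n_samples, n_volatiles))           # volatile emission levels
y = rng.integers(0, 2, size=n_samples)                      # herbivore trait labels
X[y == 1, :5] *= 3.0                                         # exotic herbivores shift a few compounds

pls = PLSRegression(n_components=3)
y_cont = cross_val_predict(pls, X, y.astype(float), cv=5).ravel()
y_pred = (y_cont > 0.5).astype(int)
print("cross-validated accuracy:", round(float(np.mean(y_pred == y)), 2))
```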

  3. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030

  4. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    PubMed Central

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance. PMID:27412307

  5. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    NASA Astrophysics Data System (ADS)

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-07-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance.

  6. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with an O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
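
    The core idea, mapping a coarse-resolution solution onto a reduced basis learned from paired coarse/fine training snapshots, can be sketched in a few lines, as below. The linear least-squares map and the synthetic fields are a simplification for illustration; the actual PODMM formulation, error estimator, and PAWS+CLM solutions are not reproduced.

```python
import numpy as np

# Sketch of a POD-based downscaling: learn a reduced basis for fine-resolution
# fields from paired coarse/fine snapshots, then estimate fine fields for new
# coarse solutions only. All fields are synthetic placeholders.
rng = np.random.default_rng(5)
n_train, n_coarse, n_fine = 120, 50, 2000

# Hidden low-rank coarse-to-fine relation used to generate training pairs.
A = rng.normal(size=(n_fine, 5)) @ rng.normal(size=(5, n_coarse))
X_coarse = rng.normal(size=(n_coarse, n_train))             # coarse snapshots (columns)
X_fine = A @ X_coarse + 0.01 * rng.normal(size=(n_fine, n_train))

# POD basis of the fine snapshots (truncated SVD).
U, s, _ = np.linalg.svd(X_fine, full_matrices=False)
Phi = U[:, :10]                                              # retained POD modes

# Map coarse snapshots to POD coefficients by least squares.
coeffs = Phi.T @ X_fine                                      # (modes, n_train)
M, *_ = np.linalg.lstsq(X_coarse.T, coeffs.T, rcond=None)    # (n_coarse, modes)

# Downscale a new coarse solution and check the reconstruction error.
x_c = rng.normal(size=(n_coarse, 1))
x_f_true = A @ x_c
x_f_rom = Phi @ (M.T @ x_c)
rel_err = np.linalg.norm(x_f_rom - x_f_true) / np.linalg.norm(x_f_true)
print(f"relative downscaling error: {rel_err:.3f}")
```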

  7. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    PubMed

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro test, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R(2)=0.93, RMSE=10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response.

  8. SnowyOwl: accurate prediction of fungal genes by using RNA-Seq and homology information to select among ab initio models

    PubMed Central

    2014-01-01

    Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894

  9. Prediction of psychotropic properties of lisuride hydrogen maleate by quantitative pharmaco-electroencephalogram.

    PubMed

    Itil, T M; Herrmann, W M; Akpinar, S

    1975-07-01

    Based on "quantitative pharmaco-EEG" using computer-analyzed EEG (CEEG) measurements, unknown CNS effects of lisuride hydrogen maleate (LHM) were established. CEEG profiles of LHM in low dosages (less than or equal to 10 mcg) are similar to CNS "inhibitory" compounds, while in higher dosages (25 mcg to 100 mcg) they resemble "psychostimulant" compounds. By measuring the brain function using computer period analysis of cerebral biopotentials, dose-efficacy relations were found (in the range of 25-75 mcg) which suggest the bioavailability of LHM at the CNS level. By comparing the CEEG profiles of LHM with the previously studied compounds, five different clinical uses of LHM were predicted. The pilot trials suggest that LHM may have therapeutic potentials in patients with "aging" and/or organic brain syndromes, and in children with behavioral disturbances.

  10. Translating HIV sequences into quantitative fitness landscapes predicts viral vulnerabilities for rational immunogen design.

    PubMed

    Ferguson, Andrew L; Mann, Jaclyn K; Omarjee, Saleha; Ndung'u, Thumbi; Walker, Bruce D; Chakraborty, Arup K

    2013-03-21

    A prophylactic or therapeutic vaccine offers the best hope to curb the HIV-AIDS epidemic gripping sub-Saharan Africa, but it remains elusive. A major challenge is the extreme viral sequence variability among strains. Systematic means to guide immunogen design for highly variable pathogens like HIV are not available. Using computational models, we have developed an approach to translate available viral sequence data into quantitative landscapes of viral fitness as a function of the amino acid sequences of its constituent proteins. Predictions emerging from our computationally defined landscapes for the proteins of HIV-1 clade B Gag were positively tested against new in vitro fitness measurements and were consistent with previously defined in vitro measurements and clinical observations. These landscapes chart the peaks and valleys of viral fitness as protein sequences change and inform the design of immunogens and therapies that can target regions of the virus most vulnerable to selection pressure.
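
    The abstract does not give the mathematical form of the landscape, but a common way to realise such sequence-to-fitness maps is a pairwise (Potts-like) model inferred from aligned sequences, in which a sequence's statistical energy is a sum of single-site and pairwise terms and fitness is taken to decay with energy. The sketch below illustrates that general idea only; the alphabet, fields h and couplings J are hypothetical placeholders, not the inferred HIV Gag parameters.

        import numpy as np

        # Toy alphabet and a short "protein" of length 4 for illustration.
        ALPHABET = "ACDE"
        L, q = 4, len(ALPHABET)

        rng = np.random.default_rng(1)
        h = rng.normal(0.0, 0.5, (L, q))          # hypothetical single-site fields
        J = rng.normal(0.0, 0.1, (L, L, q, q))    # hypothetical pairwise couplings
        J = (J + J.transpose(1, 0, 3, 2)) / 2     # enforce symmetry J_ij(a,b) = J_ji(b,a)

        def energy(seq):
            """Potts-like energy; lower energy ~ higher inferred replicative fitness."""
            idx = [ALPHABET.index(a) for a in seq]
            e = -sum(h[i, idx[i]] for i in range(L))
            e -= sum(J[i, j, idx[i], idx[j]] for i in range(L) for j in range(i + 1, L))
            return e

        def relative_fitness(seq, ref):
            """Fitness of seq relative to a reference strain, f ~ exp(-E)."""
            return np.exp(energy(ref) - energy(seq))

        print(relative_fitness("ACDE", "AADE"))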

  11. Translating HIV sequences into quantitative fitness landscapes predicts viral vulnerabilities for rational immunogen design

    PubMed Central

    Ferguson, Andrew L.; Mann, Jaclyn K.; Omarjee, Saleha; Ndung’u, Thumbi; Walker, Bruce D.; Chakraborty, Arup K.

    2013-01-01

    Summary A prophylactic or therapeutic vaccine offers the best hope to curb the HIV-AIDS epidemic gripping sub-Saharan Africa, but remains elusive. A major challenge is the extreme viral sequence variability among strains. Systematic means to guide immunogen design for highly variable pathogens like HIV are not available. Using computational models, we have developed an approach to translate available viral sequence data into quantitative landscapes of viral fitness as a function of the amino acid sequences of its constituent proteins. Predictions emerging from our computationally defined landscapes for the proteins of HIV-1 clade B Gag were positively tested against new in vitro fitness measurements, and were consistent with previously defined in vitro measurements and clinical observations. These landscapes chart the peaks and valleys of viral fitness as protein sequences change, and inform the design of immunogens and therapies that can target regions of the virus most vulnerable to selection pressure. PMID:23521886

  12. Quantitative structure-property relationships for prediction of boiling point, vapor pressure, and melting point.

    PubMed

    Dearden, John C

    2003-08-01

    Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships. This review examines and discusses the work published in this area, concentrating particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.
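
    A minimal QSPR-style sketch is shown below: an ordinary least-squares model relating boiling point to a few molecular descriptors. The descriptors, training values and query compound are hypothetical and stand in for the much larger descriptor sets used by the programs reviewed.

        import numpy as np

        # Hypothetical training data: each row is [molecular weight,
        # topological polar surface area, number of rotatable bonds];
        # y is a hypothetical boiling point in K.
        X = np.array([[ 58.1,  0.0, 1],
                      [ 72.1,  0.0, 2],
                      [ 74.1, 20.2, 2],
                      [ 88.1, 26.3, 3],
                      [102.2, 26.3, 4]])
        y = np.array([272.7, 309.2, 390.9, 348.0, 375.0])

        # Ordinary least squares with an intercept column.
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predict_bp(descriptors):
            return coef[0] + np.dot(coef[1:], descriptors)

        print(predict_bp([86.2, 0.0, 3]))   # hypothetical query compound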

  13. Conformation of polyelectrolytes in poor solvents: Variational approach and quantitative comparison with scaling predictions

    NASA Astrophysics Data System (ADS)

    Tang, Haozhe; Liao, Qi; Zhang, Pingwen

    2014-05-01

    We present the results of variational calculations for a polyelectrolyte solution with low salt under poor solvent conditions for the polymer backbone. By employing the variational method, we quantitatively determined the diagram of states of the polyelectrolyte in poor solvents as a function of the charge density and the molecular weight. The exact structure and diagram of the polyelectrolyte were compared to the scaling predictions of the necklace model developed by Dobrynin and Rubinstein [Prog. Polym. Sci. 30, 1049-1118 (2005); Dobrynin and Rubinstein, Macromolecules 32, 915-922 (1999); Dobrynin and Rubinstein, Macromolecules 34, 1964-1972 (2001)]. We find that the scaling necklace model may be used as a rather good estimate of and analytical approximation to the exact variational model. We also point out that the chain connectivity of the polymer is crucial for the ellipsoid and necklace conformations.

  14. Quantitatively Predict the Potential of MnO2 Polymorphs as Magnesium Battery Cathodes.

    PubMed

    Ling, Chen; Zhang, Ruigang; Mizuno, Fuminori

    2016-02-01

    Despite tremendous efforts devoted to magnesium battery research, the realization of magnesium batteries is still challenged by the lack of cathode candidates with high energy density, rate capability and good recyclability. This situation can be largely attributed to the failure to achieve sustainable magnesium intercalation chemistry. In the current work we explored the magnesiation of distinct MnO2 polymorphs using first-principles calculations, focusing on providing a quantitative analysis of the feasibility of magnesium intercalation. Consistent with experimental observations, we predicted that ramsdellite-MnO2 and α-MnO2 are conversion-type cathodes, while nanosized spinel-MnO2 and MnO2 isostructural to CaFe2O4 are better candidates for Mg intercalation. Key properties that restrict Mg intercalation include not only sluggish Mg migration but also strong structural distortion that damages structural integrity, as well as undesirable conversion reactions. We demonstrate that by evaluating the reaction free energy, the structural deformation associated with the insertion of magnesium, and the diffusion barriers, a quantitative evaluation of the feasibility of magnesium intercalation can be established. Although our current work focuses on the study of MnO2 polymorphs, the same evaluation can be applied to other cathode candidates, thus paving the way to identifying better cathode candidates in the future.

  15. Quantitative podocyte parameters predict human native kidney and allograft half-lives

    PubMed Central

    Naik, Abhijit S.; Afshinnia, Farsad; Cibrik, Diane; Hodgin, Jeffrey B.; Zhang, Min; Kikuchi, Masao; Wickman, Larysa; Samaniego, Milagros; Bitzer, Markus; Wiggins, Jocelyn E.; Ojo, Akinlolu; Li, Yi; Wiggins, Roger C.

    2016-01-01

    BACKGROUND. Kidney function decreases with age. A potential mechanistic explanation for kidney and allograft half-life has evolved through the realization that linear reduction in glomerular podocyte density could drive progressive glomerulosclerosis to impact both native kidney and allograft half-lives. METHODS. Predictions from podometrics (quantitation of podocyte parameters) were tested using independent pathologic, functional, and outcome data for native kidneys and allografts derived from published reports and large registries. RESULTS. With age, native kidneys exponentially develop glomerulosclerosis, reduced renal function, and end-stage kidney disease, projecting a finite average kidney life span. The slope of allograft failure rate versus age parallels that of reduction in podocyte density versus age. Quantitative modeling projects allograft half-life at any donor age, and rate of podocyte detachment parallels the observed allograft loss rate. CONCLUSION. Native kidneys are designed to have a limited average life span of about 100–140 years. Allografts undergo an accelerated aging-like process that accounts for their unexpectedly short half-life (about 15 years), the observation that older donor age is associated with shorter allograft half-life, and the fact that long-term allograft survival has not substantially improved. Podometrics provides potential readouts for these processes, thereby offering new approaches for monitoring and intervention. FUNDING: National Institutes of Health. PMID:27280173

  16. An Electroacoustic Hearing Protector Simulator That Accurately Predicts Pressure Levels in the Ear Based on Standard Performance Metrics

    DTIC Science & Technology

    2013-08-01

    [Figure-caption excerpts from the report] Figure 20: ABQ experiment showing five volunteers located 1.0 m from the source (upper-left panel). A study (Royster et al., 1996) in which users self-fit hearing protectors (ANSI S12.6-2008 method B: user fit) with no experimenter instruction. Values provided by the experimenters and simulator fits for the intact and modified muffs; Figure 22 (upper panel) shows the simulator prediction.

  17. Knowledge-guided docking: accurate prospective prediction of bound configurations of novel ligands using Surflex-Dock

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2015-06-01

    Prediction of the bound configuration of small-molecule ligands that differ substantially from the cognate ligand of a protein co-crystal structure is much more challenging than re-docking the cognate ligand. Success rates for cross-docking in the range of 20-30 % are common. We present an approach that uses structural information known prior to a particular cutoff-date to make predictions on ligands whose bound structures were determined later. The knowledge-guided docking protocol was tested on a set of ten protein targets using a total of 949 ligands. The benchmark data set, called PINC ("PINC Is Not Cognate"), is publicly available. Protein pocket similarity was used to choose representative structures for ensemble-docking. The docking protocol made use of known ligand poses prior to the cutoff-date, both to help guide the configurational search and to adjust the rank of predicted poses. Overall, the top-scoring pose family was correct over 60 % of the time, with the top-two pose families approaching a 75 % success rate. Correct poses among all those predicted were identified nearly 90 % of the time. The largest improvements came from the use of molecular similarity to improve ligand pose rankings and the strategy for identifying representative protein structures. With the exception of a single outlier target, the knowledge-guided docking protocol produced results matching the quality of cognate-ligand re-docking, but it did so on a very challenging temporally-segregated cross-docking benchmark.

  18. Knowledge-guided docking: accurate prospective prediction of bound configurations of novel ligands using Surflex-Dock.

    PubMed

    Cleves, Ann E; Jain, Ajay N

    2015-06-01

    Prediction of the bound configuration of small-molecule ligands that differ substantially from the cognate ligand of a protein co-crystal structure is much more challenging than re-docking the cognate ligand. Success rates for cross-docking in the range of 20-30 % are common. We present an approach that uses structural information known prior to a particular cutoff-date to make predictions on ligands whose bound structures were determined later. The knowledge-guided docking protocol was tested on a set of ten protein targets using a total of 949 ligands. The benchmark data set, called PINC ("PINC Is Not Cognate"), is publicly available. Protein pocket similarity was used to choose representative structures for ensemble-docking. The docking protocol made use of known ligand poses prior to the cutoff-date, both to help guide the configurational search and to adjust the rank of predicted poses. Overall, the top-scoring pose family was correct over 60 % of the time, with the top-two pose families approaching a 75 % success rate. Correct poses among all those predicted were identified nearly 90 % of the time. The largest improvements came from the use of molecular similarity to improve ligand pose rankings and the strategy for identifying representative protein structures. With the exception of a single outlier target, the knowledge-guided docking protocol produced results matching the quality of cognate-ligand re-docking, but it did so on a very challenging temporally-segregated cross-docking benchmark.

  19. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing

    PubMed Central

    Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628
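
    A rough sketch of the filtering idea follows: reads falling in regions of chromosome Y that attract spurious mappings are discarded before the Y-derived read fraction used for sex calling is computed. The interval coordinates, cutoffs and data below are hypothetical placeholders; the actual six regions and thresholds must be calibrated on training samples as described in the study.

        def chr_y_fraction(read_positions, total_reads, blacklist):
            """Fraction of reads mapped to chrY after removing reads that fall in
            blacklisted regions prone to spurious (random) mappings.

            read_positions : iterable of chrY mapping positions
            blacklist      : list of (start, end) half-open intervals to exclude
            """
            kept = sum(1 for pos in read_positions
                       if not any(start <= pos < end for start, end in blacklist))
            return kept / total_reads

        # Hypothetical blacklist and cutoffs, for illustration only.
        BLACKLIST = [(10_000, 2_650_000), (56_670_000, 57_220_000)]
        MALE_CUTOFF, FEMALE_CUTOFF = 6e-5, 2e-5

        def call_fetal_sex(y_fraction):
            if y_fraction >= MALE_CUTOFF:
                return "male"
            if y_fraction <= FEMALE_CUTOFF:
                return "female"
            return "no call"

        frac = chr_y_fraction(read_positions=[3_000_000, 15_000_000, 20_000],
                              total_reads=10_000_000, blacklist=BLACKLIST)
        print(frac, call_fetal_sex(frac))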

  20. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    PubMed

    Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
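
    A compact sketch of the multivariate Gaussian idea is given below: the alignment is one-hot encoded, the empirical covariance is regularised so it can be inverted, and each residue pair is scored by the Frobenius norm of its coupling block (with the usual average-product correction). The toy alignment and the simple shrinkage regularisation are assumptions for illustration; the published method uses its own pseudocount scheme.

        import numpy as np

        AA = "-ACDEFGHIKLMNPQRSTVWY"   # gap plus 20 amino acids
        q = len(AA)

        def one_hot(msa):
            """msa: list of equal-length aligned sequences -> (M, L*q) binary matrix."""
            L = len(msa[0])
            X = np.zeros((len(msa), L * q))
            for m, seq in enumerate(msa):
                for i, a in enumerate(seq):
                    X[m, i * q + AA.index(a)] = 1.0
            return X, L

        def contact_scores(msa, shrinkage=0.5):
            X, L = one_hot(msa)
            C = np.cov(X, rowvar=False)
            # Shrink towards a scaled identity so the covariance is invertible.
            C = (1 - shrinkage) * C + shrinkage * np.eye(L * q) / q
            J = -np.linalg.inv(C)                        # Gaussian couplings
            F = np.zeros((L, L))
            for i in range(L):
                for j in range(L):
                    block = J[i*q:(i+1)*q, j*q:(j+1)*q]
                    F[i, j] = np.linalg.norm(block)      # Frobenius norm per pair
            # Average-product correction to suppress phylogenetic background.
            apc = np.outer(F.mean(axis=0), F.mean(axis=1)) / F.mean()
            return F - apc

        toy_msa = ["ACDE", "ACDE", "GCDE", "ACEE", "GCEE"]   # hypothetical alignment
        print(np.round(contact_scores(toy_msa), 3))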

  1. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
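
    The auto cross covariance (ACC) transform referred to above converts a variable-length L x 20 PSSM into a fixed-length vector of lagged covariances between profile columns. A minimal sketch, using a toy profile and a small maximum lag, is shown below; the subsequent SVM-RFE and SVM steps are omitted.

        import numpy as np

        def acc_features(pssm, max_lag=2):
            """Auto cross covariance transform of an L x ncol PSSM.

            Returns the covariances between column j1 at position i and column j2
            at position i+lag, for lags 1..max_lag; j1 == j2 gives the auto-covariance
            terms and j1 != j2 the cross-covariance terms. The result has a fixed
            length regardless of sequence length L."""
            pssm = np.asarray(pssm, dtype=float)
            L, ncol = pssm.shape
            mean = pssm.mean(axis=0)
            feats = []
            for lag in range(1, max_lag + 1):
                d0 = pssm[:L - lag] - mean          # rows i
                d1 = pssm[lag:] - mean              # rows i + lag
                for j1 in range(ncol):
                    for j2 in range(ncol):
                        feats.append(np.dot(d0[:, j1], d1[:, j2]) / (L - lag))
            return np.array(feats)

        # Hypothetical PSSM for a 6-residue sequence over a 4-letter toy alphabet.
        rng = np.random.default_rng(2)
        toy_pssm = rng.integers(-5, 8, size=(6, 4))
        print(acc_features(toy_pssm, max_lag=2).shape)   # (2 * 4 * 4,) = (32,)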

  2. How many clinic BP readings are needed to predict cardiovascular events as accurately as ambulatory BP monitoring?

    PubMed

    Eguchi, K; Hoshide, S; Shimada, K; Kario, K

    2014-12-01

    We tested the hypothesis that multiple clinic blood pressure (BP) readings over an extended baseline period would be as predictive as ambulatory BP (ABP) for cardiovascular disease (CVD). Clinic and ABP monitoring were performed in 457 hypertensive patients at baseline. Clinic BP was measured monthly and the means of the first 3, 5 and 10 clinic BP readings were taken as the multiple clinic BP readings. The subjects were followed up, and stroke, HARD CVD, and ALL CVD events were determined as outcomes. In multivariate Cox regression analyses, ambulatory systolic BP (SBP) best predicted three outcomes independently of baseline and multiple clinic SBP readings. The mean of 10 clinic SBP readings predicted stroke (hazards ratio (HR)=1.39, 95% confidence interval (CI)=1.02-1.90, P=0.04) and ALL CVD (HR=1.41, 95% CI=1.13-1.74, P=0.002) independently of baseline clinic SBP. Clinic SBPs by three and five readings were not associated with any CVD events, except that clinic SBP by three readings was associated with ALL CVD (P=0.015). Besides ABP values, the mean of the first 10 clinic SBP values was a significant predictor of stroke and ALL CVD events. It is important to take more than several clinic BP readings early after the baseline period for the risk stratification of future CVD events.

  3. Prediction of Motor Recovery Using Quantitative Parameters of Motor Evoked Potential in Patients With Stroke

    PubMed Central

    2016-01-01

    Objective To investigate the clinical significance of quantitative parameters in transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEP) which can be adopted to predict functional recovery of the upper limb in stroke patients in the early subacute phase. Methods One hundred thirteen patients (61 men, 52 women; mean age 57.8±12.2 years) who suffered a first-ever stroke were included in this study. All participants underwent a TMS-induced MEP session to assess the corticospinal excitability of both hand motor cortices within 3 weeks after stroke onset. After the resting motor threshold (rMT) was assessed, five sweeps of MEP were performed, and the mean amplitude of the MEP was measured. Latency of MEP, volume of the MEP output curve, recruitment ratios, and intracortical inhibition and facilitation were also measured. Motor function was assessed using the Fugl-Meyer Assessment scale (FMA) within 3 weeks and at 3 months after stroke onset. Correlation analysis was performed between TMS-induced MEP derived measures and FMA scores. Results In the MEP response group, rMT and rMT ratio measures within 3 weeks after stroke onset showed a significant negative correlation with the total and upper limb FMA scores at 3 months after stroke (p<0.001). Multiple regression analysis revealed that FMA score and rMT ratio, but not rMT, within 3 weeks were independent prognostic factors for FMA scores at 3 months after stroke. Conclusion These results indicated that the quantitative parameters of TMS-induced MEP, especially the rMT ratio in the early subacute phase, could be used to predict motor function in patients with stroke. PMID:27847710

  4. Dual X-ray absorptiometry accurately predicts carcass composition from live sheep and chemical composition of live and dead sheep.

    PubMed

    Pearce, K L; Ferguson, M; Gardner, G; Smith, N; Greef, J; Pethick, D W

    2009-01-01

    Fifty merino wethers (liveweight range from 44 to 81kg, average of 58.6kg) were lot fed for 42d and scanned by dual X-ray absorptiometry (DXA) as both live animals and whole carcasses (carcass weight range from 15 to 32kg, average of 22.9kg), producing measures of total tissue, lean, fat and bone content. The carcasses were subsequently boned out into saleable cuts and the weights and yield of boned-out muscle, fat and bone recorded. Chemical lean (protein+water) was highly correlated with DXA carcass lean (r(2)=0.90, RSD=0.674kg) and moderately with DXA live lean (r(2)=0.72, RSD=1.05kg). Chemical fat was moderately correlated with DXA carcass fat (r(2)=0.86, RSD=0.42kg) and DXA live fat (r(2)=0.70, RSD=0.71kg). DXA carcass and live animal bone were not well correlated with chemical ash (both r(2)=0.38, RSD=0.3). DXA carcass lean was moderately well predicted from DXA live lean with the inclusion of bodyweight in the regression (r(2)=0.82, RSD=0.87kg). DXA carcass fat was well predicted from DXA live fat (r(2)=0.86, RSD=0.54kg). DXA carcass lean and DXA carcass fat, with the inclusion of carcass weight in the regression, significantly predicted boned-out muscle (r(2)=0.97, RSD=0.32kg) and fat weight, respectively (r(2)=0.92, RSD=0.34kg). The use of DXA live lean and DXA live fat with the inclusion of bodyweight to predict boned-out muscle (r(2)=0.83, RSD=0.75kg) and fat (r(2)=0.86, RSD=0.46kg) weight, respectively, was moderate. Predictions of boned-out muscle and fat yield from DXA carcass and live lean and fat were not as well correlated as the predictions of weight. The future of DXA lies in the determination of body composition in live animals and carcasses in research experiments, but there is also potential for DXA to be used as an online carcass grading system.

  5. Network Biomarkers Constructed from Gene Expression and Protein-Protein Interaction Data for Accurate Prediction of Leukemia

    PubMed Central

    Yuan, Xuye; Chen, Jiajia; Lin, Yuxin; Li, Yin; Xu, Lihua; Chen, Luonan; Hua, Haiying; Shen, Bairong

    2017-01-01

    Leukemia is a leading cause of cancer deaths in developed countries. Great efforts have been undertaken in search of diagnostic biomarkers of leukemia. However, leukemia is highly complex and heterogeneous, involving interaction among multiple molecular components. Individual molecules are not necessarily sensitive diagnostic indicators. Network biomarkers are considered to outperform individual molecules in disease characterization. We applied an integrative approach that identifies active network modules as putative biomarkers for leukemia diagnosis. We first reconstructed the leukemia-specific PPI network using protein-protein interactions from the Protein Interaction Network Analysis (PINA) and protein annotations from GeneGo. The network was further integrated with gene expression profiles to identify active modules with leukemia relevance. Finally, the candidate network-based biomarker was evaluated for its diagnostic performance. A network of 97 genes and 400 interactions was identified for accurate diagnosis of leukemia. Functional enrichment analysis revealed that the network biomarkers were enriched in pathways in cancer. The network biomarkers could discriminate leukemia samples from the normal controls more effectively than the known biomarkers. The network biomarkers provide a useful tool for diagnosing leukemia and also aid in further understanding its molecular basis. PMID:28243332

  6. Periscope: quantitative prediction of soluble protein expression in the periplasm of Escherichia coli

    NASA Astrophysics Data System (ADS)

    Chang, Catherine Ching Han; Li, Chen; Webb, Geoffrey I.; Tey, Bengti; Song, Jiangning; Ramanan, Ramakrishnan Nagasundara

    2016-03-01

    Periplasmic expression of soluble proteins in Escherichia coli not only offers a much-simplified downstream purification process, but also enhances the probability of obtaining correctly folded and biologically active proteins. Different combinations of signal peptides and target proteins lead to different soluble protein expression levels, ranging from negligible to several grams per litre. Accurate algorithms for rational selection of promising candidates can serve as a powerful tool to complement current trial-and-error approaches. Accordingly, proteomics studies can be conducted with greater efficiency and cost-effectiveness. Here, we developed a predictor with a two-stage architecture to predict the real-valued expression level of the target protein in the periplasm. The output of the first-stage support vector machine (SVM) classifier determines which second-stage support vector regression (SVR) model is used. When tested on an independent test dataset, the predictor achieved an overall prediction accuracy of 78% and a Pearson’s correlation coefficient (PCC) of 0.77. We further illustrate the relative importance of various features with respect to different models. The results indicate that the occurrence of the dipeptide of glutamine and aspartic acid is the most important feature for the classification model. Finally, we provide access to the implemented predictor through the Periscope webserver, freely accessible at http://lightning.med.monash.edu/periscope/.
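
    A bare-bones sketch of the two-stage architecture described above is given below using generic scikit-learn components: a first-stage classifier routes each sample to a class-specific second-stage regressor that returns the real-valued expression level. The features, class definition and kernels are placeholders, not Periscope's actual descriptors or tuned models.

        import numpy as np
        from sklearn.svm import SVC, SVR

        class TwoStagePredictor:
            """Stage 1: an SVC assigns each sample to an expression class.
            Stage 2: a class-specific SVR returns the real-valued expression level."""

            def __init__(self, n_classes=2):
                self.clf = SVC()
                self.regs = {c: SVR() for c in range(n_classes)}

            def fit(self, X, y, class_labels):
                self.clf.fit(X, class_labels)
                for c, reg in self.regs.items():
                    mask = class_labels == c
                    reg.fit(X[mask], y[mask])
                return self

            def predict(self, X):
                classes = self.clf.predict(X)
                return np.array([self.regs[c].predict(x[None, :])[0]
                                 for c, x in zip(classes, X)])

        # Hypothetical sequence-derived features and expression targets.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 5))
        y = np.where(X[:, 0] > 0, 2.0 + X[:, 1], 0.1 + 0.05 * X[:, 1])
        labels = (y > 1.0).astype(int)          # "low" vs "high" expression classes
        model = TwoStagePredictor().fit(X, y, labels)
        print(model.predict(X[:3]))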

  7. Periscope: quantitative prediction of soluble protein expression in the periplasm of Escherichia coli

    PubMed Central

    Chang, Catherine Ching Han; Li, Chen; Webb, Geoffrey I.; Tey, BengTi; Song, Jiangning; Ramanan, Ramakrishnan Nagasundara

    2016-01-01

    Periplasmic expression of soluble proteins in Escherichia coli not only offers a much-simplified downstream purification process, but also enhances the probability of obtaining correctly folded and biologically active proteins. Different combinations of signal peptides and target proteins lead to different soluble protein expression levels, ranging from negligible to several grams per litre. Accurate algorithms for rational selection of promising candidates can serve as a powerful tool to complement current trial-and-error approaches. Accordingly, proteomics studies can be conducted with greater efficiency and cost-effectiveness. Here, we developed a predictor with a two-stage architecture to predict the real-valued expression level of the target protein in the periplasm. The output of the first-stage support vector machine (SVM) classifier determines which second-stage support vector regression (SVR) model is used. When tested on an independent test dataset, the predictor achieved an overall prediction accuracy of 78% and a Pearson’s correlation coefficient (PCC) of 0.77. We further illustrate the relative importance of various features with respect to different models. The results indicate that the occurrence of the dipeptide of glutamine and aspartic acid is the most important feature for the classification model. Finally, we provide access to the implemented predictor through the Periscope webserver, freely accessible at http://lightning.med.monash.edu/periscope/. PMID:26931649

  8. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  9. Application of quantitative structure activity relationship (QSAR) models to predict ozone toxicity in the lung.

    PubMed

    Kafoury, Ramzi M; Huang, Ming-Ju

    2005-08-01

    The sequence of events leading to ozone-induced airway inflammation is not well known. To elucidate the molecular and cellular events underlying ozone toxicity in the lung, we hypothesized that lipid ozonation products (LOPs) generated by the reaction of ozone with unsaturated fatty acids in the epithelial lining fluid and cell membranes play a key role in mediating ozone-induced airway inflammation. To test our hypothesis, we ozonized 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) and generated LOPs. Confluent human bronchial epithelial cells were exposed to the derivatives of ozonized POPC (9-oxononanoyl, 9-hydroxy-9-hydroperoxynonanoyl, and 8-(5-octyl-1,2,4-trioxolan-3-yl)octanoyl) at a concentration of 10 μM, and the activity of phospholipases A2 (PLA2), C (PLC), and D (PLD) was measured (at 1, 0.5, and 1 h, respectively). Quantitative structure-activity relationship (QSAR) models were utilized to predict the biological activity of LOPs in airway epithelial cells. The QSAR results showed a strong correlation between experimental and computed activity (r = 0.97, 0.98, 0.99 for PLA2, PLC, and PLD, respectively). The results indicate that QSAR models can be utilized to predict the biological activity of the various ozone-derived LOP species in the lung.

  10. Quantitative prediction of perceptual decisions during near-threshold fear detection

    NASA Astrophysics Data System (ADS)

    Pessoa, Luiz; Padmala, Srikanth

    2005-04-01

    A fundamental goal of cognitive neuroscience is to explain how mental decisions originate from basic neural mechanisms. The goal of the present study was to investigate the neural correlates of perceptual decisions in the context of emotional perception. To probe this question, we investigated how fluctuations in functional MRI (fMRI) signals were correlated with behavioral choice during a near-threshold fear detection task. fMRI signals predicted behavioral choice independently of stimulus properties and task accuracy in a network of brain regions linked to emotional processing: posterior cingulate cortex, medial prefrontal cortex, right inferior frontal gyrus, and left insula. We quantified the link between fMRI signals and behavioral choice in a whole-brain analysis by determining choice probabilities by means of signal-detection theory methods. Our results demonstrate that voxel-wise fMRI signals can reliably predict behavioral choice in a quantitative fashion (choice probabilities ranged from 0.63 to 0.78) at levels comparable to neuronal data. We suggest that the conscious decision that a fearful face has been seen is represented across a network of interconnected brain regions that prepare the organism to appropriately handle emotionally challenging stimuli and that regulate the associated emotional response.
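
    Choice probability as used above is the area under the ROC curve separating single-trial signals on the two behavioral choices, which can be computed from ranks (the Mann-Whitney U statistic). A small sketch with hypothetical voxel signals follows.

        import numpy as np

        def choice_probability(signal_yes, signal_no):
            """Probability that a randomly drawn signal from 'fear reported' trials
            exceeds one from 'no fear reported' trials (rank-based ROC area)."""
            yes = np.asarray(signal_yes, dtype=float)
            no = np.asarray(signal_no, dtype=float)
            combined = np.concatenate([yes, no])
            order = combined.argsort()
            ranks = np.empty_like(order, dtype=float)
            ranks[order] = np.arange(1, len(combined) + 1)
            # Average ranks over ties so the statistic matches Mann-Whitney U.
            for v in np.unique(combined):
                idx = combined == v
                ranks[idx] = ranks[idx].mean()
            u = ranks[:len(yes)].sum() - len(yes) * (len(yes) + 1) / 2
            return u / (len(yes) * len(no))

        # Hypothetical single-voxel signals (% signal change) split by behavioral choice.
        rng = np.random.default_rng(4)
        print(choice_probability(rng.normal(0.6, 0.3, 40), rng.normal(0.4, 0.3, 45)))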

  11. IrisPlex: a sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information.

    PubMed

    Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred

    2011-06-01

    A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as those found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help concentrate police investigations towards finding unknown individuals, at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs) that previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high levels of genotyping sensitivity with complete profiles generated from as little as 31pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to correctly classify an individual's eye colour, via probability estimation solely based on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals from various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses from HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps forward in the creation of a fully individualised EVC prediction system for future use in forensic DNA intelligence.
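
    The prediction model described above is a multinomial logistic regression on the minor-allele counts at the six SNPs, with brown as the reference category. The sketch below shows only the functional form; the alpha and beta coefficients are invented placeholders and are not the published IrisPlex parameters.

        import math

        # Hypothetical coefficients for illustration only (NOT the published
        # IrisPlex model parameters). Each genotype entry is the minor-allele
        # count (0, 1 or 2) at one of the six eye colour-informative SNPs.
        ALPHA = {"blue": 3.94, "intermediate": 0.65}          # baseline class: brown
        BETA = {
            "blue":         [-1.5, 0.3, -0.2, 0.4, -0.3, 0.2],
            "intermediate": [-0.7, 0.2, -0.1, 0.2, -0.2, 0.1],
        }

        def eye_colour_probabilities(genotype):
            """genotype: list of six minor-allele counts -> class probabilities."""
            score = {c: math.exp(ALPHA[c] + sum(b * x for b, x in zip(BETA[c], genotype)))
                     for c in ALPHA}
            denom = 1.0 + sum(score.values())                 # brown is the reference
            probs = {c: s / denom for c, s in score.items()}
            probs["brown"] = 1.0 / denom
            return probs

        print(eye_colour_probabilities([0, 0, 1, 0, 2, 0]))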

  12. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    PubMed Central

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665

  13. An accurate method to predict the stress concentration in composite laminates with a circular hole under tensile loading

    NASA Astrophysics Data System (ADS)

    Russo, A.; Zuccarello, B.

    2007-07-01

    The paper presents a theoretical-numerical hybrid method for determining the stress distribution in composite laminates containing a circular hole and subjected to uniaxial tensile loading. The method is based upon an appropriate corrective function allowing a simple and rapid evaluation of stress distributions in a generic plate of finite width with a hole, based on the theoretical stress distribution in an infinite plate with the same hole geometry and material. In order to verify the accuracy of the proposed method, various numerical and experimental tests have been performed by considering different laminate lay-ups; in particular, the experimental results have shown that a combined use of the proposed method and the well-known point-stress criterion leads to reliable strength predictions for GFRP or CFRP laminates with a circular hole.

  14. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    SciTech Connect

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
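
    A simple sketch of the kind of sequence-derived features mentioned above (G+C content of the gene, overall amino acid composition, and composition of the N-terminal 30 residues carrying the putative secretion signal) is given below; the sequences are hypothetical and the downstream machine learning classifier is omitted.

        from collections import Counter

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def aa_composition(protein):
            counts = Counter(protein)
            return [counts.get(a, 0) / len(protein) for a in AMINO_ACIDS]

        def effector_features(protein_seq, gene_seq, n_terminal=30):
            """Feature vector: G+C content of the coding gene, whole-protein amino
            acid composition, and composition of the N-terminal `n_terminal`
            residues, which is thought to carry the type III secretion signal."""
            gc = (gene_seq.count("G") + gene_seq.count("C")) / len(gene_seq)
            return ([gc]
                    + aa_composition(protein_seq)
                    + aa_composition(protein_seq[:n_terminal]))

        # Hypothetical sequences for illustration.
        protein = "MSKITLSPQNFRIQKQETTLLKEKSTEKNSLAKSILAVKNHFIELRSKLSE"
        gene = "ATGTCTAAAATCACGCTGTCGCCGCAAAACTTTCGTATTCAAAAACAGGAA"
        print(len(effector_features(protein, gene)))   # 1 + 20 + 20 = 41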

  15. A single bioavailability model can accurately predict Ni toxicity to green microalgae in soft and hard surface waters.

    PubMed

    Deleebeeck, Nele M E; De Laender, Frederik; Chepurnov, Victor A; Vyverman, Wim; Janssen, Colin R; De Schamphelaere, Karel A C

    2009-04-01

    The major research questions addressed in this study were (i) whether green microalgae living in soft water (operationally defined water hardness<10mg CaCO(3)/L) are intrinsically more sensitive to Ni than green microalgae living in hard water (operationally defined water hardness >25mg CaCO(3)/L), and (ii) whether a single bioavailability model can be used to predict the effect of water hardness on the toxicity of Ni to green microalgae in both soft and hard water. Algal growth inhibition tests were conducted with clones of 10 different species collected in soft and hard water lakes in Sweden. Soft water algae were tested in a 'soft' and a 'moderately hard' test medium (nominal water hardness=6.25 and 16.3mg CaCO(3)/L, respectively), whereas hard water algae were tested in a 'moderately hard' and a 'hard' test medium (nominal water hardness=16.3 and 43.4 mg CaCO(3)/L, respectively). The results from the growth inhibition tests in the 'moderately hard' test medium revealed no significant sensitivity differences between the soft and the hard water algae used in this study. Increasing water hardness significantly reduced Ni toxicity to both soft and hard water algae. Because it has previously been demonstrated that Ca does not significantly protect the unicellular green alga Pseudokirchneriella subcapitata against Ni toxicity, it was assumed that the protective effect of water hardness can be ascribed to Mg alone. The logK(MgBL) (=5.5) was calculated to be identical for the soft and the hard water algae used in this study. A single bioavailability model can therefore be used to predict Ni toxicity to green microalgae in soft and hard surface waters as a function of water hardness.
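
    In a biotic ligand model of this kind, competition by Mg2+ at the biotic ligand raises the Ni2+ effect concentration linearly with Mg2+ activity. The sketch below uses the reported logK(MgBL) of 5.5; the intrinsic (Mg-free) EC50 and the water-specific Mg2+ activities are hypothetical placeholders.

        LOG_K_MG_BL = 5.5          # Mg binding constant at the biotic ligand (from the study)
        EC50_INTRINSIC = 1.0e-6    # hypothetical Ni2+ EC50 (mol/L) at zero Mg2+ activity

        def ec50_ni(mg_activity):
            """Predicted Ni2+ EC50 (free-ion activity) at a given Mg2+ activity,
            assuming single-site competition: EC50 = EC50_intrinsic * (1 + K_Mg * {Mg2+})."""
            return EC50_INTRINSIC * (1.0 + 10 ** LOG_K_MG_BL * mg_activity)

        # Soft vs. hard water (hypothetical free Mg2+ activities in mol/L).
        for mg in (2e-5, 4e-4):
            print(f"{{Mg2+}} = {mg:.0e} M -> Ni EC50 = {ec50_ni(mg):.2e} M")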

  16. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer

    PubMed Central

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, Likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to other software, but the analysis was faster than Bayesian counterparts in the magnitude of hours to days. In addition, ability to deal with missing data, combined with greater flexibility and speed than other REML-based software was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R. PMID:27271781

  17. COMPARISON OF QUANTITATIVE COMPUTED TOMOGRAPHY-BASED MEASURES IN PREDICTING VERTEBRAL COMPRESSIVE STRENGTH

    PubMed Central

    Buckley, Jenni M.; Loo, Kenneth; Motherway, Julie

    2007-01-01

    Patient-specific measures derived from quantitative computed tomography (QCT) scans are currently being developed as a clinical tool for vertebral strength prediction. QCT-based measurement techniques vary greatly in structural complexity and generally fall into one of three categories: 1) bone mineral density (BMD), 2) “mechanics of solids” (MOS) models, such as minimum axial rigidity (the product of axial stiffness and vertebral height), or 3) three-dimensional finite element (FE) models. There is no clear consensus as to the relative performance of these measures due to differences in experimental protocols, sample sizes and demographics, and outcome metrics. The goal of this study was to directly compare the performance of QCT-based assessment techniques of varying degrees of structural sophistication in predicting experimental vertebral compressive strength. Eighty-one human thoracic vertebrae (T6 – T10) from 44 donor cadavers (F = 32, M = 12; 85 ± 8 y.o., max = 97 y.o., min = 54 y.o.) were QCT scanned and destructively tested in uniaxial compression. The QCT scans were processed to generate FE models and various BMD and MOS measures, including trabecular bone mineral density (tBMD), integral bone mineral density (iBMD), and axial rigidity. Bone mineral density was weakly to moderately predictive of compressive strength (R2 = 0.16 and 0.62 for tBMD and iBMD, respectively). Ex vivo vertebral strength was strongly correlated with both axial rigidity (R2 = 0.81) and FE strength measurements (R2 = 0.80), and the predictive capabilities of these two metrics were statistically equivalent (p > 0.05 for differences between FE and axial rigidity). The results of this study indicate that non-invasive predictive measures of vertebral strength should include some level of structural sophistication, specifically, gross geometric and material property distribution information. However, for uniaxial compression of isolated vertebrae, which is the current biomechanical
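
    As a rough illustration of a "mechanics of solids" measure, axial rigidity can be obtained by integrating the density-derived elastic modulus over a QCT cross-section and taking the minimum over sections. The power-law density-modulus calibration, pixel size and BMD maps below are hypothetical placeholders; real studies use their own calibrated relationships.

        import numpy as np

        def axial_rigidity(bmd_slice, pixel_area_mm2, c=8000.0, p=1.5):
            """Axial rigidity EA (in N) of one QCT cross-section.

            bmd_slice : 2D array of calibrated bone density (g/cm^3), ~0 outside bone.
            E(rho) = c * rho**p (MPa) is a hypothetical power-law calibration;
            real studies substitute their own density-modulus relationship."""
            rho = np.clip(np.asarray(bmd_slice, dtype=float), 0.0, None)
            modulus = c * rho ** p                            # MPa
            return float(np.sum(modulus * pixel_area_mm2))    # MPa * mm^2 = N

        def minimum_axial_rigidity(slice_stack, pixel_area_mm2):
            """Minimum axial rigidity over all cross-sections of the vertebral body."""
            return min(axial_rigidity(s, pixel_area_mm2) for s in slice_stack)

        # Hypothetical stack of three 4x4 density maps (g/cm^3).
        rng = np.random.default_rng(5)
        stack = rng.uniform(0.05, 0.40, size=(3, 4, 4))
        print(minimum_axial_rigidity(stack, pixel_area_mm2=0.25))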

  18. Generalized spin-ratio scaled MP2 method for accurate prediction of intermolecular interactions for neutral and ionic species

    NASA Astrophysics Data System (ADS)

    Tan, Samuel; Barrera Acevedo, Santiago; Izgorodina, Ekaterina I.

    2017-02-01

    The accurate calculation of intermolecular interactions is important to our understanding of properties in large molecular systems. The high computational cost of the current "gold standard" method, coupled cluster with singles and doubles and perturbative triples (CCSD(T)), limits its application to small- to medium-sized systems. Second-order Møller-Plesset perturbation (MP2) theory is a cheaper alternative for larger systems, although at the expense of decreased accuracy, especially when treating van der Waals complexes. In this study, a new modification of the spin-component scaled MP2 method was proposed for a wide range of intermolecular complexes including two well-known datasets, S22 and S66, and a large dataset of ionic liquids consisting of 174 single ion pairs, IL174. It was found that the spin ratio, ε_Δs = E_OS^INT / E_SS^INT, calculated as the ratio of the opposite-spin component to the same-spin component of the interaction correlation energy, fell in the range of 0.1 to 1.6, in contrast to the range of 3-4 usually observed for the ratio of absolute correlation energies, ε_s = E_OS / E_SS, in individual molecules. Scaled coefficients were found to become negative when the spin ratio fell in close proximity to 1.0, and therefore, the studied intermolecular complexes were divided into two groups: (1) complexes with ε_Δs < 1 and (2) complexes with ε_Δs ≥ 1. A separate set of coefficients was obtained for each group. Exclusion of counterpoise correction during scaling was found to produce superior results due to decreased error. Among a series of Dunning's basis sets, cc-pVTZ and cc-pVQZ were found to be the best performing ones, with a mean absolute error of 1.4 kJ mol⁻¹ and maximum errors below 6.2 kJ mol⁻¹. The new modification, spin-ratio scaled second-order Møller-Plesset perturbation, treats both dispersion-driven and hydrogen-bonded complexes equally well, thus validating its robustness with respect to the interaction type ranging from ionic
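
    The scaling scheme can be summarised as follows: the opposite-spin and same-spin components of the MP2 interaction correlation energy are scaled with one of two coefficient pairs, selected according to whether the spin ratio ε_Δs is below or at/above 1. The coefficients and energy components in the sketch below are hypothetical placeholders, not the fitted values of the published method.

        # Hypothetical scaling coefficients (c_OS, c_SS); the paper fits separate
        # values for complexes with spin ratio < 1 and >= 1.
        COEFFS = {"low": (1.6, -0.4), "high": (0.9, 0.3)}

        def srs_mp2_interaction(e_hf_int, e_os_int, e_ss_int):
            """Spin-ratio scaled MP2 interaction energy.

            e_hf_int : Hartree-Fock interaction energy
            e_os_int : opposite-spin MP2 interaction correlation energy
            e_ss_int : same-spin MP2 interaction correlation energy"""
            spin_ratio = e_os_int / e_ss_int
            c_os, c_ss = COEFFS["low"] if spin_ratio < 1.0 else COEFFS["high"]
            return e_hf_int + c_os * e_os_int + c_ss * e_ss_int

        # Hypothetical components (kJ/mol) for a hydrogen-bonded dimer.
        print(srs_mp2_interaction(e_hf_int=-20.0, e_os_int=-6.0, e_ss_int=-5.0))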

  19. Towards a chromatographic similarity index to establish localized quantitative structure-retention models for retention prediction: Use of retention factor ratio.

    PubMed

    Tyteca, Eva; Talebi, Mohammad; Amos, Ruth; Park, Soo Hyun; Taraji, Maryam; Wen, Yabin; Szucs, Roman; Pohl, Christopher A; Dolan, John W; Haddad, Paul R

    2017-02-24

    Quantitative Structure-Retention Relationships (QSRR) have the potential to speed up the screening phase of chromatographic method development as the initial exploratory experiments are replaced by prediction of analyte retention based solely on the structure of the molecule. The present study offers further proof-of-concept of localized QSRR modelling, in which the retention of any given compound is predicted using only the most chromatographically similar compounds in the available dataset. To this end, each compound in the dataset was sequentially removed from the database and individually utilized as a test analyte. In this study, we propose the retention factor k as the most relevant chromatographic similarity measure and compare it with the Tanimoto index, the most popular similarity measure based on chemical structure. Prediction error was reduced by up to 8-fold when QSRR was based only on chromatographically similar compounds rather than using the entire dataset. The study therefore shows that the design of a practically useful structural similarity index should select the same compounds in the dataset as does the k-similarity filter in order to establish accurate predictive localized QSRR models. While low average prediction errors (Mean Absolute Error (MAE)<0.5min) and slopes of the regression lines through the origin close to 1.00 were obtained using k-similarity searching, the use of the structural Tanimoto similarity index, considered as the gold standard in Quantitative Structure-Activity Relationships (QSAR) studies, generally resulted in much higher prediction errors (MAE>1min) and significant deviations from the reference slope of 1.0. The Tanimoto similarity index therefore appears to have limited general utility in QSRR studies. Future studies therefore aim at designing a more appropriate chromatographic similarity index that can then be applied for unknown compounds (that is, compounds which have not been tested previously on the
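
    A minimal sketch of the localized QSRR idea follows: given a scouting retention factor for the target compound, only the database compounds with the most similar retention factors (retention factor ratio closest to 1) are kept, and the retention model is fitted on that local subset. The descriptors, subset size and linear model are illustrative assumptions.

        import numpy as np

        def local_qsrr_predict(k_target_scout, db_k, db_descriptors, db_retention,
                               query_descriptors, n_similar=10):
            """Localized QSRR: select the n_similar compounds whose retention factor
            ratio to the target (from a scouting run) is closest to 1, then fit an
            ordinary least-squares model on that subset only."""
            ratio_dev = np.abs(np.log(np.asarray(db_k) / k_target_scout))
            idx = np.argsort(ratio_dev)[:n_similar]
            X = np.column_stack([np.ones(n_similar), np.asarray(db_descriptors)[idx]])
            coef, *_ = np.linalg.lstsq(X, np.asarray(db_retention)[idx], rcond=None)
            return coef[0] + np.dot(coef[1:], query_descriptors)

        # Hypothetical database: retention factors, two descriptors, retention times.
        rng = np.random.default_rng(6)
        db_k = rng.uniform(0.5, 10.0, 50)
        db_desc = rng.normal(size=(50, 2))
        db_rt = 2.0 + 1.5 * db_desc[:, 0] - 0.8 * db_desc[:, 1] + 0.5 * np.log(db_k)
        print(local_qsrr_predict(k_target_scout=3.2, db_k=db_k, db_descriptors=db_desc,
                                 db_retention=db_rt, query_descriptors=[0.1, -0.2]))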

  20. Predictive power of quantitative and qualitative fecal immunochemical tests for hemoglobin in population screening for colorectal neoplasm.

    PubMed

    Huang, Yanqin; Li, Qilong; Ge, Weiting; Cai, Shanrong; Zhang, Suzhan; Zheng, Shu

    2014-01-01

    The aim of this study was to evaluate the performance of qualitative and quantitative fecal immunochemical tests (FITs) in population screening for colorectal neoplasm. A total of 9000 participants aged between 40 and 74 years were enrolled in this study. Each participant received two stool sampling tubes and was asked to simultaneously submit two stool samples from the same bowel movement. The stool samples of each participant were tested using an immunogold labeling FIT dipstick (qualitative FIT) and an automated fecal blood analyzer (quantitative FIT). Colonoscopy was performed for those who test positive in either FIT. The positive predictive values and population detection rates of the FITs for predicting colorectal neoplasm were compared. A total of 6494 (72.16%) participants simultaneously submitted two stool samples. The diagnostic consistency for a positive result between quantitative and qualitative FITs was poor (κ=0.278, 95% confidence interval=0.223-0.333). The positive predictive values of the quantitative FIT were significantly higher than those of the qualitative FIT for predicting large (≥1 cm) adenomas (23 cases, 14.29% and 16 cases, 6.72%, P=0.013) and colorectal cancer (10 cases, 6.21% and 5 cases, 2.10%, P=0.034); however, the population detection rate for advanced neoplasm of the quantitative FIT was not significantly different from that of the qualitative FIT. Quantitative FIT is superior to qualitative FIT in predicting advanced colorectal neoplasm during colorectal cancer screening. Further studies are needed to elucidate the causes of the predictive superiority.

  1. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 × 10²² A m² rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ² with 2n+1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
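
    Since the Lowes-Mauersberger spectrum continues downward as R_n(r) = R_n(a) (a/r)^(2n+4), McLeod's Rule implies that ln[(2n+1) R_n(a)] is linear in degree n with slope 2 ln(c/a), so the core radius follows from a straight-line fit over n = 3-12. The sketch below demonstrates this with a synthetic spectrum constructed to obey the rule exactly; it does not use the actual Magsat model values.

        import numpy as np

        A_EARTH = 6371.2   # reference (surface) radius in km

        def core_radius_from_spectrum(n, R_n, a=A_EARTH):
            """Estimate the core radius from the surface power spectrum R_n(a):
            if (2n+1) R_nc(c) is flat at the core surface and
            R_n(a) = R_nc(c) * (c/a)**(2n+4), then ln[(2n+1) R_n(a)] is linear
            in n with slope 2 ln(c/a)."""
            n = np.asarray(n, dtype=float)
            y = np.log((2 * n + 1) * np.asarray(R_n, dtype=float))
            slope, _ = np.polyfit(n, y, 1)
            return a * np.exp(slope / 2.0)

        # Synthetic spectrum obeying the rule exactly for c = 3480 km.
        n = np.arange(3, 13)
        c_true = 3480.0
        R_n = 1.0e9 / (2 * n + 1) * (c_true / A_EARTH) ** (2 * n + 4)
        print(core_radius_from_spectrum(n, R_n))   # ~3480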

  2. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
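
    The temperature coefficient mentioned above follows the standard definition Q10 = (R2/R1)^(10/(T2-T1)); a one-line sketch with hypothetical metabolic rates is given below.

        def q10(rate1, temp1, rate2, temp2):
            """Temperature coefficient Q10 = (R2/R1) ** (10 / (T2 - T1))."""
            return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

        # Hypothetical routine metabolic rates (mg O2 / h) at two incubation temperatures.
        print(q10(rate1=0.8, temp1=10.0, rate2=1.9, temp2=20.0))   # ~2.4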

  3. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    ERIC Educational Resources Information Center

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  4. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages.

    PubMed

    Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J

    2015-08-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from a microdose (53 μg) of [14C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs.

  5. Women's age and embryo developmental speed accurately predict clinical pregnancy after single vitrified-warmed blastocyst transfer.

    PubMed

    Kato, Keiichi; Ueno, Satoshi; Yabuuchi, Akiko; Uchiyama, Kazuo; Okuno, Takashi; Kobayashi, Tamotsu; Segawa, Tomoya; Teramoto, Shokichi

    2014-10-01

    The aim of this study was to establish a simple, objective blastocyst grading system using women's age and embryo developmental speed to predict clinical pregnancy after single vitrified-warmed blastocyst transfer. A 6-year retrospective cohort study was conducted in a private infertility centre. A total of 7341 single vitrified-warmed blastocyst transfer cycles were included, divided into those carried out between 2006 and 2011 (6046 cycles) and 2012 (1295 cycles). Clinical pregnancy rate, ongoing pregnancy rate and delivery rates were stratified by women's age (<35, 35-37, 38-39, 40-41, 42-45 years) and time to blastocyst expansion (<120, 120-129, 130-139, 140-149, >149 h) as embryo developmental speed. In all the age groups, clinical pregnancy rate, ongoing pregnancy rate and delivery rates decreased as the embryo developmental speed decreased (P < 0.0001). A simple five-grade score based on women's age and embryo developmental speed was determined by actual clinical pregnancy rates observed in the 2006-2011 cohort. Subsequently, the novel grading score was validated in the 2012 cohort (1295 cycles), finding an excellent association. In conclusion, we established a novel blastocyst grading system using women's age and embryo developmental speed as objective parameters.

  6. Transcription factor regulation can be accurately predicted from the presence of target gene signatures in microarray gene expression data

    PubMed Central

    Essaghir, Ahmed; Toffalini, Federica; Knoops, Laurent; Kallin, Anders; van Helden, Jacques; Demoulin, Jean-Baptiste

    2010-01-01

    Deciphering transcription factor networks from microarray data remains difficult. This study presents a simple method to infer the regulation of transcription factors from microarray data based on well-characterized target genes. We generated a catalog containing transcription factors associated with 2720 target genes and 6401 experimentally validated regulations. When it was available, a distinction between transcriptional activation and inhibition was included for each regulation. Next, we built a tool (www.tfacts.org) that compares submitted gene lists with target genes in the catalog to detect regulated transcription factors. TFactS was validated with published lists of regulated genes in various models and compared to tools based on in silico promoter analysis. We next analyzed the NCI60 cancer microarray data set and showed the regulation of SOX10, MITF and JUN in melanomas. We then performed microarray experiments comparing gene expression response of human fibroblasts stimulated by different growth factors. TFactS predicted the specific activation of Signal transducer and activator of transcription factors by PDGF-BB, which was confirmed experimentally. Our results show that the expression levels of transcription factor target genes constitute a robust signature for transcription factor regulation, and can be efficiently used for microarray data mining. PMID:20215436
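
    The core idea above is to score the overlap between a submitted gene list and catalogued target genes of each transcription factor. The sketch below illustrates that overlap-enrichment step with Fisher's exact test, which is an assumption for illustration and not necessarily TFactS's exact statistic; the gene sets and background size are hypothetical.

```python
# Sketch: overlap enrichment between a query gene list and a catalogued TF target set.
from scipy.stats import fisher_exact

def tf_enrichment(query: set, tf_targets: set, background_size: int):
    overlap = len(query & tf_targets)
    table = [
        [overlap, len(tf_targets) - overlap],
        [len(query) - overlap, background_size - len(tf_targets) - len(query) + overlap],
    ]
    _, p_value = fisher_exact(table, alternative="greater")
    return overlap, p_value

query = {"MITF", "TYR", "DCT", "PMEL", "SOX10"}           # hypothetical regulated genes
sox10_targets = {"MITF", "TYR", "DCT", "S100B", "ERBB3"}  # hypothetical catalog entry
print(tf_enrichment(query, sox10_targets, background_size=20000))
```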

  7. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  8. Third-Kind Encounters in Biomedicine: Immunology Meets Mathematics and Informatics to Become Quantitative and Predictive.

    PubMed

    Eberhardt, Martin; Lai, Xin; Tomar, Namrata; Gupta, Shailendra; Schmeck, Bernd; Steinkasserer, Alexander; Schuler, Gerold; Vera, Julio

    2016-01-01

    The understanding of the immune response is currently at the center of biomedical research. There are growing expectations that immune-based interventions will, in the midterm, provide new, personalized, and targeted therapeutic options for many severe and highly prevalent diseases, from aggressive cancers to infectious and autoimmune diseases. To this end, immunology should surpass its current descriptive and phenomenological nature and become quantitative, and thereby predictive. Immunology is an ideal field for deploying the tools, methodologies, and philosophy of systems biology, an approach that combines quantitative experimental data, computational biology, and mathematical modeling. This is because, from an organism-wide perspective, immunity is a biological system of systems, a paradigmatic instance of a multi-scale system. At the molecular scale, the critical phenotypic responses of immune cells are governed by large biochemical networks, enriched in nested regulatory motifs such as feedback and feedforward loops. This network complexity confers on them the capacity for highly nonlinear behavior, including remarkable examples of homeostasis, ultra-sensitivity, hysteresis, and bistability. Moving up to the cellular level, different immune cell populations communicate with each other by direct physical contact or by receiving and secreting signaling molecules such as cytokines. Moreover, the interaction of the immune system with its potential targets (e.g., pathogens or tumor cells) is far from simple, as it involves a number of attack and counterattack mechanisms that ultimately constitute a tightly regulated multi-feedback loop system. From a more practical perspective, this means that today's immunologists face an ever-increasing challenge of integrating massive quantities of data from multiple platforms. In this chapter, we support the idea that the analysis of the immune system demands the use of systems-level approaches to ensure the success in

  9. New quantitative approaches for classifying and predicting local-scale habitats in estuaries

    NASA Astrophysics Data System (ADS)

    Valesini, Fiona J.; Hourston, Mathew; Wildsmith, Michelle D.; Coen, Natasha J.; Potter, Ian C.

    2010-03-01

    This study has developed quantitative approaches for firstly classifying local-scale nearshore habitats in an estuary and then predicting the habitat of any nearshore site in that system. Both approaches employ measurements for a suite of enduring environmental criteria that are biologically relevant and can be easily derived from readily available maps. While the approaches were developed for south-western Australian estuaries, with a focus here on the Swan and Peel-Harvey, they can easily be tailored to any system. Classification of the habitats in each of the above estuaries was achieved by subjecting a Manhattan distance matrix, constructed from measurements of a suite of enduring criteria recorded at numerous environmentally diverse sites, to hierarchical agglomerative clustering (CLUSTER) and a Similarity Profiles test (SIMPROF). Groups of sites within the resultant dendrogram that were shown by SIMPROF not to contain any significant internal differences, but to differ significantly from all other groups in their enduring characteristics, were considered to represent habitat types. The enduring features of the 18 and 17 habitats identified among the 101 and 102 sites in the Swan and Peel-Harvey estuaries, respectively, are presented. The average measurements of the enduring characteristics at each habitat were then used in a novel application of the Linkage Tree (LINKTREE) and SIMPROF routines to produce a "decision tree" for predicting, on the basis of measurements for particular enduring variables, the habitat to which any further site in an estuary is best assigned. In both estuaries, the pattern of relative differences among habitats, as defined by their enduring characteristics, was significantly correlated with that defined by their non-enduring water physico-chemical characteristics recorded seasonally in the field. However, those correlations were substantially higher for the Swan, particularly when salinity was the only water physico-chemical variable
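
    The classification step above pairs a Manhattan distance matrix with agglomerative clustering; a minimal Python analogue is sketched below. The SIMPROF and LINKTREE routines belong to the PRIMER package and are not reproduced here, and the site-by-criteria matrix is random stand-in data.

```python
# Sketch: hierarchical agglomerative clustering of sites on a Manhattan (cityblock)
# distance matrix, analogous to the CLUSTER step described in the abstract.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
sites = rng.random((12, 5))              # 12 sites x 5 standardized enduring criteria

dist = pdist(sites, metric="cityblock")  # Manhattan distance matrix (condensed form)
tree = linkage(dist, method="average")   # agglomerative clustering (UPGMA)
habitats = fcluster(tree, t=4, criterion="maxclust")
print(habitats)                          # provisional habitat label for each site
```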

  10. QSTR modeling for qualitative and quantitative toxicity predictions of diverse chemical pesticides in honey bee for regulatory purposes.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Basant, Nikita; Mohan, Dinesh

    2014-09-15

    Pesticides are designed toxic chemicals for specific purposes and can harm nontarget species as well. The honey bee is considered a nontarget test species for toxicity evaluation of chemicals. Global QSTR (quantitative structure-toxicity relationship) models were established for qualitative and quantitative toxicity prediction of pesticides in honey bee (Apis mellifera) based on the experimental toxicity data of 237 structurally diverse pesticides. Structural diversity of the chemical pesticides and nonlinear dependence in the toxicity data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) QSTR models were constructed for classification (two and four categories) and function optimization problems using the toxicity end point in honey bees. The predictive power of the QSTR models was tested through rigorous validation performed using the internal and external procedures employing a wide series of statistical checks. In complete data, the PNN-QSTR model rendered a classification accuracy of 96.62% (two-category) and 95.57% (four-category), while the GRNN-QSTR model yielded a correlation (R(2)) of 0.841 between the measured and predicted toxicity values with a mean squared error (MSE) of 0.22. The results suggest the appropriateness of the developed QSTR models for reliably predicting qualitative and quantitative toxicities of pesticides in honey bee. Both the PNN and GRNN based QSTR models constructed here can be useful tools in predicting the qualitative and quantitative toxicities of the new chemical pesticides for regulatory purposes.
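
    The Tanimoto similarity index used above to gauge structural diversity has a simple set-based form; the sketch below computes it on toy bit-position fingerprints rather than descriptors of the actual pesticide set.

```python
# Sketch: Tanimoto similarity of two binary structural fingerprints.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity of fingerprints given as sets of 'on' bit positions."""
    if not (fp_a or fp_b):
        return 0.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

print(tanimoto({1, 4, 7, 9}, {1, 4, 8}))  # 2 shared bits / 5 distinct bits = 0.4
```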

  11. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    PubMed

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
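
    A minimal sketch of the multivariate step reported above: a logistic regression of stone-free status on the four independent predictors (stone size, stone attenuation, total fat area, serum creatinine), with odds ratios read from the fitted coefficients. The data are simulated, not the study's patients.

```python
# Sketch: logistic regression with odds ratios for SWL outcome prediction (toy data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.normal(10, 4, n),     # stone size (mm)
    rng.normal(800, 250, n),  # stone attenuation (HU)
    rng.normal(300, 80, n),   # total fat area (cm^2)
    rng.normal(1.0, 0.3, n),  # serum creatinine (mg/dL)
])
logit_p = 2.0 - 0.12 * X[:, 0] - 0.002 * X[:, 1] - 0.004 * X[:, 2] - 0.8 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = stone-free, 0 = residual stone

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(model.params[1:]))  # odds ratios per unit increase in each predictor
```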

  12. Quantitative Proteomic Approach for MicroRNA Target Prediction Based on 18O/16O Labeling

    PubMed Central

    Ma, Xuepo; Zhu, Ying; Huang, Yufei; Tegeler, Tony; Gao, Shou-Jiang; Zhang, Jianqiu

    2015-01-01

    MOTIVATION: Among many large-scale proteomic quantification methods, 18O/16O labeling requires neither a specific amino acid in peptides nor label incorporation through several cell cycles, as in metabolic labeling; it does not cause significant elution time shifts between heavy- and light-labeled peptides, and its dynamic range of quantification is larger than that of tandem mass spectrometry-based quantification methods. These properties give 18O/16O labeling the maximum flexibility in application. However, 18O/16O labeling introduces large quantification variations due to varying labeling efficiency, and a processing pipeline that ensures reliable identification of differentially expressed proteins (DEPs) has been lacking. This motivated us to develop a quantitative proteomic approach based on 18O/16O labeling and apply it to Kaposi sarcoma-associated herpesvirus (KSHV) microRNA (miR) target prediction. KSHV is a human pathogenic γ-herpesvirus strongly associated with the development of B-cell proliferative disorders, including primary effusion lymphoma. Recent studies suggest that miRs have evolved a highly complex network of interactions with the cellular and viral transcriptomes, and relatively few KSHV miR targets have been characterized at the functional level. While the new miR target prediction method, photoactivatable ribonucleoside-enhanced cross-linking and immunoprecipitation (PAR-CLIP), allows the identification of thousands of miR targets, the link between miRs and their targets still cannot be determined. We propose to apply the developed proteomic approach to establish such links. METHOD: We integrate several 18O/16O data processing algorithms that we published recently and identify the messenger RNAs of downregulated proteins as potential targets in KSHV miR-transfected human embryonic kidney 293T cells. Various statistical tests are employed for picking DEPs, and we select the best test by examining the enrichment of PAR-CLIP-reported targets with

  13. PET guidance in prostate cancer radiotherapy: Quantitative imaging to predict response and guide treatment.

    PubMed

    Cattaneo, G M; Bettinardi, V; Mapelli, P; Picchio, M

    2016-03-01

    Positron emission tomography (PET) allows a monitoring and recording of the spatial and temporal distribution of molecular/cellular processes for diagnostic and therapeutic applications. The aim of this review is to describe the current applications and to explore the role of PET in prostate cancer management, mainly in the radiation therapy (RT) scenario. The state of the art of PET for prostate cancer will be presented together with the impact of new specific PET tracers and technological developments aiming at obtaining better imaging quality, increased tumor detectability and more accurate volume delineation. An increased number of studies have been focusing on PET quantification methods as predictive biomarkers capable of guiding individualized treatment and improving patient outcome; the sophisticated advanced intensity-modulated and image-guided radiation therapy techniques (IMRT/IGRT) are capable of boosting more radioresistant tumor (sub)volumes. The use of advanced feature analyses of PET images is an approach that holds great promise with regard to several oncological diseases, but needs further validation in managing prostate diseases.

  14. Quantitative analysis and prediction of G-quadruplex forming sequences in double-stranded DNA

    PubMed Central

    Kim, Minji; Kreig, Alex; Lee, Chun-Ying; Rube, H. Tomas; Calvert, Jacob; Song, Jun S.; Myong, Sua

    2016-01-01

    G-quadruplex (GQ) is a four-stranded DNA structure that can be formed in guanine-rich sequences. GQ structures have been proposed to regulate diverse biological processes including transcription, replication, translation and telomere maintenance. Recent studies have demonstrated the existence of GQ DNA in live mammalian cells and a significant number of potential GQ forming sequences in the human genome. We present a systematic and quantitative analysis of GQ folding propensity on a large set of 438 GQ forming sequences in double-stranded DNA by integrating fluorescence measurement, single-molecule imaging and computational modeling. We find that short minimum loop length and the thymine base are two main factors that lead to high GQ folding propensity. Linear and Gaussian process regression models further validate that the GQ folding potential can be predicted with high accuracy based on the loop length distribution and the nucleotide content of the loop sequences. Our study provides important new parameters that can inform the evaluation and classification of putative GQ sequences in the human genome. PMID:27095201
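
    A minimal sketch of the regression step reported above: predicting a GQ folding-propensity score from loop-length and loop-composition features with Gaussian process regression. The features, scores, and kernel choice are random stand-ins for the 438-sequence data set, shown only to illustrate the modeling approach.

```python
# Sketch: Gaussian process regression of GQ folding propensity on loop features (toy data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.integers(1, 8, 200),   # minimum loop length
    rng.random(200),           # fraction of thymine in the loops
])
y = 1.0 / X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)  # toy propensity score

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:150], y[:150])
print(gpr.score(X[150:], y[150:]))  # R^2 on the held-out toy sequences
```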

  15. Quantitative predictions on auxin-induced polar distribution of PIN proteins during vein formation in leaves.

    PubMed

    Alim, K; Frey, E

    2010-10-01

    The dynamic patterning of the plant hormone auxin and its efflux facilitator the PIN protein are the key regulators for the spatial and temporal organization of plant development. In particular auxin induces the polar localization of its own efflux facilitator. Due to this positive feedback, auxin flow is directed and patterns of auxin and PIN arise. During the earliest stage of vein initiation in leaves auxin accumulates in a single cell in a rim of epidermal cells from which it flows into the ground meristem tissue of the leaf blade. There the localized auxin supply yields the successive polarization of PIN distribution along a strand of cells. We model the auxin and PIN dynamics within cells with a minimal canalization model. Solving the model analytically we uncover an excitable polarization front that triggers a polar distribution of PIN proteins in cells. As polarization fronts may extend to opposing directions from their initiation site, we suggest a possible resolution to the puzzling occurrence of bipolar cells, thus we offer an explanation for the development of closed, looped veins. Employing non-linear analysis, we identify the role of the contributing microscopic processes during polarization. Furthermore, we deduce quantitative predictions on polarization fronts establishing a route to determine the up to now largely unknown kinetic rates of auxin and PIN dynamics.

  16. Quantitative Characterization of Local Protein Solvation To Predict Solvent Effects on Protein Structure

    PubMed Central

    Vagenende, Vincent; Trout, Bernhardt L.

    2012-01-01

    Characterization of solvent preferences of proteins is essential to the understanding of solvent effects on protein structure and stability. Although it is generally believed that solvent preferences at distinct loci of a protein surface may differ, quantitative characterization of local protein solvation has remained elusive. In this study, we show that local solvation preferences can be quantified over the entire protein surface from extended molecular dynamics simulations. By subjecting microsecond trajectories of two proteins (lysozyme and antibody fragment D1.3) in 4 M glycerol to rigorous statistical analyses, solvent preferences of individual protein residues are quantified by local preferential interaction coefficients. Local solvent preferences for glycerol vary widely from residue to residue and may change as a result of protein side-chain motions that are slower than the longest intrinsic solvation timescale of ∼10 ns. Differences of local solvent preferences between distinct protein side-chain conformations predict solvent effects on local protein structure in good agreement with experiment. This study extends the application scope of preferential interaction theory and enables molecular understanding of solvent effects on protein structure through comprehensive characterization of local protein solvation. PMID:22995508

  17. Turbidity currents and turbidites: towards quantitative interpretation and prediction of process and product.

    NASA Astrophysics Data System (ADS)

    Eggenhuisen, J. T.; Cartigny, M.; de Leeuw, J.; Pohl, F.

    2015-12-01

    Many decades of studies of deposits and seascapes formed by turbidity currents have established a robust observational framework that demonstrates that depositional and morphological patterns are repeated through time and space. The process-modeling community has similarly made progress in the understanding of the distribution of suspended sediment, velocity, and turbulence in turbidity currents, together shaping the "flow structure". Thus, now is the time to integrate, and investigate in more detail how the process of sediment erosion, transport, and deposition by turbidity currents is related to observed systematics in the physical products preserved in the geological record. Here we review recent breakthroughs in theoretical understanding of turbulent suspended sediment transport capacity. These breakthroughs allow us to understand the coupling between the flow field of turbidity currents, the kinematics of which have long been established, and the carrying capacity of sediment. This leads to robust first order estimators of the velocity and suspended sediment distribution within turbidity currents. These estimators can be applied straightforwardly to investigate natural systems. Two types of examples are explored: application to modern seafloor systems results in sediment budget estimations of natural turbidity current channels and canyons. Application to ancient turbidite deposits in the rock record displays how the present state of understanding can be used for quantitative process inversion from the product. This should ultimately lead to predictive capabilities of rock-body characteristics in the subsurface.

  18. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset and the new data fusion method is practically applied in our driverless car.
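
    A minimal sketch of the ARMA building block described above, fitted to a synthetic navigation channel with statsmodels; the model order, the series, and the one-step forecast are illustrative only, and the paper's model bank, grid constraints, and fusion step are not reproduced.

```python
# Sketch: one-step-ahead ARMA prediction of a raw navigation channel (synthetic data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
noise = rng.normal(0, 0.05, 500)
series = np.zeros(500)
for t in range(2, 500):  # synthetic ARMA(2,1)-like position signal
    series[t] = 0.6 * series[t - 1] + 0.2 * series[t - 2] + noise[t] + 0.4 * noise[t - 1]

model = ARIMA(series, order=(2, 0, 1)).fit()  # ARMA via ARIMA with d = 0
print(model.forecast(steps=1))                # predicted next measurement
```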

  19. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset and the new data fusion method is practically applied in our driverless car. PMID:26927108

  20. Profile-QSAR: a novel meta-QSAR method that combines activities across the kinase family to accurately predict affinity, selectivity, and cellular activity.

    PubMed

    Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna

    2011-08-22

    Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases covered by 115 assays. Profile-QSAR starts with a sparse incomplete kinase by compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial least squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit rates at 10 μM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
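
    A minimal sketch of the Profile-QSAR idea described above: predicted activities against a kinase basis set serve as descriptors for a partial least squares model of a new kinase. The "synthetic" activity matrix here is random; in the method above it comes from per-kinase Bayesian QSAR models, which are not reproduced.

```python
# Sketch: PLS regression on a synthetic kinase-activity descriptor matrix (toy data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
synthetic_kxc = rng.random((500, 92))          # 500 compounds x 92 basis-set kinases
true_weights = rng.normal(0, 1, 92)
new_kinase_pic50 = synthetic_kxc @ true_weights + rng.normal(0, 0.3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(synthetic_kxc, new_kinase_pic50,
                                          test_size=0.25, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print(pls.score(X_te, y_te))                   # external R^2 on the held-out 25%
```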

  1. Accurate prediction of death by serial determination of galactose elimination capacity in primary biliary cirrhosis: a comparison with the Mayo model.

    PubMed

    Reichen, J; Widmer, T; Cotting, J

    1991-09-01

    We retrospectively analyzed the predictive accuracy of serial determinations of galactose elimination capacity in 61 patients with primary biliary cirrhosis. Death was predicted from the time that the regression line describing the decline in galactose elimination capacity vs. time intersected a value of 4 mg.min-1.kg-1. Thirty-one patients exhibited decreasing galactose elimination capacity; in 11 patients it remained stable and in 19 patients only one value was available. Among those patients with decreasing galactose elimination capacity, 10 died and three underwent liver transplantation; prediction of death was accurate to 7 ± 19 months. This criterion incorrectly predicted death in two patients with portal-vein thrombosis; otherwise, it did better than or as well as the Mayo Clinic score. The latter was also tested on our patients and was found to adequately describe risk in yet another independent population of patients with primary biliary cirrhosis. Cox regression analysis selected only bilirubin and galactose elimination capacity, however, as independent predictors of death. We submit that serial determination of galactose elimination capacity in patients with primary biliary cirrhosis may be a useful adjunct to optimize the timing of liver transplantation and to evaluate new pharmacological treatment modalities of this disease.
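
    A minimal sketch of the prediction rule described above: fit a regression line to serial galactose elimination capacity (GEC) measurements and find the time at which it crosses 4 mg.min-1.kg-1. The GEC values below are invented for illustration.

```python
# Sketch: time at which the fitted GEC regression line crosses the 4 mg.min-1.kg-1 threshold.
import numpy as np

time_months = np.array([0.0, 8.0, 15.0, 24.0, 33.0])
gec = np.array([7.1, 6.5, 6.0, 5.3, 4.8])          # mg.min-1.kg-1 (hypothetical patient)

slope, intercept = np.polyfit(time_months, gec, deg=1)
threshold = 4.0
predicted_month = (threshold - intercept) / slope
print(f"GEC reaches {threshold} mg.min-1.kg-1 at ~{predicted_month:.1f} months")
```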

  2. Qualitative and quantitative structure-activity relationship modelling for predicting blood-brain barrier permeability of structurally diverse chemicals.

    PubMed

    Gupta, S; Basant, N; Singh, K P

    2015-01-01

    In this study, structure-activity relationship (SAR) models have been established for qualitative and quantitative prediction of the blood-brain barrier (BBB) permeability of chemicals. The structural diversity of the chemicals and nonlinear structure in the data were tested. The predictive and generalization ability of the developed SAR models were tested through internal and external validation procedures. In complete data, the QSAR models rendered ternary classification accuracy of >98.15%, while the quantitative SAR models yielded correlation (r(2)) of >0.926 between the measured and the predicted BBB permeability values with the mean squared error (MSE) <0.045. The proposed models were also applied to an external new in vitro data and yielded classification accuracy of >82.7% and r(2) > 0.905 (MSE < 0.019). The sensitivity analysis revealed that topological polar surface area (TPSA) has the highest effect in qualitative and quantitative models for predicting the BBB permeability of chemicals. Moreover, these models showed predictive performance superior to those reported earlier in the literature. This demonstrates the appropriateness of the developed SAR models to reliably predict the BBB permeability of new chemicals, which can be used for initial screening of the molecules in the drug development process.

  3. Predicting Fluid Flow in Stressed Fractures: A Quantitative Evaluation of Methods

    NASA Astrophysics Data System (ADS)

    Weihmann, S. A.; Healy, D.

    2015-12-01

    Reliable estimation of fracture stability in the subsurface is crucial to the success of exploration and production in the petroleum industry, and also for wider applications to earthquake mechanics, hydrogeology and waste disposal. Previous work suggests that fracture stability is related to fluid flow in crystalline basement rocks through shear or tensile instabilities of fractures. Our preliminary scoping analysis compares the fracture stability of 60 partly open (apertures 1.5-3 cm) and electrically conductive (low acoustic amplitudes relative to matrix) fractures from a 16 m section of a producing zone in a basement well in Bayoot field, Yemen, to a non-producing zone in the same well (also 16 m). We determine the Critically Stressed Fractures (CSF; Barton et al., 1995) and dilatation tendency (Td; Ferrill et al., 1999). We find that: 1. CSF (Fig. 1) is a poor predictor of high fluid flow in the inflow zone; 88% of the fractures are predicted to be NOT critically stressed and yet they all occur within a zone of high fluid flow rate. 2. Td (Fig. 2) is also a poor predictor of high fluid flow in the inflow zone; 67% of the fractures have a LOW Td (< 0.6). 3. For the non-producing zone, CSF is a very reliable predictor (100% are not critically stressed), whereas the values of Td are consistent with their location in the non-producing interval (81% are < 0.6) (Figs. 3 & 4). In summary, neither method correlates well with the observed abundance of hydraulically conductive fractures within the producing zone. Within the non-producing zone, CSF and Td make reasonably accurate predictions. Fractures may be filled or partially filled with drilling mud or a lower density and electrically conductive fill such as clay in the producing zone and therefore appear (partly) open. In situ stress, fluid pressure, rock properties (friction, strength) and fracture orientation data used as inputs for the CSF and Td calculations are all subject to uncertainty. Our results suggest that scope
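
    A minimal sketch of the dilation tendency calculation cited above, Td = (σ1 − σn)/(σ1 − σ3) after Ferrill et al. (1999), with the normal stress σn resolved on a fracture plane from the principal stresses. The stress magnitudes and fracture orientation below are illustrative, not the Bayoot field values.

```python
# Sketch: dilation tendency Td for a fracture plane given principal stresses (MPa).
import numpy as np

sigma = np.diag([60.0, 45.0, 30.0])  # sigma1 >= sigma2 >= sigma3 along principal axes

def dilation_tendency(normal: np.ndarray) -> float:
    n = normal / np.linalg.norm(normal)
    sigma_n = n @ sigma @ n          # normal stress resolved on the fracture plane
    return (sigma[0, 0] - sigma_n) / (sigma[0, 0] - sigma[2, 2])

# Plane whose normal is close to the sigma3 direction -> high dilation tendency (~0.9):
print(dilation_tendency(np.array([0.2, 0.3, 0.9])))
```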

  4. Quantitative and qualitative models for carcinogenicity prediction for non-congeneric chemicals using CP ANN method for regulatory uses.

    PubMed

    Fjodorova, Natalja; Vračko, Marjan; Tušar, Marjan; Jezierska, Aneta; Novič, Marjana; Kühne, Ralph; Schüürmann, Gerrit

    2010-08-01

    The new European chemicals regulation, Registration, Evaluation, Authorization and Restriction of Chemicals (REACH), entered into force in June 2007 and accelerated the development of quantitative structure-activity relationship (QSAR) models for a variety of endpoints, including carcinogenicity. Here, we present quantitative (continuous) and qualitative (categorical) models for non-congeneric chemicals for prediction of carcinogenic potency. A dataset of 805 substances was obtained after a preliminary screening of rodent carcinogenicity findings for 1,481 chemicals accessible via the Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network, originating from the Lois Gold Carcinogenic Potency Database (CPDB). Twenty-seven two-dimensional MDL descriptors were selected using Kohonen mapping and principal component analysis. The counter-propagation artificial neural network (CP ANN) technique was applied. Quantitative models were developed exploring the relationship between the experimental and predicted carcinogenic potency expressed as the tumorigenic dose TD(50) for rats. The obtained models showed low predictive power, with a correlation coefficient of less than 0.5 for the test set. In the next step, qualitative models were developed. The qualitative models exhibited good accuracy for the training set (92%) and good predictive performance for the test set, with an accuracy of 68%, sensitivity of 73%, and specificity of 63%. We believe that the CP ANN method is a good in silico approach for modeling and predicting rodent carcinogenicity for non-congeneric chemicals and may find application for other toxicological endpoints.

  5. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound–Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed Central

    2016-01-01

    Drug discovery programs frequently target members of the human kinome and try to identify small molecule protein kinase inhibitors, primarily for cancer treatment, additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, results reproducibility, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand–target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R0(2) of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on prediction quality of parameters such as the number of measurements, Murcko scaffold frequency, or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the models' quality allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722

  6. Quantitative AOP-based predictions for two aromatase inhibitors evaluating the influence of bioaccumulation on prediction accuracy

    EPA Science Inventory

    The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining, quantitative linkages between the molecular initiating event (MIE) and subsequent key events...

  7. Differential label-free quantitative proteomic analysis of Shewanella oneidensis cultured under aerobic and suboxic conditions by accurate mass and time tag approach.

    PubMed

    Fang, Ruihua; Elias, Dwayne A; Monroe, Matthew E; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D; Callister, Stephen J; Moore, Ronald J; Gorby, Yuri A; Adkins, Joshua N; Fredrickson, Jim K; Lipton, Mary S; Smith, Richard D

    2006-04-01

    We describe the application of LC-MS without the use of stable isotope labeling for differential quantitative proteomic analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and suboxic conditions. LC-MS/MS was used to initially identify peptide sequences, and LC-FTICR was used to confirm these identifications as well as measure relative peptide abundances. 2343 peptides covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly using statistical approaches such as statistical analysis of microarrays, whereas another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis was transitioned from aerobic to suboxic conditions.

  8. Differential Label-free Quantitative Proteomic Analysis of Shewanella oneidensis Cultured under Aerobic and Suboxic Conditions by Accurate Mass and Time Tag Approach

    SciTech Connect

    Fang, Ruihua; Elias, Dwayne A.; Monroe, Matthew E.; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D.; Callister, Stephen J.; Moore, Ronald J.; Gorby, Yuri A.; Adkins, Joshua N.; Fredrickson, Jim K.; Lipton, Mary S.; Smith, Richard D.

    2006-04-01

    We describe the application of liquid chromatography coupled to mass spectrometry (LC/MS) without the use of stable isotope labeling for differential quantitative proteomics analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and sub-oxic conditions. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used to initially identify peptide sequences, and LC coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR) was used to confirm these identifications, as well as measure relative peptide abundances. 2343 peptides, covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly using statistical approaches such as SAM, while another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis is transitioned from aerobic to sub-oxic conditions.

  9. Dose Addition Models Based on Biologically Relevant Reductions in Fetal Testosterone Accurately Predict Postnatal Reproductive Tract Alterations by a Phthalate Mixture in Rats

    PubMed Central

    Howdeshell, Kembra L.; Rider, Cynthia V.; Wilson, Vickie S.; Furr, Johnathan R.; Lambright, Christy R.; Gray, L. Earl

    2015-01-01

    Challenges in cumulative risk assessment of anti-androgenic phthalate mixtures include a lack of data on all the individual phthalates and difficulty determining the biological relevance of reduction in fetal testosterone (T) on postnatal development. The objectives of the current study were 2-fold: (1) to test whether a mixture model of dose addition based on the fetal T production data of individual phthalates would predict the effects of a 5 phthalate mixture on androgen-sensitive postnatal male reproductive tract development, and (2) to determine the biological relevance of the reductions in fetal T to induce abnormal postnatal reproductive tract development using data from the mixture study. We administered a dose range of the mixture (60, 40, 20, 10, and 5% of the top dose used in the previous fetal T production study consisting of 300 mg/kg per chemical of benzyl butyl (BBP), di(n)butyl (DBP), diethyl hexyl phthalate (DEHP), di-isobutyl phthalate (DiBP), and 100 mg dipentyl (DPP) phthalate/kg; the individual phthalates were present in equipotent doses based on their ability to reduce fetal T production) via gavage to Sprague Dawley rat dams from GD 8 to postnatal day 3. We compared observed mixture responses to predictions of dose addition based on the previously published potencies of the individual phthalates to reduce fetal T production relative to a reference chemical and published postnatal data for the reference chemical (called DAref). In addition, we predicted DA (called DAall) and response addition (RA) based on logistic regression analysis of all 5 individual phthalates when complete data were available. DAref and DAall accurately predicted the observed mixture effect for 11 of 14 endpoints. Furthermore, reproductive tract malformations were seen in 17–100% of F1 males when fetal T production was reduced by about 25–72%, respectively. PMID:26350170
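
    A minimal sketch of the dose-addition idea used above: each chemical's dose is converted to a reference-chemical-equivalent dose via its relative potency, the equivalents are summed, and the mixture response is read off the reference chemical's dose-response curve. The relative potencies, doses, and Hill-curve parameters below are hypothetical placeholders, not the study's fitted values.

```python
# Sketch: dose addition for a mixture via reference-chemical-equivalent doses (toy values).
def hill_response(dose: float, ed50: float, slope: float = 2.0) -> float:
    """Fraction of maximal effect for the reference chemical (toy Hill curve)."""
    return dose ** slope / (dose ** slope + ed50 ** slope)

relative_potency = {"DEHP": 1.0, "DBP": 1.0, "BBP": 1.0, "DiBP": 1.0, "DPP": 3.0}
mixture_doses = {"DEHP": 60.0, "DBP": 60.0, "BBP": 60.0, "DiBP": 60.0, "DPP": 20.0}  # mg/kg

reference_equivalent = sum(relative_potency[c] * mixture_doses[c] for c in mixture_doses)
print(hill_response(reference_equivalent, ed50=250.0))  # predicted mixture response
```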

  10. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  11. The Need for Accurate Risk Prediction Models for Road Mapping, Shared Decision Making and Care Planning for the Elderly with Advanced Chronic Kidney Disease.

    PubMed

    Stryckers, Marijke; Nagler, Evi V; Van Biesen, Wim

    2016-11-01

    As people age, chronic kidney disease becomes more common, but it rarely leads to end-stage kidney disease. When it does, the choice between dialysis and conservative care can be daunting, as much depends on life expectancy and personal expectations of medical care. Shared decision making implies adequately informing patients about their options, and facilitating deliberation of the available information, such that decisions are tailored to the individual's values and preferences. Accurate estimations of one's risk of progression to end-stage kidney disease and death with or without dialysis are essential for shared decision making to be effective. Formal risk prediction models can help, provided they are externally validated, well-calibrated and discriminative; include unambiguous and measurable variables; and come with readily applicable equations or scores. Reliable, externally validated risk prediction models for progression of chronic kidney disease to end-stage kidney disease or mortality in frail elderly with or without chronic kidney disease are scant. Within this paper, we discuss a number of promising models, highlighting both the strengths and limitations physicians should understand for using them judiciously, and emphasize the need for external validation over new development for further advancing the field.

  12. Accurate Prediction of Glucuronidation of Structurally Diverse Phenolics by Human UGT1A9 Using Combined Experimental and In Silico Approaches

    PubMed Central

    Wu, Baojian; Wang, Xiaoqiang; Zhang, Shuxing; Hu, Ming

    2012-01-01

    Purpose: The catalytic selectivity of human UGT1A9, an important membrane-bound enzyme catalyzing glucuronidation of xenobiotics, was determined experimentally using 145 phenolics and analyzed by 3D-QSAR methods. Methods: The catalytic efficiency of UGT1A9 was determined by kinetic profiling. Quantitative structure-activity relationships were analyzed using the CoMFA and CoMSIA techniques. Molecular alignment of the substrate structures was made by superimposing the glucuronidation site and its adjacent aromatic ring to achieve maximal steric overlap. For a substrate with multiple active glucuronidation sites, each site was considered as a separate substrate. Results: The 3D-QSAR analyses produced statistically reliable models with good predictive power (CoMFA: q2 = 0.548, r2 = 0.949, r2pred = 0.775; CoMSIA: q2 = 0.579, r2 = 0.876, r2pred = 0.700). The contour coefficient maps were applied to elucidate structural features among substrates that are responsible for the selectivity differences. Furthermore, the contour coefficient maps were overlaid in the catalytic pocket of a homology model of UGT1A9; this enabled us to identify the UGT1A9 catalytic pocket with a high degree of confidence. Conclusion: The CoMFA/CoMSIA models can predict the substrate selectivity and in vitro clearance of UGT1A9. Our findings also provide a possible molecular basis for understanding UGT1A9 functions and its substrate selectivity. PMID:22302521

  13. How Accurately Can Extended X-ray Absorption Spectra Be Predicted from First Principles? Implications for Modeling the Oxygen-Evolving Complex in Photosystem II.

    PubMed

    Beckwith, Martha A; Ames, William; Vila, Fernando D; Krewald, Vera; Pantazis, Dimitrios A; Mantel, Claire; Pécaut, Jacques; Gennari, Marcello; Duboc, Carole; Collomb, Marie-Noëlle; Yano, Junko; Rehr, John J; Neese, Frank; DeBeer, Serena

    2015-10-14

    First-principles calculations of extended X-ray absorption fine structure (EXAFS) data have seen widespread use in bioinorganic chemistry, perhaps most notably for modeling the Mn4Ca site in the oxygen evolving complex (OEC) of photosystem II (PSII). The logic implied by the calculations rests on the assumption that it is possible to a priori predict an accurate EXAFS spectrum provided that the underlying geometric structure is correct. The present study investigates the extent to which this is possible using state-of-the-art EXAFS theory. The FEFF program is used to evaluate the ability of a multiple scattering-based approach to directly calculate the EXAFS spectrum of crystallographically defined model complexes. The results of these parameter-free predictions are compared with the more traditional approach of fitting FEFF calculated spectra to experimental data. A series of seven crystallographically characterized Mn monomers and dimers is used as a test set. The largest deviations between the FEFF calculated EXAFS spectra and the experimental EXAFS spectra arise from the amplitudes. The amplitude errors result from a combination of errors in calculated S0(2) and Debye-Waller values as well as uncertainties in background subtraction. Additional errors may be attributed to structural parameters, particularly in cases where reliable high-resolution crystal structures are not available. Based on these investigations, the strengths and weaknesses of using first-principles EXAFS calculations as a predictive tool are discussed. We demonstrate that a range of DFT optimized structures of the OEC may all be considered consistent with experimental EXAFS data and that caution must be exercised when using EXAFS data to obtain topological arrangements of complex clusters.

  14. The VACS Index Accurately Predicts Mortality and Treatment Response among Multi-Drug Resistant HIV Infected Patients Participating in the Options in Management with Antiretrovirals (OPTIMA) Study

    PubMed Central

    Brown, Sheldon T.; Tate, Janet P.; Kyriakides, Tassos C.; Kirkwood, Katherine A.; Holodniy, Mark; Goulet, Joseph L.; Angus, Brian J.; Cameron, D. William; Justice, Amy C.

    2014-01-01

    Objectives: The VACS Index is highly predictive of all-cause mortality among HIV infected individuals within the first few years of combination antiretroviral therapy (cART). However, its accuracy among highly treatment experienced individuals and its responsiveness to treatment interventions have yet to be evaluated. We compared the accuracy and responsiveness of the VACS Index with a Restricted Index of age and traditional HIV biomarkers among patients enrolled in the OPTIMA study. Methods: Using data from 324/339 (96%) patients in OPTIMA, we evaluated associations between indices and mortality using Kaplan-Meier estimates, proportional hazards models, Harrell's C-statistic and net reclassification improvement (NRI). We also determined the association between study interventions and risk scores over time, and change in score and mortality. Results: Both the Restricted Index (c = 0.70) and VACS Index (c = 0.74) predicted mortality from baseline, but discrimination was improved with the VACS Index (NRI = 23%). Change in score from baseline to 48 weeks was more strongly associated with survival for the VACS Index than the Restricted Index, with respective hazard ratios of 0.26 (95% CI 0.14–0.49) and 0.39 (95% CI 0.22–0.70) among the 25% most improved scores, and 2.08 (95% CI 1.27–3.38) and 1.51 (95% CI 0.90–2.53) for the 25% least improved scores. Conclusions: The VACS Index predicts all-cause mortality more accurately among multi-drug resistant, treatment experienced individuals and is more responsive to changes in risk associated with treatment intervention than an index restricted to age and HIV biomarkers. The VACS Index holds promise as an intermediate outcome for intervention research. PMID:24667813
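
    A minimal sketch of the discrimination measure used above, Harrell's C-statistic (concordance) for a risk score against censored survival times. The risk scores, follow-up times, and event indicators are invented for illustration.

```python
# Sketch: Harrell's C-statistic for a risk score with right-censored survival data.
import numpy as np

def harrells_c(risk, time, event):
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:  # i fails before j: comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

risk = np.array([0.9, 0.4, 0.7, 0.2, 0.3])
time = np.array([12.0, 40.0, 20.0, 60.0, 33.0])  # months of follow-up
event = np.array([1, 0, 1, 0, 1])                # 1 = died, 0 = censored
print(harrells_c(risk, time, event))             # ~0.89 for this toy example
```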

  15. Early Prediction of the Response of Breast Tumors to Neoadjuvant Chemotherapy using Quantitative MRI and Machine Learning

    PubMed Central

    Mani, Subramani; Chen, Yukun; Arlinghaus, Lori R.; Li, Xia; Chakravarthy, A. Bapsi; Bhave, Sandeep R.; Welch, E. Brian; Levy, Mia A.; Yankeelov, Thomas E.

    2011-01-01

    The ability to predict early in the course of treatment the response of breast tumors to neoadjuvant chemotherapy can stratify patients based on response for patient-specific treatment strategies. Currently, response to neoadjuvant chemotherapy is evaluated based on physical exam or breast imaging (mammogram, ultrasound or conventional breast MRI). There is a poor correlation among these measurements and with the actual tumor size when measured by the pathologist during definitive surgery. We tested the feasibility of using quantitative MRI as a tool for early prediction of tumor response. Between 2007 and 2010, twenty consecutive patients diagnosed with Stage II/III breast cancer and receiving neoadjuvant chemotherapy were enrolled in a prospective imaging study. Our study showed that quantitative MRI parameters along with routine clinical measures can distinguish responders from non-responders to neoadjuvant chemotherapy. The best predictive model had an accuracy of 0.9, a positive predictive value of 0.91 and an AUC of 0.96. PMID:22195145

  16. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and to investigate the feasibility of predicting the potential benefit for EOC patients, with or without bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while no significant association was found for either PFS or OS in the group of patients who did not receive maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  17. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria × ananassa Duch.) fruits is associated mainly with their sensory characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aimed at improving strawberry tolerance. However, gene expression studies in strawberry require suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress conditions, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic stresses and salt stress. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis-epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may produce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and under osmotic stresses, and it provides guidelines for obtaining more accurate RT-qPCR results for future breeding efforts.

  18. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances while the required data are being acquired ultimately become an economical way of dealing with new substances. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, together with the associated information, that can make predictions about substances expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted under the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacy of the models using the Alternative Non-Testing Methods Assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH) Substances scoring system, and deduced the applicability domain for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used

  19. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real-time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes were sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, suggesting the importance of validating the expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting the usefulness of these programs, which are based on different algorithms. The ΔCt method suggested somewhat different results in some groups, such as flower organ or fruit skin, though the overall results correlated well with geNorm and NormFinder. The expression of two cold-inducible genes, PpCBF2 and PpCBF4, was quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is a reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of gene expression stability by the ΔCt method, followed by evaluation of a limited number of superior candidates by geNorm or NormFinder, will be a practical way of finding out
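    As a concrete illustration of the normalization strategy recommended above, the sketch below normalizes a target gene against the geometric mean of three reference genes from Ct values. It is a minimal sketch assuming 100% amplification efficiency (E = 2); all Ct values and the array layout are invented.

```python
# Minimal sketch (hypothetical Ct values): normalize a target gene's expression
# against the geometric mean of three reference genes, assuming E = 2.
import numpy as np

ct_refs = np.array([[21.3, 22.1, 20.8],    # sample 1: three reference genes
                    [21.9, 22.5, 21.2]])   # sample 2
ct_target = np.array([25.4, 27.1])         # target gene in the same two samples

# Relative quantities assuming 100% amplification efficiency (quantity = 2^-Ct)
ref_quantity = np.prod(2.0 ** (-ct_refs), axis=1) ** (1.0 / ct_refs.shape[1])  # geometric mean
target_quantity = 2.0 ** (-ct_target)
normalized = target_quantity / ref_quantity

print(normalized[1] / normalized[0])  # fold change of sample 2 relative to sample 1
```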

  20. Accurate near-field lithography modeling and quantitative mapping of the near-field distribution of a plasmonic nanoaperture in a metal.

    PubMed

    Kim, Yongwoo; Jung, Howon; Kim, Seok; Jang, Jinhee; Lee, Jae Yong; Hahn, Jae W

    2011-09-26

    In nanolithography using optical near-field sources to push the critical dimension below the diffraction limit, optimization of process parameters is of utmost importance. Herein we present a simple analytic model to predict photoresist profiles for a localized evanescent exposure that decays exponentially in a photoresist of finite contrast. We introduce the concept of nominal developing thickness (NDT) to determine the proper developing process that yields the best topography of the exposure profile fitting the isointensity contour. Based on this model, we experimentally investigated the NDT and obtained exposure profiles produced by the near-field distribution of a bowtie-shaped nanoaperture. The profiles were properly fit to the calculated results obtained by the finite-difference time-domain method. Using the threshold exposure dose of a photoresist, we can determine the absolute intensity of the near-field distribution and analyze the difference in decay rates of the near-field distributions obtained via experiment and calculation. For a maximum depth of 41 nm, we estimate the uncertainties in the measurements of profile and intensity to be less than 6% and about 1%, respectively. We expect this method will be useful in determining the absolute value of the near-field distribution produced by nanoscale devices.

  1. A Quantitative Structure Activity Relationship for acute oral toxicity of pesticides on rats: Validation, domain of application and prediction.

    PubMed

    Hamadache, Mabrouk; Benkortbi, Othmane; Hanini, Salah; Amrane, Abdeltif; Khaouane, Latifa; Si Moussa, Cherif

    2016-02-13

    Quantitative Structure Activity Relationship (QSAR) models are expected to play an important role in the risk assessment of chemicals for humans and the environment. In this study, we developed a validated QSAR model to predict the acute oral toxicity of 329 pesticides to rats, because few QSAR models have been devoted to predicting the median lethal dose (LD50) of pesticides in rats. This QSAR model is based on 17 molecular descriptors, and is robust, externally predictive and characterized by a good applicability domain. The best results were obtained with a 17/9/1 Artificial Neural Network model trained with the quasi-Newton backpropagation (BFGS) algorithm. The prediction accuracy for the external validation set was estimated by Q^2_ext and the root mean square error (RMSE), which were equal to 0.948 and 0.201, respectively. 98.6% of the external validation set was correctly predicted, and the present model proved to be superior to previously published models. Accordingly, the model developed in this study provides excellent predictions and can be used to predict the acute oral toxicity of pesticides, particularly for those that have not yet been tested, as well as for new pesticides.
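    A 17/9/1 feed-forward network of the kind described can be sketched with scikit-learn; the snippet below is illustrative only, uses the L-BFGS quasi-Newton solver (related to, but not identical to, the BFGS backpropagation variant named in the abstract), and replaces the real descriptor matrix and LD50 values with random placeholders.

```python
# Illustrative sketch: a 17-input / 9-hidden-unit / 1-output neural network
# regressor resembling the architecture described. X and y are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(329, 17))          # 329 pesticides x 17 molecular descriptors (placeholder)
y = rng.normal(size=329)                # log(LD50) values (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(9,), solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("external R^2:", model.score(X_te, y_te))
```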

  2. Using metal-ligand binding characteristics to predict metal toxicity: quantitative ion character-activity relationships (QICARs).

    PubMed Central

    Newman, M C; McCloskey, J T; Tatara, C P

    1998-01-01

    Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (sigma p) and the absolute value of the log of the first hydrolysis constant (|log K_OH|) were especially useful in model construction. Also, ΔE0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900

  3. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State (17)O NMR Chemical Shifts in Various Biologically Relevant Oxygen-Containing Compounds.

    PubMed

    Rorick, Amber; Michael, Matthew A; Yang, Liu; Zhang, Yong

    2015-09-03

    Oxygen is an important element in most biologically significant molecules, and experimental solid-state (17)O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state (17)O NMR chemical shift tensor properties are still challenging in many cases, and in particular, each prior computational study is essentially limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and nonmetal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of X-O bonding groups (X = H, C, N, P, and metal). The experimental shifts studied span a range of 1455 ppm, a major part of the reported (17)O NMR chemical shifts in organic and organometallic compounds. A number of computational factors toward relatively general and accurate predictions of (17)O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient (R^2) value of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R^2 value of 0.9926 for all shift-tensor properties. These results shall facilitate future computational studies of (17)O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins.

  4. Toward quantitative prediction of charge mobility in organic semiconductors: tunneling enabled hopping model.

    PubMed

    Geng, Hua; Peng, Qian; Wang, Linjun; Li, Haijiao; Liao, Yi; Ma, Zhiying; Shuai, Zhigang

    2012-07-10

    A tunneling-enabled hopping mechanism is proposed, providing a practical tool to quantitatively assess charge mobility in organic semiconductors. The paradoxical phenomena in TIPS-pentacene are thereby explained: the optical probe indicates localized charges, while transport measurements show band-like behavior.

  5. Quantitative Approach to Collaborative Learning: Performance Prediction, Individual Assessment, and Group Composition

    ERIC Educational Resources Information Center

    Cen, Ling; Ruta, Dymitr; Powell, Leigh; Hirsch, Benjamin; Ng, Jason

    2016-01-01

    The benefits of collaborative learning, although widely reported, lack the quantitative rigor and detailed insight into the dynamics of interactions within the group, while individual contributions and their impacts on group members and their collaborative work remain hidden behind joint group assessment. To bridge this gap we intend to address…

  6. QUANTITATIVE MODELING APPROACHES TO PREDICTING THE ACUTE NEUROTOXICITY OF VOLATILE ORGANIC COMPOUNDS (VOCS).

    EPA Science Inventory

    Lack of complete and appropriate human data requires prediction of the hazards for exposed human populations by extrapolation from available animal and in vitro data. Predictive models for the toxicity of chemicals can be constructed by linking kinetic and mode of action data uti...

  7. Integrating trans-abdominal ultrasonography with fecal steroid metabolite monitoring to accurately diagnose pregnancy and predict the timing of parturition in the red panda (Ailurus fulgens styani).

    PubMed

    Curry, Erin; Browning, Lissa J; Reinhart, Paul; Roth, Terri L

    2017-02-23

    Red pandas (Ailurus fulgens styani) exhibit a variable gestation length and may experience a pseudopregnancy indistinguishable from true pregnancy; therefore, it is not possible to deduce an individual's true pregnancy status and parturition date based on breeding dates or fecal progesterone excretion patterns alone. The goal of this study was to evaluate the use of transabdominal ultrasonography for pregnancy diagnosis in red pandas. Two to three females were monitored over 4 consecutive years, generating a total of seven profiles (four pregnancies, two pseudopregnancies, and one lost pregnancy). Fecal samples were collected and assayed for progesterone (P4) and estrogen conjugate (EC) to characterize patterns associated with breeding activity and parturition events. Animals were trained for voluntary transabdominal ultrasound and examinations were performed weekly. Breeding behaviors and fecal EC data suggest that the estrus cycle of this species is 11-12 days in length. Fecal steroid metabolite analyses also revealed that neither P4 nor EC concentrations were suitable indicators of pregnancy in this species; however, a secondary increase in P4 occurred 69-71 days prior to parturition in all pregnant females, presumably coinciding with embryo implantation. Using ultrasonography, embryos were detected as early as 62 days post-breeding/50 days pre-partum and serial measurements of uterine lumen diameter were documented throughout four pregnancies. Advances in reproductive diagnostics, such as the implementation of ultrasonography, may facilitate improved husbandry of pregnant females and allow for the accurate prediction of parturition.

  8. Comparative analysis of local and consensus quantitative structure-activity relationship approaches for the prediction of bioconcentration factor.

    PubMed

    Piir, G; Sild, S; Maran, U

    2013-01-01

    Quantitative structure-activity relationships (QSARs) are broadly classified as global or local, depending on the constitution of their training sets. Global models use large and diverse training sets covering a wide range of chemical space. Local models focus on smaller, structurally or chemically similar subsets that are conventionally selected by human experts or, alternatively, using clustering analysis. The current study focuses on the comparative analysis of different clustering algorithms (expectation-maximization, K-means and hierarchical) for seven different descriptor sets as structural characteristics and two rule-based approaches to select subsets for designing local QSAR models. A total of 111 local QSAR models are developed for predicting bioconcentration factor. Predictions from local models were compared with corresponding predictions from the global model. The comparison of coefficients of determination (r^2) and standard deviations for local models with the corresponding subsets of the global model shows improved prediction quality in 97% of cases. The descriptor content of the derived QSARs is discussed and analyzed. Local QSAR models were further consolidated within the framework of a consensus approach. All consensus approaches increased performance over the global and local models. The consensus approach reduced the number of strongly deviating predictions by evening out the prediction errors produced by some local QSARs.
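    The global-versus-local comparison can be prototyped in a few lines: cluster compounds in descriptor space, fit one model per cluster, and compare against a single global fit. The sketch below uses K-means and ordinary linear regression on synthetic data purely for illustration; the study also examines expectation-maximization and hierarchical clustering and uses richer descriptor sets.

```python
# Hedged sketch of the global-vs-local QSAR idea on synthetic data:
# cluster compounds by descriptors, then fit one regression per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                               # descriptors (placeholder)
y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=300)  # log BCF (placeholder)

global_model = LinearRegression().fit(X, y)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

for k in range(4):
    mask = labels == k
    local_model = LinearRegression().fit(X[mask], y[mask])
    print(f"cluster {k}: local R^2 = {local_model.score(X[mask], y[mask]):.3f}, "
          f"global R^2 on same subset = {global_model.score(X[mask], y[mask]):.3f}")
```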

  9. Pitch control margin at high angle of attack - Quantitative requirements (flight test correlation with simulation predictions)

    NASA Technical Reports Server (NTRS)

    Lackey, J.; Hadfield, C.

    1992-01-01

    Recent mishaps and incidents on Class IV aircraft have shown a need for establishing quantitative longitudinal high-angle-of-attack (AOA) pitch control margin design guidelines for future aircraft. NASA Langley Research Center has conducted a series of simulation tests to define these design guidelines. Flight test results have confirmed the simulation studies in that pilot ratings of high-AOA nose-down recoveries were based on the short-term response interval in the form of pitch acceleration and rate.

  10. FDG-PET measurement is more accurate than neuropsychological assessments to predict global cognitive deterioration in patients with mild cognitive impairment.

    PubMed

    Chételat, Gaël; Eustache, Francis; Viader, Fausto; De La Sayette, Vincent; Pélerin, Alice; Mézenge, Florence; Hannequin, Didier; Dupuy, Benoît; Baron, Jean-Claude; Desgranges, Béatrice

    2005-02-01

    The accurate prediction, at a pre-dementia stage of Alzheimer's disease (AD), of the subsequent clinical evolution of patients would be a major breakthrough from both therapeutic and research standpoints. Amnestic mild cognitive impairment (MCI) is presently the most common reference to address the pre-dementia stage of AD. However, previous longitudinal studies on patients with MCI assessing neuropsychological and PET markers of future conversion to AD are sparse and yield discrepant findings, while a comprehensive comparison of the relative accuracy of these two categories of measure is still lacking. In the present study, we assessed the global cognitive decline as measured by the Mattis scale in 18 patients with amnestic MCI over an 18-month follow-up period, studying which subtest of this scale showed significant deterioration over time. Using baseline measurements from neuropsychological evaluation of memory and PET, we then assessed significant markers of global cognitive change, that is, percent annual change in the Mattis scale total score, and searched for the best predictor of this global cognitive decline. Altogether, our results revealed significant decline over the 18-month follow-up period in the total score and the verbal initiation and memory-recall subscores of the Mattis scale. The percent annual change in the total Mattis score significantly correlated with age and baseline performances in delayed episodic memory recall as well as semantic autobiographical and category word fluencies. Regarding functional imaging, significant correlations were also found with baseline PET values in the right temporo-parietal and medial frontal areas. Age and right temporo-parietal PET values were the most significant predictors of subsequent global cognitive decline, and the only ones to survive stepwise regression analyses. Our findings are consistent with previous works showing predominant delayed recall and semantic memory impairment at a pre-dementia stage

  11. Genome-enabled prediction of quantitative traits in chickens using genomic annotation

    PubMed Central

    2014-01-01

    Background Genome-wide association studies have been deemed successful for identifying statistically associated genetic variants of large effects on complex traits. Past studies have found enrichment of trait-associated SNPs in functionally annotated regions, while depletion was reported for intergenic regions (IGR). However, no systematic examination of connections between genomic regions and predictive ability of complex phenotypes has been carried out. Results In this study, we partitioned SNPs based on their annotation to characterize genomic regions that deliver low and high predictive power for three broiler traits in chickens using a whole-genome approach. Additive genomic relationship kernels were constructed for each of the genic regions considered, and a kernel-based Bayesian ridge regression was employed as prediction machine. We found that the predictive performance for ultrasound area of breast meat from using genic regions marked by SNPs was consistently better than that from SNPs in IGR, while IGR tagged by SNPs were better than the genic regions for body weight and hen house egg production. We also noted that predictive ability delivered by the whole battery of markers was close to the best prediction achieved by one of the genomic regions. Conclusions Whole-genome regression methods use all available quality filtered SNPs into a model, contrary to accommodating only validated SNPs from exonic or coding regions. Our results suggest that, while differences among genomic regions in terms of predictive ability were observed, the whole-genome approach remains as a promising tool if interest is on prediction of complex traits. PMID:24502227

  12. Quantitative vapor-phase IR intensities and DFT computations to predict absolute IR spectra based on molecular structure: I. Alkanes

    NASA Astrophysics Data System (ADS)

    Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oates, R. P.; Brauer, Carolyn S.

    2013-11-01

    Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by density functional theory (DFT) computations of IR spectra using Becke's three-parameter functional (B3LYP/6-31+G(d,p)). Using the experimental data, a simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: for the C-H stretching band envelope centered near 2930 cm-1 this is given by CH_str (km/mol) = (34±1)×CH − (41±23), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.
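    As a worked example of the reported linear relation, n-octane (C8H18) has 18 C-H bonds, so the central values of the fit give roughly 34 × 18 − 41 ≈ 571 km/mol for the integrated C-H stretch envelope. A small helper illustrating this arithmetic (the choice of molecule is ours, not the paper's):

```python
# Worked example of the reported linear intensity model for the C-H stretch
# envelope; n-octane (18 C-H bonds) is used purely as an illustration.
def ch_stretch_intensity(n_ch_bonds, slope=34.0, intercept=-41.0):
    """Integrated C-H stretch band intensity in km/mol (central values of the fit)."""
    return slope * n_ch_bonds + intercept

print(ch_stretch_intensity(18))  # ~571 km/mol for n-octane
```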

  13. Quantitative Vapor-phase IR Intensities and DFT Computations to Predict Absolute IR Spectra based on Molecular Structure: I. Alkanes

    SciTech Connect

    Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oats, R. P.; Brauer, Carolyn S.

    2013-11-13

    Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by DFT computations of IR spectra at the B3LYP/6-31+G(d,p) level of theory. A simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: For the C-H stretching band near 2930 cm-1 this is given by (in km/mol): CH_str = (34±3)×CH − (41±60), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of terminal methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.

  14. Semiquantitative and Quantitative Dynamic Contrast-Enhanced Magnetic Resonance Imaging Measurements Predict Radiation Response in Cervix Cancer

    SciTech Connect

    Zahra, Mark A.; Tan, Li Tee; Priest, Andrew N.; Graves, Martin J.; Arends, Mark; Crawford, Robin A.F.; Brenton, James D.; Lomas, David J.; Sala, Evis

    2009-07-01

    Purpose: To evaluate semiquantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) measurements in predicting the response to radiotherapy in cervix cancer. Methods and Materials: Patients with cervix cancer treated radically with chemoradiotherapy had DCE-MRI at three time points: before starting treatment, after 2 weeks of radiotherapy, and in the 5th week of radiotherapy. Semiquantitative measurements obtained from the signal intensity vs. time plots included arrival time of contrast, the slope and maximum slope of contrast uptake, time for peak enhancement, and the contrast enhancement ratio (CER). Pharmacokinetic modeling with a modeled vascular input function was used for the quantitative measurements volume transfer constant (K^trans), rate constant (k_ep), fraction plasma volume (fPV), and the initial area under the gadolinium-time curve. The correlation of these measurements at each of the three time points with radiologic tumor response was investigated. Results: Thirteen patients had a total of 38 scans. There was no correlation between the DCE-MRI measurements and the corresponding tumor volumes. A statistically significant correlation with percentage tumor regression was shown with the pretreatment DCE-MRI semiquantitative parameters of peak time (p = 0.046), slope (p = 0.025), maximum slope (p = 0.046), and CER (p = 0.025) and the quantitative parameters K^trans (p = 0.043) and k_ep (p = 0.022). Second and third scan measurements did not show any correlation. Conclusions: This is the first study to show that pretreatment DCE-MRI quantitative parameters predict the radiation response in cervix cancer. These measurements may allow a more meaningful comparison of DCE-MRI studies from different centers.

  15. Quantitative Prediction of Beef Quality Using Visible and NIR Spectroscopy with Large Data Samples Under Industry Conditions

    NASA Astrophysics Data System (ADS)

    Qiao, T.; Ren, J.; Craigie, C.; Zabalza, J.; Maltin, Ch.; Marshall, S.

    2015-03-01

    It is well known that the eating quality of beef has a significant influence on the repurchase behavior of consumers. There are several key factors that affect the perception of quality, including color, tenderness, juiciness, and flavor. To support consumer repurchase choices, there is a need for an objective measurement of quality that could be applied to meat prior to its sale. Objective approaches such as those offered by spectral technologies may be useful, but the analytical algorithms used remain to be optimized. For visible and near infrared (VISNIR) spectroscopy, Partial Least Squares Regression (PLSR) is a widely used technique for meat-related quality modeling and prediction. In this paper, a Support Vector Machine (SVM) based machine learning approach is presented to predict beef eating quality traits. Although SVM has been successfully used in various disciplines, it has not been applied extensively to the analysis of meat quality parameters. To this end, the performance of PLSR and SVM as tools for the analysis of meat tenderness is evaluated, using a large dataset acquired under industrial conditions. The spectral dataset was collected using VISNIR spectroscopy with wavelengths ranging from 350 to 1800 nm on 234 beef M. longissimus thoracis steaks from heifers, steers, and young bulls. As the dimensionality of the VISNIR data is very high (over 1600 spectral bands), the Principal Component Analysis (PCA) technique was applied for feature extraction and data reduction. The extracted principal components (fewer than 100) were then used for data modeling and prediction. The prediction results showed that SVM has a greater potential to predict beef eating quality than PLSR, especially for the prediction of tenderness. The influence of animal gender on beef quality prediction was also investigated, and it was found that beef quality traits were predicted most accurately in beef from young bulls.
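    The pipeline described (PCA on ~1600-band VISNIR spectra, followed by SVM regression, with PLSR as the baseline) can be sketched as follows; the spectra and tenderness scores are synthetic placeholders and the hyperparameters are arbitrary choices, not those of the study.

```python
# Sketch (synthetic data) of the VISNIR analysis pipeline: PCA for dimensionality
# reduction, then SVM regression, compared with a PLSR baseline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(234, 1600))      # 234 steaks x 1600 spectral bands (placeholder)
y = rng.normal(size=234)              # tenderness scores (placeholder)

X_pca = PCA(n_components=50).fit_transform(X)                       # keep < 100 components
print("SVR  CV R^2:", cross_val_score(SVR(C=10.0), X_pca, y, cv=5).mean())
print("PLSR CV R^2:", cross_val_score(PLSRegression(n_components=10), X, y, cv=5).mean())
```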

  16. Building quantitative prediction models for tissue residue of two explosives compounds in earthworms from microarray gene expression data.

    PubMed

    Gong, Ping; Loh, Po-Ru; Barker, Natalie D; Tucker, George; Wang, Nan; Zhang, Chenhua; Escalon, B Lynn; Berger, Bonnie; Perkins, Edward J

    2012-01-03

    Soil contamination near munitions plants and testing grounds is a serious environmental concern that can result in the formation of tissue chemical residue in exposed animals. Quantitative prediction of tissue residue still represents a challenging task despite long-term interest and pursuit, as tissue residue formation is the result of many dynamic processes including uptake, transformation, and assimilation. The availability of high-dimensional microarray gene expression data presents a new opportunity for computational predictive modeling of tissue residue from changes in expression profile. Here we analyzed a 240-sample data set with measurements of transcriptome-wide gene expression and tissue residue of two chemicals, 2,4,6-trinitrotoluene (TNT) and 1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), in the earthworm Eisenia fetida. We applied two different computational approaches, LASSO (Least Absolute Shrinkage and Selection Operator) and RF (Random Forest), to identify predictor genes and build predictive models. Each approach was tested alone and in combination with a prior variable selection procedure that involved the Wilcoxon rank-sum test and HOPACH (Hierarchical Ordered Partitioning And Collapsing Hybrid). Model evaluation results suggest that LASSO was the best performer of minimum complexity on the TNT data set, whereas the combined Wilcoxon-HOPACH-RF approach achieved the highest prediction accuracy on the RDX data set. Our models separately identified two small sets of ca. 30 predictor genes for RDX and TNT. We have demonstrated that both LASSO and RF are powerful tools for quantitative prediction of tissue residue. They also leave more unknown than explained, however, allowing room for improvement with other computational methods and extension to mixture contamination scenarios.
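    A minimal sketch of the two modeling routes compared in the study, LASSO and Random Forest regression from expression features to tissue residue, is shown below; the expression matrix and residue values are random placeholders, and the optional Wilcoxon/HOPACH pre-selection step is omitted.

```python
# Hedged sketch on placeholder data: LASSO (with sparse gene selection) vs.
# Random Forest regression from gene expression to tissue chemical residue.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(240, 2000))                              # 240 samples x 2000 transcripts (placeholder)
y = X[:, :30] @ rng.normal(size=30) + rng.normal(scale=0.5, size=240)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=3).fit(X_tr, y_tr)

print("LASSO test R^2:", lasso.score(X_te, y_te),
      "| non-zero predictor genes:", int(np.sum(lasso.coef_ != 0)))
print("RF    test R^2:", rf.score(X_te, y_te))
```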

  17. Quantitative structure-property relationships for predicting Henry's law constant from molecular structure.

    PubMed

    Dearden, John C; Schüürmann, Gerrit

    2003-08-01

    Various models are available for the prediction of Henry's law constant (H) or the air-water partition coefficient (Kaw), its dimensionless counterpart. Incremental methods are based on structural features such as atom types, bond types, and local structural environments; other regression models employ physicochemical properties, structural descriptors such as connectivity indices, and descriptors reflecting the electronic structure. There are also methods to calculate H from the ratio of vapor pressure (p_v) and water solubility (S_w) that in turn can be estimated from molecular structure, and quantum chemical continuum-solvation models to predict H via the solvation free energy (ΔG_s). This review is confined to methods that calculate H from molecular structure without experimental information and covers more than 40 methods published in the last 26 years. For a subset of eight incremental methods and four continuum-solvation models, a comparative analysis of their prediction performance is made using a test set of 700 compounds that includes a significant number of more complex and drug-like chemical structures. The results reveal substantial differences in the application range as well as in the prediction capability, a general decrease in prediction performance with decreasing H, and surprisingly large individual prediction errors, which are particularly striking for some quantum chemical schemes. The overall best-performing method appears to be the bond contribution method as implemented in the HENRYWIN software package, yielding a predictive squared correlation coefficient (q^2) of 0.87 and a standard error of 1.03 log units for the test set.
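    The p_v/S_w route mentioned above amounts to a one-line calculation once both quantities are known or estimated; the helper below also converts H to its dimensionless air-water partition form Kaw = H/(RT). The input values are hypothetical.

```python
# Illustrative helper for the vapor-pressure / solubility route to Henry's law
# constant and the dimensionless air-water partition coefficient. Inputs are made up.
R = 8.314          # gas constant, J mol^-1 K^-1

def henry_constant(p_v_pa, s_w_mol_m3):
    """H in Pa m^3 mol^-1 from vapor pressure (Pa) and water solubility (mol/m^3)."""
    return p_v_pa / s_w_mol_m3

def k_aw(h_pa_m3_mol, temp_k=298.15):
    """Dimensionless air-water partition coefficient K_aw = H / (R T)."""
    return h_pa_m3_mol / (R * temp_k)

H = henry_constant(p_v_pa=1200.0, s_w_mol_m3=50.0)   # hypothetical compound
print(H, k_aw(H))
```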

  18. Quantitative Analysis of Defects in Silicon. [to predict energy conversion efficiency of silicon samples for solar cells

    NASA Technical Reports Server (NTRS)

    Natesh, R.; Smith, J. M.; Qidwai, H. A.; Bruce, T.

    1979-01-01

    The evaluation and prediction of the conversion efficiency for a variety of silicon samples with differences in structural defects, such as grain boundaries, twin boundaries, precipitate particles, dislocations, etc. are discussed. Quantitative characterization of these structural defects, which were revealed by etching the surface of silicon samples, is performed by using an image analyzer. Due to different crystal growth and fabrication techniques the various types of silicon contain a variety of trace impurity elements and structural defects. The two most important criteria in evaluating the various silicon types for solar cell applications are cost and conversion efficiency.

  19. IMPre: An Accurate and Efficient Software for Prediction of T- and B-Cell Receptor Germline Genes and Alleles from Rearranged Repertoire Data

    PubMed Central

    Zhang, Wei; Wang, I-Ming; Wang, Changxi; Lin, Liya; Chai, Xianghua; Wu, Jinghua; Bett, Andrew J.; Dhanasekaran, Govindarajan; Casimiro, Danilo R.; Liu, Xiao

    2016-01-01

    Large-scale study of the properties of T-cell receptor (TCR) and B-cell receptor (BCR) repertoires through next-generation sequencing is providing excellent insights into the understanding of adaptive immune responses. Variable(Diversity)Joining [V(D)J] germline genes and alleles must be characterized in detail to facilitate repertoire analyses. However, most species do not have well-characterized TCR/BCR germline genes because of their high homology. Also, more germline alleles are required for humans and other species, which limits the capacity for studying immune repertoires. Herein, we developed “Immune Germline Prediction” (IMPre), a tool for predicting germline V/J genes and alleles using deep-sequencing data derived from TCR/BCR repertoires. We developed a new algorithm, “Seed_Clust,” for clustering, produced a multiway tree for assembly and optimized the sequence according to the characteristics of rearrangement. We trained IMPre on human samples of T-cell receptor beta (TRB) and immunoglobulin heavy chain and then tested it on additional human samples. Accuracy of 97.7, 100, 92.9, and 100% was obtained for TRBV, TRBJ, IGHV, and IGHJ, respectively. Analyses of subsampling performance for these samples showed IMPre to be robust using different data quantities. Subsequently, IMPre was tested on samples from rhesus monkeys and human long sequences: the highly accurate results demonstrated IMPre to be stable with animal and multiple data types. With rapid accumulation of high-throughput sequence data for TCR and BCR repertoires, IMPre can be applied broadly for obtaining novel genes and a large number of novel alleles. IMPre is available at https://github.com/zhangwei2015/IMPre. PMID:27867380

  20. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  1. An assessment of computer model techniques to predict quantitative and qualitative measures of speech perception in university classrooms for varying room sizes and noise levels

    NASA Astrophysics Data System (ADS)

    Kim, Hyeong-Seok

    computer models can accurately predict both quantitative and qualitative acoustical measures of speech intelligibility. This means that computer models can be used as a design tool during the early stage of a project.

  2. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was used to separate the natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan, was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully applied to the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.

  3. Prediction of Quantitative Traits Using Common Genetic Variants: Application to Body Mass Index

    PubMed Central

    Bae, Sunghwan; Choi, Sungkyoung; Kim, Sung Min

    2016-01-01

    With the success of the genome-wide association studies (GWASs), many candidate loci for complex human diseases have been reported in the GWAS catalog. Recently, many disease prediction models based on penalized regression or statistical learning methods were proposed using candidate causal variants from significant single-nucleotide polymorphisms of GWASs. However, there have been only a few systematic studies comparing existing methods. In this study, we first constructed risk prediction models, such as stepwise linear regression (SLR), least absolute shrinkage and selection operator (LASSO), and Elastic-Net (EN), using a GWAS chip and GWAS catalog. We then compared the prediction accuracy by calculating the mean square error (MSE) value on data from the Korea Association Resource (KARE) with body mass index. Our results show that SLR provides a smaller MSE value than the other methods, while the numbers of selected variables in each model were similar. PMID:28154505

  4. Logistics for Working Together to Facilitate Genomic/Quantitative Genetic Prediction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The incorporation of DNA tests into the national cattle evaluation system will require estimation of variances of and covariances among the additive genetic components of the DNA tests and the phenotypic traits they are intended to predict. Populations with both DNA test results and phenotypes will ...

  5. Playing off the curve - testing quantitative predictions of skill acquisition theories in development of chess performance

    PubMed Central

    Gaschler, Robert; Progscha, Johanna; Smallbone, Kieran; Ram, Nilam; Bilalić, Merim

    2014-01-01

    Learning curves have been proposed as an adequate description of learning processes, no matter whether the processes manifest within minutes or across years. Different mechanisms underlying skill acquisition can lead to differences in the shape of learning curves. In the current study, we analyze the tournament performance data of 1383 chess players who begin competing at a young age and play tournaments for at least 10 years. We analyze the performance development with the goal of testing the adequacy of learning curves, and of the skill acquisition theories they are based on, for describing and predicting expertise acquisition. On the one hand, we show that the skill acquisition theories implying a negative exponential learning curve do a better job of both describing early performance gains and predicting later trajectories of chess performance than those theories implying a power-function learning curve. On the other hand, the learning curves of a large proportion of players show systematic qualitative deviations from the predictions of either type of skill acquisition theory. While skill acquisition theories predict larger performance gains in early years and smaller gains in later years, a substantial number of players begin to show substantial improvements only after a delay of several years (and no improvement in the first years), deviations not fully accounted for by quantity of practice. The current work adds to the debate on how learning processes on a small time scale combine into large-scale changes. PMID:25202292
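    The exponential-versus-power-function comparison at the heart of the study can be illustrated by fitting both curve families to a performance trajectory and comparing residuals; the sketch below does this for a synthetic rating series and is not the paper's actual analysis.

```python
# Sketch: fit a negative exponential and a power-function learning curve to a
# synthetic performance trajectory and compare residual sums of squares.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, asymptote, gain, rate):
    return asymptote - gain * np.exp(-rate * t)

def power_law(t, asymptote, gain, exponent):
    return asymptote - gain * (t + 1.0) ** (-exponent)

t = np.arange(0, 10, dtype=float)                      # years of tournament play
rating = 1400 + 600 * (1 - np.exp(-0.5 * t)) + np.random.default_rng(4).normal(0, 15, t.size)

for name, f in [("exponential", exponential), ("power", power_law)]:
    params, _ = curve_fit(f, t, rating, p0=(2000, 600, 0.5), maxfev=10000)
    rss = np.sum((rating - f(t, *params)) ** 2)
    print(f"{name:12s} residual sum of squares: {rss:.1f}")
```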

  6. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    SciTech Connect

    2015-06-15

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  7. Application of a quantitative structure retention relationship approach for the prediction of the two-dimensional gas chromatography retention times of polycyclic aromatic sulfur heterocycle compounds.

    PubMed

    Gieleciak, Rafal; Hager, Darcy; Heshka, Nicole E

    2016-03-11

    Information on the sulfur classes present in petroleum is a key factor in determining the value of refined products and processing behavior in the refinery. A large part of the sulfur present is included in polycyclic aromatic sulfur heterocycles (PASHs), which in turn are difficult to desulfurize. Furthermore, some PASHs are potentially more mutagenic and carcinogenic than polycyclic aromatic hydrocarbons, PAHs. All of this calls for improved methods for the identification and quantification of individual sulfur species. Recent advances in analytical techniques such as comprehensive two-dimensional gas chromatography (GC×GC) have enabled the identification of many individual sulfur species. However, full identification of individual components, particularly in virgin oil fractions, is still out of reach as standards for numerous compounds are unavailable. In this work, a method for accurately predicting retention times in GC×GC using a QSRR (quantitative structure retention relationship) method was very helpful for the identification of individual sulfur compounds. Retention times for 89 saturated, aromatic, and polyaromatic sulfur-containing heterocyclic compounds were determined using two-dimensional gas chromatography. These retention data were correlated with molecular descriptors generated with CODESSA software. Two independent QSRR relationships were derived for the primary as well as the secondary retention characteristics. The predictive ability of the relationships was tested by using both independent sets of compounds and a cross-validation technique. When the corresponding chemical standards are unavailable, the equations developed for predicting retention times can be used to identify unknown chromatographic peaks by matching their retention times with those of sulfur compounds of known molecular structure.

  8. The quantitative prediction of in vivo enzyme-induction caused by drug exposure from in vitro information on human hepatocytes.

    PubMed

    Kato, Motohiro; Chiba, Koji; Horikawa, Masato; Sugiyama, Yuichi

    2005-08-01

    There have been no reports of the quantitative prediction of induction of drug-metabolizing enzymes in humans. We have tried to predict such enzyme induction in humans from in vitro data obtained using human hepatocytes. The in vitro and in vivo data on enzyme induction by inducers, such as rifampicin, phenobarbital and omeprazole, were collected from the published literature. The degree of enzyme induction in humans was compared with that predicted from in vitro data on human hepatocytes. Using the in vivo data, we calculated the hepatic intrinsic clearance of typical CYP substrates, such as midazolam and caffeine, before and after inducer treatment and estimated the induction ratios of hepatic intrinsic clearance following treatment. In the in vitro studies, the amount of mRNA or enzyme and the enzyme activity in human hepatocytes, with or without an inducer, were compared and the induction ratios were estimated. The unbound mean concentration was taken as an index of drug exposure and the induction ratios in the in vivo and in vitro studies were compared. The unbound mean concentrations of inducers used in the in vitro studies were higher than those in the in vivo studies. The maximum induction ratios by inducers in the in vitro studies were higher than those in the in vivo studies. The induction ratio for rifampicin, omeprazole, troglitazone, dexamethasone and phenobarbital increased as the unbound mean concentration increased, reaching a constant value. The induction of CYP3A and 1A was analyzed by the Emax model. The maximum induction ratio (Emax) and the concentration at half-maximum induction (EC50) were, respectively: rifampicin, 12.3 and 0.847 micromol/L; omeprazole, 2.36 and 0.225 micromol/L; troglitazone, 6.86 and 0.002 micromol/L; dexamethasone, 8.30 and 9.32 micromol/L; and phenobarbital, 7.62 and 58.4 micromol/L. The Emax and EC50 of omeprazole for CYP1A were 12.02 and 0.075 micromol/L, respectively. The predicted induction ratio of all those inducers, except for
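    A hedged sketch of fitting an Emax-type induction model is shown below, using one common parameterization (induction ratio = 1 + Emax·C/(EC50 + C)); the exact functional form used in the paper may differ, and the concentration-response points are invented.

```python
# Sketch: fit an Emax-type induction model to hypothetical concentration-response data.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(conc, emax, ec50):
    # One common parameterization: fold induction relative to baseline (= 1 with no inducer)
    return 1.0 + emax * conc / (ec50 + conc)

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0])       # unbound concentration, umol/L (made up)
ratio = np.array([1.2, 2.4, 4.5, 7.9, 10.5, 11.8])      # fold induction (made up)

(emax, ec50), _ = curve_fit(emax_model, conc, ratio, p0=(10.0, 1.0))
print(f"Emax = {emax:.2f}, EC50 = {ec50:.3f} umol/L")
```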

  9. Databases applicable to quantitative hazard/risk assessment-Towards a predictive systems toxicology

    SciTech Connect

    Waters, Michael; Jackson, Marcus

    2008-11-15

    The Workshop on The Power of Aggregated Toxicity Data addressed the requirement for distributed databases to support quantitative hazard and risk assessment. The authors have conceived and constructed with federal support several databases that have been used in hazard identification and risk assessment. The first of these databases, the EPA Gene-Tox Database was developed for the EPA Office of Toxic Substances by the Oak Ridge National Laboratory, and is currently hosted by the National Library of Medicine. This public resource is based on the collaborative evaluation, by government, academia, and industry, of short-term tests for the detection of mutagens and presumptive carcinogens. The two-phased evaluation process resulted in more than 50 peer-reviewed publications on test system performance and a qualitative database on thousands of chemicals. Subsequently, the graphic and quantitative EPA/IARC Genetic Activity Profile (GAP) Database was developed in collaboration with the International Agency for Research on Cancer (IARC). A chemical database driven by consideration of the lowest effective dose, GAP has served IARC for many years in support of hazard classification of potential human carcinogens. The Toxicological Activity Profile (TAP) prototype database was patterned after GAP and utilized acute, subchronic, and chronic data from the Office of Air Quality Planning and Standards. TAP demonstrated the flexibility of the GAP format for air toxics, water pollutants and other environmental agents. The GAP format was also applied to developmental toxicants and was modified to represent quantitative results from the rodent carcinogen bioassay. More recently, the authors have constructed: 1) the NIEHS Genetic Alterations in Cancer (GAC) Database which quantifies specific mutations found in cancers induced by environmental agents, and 2) the NIEHS Chemical Effects in Biological Systems (CEBS) Knowledgebase that integrates genomic and other biological data including

  10. Retention prediction of low molecular weight anions in ion chromatography based on quantitative structure-retention relationships applied to the linear solvent strength model.

    PubMed

    Park, Soo Hyun; Haddad, Paul R; Talebi, Mohammad; Tyteca, Eva; Amos, Ruth I J; Szucs, Roman; Dolan, John W; Pohl, Christopher A

    2017-02-24

    Quantitative Structure-Retention Relationships (QSRRs) represent a popular technique to predict the retention times of analytes, based on molecular descriptors encoding the chemical structures of the analytes. The linear solvent strength (LSS) model relating the retention factor, k, to the eluent concentration (log k = a - b log[eluent]) is a well-known and accurate retention model in ion chromatography (IC). In this work, QSRRs for inorganic and small organic anions were used to predict the regression parameters a and b in the LSS model (and hence retention times) for these analytes under a wide range of eluent conditions, based solely on their chemical structures. This approach was applied to retention data of inorganic and small organic anions from the "Virtual Column" software (Thermo Fisher Scientific). These retention data were recalibrated via a "porting" methodology on three columns (AS20, AS19, and AS11HC) prior to the QSRR modeling. This provided retention data more applicable to recently produced columns, which may exhibit changes in column behavior due to batch-to-batch variability. Molecular descriptors for the analytes were calculated with Dragon software using the geometry-optimized molecular structures, employing the AM1 semi-empirical method. An optimal subset of molecular descriptors was then selected using an evolutionary algorithm (EA). Finally, the QSRR models were generated by multiple linear regression (MLR). As a result, six QSRR models with good predictive performance were successfully derived for the a- and b-values on the three columns (R² > 0.98 and RMSE < 0.11). External validation showed the possibility of using the developed QSRR models as predictive tools in IC (Q²ext(F3) > 0.7 and RMSEP < 0.4). Moreover, it was demonstrated that the obtained QSRR models for the a- and b-values can predict the retention times of new analytes with good accuracy and predictability (R² of 0.98, RMSE of 0.89 min, Q²ext(F3) of 0.96 and RMSEP of 1.18 min).
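
    To make the LSS relationship concrete, the short sketch below converts a pair of (a, b) values into retention times at several eluent concentrations; the coefficients, eluent concentrations and dead time are hypothetical placeholders rather than values from the paper (in practice a and b would come from the QSRR models).

```python
import math

def retention_time(a, b, eluent_conc_mM, t0_min=1.0):
    """Linear solvent strength model for IC:
        log10(k) = a - b * log10([eluent])
    combined with t_R = t0 * (1 + k).  In the QSRR workflow, a and b would be
    predicted from molecular descriptors; here they are supplied directly."""
    log_k = a - b * math.log10(eluent_conc_mM)
    k = 10.0 ** log_k
    return t0_min * (1.0 + k)

# Hypothetical QSRR-predicted coefficients for one anion on one column
a_pred, b_pred = 1.8, 1.1
for conc in (10.0, 20.0, 40.0):   # hypothetical eluent concentrations in mM
    print(f"[eluent] = {conc:5.1f} mM -> t_R = {retention_time(a_pred, b_pred, conc):.2f} min")
```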

  11. The value of assessing pulmonary venous flow velocity for predicting severity of mitral regurgitation: A quantitative assessment integrating left ventricular function

    NASA Technical Reports Server (NTRS)

    Pu, M.; Griffin, B. P.; Vandervoort, P. M.; Stewart, W. J.; Fan, X.; Cosgrove, D. M.; Thomas, J. D.

    1999-01-01

    Although alteration in pulmonary venous flow has been reported to relate to mitral regurgitant severity, it is also known to vary with left ventricular (LV) systolic and diastolic dysfunction. There are few data relating pulmonary venous flow to quantitative indexes of mitral regurgitation (MR). The object of this study was to assess quantitatively the accuracy of pulmonary venous flow for predicting MR severity by using transesophageal echocardiographic measurement in patients with variable LV dysfunction. This study consisted of 73 patients undergoing heart surgery with mild to severe MR. Regurgitant orifice area (ROA), regurgitant stroke volume (RSV), and regurgitant fraction (RF) were obtained by quantitative transesophageal echocardiography and proximal isovelocity surface area. Both left and right upper pulmonary venous flow velocities were recorded and their patterns classified by the ratio of systolic to diastolic velocity: normal (≥1), blunted (<1), and systolic reversal (<0). Twenty-three percent of patients had discordant patterns between the left and right veins. When the most abnormal patterns either in the left or right vein were used for analysis, the ratio of peak systolic to diastolic flow velocity was negatively correlated with ROA (r = -0.74, P <.001), RSV (r = -0.70, P <.001), and RF (r = -0.66, P <.001) calculated by the Doppler thermodilution method; values were r = -0.70, r = -0.67, and r = -0.57, respectively (all P <.001), for indexes calculated by the proximal isovelocity surface area method. The sensitivity, specificity, and predictive values of the reversed pulmonary venous flow pattern for detecting a large ROA (>0.3 cm²) were 69%, 98%, and 97%, respectively. The sensitivity, specificity, and predictive values of the normal pulmonary venous flow pattern for detecting a small ROA (<0.3 cm²) were 60%, 96%, and 94%, respectively. However, the blunted pattern had low sensitivity (22%), specificity (61%), and predictive values (30
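
    A tiny sketch of the pattern classification used above, applying the systolic-to-diastolic velocity cut-offs quoted in the abstract; the example velocities are made up for illustration.

```python
def pv_flow_pattern(peak_systolic, peak_diastolic):
    """Classify the pulmonary venous flow pattern from peak systolic (S) and
    diastolic (D) velocities using the abstract's cut-offs:
    normal (S/D >= 1), blunted (0 <= S/D < 1), systolic reversal (S/D < 0)."""
    ratio = peak_systolic / peak_diastolic
    if ratio >= 1:
        return "normal"
    return "blunted" if ratio >= 0 else "systolic reversal"

# Illustrative velocities (cm/s); a reversed systolic wave gives a negative S value
print(pv_flow_pattern(55, 45))    # normal
print(pv_flow_pattern(25, 50))    # blunted
print(pv_flow_pattern(-15, 50))   # systolic reversal
```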

  12. Quantitative prediction and interpretation of spin energy gaps in polyradicals: the virtual magnetic balance.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-29

    Open-shell organic molecules possessing more than two unpaired electrons and sufficient stability even at room temperature are very unusual, but a few have recently been synthesized that promise a number of fascinating applications. Unfortunately, reliable structural information is not available and only lower limits can be estimated for the energy splittings between the different spin states. On these grounds, we introduce here an effective 'virtual magnetic balance', a robust and user-friendly tool purposely tailored for polyradicals and devised to be used in parallel with experimental studies. The main objective of this tool is to provide reliable structures and quantitative splittings of the spin states of large, complex molecules. We achieved this objective with reasonable computation times and in a theoretical framework that allows disentanglement of the different stereo-electronic effects contributing to the overall experimental result. A recently synthesized tetraradical with remarkable chemical stability was used as a case study.

  13. Boiling points of halogenated aliphatic compounds: a quantitative structure-property relationship for prediction and validation.

    PubMed

    Oberg, Tomas

    2004-01-01

    Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.
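
    For readers unfamiliar with the multivariate calibration step, the sketch below shows the general shape of a PLSR-based property model in Python; the descriptor matrix and "boiling points" are synthetic placeholders, and the number of latent variables is an arbitrary choice, so nothing here reproduces the paper's actual model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data: 240 compounds x 30 empirical descriptors and synthetic
# "boiling points"; the real study derived descriptors from molecular-mechanics
# geometries and used PCA/PLSR diagnostics to screen outliers first.
X = rng.normal(size=(240, 30))
y = X[:, :5].sum(axis=1) * 20.0 + 60.0 + rng.normal(scale=5.0, size=240)

pls = PLSRegression(n_components=5)   # number of latent variables: arbitrary here
cv_r2 = cross_val_score(pls, X, y, cv=10, scoring="r2")
print(f"10-fold CV R^2: {cv_r2.mean():.2f}")

pls.fit(X, y)
print("predicted boiling point, first compound:", float(pls.predict(X[:1]).ravel()[0]))
```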

  14. Cortical thickness and medullary canal dimensions of the bone phalanx are predicted by quantitative ultrasound parameters.

    PubMed

    Guglielmi, Giuseppe; de Terlizzi, Francesca; Scalzo, Giacomo; Battista, Claudia; Scillitani, Alfredo

    2010-01-01

    The aim of the study was to investigate the relationship between quantitative ultrasound (QUS) parameters extracted from the analysis of the ultrasound (US) signal and the geometric properties of the bones. One hundred and one subjects in the age range of 20-74 yr (mean: 52 ± 12 yr) were measured by QUS at the phalanges for the evaluation of amplitude-dependent speed of sound (AD-SoS), bone transmission time (BTT), US peak amplitude (UPA), signal dynamic (SDY), slope, energy, and fast wave amplitude (FWA). Hand radiograph, lumbar spine dual-energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT), femoral neck DXA, and forearm peripheral QCT were performed on all patients. BTT is related to cortical thickness (CTh) (r=0.62, p<0.0001), and FWA is related to medullary canal thickness (r=-0.64, p<0.0001). Other parameters are related to both medullary canal thickness (AD-SoS: r=-0.21; UPA: r=-0.53; SDY: r=-0.56; slope: r=-0.64; energy: r=-0.44, p<0.05) and CTh (AD-SoS: r=0.54, p<0.0001; UPA: r=0.51; SDY: r=0.38; slope: r=0.32; energy: r=0.56, p<0.001). Linear multivariate models indicate that BTT, UPA, and energy measured at the phalanges carry independent information on the CTh of the bone, whereas FWA, SDY, and slope are related only to medullary canal thickness.

  15. Prediction of Radix Astragali Immunomodulatory Effect of CD80 Expression from Chromatograms by Quantitative Pattern-Activity Relationship

    PubMed Central

    Ng, Michelle Chun-har; Lau, Tsui-yan; Fan, Kei; Xu, Qing-song; Lam, Mary K.

    2017-01-01

    The current use of a single chemical component as the representative quality control marker of an herbal food supplement is inadequate. In this CD80-Quantitative-Pattern-Activity-Relationship (QPAR) study, we built a bioactivity prediction model applicable to complex mixtures. By integrating the chemical fingerprinting profiles of extracts of the immunomodulating herb Radix Astragali (RA) with the corresponding biological data on expression of the immunological marker CD80 on dendritic cells, a chemometric model using the Elastic Net Partial Least Squares (EN-PLS) algorithm was established. The EN-PLS algorithm improved the biological predictive capability, with a lower RMSEP (11.66) and a higher Rp² (0.55) compared with the standard PLS model. This CD80-QPAR platform provides a useful predictive model for the bioactivities of unknown RA extracts using chemical fingerprint inputs. Furthermore, this bioactivity prediction platform facilitates identification of key bioactivity-related chemical components within complex mixtures for future drug discovery, and supports understanding of batch-to-batch consistency for quality clinical trials. PMID:28337449

  16. Comparison of Risk Predicted by Multiple Norovirus Dose-Response Models and Implications for Quantitative Microbial Risk Assessment.

    PubMed

    Van Abel, Nicole; Schoen, Mary E; Kissel, John C; Meschke, J Scott

    2016-06-10

    The application of quantitative microbial risk assessments (QMRAs) to understand and mitigate risks associated with norovirus is increasingly common, as there is a high frequency of outbreaks worldwide. A key component of QMRA is the dose-response analysis, which is the mathematical characterization of the association between dose and outcome. For norovirus, multiple dose-response models are available that assume either a disaggregated or an aggregated intake dose. This work reviewed the dose-response models currently used in QMRA and compared predicted risks from waterborne exposures (recreational and drinking) using all available dose-response models. The review found that the majority of published QMRAs of norovirus use the 1F1 hypergeometric dose-response model with α = 0.04 and β = 0.055. This dose-response model predicted relatively high risk estimates compared with other dose-response models for doses in the range of 1-1,000 genomic equivalent copies. The difference in predicted risk among dose-response models was largest for small doses, which has implications for drinking water QMRAs, where the concentration of norovirus is low. Based on the review, a set of best practices was proposed to encourage the careful consideration and reporting of important assumptions in the selection and use of dose-response models in QMRA of norovirus. Finally, in the absence of a single best norovirus dose-response model, multiple models should be used to provide a range of predicted outcomes for the probability of infection.
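
    For orientation, the 1F1 hypergeometric (exact beta-Poisson) model referred to above is commonly written as P(infection) = 1 - 1F1(α, α + β, -dose). The sketch below evaluates that form with the parameter values quoted in the abstract; treat it as an illustration of the model's shape and confirm the exact formulation against the underlying dose-response literature before using it in a QMRA.

```python
from scipy.special import hyp1f1

def p_infection(dose_gec, alpha=0.04, beta=0.055):
    """Exact beta-Poisson ('1F1 hypergeometric') dose-response model:
        P(inf) = 1 - 1F1(alpha, alpha + beta, -dose)
    with the dose expressed in genomic equivalent copies (GEC)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -dose_gec)

for dose in (1, 10, 100, 1000):
    print(f"dose = {dose:5d} GEC -> P(infection) ~ {p_infection(dose):.2f}")
```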

  17. Predictive value of quantitative dipyridamole-thallium scintigraphy in assessing cardiovascular risk after vascular surgery in diabetes mellitus

    SciTech Connect

    Lane, S.E.; Lewis, S.M.; Pippin, J.J.; Kosinski, E.J.; Campbell, D.; Nesto, R.W.; Hill, T. )

    1989-12-01

    Cardiac complications represent a major risk to patients undergoing vascular surgery. Diabetic patients may be particularly prone to such complications due to the high incidence of concomitant coronary artery disease, the severity of which may be clinically unrecognized. Attempts to stratify groups by clinical criteria have been useful but lack the predictive value of currently used noninvasive techniques such as dipyridamole-thallium scintigraphy. One hundred one diabetic patients were evaluated with dipyridamole-thallium scintigraphy before undergoing vascular surgery. The incidence of thallium abnormalities was high (80%) and did not correlate with clinical markers of coronary disease. Even in a subgroup of patients with no overt clinical evidence of underlying heart disease, thallium abnormalities were present in 59%. Cardiovascular complications, however, occurred in only 11% of all patients. Statistically significant prediction of risk was not achieved with simple assessment of thallium results as normal or abnormal. Quantification of total number of reversible defects, as well as assessment of ischemia in the distribution of the left anterior descending coronary artery was required for optimum predictive accuracy. The prevalence of dipyridamole-thallium abnormalities in a diabetic population is much higher than that reported in nondiabetic patients and cannot be predicted by usual clinical indicators of heart disease. In addition, cardiovascular risk of vascular surgery can be optimally assessed by quantitative analysis of dipyridamole-thallium scintigraphy and identification of high- and low-risk subgroups.

  18. Interspecies quantitative structure-activity-activity relationships (QSAARs) for prediction of acute aquatic toxicity of aromatic amines and phenols.

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y

    2015-01-01

    We propose interspecies quantitative structure-activity-activity relationships (QSAARs), that is, QSARs in which the measured toxicity to one species serves as a descriptor, to estimate species-specific acute aquatic toxicity. Using training datasets consisting of more than 100 aromatic amines and phenols, we found that the descriptors that predicted acute toxicities to fish (Oryzias latipes) and algae were daphnia toxicity, molecular weight (an indicator of molecular size and uptake) and selected indicator variables that discriminated between the absence and presence of various substructures. Molecular weight and the selected indicator variables improved the goodness-of-fit of the fish and algae toxicity prediction models. External validation of the QSAARs showed that algae toxicity could be predicted within 1.0 log unit and revealed structural profiles of outlier chemicals with respect to fish toxicity. In addition, applicability domains based on leverage values provided structural alerts for the predicted fish toxicity of chemicals with more than one hydroxyl or amino group attached to an aromatic ring, but not for fluoroanilines, which were not included in the training dataset. Although these simple QSAARs have limitations, their applicability is defined so clearly that they may be practical for screening chemicals with molecular weights of ≤364.9.
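
    The QSAAR idea above is essentially a multiple linear regression with one species' toxicity as an extra descriptor. The sketch below shows that structure with a handful of made-up training rows and an invented query compound; none of the numbers, descriptor choices or coefficients come from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training rows: daphnia toxicity (log 1/EC50), molecular weight and two
# 0/1 substructure indicator variables; the response is fish (Oryzias latipes)
# toxicity (log 1/LC50).  All values are illustrative only.
X = np.array([
    [4.1,  93.1, 0, 1],
    [3.2, 108.1, 1, 0],
    [5.0, 163.0, 0, 0],
    [2.8, 123.1, 1, 1],
    [4.6, 137.1, 0, 1],
])
y = np.array([4.4, 3.0, 5.3, 2.5, 4.9])

qsaar = LinearRegression().fit(X, y)

new_chemical = np.array([[3.9, 129.5, 1, 0]])   # hypothetical query compound
print("predicted fish toxicity (log 1/LC50):",
      round(float(qsaar.predict(new_chemical)[0]), 2))
```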

  19. Quantitative prediction of intermittent high-frequency oscillations in neural networks with supralinear dendritic interactions

    PubMed Central

    Memmesheimer, Raoul-Martin

    2010-01-01

    The explanation of higher neural processes requires an understanding of the dynamics of complex, spiking neural networks. So far, modeling studies have focused on networks with linear or sublinear dendritic input summation. However, recent single-neuron experiments have demonstrated strongly supralinear dendritic enhancement of synchronous inputs. What are the implications of this amplification for networks of neurons? Here, I show numerically and analytically that such networks can generate intermittent, strong increases of activity with high-frequency oscillations; the models developed predict the shape of these events and the oscillation frequency. As an example, for the hippocampal region CA1, events with 200-Hz oscillations are predicted. I argue that these dynamics provide a plausible explanation for experimentally observed sharp-wave/ripple events. High-frequency oscillations can involve the replay of spike patterns. The models suggest that these patterns may reflect underlying network structures. PMID:20511534

  20. Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set.

    PubMed

    Li, Hui; Zhu, Yitan; Burnside, Elizabeth S; Huang, Erich; Drukker, Karen; Hoadley, Katherine A; Fan, Cheng; Conzen, Suzanne D; Zuley, Margarita; Net, Jose M; Sutton, Elizabeth; Whitman, Gary J; Morris, Elizabeth; Perou, Charles M; Ji, Yuan; Giger, Maryellen L

    2016-01-01

    Using quantitative radiomics, we demonstrate that computer-extracted magnetic resonance (MR) image-based tumor phenotypes can be predictive of the molecular classification of invasive breast cancers. Radiomics analysis was performed on 91 MRIs of biopsy-proven invasive breast cancers from National Cancer Institute's multi-institutional TCGA/TCIA. Immunohistochemistry molecular classification was performed including estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and for 84 cases, the molecular subtype (normal-like, luminal A, luminal B, HER2-enriched, and basal-like). Computerized quantitative image analysis included: three-dimensional lesion segmentation, phenotype extraction, and leave-one-case-out cross validation involving stepwise feature selection and linear discriminant analysis. The performance of the classifier model for molecular subtyping was evaluated using receiver operating characteristic analysis. The computer-extracted tumor phenotypes were able to distinguish between molecular prognostic indicators; area under the ROC curve values of 0.89, 0.69, 0.65, and 0.67 in the tasks of distinguishing between ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative versus others, respectively. Statistically significant associations between tumor phenotypes and receptor status were observed. More aggressive cancers are likely to be larger in size with more heterogeneity in their contrast enhancement. Even after controlling for tumor size, a statistically significant trend was observed within each size group (P = 0.04 for lesions ≤ 2 cm; P = 0.02 for lesions >2 to ≤5 cm) as with the entire data set (P-value = 0.006) for the relationship between enhancement texture (entropy) and molecular subtypes (normal-like, luminal A, luminal B, HER2-enriched, basal-like). In conclusion, computer-extracted image phenotypes show promise for high-throughput discrimination of breast cancer subtypes and may yield a
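
    As a rough illustration of the classification pipeline described above (stepwise feature selection, linear discriminant analysis, leave-one-case-out cross-validation, ROC analysis), here is a compact Python sketch; the feature matrix and labels are random placeholders rather than the TCGA/TCIA phenotypes, and sklearn's forward SequentialFeatureSelector stands in for the stepwise selection actually used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(91, 20))        # placeholder: 91 lesions x 20 image phenotypes
y = rng.integers(0, 2, size=91)      # placeholder: receptor status (0/1)

lda = LinearDiscriminantAnalysis()
model = make_pipeline(SequentialFeatureSelector(lda, n_features_to_select=3), lda)

# Leave-one-case-out cross-validation scored with the area under the ROC curve;
# with random placeholder data the AUC will hover around 0.5.
probs = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-case-out AUC:", round(roc_auc_score(y, probs), 2))
```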

  1. Quantitative Computed Tomography Features for Predicting Tumor Recurrence in Patients with Surgically Resected Adenocarcinoma of the Lung

    PubMed Central

    Shim, Woo Hyun; Xu, Hai; Choi, Chang-Min; Kim, Hyeong Ryul; Lee, Jung Bok

    2017-01-01

    Purpose The purpose of this study was to determine if preoperative quantitative computed tomography (CT) features including texture and histogram analysis measurements are associated with tumor recurrence in patients with surgically resected adenocarcinoma of the lung. Methods The study included 194 patients with surgically resected lung adenocarcinoma who underwent preoperative CT between January 2013 and December 2013. Quantitative CT feature analysis of the lung adenocarcinomas was performed using in-house software based on a plug-in package for ImageJ. Ten quantitative features describing tumor size, attenuation, shape, and texture were extracted. The CT parameters obtained from 1-mm and 5-mm data were compared using intraclass correlation coefficients. Univariate and multivariable logistic regression methods were used to investigate the association between tumor recurrence and preoperative CT findings. Results The 1-mm and 5-mm data were highly correlated in terms of diameter, perimeter, area, mean attenuation and entropy. Circularity and aspect ratio were moderately correlated. However, skewness and kurtosis were poorly correlated. Multivariable logistic regression analysis revealed that area (odds ratio [OR], 1.002 for each 1-mm² increase; P = 0.003) and mean attenuation (OR, 1.005 for each 1.0-Hounsfield unit increase; P = 0.022) were independently associated with recurrence. The receiver operating characteristic curve using these two independent predictive factors showed high diagnostic performance in predicting recurrence (C-index = 0.81). Conclusion Tumor area and mean attenuation are independently associated with recurrence in patients with surgically resected adenocarcinoma of the lung. PMID:28068363
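
    The odds ratios quoted above are simply exponentiated coefficients from the multivariable logistic regression. The sketch below fits such a model on a synthetic stand-in for the CT features (area and mean attenuation) and prints the per-unit odds ratios; the data, effect sizes and near-unpenalized settings are all assumptions for illustration, not a reconstruction of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Synthetic stand-ins for 194 lesions: tumor area (mm^2) and mean attenuation (HU),
# with a recurrence label generated from assumed (not study-derived) effect sizes.
area = rng.uniform(50, 2000, size=194)
atten = rng.uniform(-600, 100, size=194)
logit = -6.0 + 0.002 * area + 0.005 * (atten + 600)
recurrence = (rng.random(194) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([area, atten])
model = LogisticRegression(C=1e6, max_iter=5000).fit(X, recurrence)  # ~unpenalized

# Odds ratio per 1-unit increase of each predictor = exp(coefficient)
for name, coef in zip(["area (per mm^2)", "mean attenuation (per HU)"], model.coef_[0]):
    print(f"odds ratio for {name}: {np.exp(coef):.3f}")
```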

  2. Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set

    PubMed Central

    Li, Hui; Zhu, Yitan; Burnside, Elizabeth S; Huang, Erich; Drukker, Karen; Hoadley, Katherine A; Fan, Cheng; Conzen, Suzanne D; Zuley, Margarita; Net, Jose M; Sutton, Elizabeth; Whitman, Gary J; Morris, Elizabeth; Perou, Charles M; Ji, Yuan; Giger, Maryellen L

    2016-01-01

    Using quantitative radiomics, we demonstrate that computer-extracted magnetic resonance (MR) image-based tumor phenotypes can be predictive of the molecular classification of invasive breast cancers. Radiomics analysis was performed on 91 MRIs of biopsy-proven invasive breast cancers from National Cancer Institute’s multi-institutional TCGA/TCIA. Immunohistochemistry molecular classification was performed including estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and for 84 cases, the molecular subtype (normal-like, luminal A, luminal B, HER2-enriched, and basal-like). Computerized quantitative image analysis included: three-dimensional lesion segmentation, phenotype extraction, and leave-one-case-out cross validation involving stepwise feature selection and linear discriminant analysis. The performance of the classifier model for molecular subtyping was evaluated using receiver operating characteristic analysis. The computer-extracted tumor phenotypes were able to distinguish between molecular prognostic indicators; area under the ROC curve values of 0.89, 0.69, 0.65, and 0.67 in the tasks of distinguishing between ER+ versus ER−, PR+ versus PR−, HER2+ versus HER2−, and triple-negative versus others, respectively. Statistically significant associations between tumor phenotypes and receptor status were observed. More aggressive cancers are likely to be larger in size with more heterogeneity in their contrast enhancement. Even after controlling for tumor size, a statistically significant trend was observed within each size group (P = 0.04 for lesions ≤ 2 cm; P = 0.02 for lesions >2 to ≤5 cm) as with the entire data set (P-value = 0.006) for the relationship between enhancement texture (entropy) and molecular subtypes (normal-like, luminal A, luminal B, HER2-enriched, basal-like). In conclusion, computer-extracted image phenotypes show promise for high-throughput discrimination of breast cancer subtypes and may yield a

  3. A Bayesian mixed regression based prediction of quantitative traits from molecular marker and gene expression data.

    PubMed

    Bhattacharjee, Madhuchhanda; Sillanpää, Mikko J

    2011-01-01

    Both molecular marker and gene expression data were considered alone, as well as jointly, to serve as additive predictors for two pathogen-activity phenotypes in real recombinant inbred lines of soybean. For unobserved phenotype prediction, we used Bayesian hierarchical regression modeling, in which the number of possible predictors in the model was controlled by the different selection strategies tested. Our initial findings were submitted for DREAM5 (the 5th Dialogue on Reverse Engineering Assessment and Methods challenge) and were judged to be the best in sub-challenge B3, wherein both functional genomic and genetic data were used to predict the phenotypes. In this work we further improve upon that submission by considering various predictor selection strategies, with cross-validation used to measure the accuracy of in-data and out-data predictions. The results from the various model choices indicate that, for these data, simultaneous use of both data types (namely functional genomic and genetic) improves out-data prediction accuracy. Adequate goodness-of-fit can be easily achieved with more complex models for both phenotypes, since the number of potential predictors is large and the sample size is not small. We also studied gene-set enrichment (for the continuous phenotype) in the biological process in question and chromosomal enrichment of the gene set. The methodological contribution of this paper lies in the exploration of variable selection techniques to alleviate the problem of over-fitting. Different strategies based on the nature of the covariates were explored, and all methods were implemented under the Bayesian hierarchical modeling framework with indicator-based covariate selection. All models based on a careful variable selection procedure were found to produce significant results in a permutation test.

  4. Prediction of Genetic Values of Quantitative Traits in Plant Breeding Using Pedigree and Molecular Markers

    PubMed Central

    Crossa, José; Campos, Gustavo de los; Pérez, Paulino; Gianola, Daniel; Burgueño, Juan; Araus, José Luis; Makumbi, Dan; Singh, Ravi P.; Dreisigacker, Susanne; Yan, Jianbing; Arief, Vivi; Banziger, Marianne; Braun, Hans-Joachim

    2010-01-01

    The availability of dense molecular markers has made possible the use of genomic selection (GS) for plant breeding. However, the evaluation of models for GS in real plant populations is very limited. This article evaluates the performance of parametric and semiparametric models for GS using wheat (Triticum aestivum L.) and maize (Zea mays) data in which different traits were measured under several environmental conditions. The findings, based on extensive cross-validations, indicate that models including marker information had higher predictive ability than pedigree-based models. In the wheat data set, and relative to a pedigree model, gains in predictive ability due to inclusion of markers ranged from 7.7 to 35.7%. Correlations between observed and predicted values in the maize data set reached values up to 0.79. Estimates of marker effects differed across environmental conditions, indicating that genotype × environment interaction is an important component of genetic variability. These results indicate that GS in plant breeding can be an effective strategy for selecting among lines whose phenotypes have yet to be observed. PMID:20813882
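
    To give a feel for marker-based prediction, the sketch below fits a simple ridge-regression model (a shrinkage approach in the same spirit as the marker-based models discussed, though not one of the specific parametric or semiparametric models evaluated in the article) on synthetic genotypes and a synthetic trait, and reports cross-validated predictive ability.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Synthetic data: 300 lines x 1000 biallelic markers coded 0/1/2 and a trait
# simulated from small additive marker effects plus noise.
markers = rng.integers(0, 3, size=(300, 1000)).astype(float)
true_effects = rng.normal(scale=0.05, size=1000)
trait = markers @ true_effects + rng.normal(scale=1.0, size=300)

gs_model = RidgeCV(alphas=np.logspace(-1, 4, 20))   # ridge shrinkage across all markers
cv_r2 = cross_val_score(gs_model, markers, trait, cv=10, scoring="r2")
print(f"10-fold cross-validated predictive R^2: {cv_r2.mean():.2f}")
```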

  5. Quantitative CD3 PET Imaging Predicts Tumor Growth Response to Anti-CTLA-4 Therapy

    PubMed Central

    Larimer, Benjamin M.; Wehrenberg-Klee, Eric; Caraballo, Alexander

    2016-01-01

    Immune checkpoint inhibitors have made rapid advances, resulting in multiple Food and Drug Administration–approved therapeutics that have markedly improved survival. However, these benefits are limited to a minority subpopulation that achieves a response. Predicting which patients are most likely to benefit would be valuable for individual therapy optimization. T-cell markers such as CD3—by examining active recruitment of the T cells responsible for cancer-cell death—represent a more direct approach to monitoring tumor immune response than pretreatment biopsy or genetic screening. This approach could be especially effective as numerous different therapeutic strategies emerge, decreasing the need for drug-specific biomarkers and instead focusing on T-cell infiltration, which has been previously correlated with treatment response. Methods: A CD3 PET imaging agent targeting T cells was synthesized to test the role of such imaging as a predictive marker. The 89Zr-p-isothiocyanatobenzyl-deferoxamine-CD3 PET probe was assessed in a murine tumor xenograft model of anti–cytotoxic T-lymphocyte antigen-4 (CTLA-4) immunotherapy of colon cancer. Results: Imaging on day 14 revealed 2 distinct groups of mice stratified by PET signal intensity. Although there was no significant difference in tumor volume on the day of imaging, in the high-uptake group subsequent measurements revealed significantly smaller tumors than in either the low-uptake group or the untreated controls. In contrast, there was no significant difference in the size of tumors between the low-uptake and untreated control mice. Conclusion: These findings indicate that high CD3 PET uptake in the anti-CTLA-4–treated mice correlated with subsequent reduced tumor volume and was a predictive biomarker of response. PMID:27230929

  6. Quantitative comparison of automatic and manual IMRT optimization for prostate cancer: the benefits of DVH prediction.

    PubMed

    Yang, Yun; Li, Taoran; Yuan, Lunlin; Ge, Yaorong; Yin, Fang-Fang; Lee, W Robert; Wu, Q Jackie

    2015-03-08

    A recent publication indicated that the patient anatomical feature (PAF) model was capable of predicting optimal objectives based on past experience. In this study, the benefits of IMRT optimization using PAF-predicted objectives as guidance for prostate planning were evaluated. Three different optimization methods were compared. 1) Expert Plan: Ten prostate cases (16 plans) were planned by an expert planner using a conventional trial-and-error approach, starting with institutionally modified OAR and PTV constraints. Optimization was stopped at 150 iterations and that plan was saved as the Expert Plan. 2) Clinical Plan: The planner kept working on the Expert Plan until satisfied with the dosimetric quality, and the final plan was referred to as the Clinical Plan. 3) PAF Plan: A third set of plans for the same ten patients was generated fully automatically using predicted DVHs as guidance. The optimization was based on PAF-predicted objectives and was continued to 150 iterations without human interaction. DMAX and D98% for PTV, DMAX for femoral heads, and DMAX, D10cc, D25%/D17%, and D40% for bladder/rectum were compared. Clinical Plans were further optimized with more iterations and adjustments, but in general provided limited dosimetric benefits over Expert Plans. PTV D98% agreed within 2.31% among Expert, Clinical, and PAF plans. Between Clinical and PAF Plans, differences in DMAX of PTV, bladder, and rectum were within 2.65%, 2.46%, and 2.20%, respectively. Bladder D10cc was higher for PAF, but by less than 1.54% in general. Bladder D25% and D40% were lower for PAF, by up to 7.71% and 6.81%, respectively. Rectum D10cc, D17%, and D40% were 2.11%, 2.72%, and 0.27% lower for PAF, respectively. DMAX for femoral heads was comparable (< 35 Gy on average). Compared to the Clinical Plan (Primary + Boost), the optimization time for the PAF plan was reduced by 5.2 min on average, with a maximum reduction of 7.1 min. Total numbers of MUs per plan were lower for PAF Plans than for Clinical Plans

  7. Application of quantitative precipitation forecasting and precipitation ensemble prediction for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Tao, P.; Tie-Yuan, S.; Zhi-Yuan, Y.; Jun-Chao, W.

    2015-05-01

    The precipitation in the forecast period influences flood forecasting precision because it introduces uncertainty into the input of the hydrological model. Taking the ZhangHe basin as an example, this study adopts the precipitation forecast and ensemble precipitation forecast products of the AREM model, uses the Xin Anjiang hydrological model, and tests the resulting flood forecasts. The results show that flood forecasts are clearly improved when precipitation during the forecast period is taken into account. Hydrological forecasting based on ensemble precipitation prediction provides richer forecast information, better satisfies the need for risk information in flood prevention and disaster reduction, and has broad development prospects.

  8. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    PubMed

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    Of the more than 100 liver diseases described, many of those with high incidence rates manifest themselves through histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis and, in its later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogeneses are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of interactions as well as the involvement of processes at many different time and length scales constrains the possibility of condensing disease processes into illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis and with mathematical models opens up a promising new approach towards a quantitative understanding of pathologies and disease processes. This strategy is discussed for two examples: ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass and architecture during the subsequent regeneration process. This interdisciplinary approach permits integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations that help unravel the relation between architecture and function, as illustrated below for liver regeneration, and to bridge from the in vitro situation and animal models to humans. In the near future novel mechanisms will usually not be directly elucidated by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design.

  9. Skin biopsy and quantitative sensory testing do not predict response to lidocaine patch in painful neuropathies.

    PubMed

    Herrmann, David N; Pannoni, Valerie; Barbano, Richard L; Pennella-Vaughan, Janet; Dworkin, Robert H

    2006-01-01

    Predictors of response to neuropathic pain treatment in patients with painful distal sensory neuropathies are lacking. The 5% lidocaine patch is believed to exert its effects on neuropathic pain via a local stabilizing effect on cutaneous sensory afferents. As such, it provides a model to assess whether the status of epidermal innervation as determined by skin biopsy or quantitative sensory testing (QST) of small- and large-diameter sensory afferents might serve as predictors of response to topical, locally active treatment. In this study we assessed associations between epidermal nerve fiber (ENF) densities, sensory nerve conduction studies (NCS), QST, and response to a 5% lidocaine patch in patients with painful distal sensory neuropathies. We observed no association between distal leg epidermal and subepidermal innervation and response to the lidocaine patch. Several patients with complete loss of distal leg ENF showed a response to the lidocaine patch. Similarly we observed no consistent association between treatment response and QST for vibration, cooling, warm, heat-pain, and cold-pain thresholds, or distal sensory NCS. Thus, distal-leg skin biopsy, QST, and sensory NCS cannot be used to identify patients with painful polyneuropathy likely to respond to a lidocaine patch in clinical practice. Further studies are required to clarify precisely the mechanism and site of action of the lidocaine patch in patients with peripheral neuropathic pain.

  10. Quantitative research on critical thinking and predicting nursing students' NCLEX-RN performance.

    PubMed

    Romeo, Elizabeth M

    2010-07-01

    The concept of critical thinking has been influential in several disciplines. Both education and nursing in general have been attempting to define, teach, and measure this concept for decades. Nurse educators realize that critical thinking is the cornerstone of the objectives and goals for nursing students. The purpose of this article is to review and analyze quantitative research findings relevant to the measurement of critical thinking abilities and skills in undergraduate nursing students and the usefulness of critical thinking as a predictor of National Council Licensure Examination-Registered Nurse (NCLEX-RN) performance. The specific issues that this integrative review examined include assessment and analysis of the theoretical and operational defini