Science.gov

Sample records for accurate computational predictions

  1. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well-known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability of the approach to capture the thermal stress waves induced by boundary heating.

  2. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral LD(50) without using any computer codes.

    PubMed

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-10-30

    Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD(50) toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to the outputs of more complex methods. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic compounds whose computed quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model. PMID:22959133
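
    As a concrete illustration of the kind of desk calculation described, a minimal sketch follows, assuming a linear form in atom counts; the coefficients are hypothetical placeholders, not the paper's fitted values.

    ```python
    # Sketch of an elemental-composition LD50 correlation in the spirit of the
    # first model. The linear form and the coefficients a, b, c are assumptions
    # for illustration; the paper's fitted values are not reproduced here.
    def predict_log_ld50(n_carbon, n_hydrogen, a=2.0, b=0.05, c=-0.02):
        """Estimate log10(oral LD50, mg/kg) from atom counts (illustrative only)."""
        return a + b * n_carbon + c * n_hydrogen

    # Benzoic acid itself is C7H6O2: 7 carbons, 6 hydrogens.
    print(10 ** predict_log_ld50(7, 6))  # back-transform from log10 to mg/kg
    ```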

  3. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity, and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers, and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphs are not convenient for computer-based calculations. The equations developed here provide improved correlations between derived physical property estimates and published data. Expressions are given for estimating the physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA), and diglycolamine (DGA) solutions.
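
    To make correlation-based property estimation concrete, here is a minimal sketch, assuming a simple linear form in temperature and amine mass fraction; the functional form and coefficients are placeholders, not the published expressions for MDEA, MEA, or DGA.

    ```python
    # Illustrative correlation: density of an aqueous amine solution as a
    # function of temperature and amine mass fraction. Coefficients are
    # hypothetical; the paper's fitted expressions would replace them.
    def solution_density(T_K, w_amine, coeffs=(1180.0, -0.45, 60.0)):
        a, b, c = coeffs
        return a + b * T_K + c * w_amine  # kg/m^3

    print(solution_density(313.15, 0.30))  # 30 wt% amine at 40 degrees C
    ```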

  4. RepurposeVS: A Drug Repurposing-Focused Computational Method for Accurate Drug-Target Signature Predictions.

    PubMed

    Issa, Naiem T; Peters, Oakland J; Byers, Stephen W; Dakshanamurthy, Sivanesan

    2015-01-01

    We describe here RepurposeVS for the reliable prediction of drug-target signatures using X-ray protein crystal structures. RepurposeVS is a virtual screening method that incorporates docking, drug-centric and protein-centric 2D/3D fingerprints with a rigorous mathematical normalization procedure to account for the variability in units and provide high-resolution contextual information for drug-target binding. Validity was confirmed by the following: (1) providing the greatest enrichment of known drug binders for multiple protein targets in virtual screening experiments, (2) determining that similarly shaped protein target pockets are predicted to bind drugs of similar 3D shapes when RepurposeVS is applied to 2,335 human protein targets, and (3) determining true biological associations in vitro for mebendazole (MBZ) across many predicted kinase targets for potential cancer repurposing. Since RepurposeVS is a drug repurposing-focused method, benchmarking was conducted on a set of 3,671 FDA approved and experimental drugs rather than the Database of Useful Decoys (DUDE) so as to streamline downstream repurposing experiments. We further apply RepurposeVS to explore the overall potential drug repurposing space for currently approved drugs. RepurposeVS is not computationally intensive and increases performance accuracy, thus serving as an efficient and powerful in silico tool to predict drug-target associations in drug repurposing. PMID:26234515
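
    The normalization step described above can be illustrated with a minimal sketch: descriptors with different units (docking score, 2D similarity, 3D shape similarity) are z-scored column-wise before being combined. The equal weighting is an assumption, not RepurposeVS's actual scheme.

    ```python
    import numpy as np

    # Column-wise z-scoring puts mixed-unit descriptors on a common scale so
    # they can be combined into one drug-target score. Equal weights are an
    # illustrative assumption.
    def combined_score(features):
        z = (features - features.mean(axis=0)) / features.std(axis=0)
        return z.mean(axis=1)  # one normalized score per drug-target pair

    raw = np.array([[ -9.2, 0.61, 0.83],   # docking score, 2D sim, 3D sim
                    [ -7.5, 0.42, 0.77],
                    [-11.0, 0.70, 0.91]])
    raw[:, 0] *= -1  # flip docking score so that larger means better binding
    print(combined_score(raw))
    ```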

  5. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations.

    PubMed

    Harb, Moussab

    2015-10-14

    Using accurate first-principles quantum calculations based on DFT (including DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict the essential fundamental properties (such as the bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses, and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiency in the visible range, a high dielectric constant, high charge carrier mobility, and an exciton binding energy much lower than the thermal energy at room temperature. Moreover, their optical absorption, dielectric, and exciton dissociation properties were found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices, such as Si, CdTe, and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a good new candidate for photovoltaic applications. PMID:26351755

  6. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first recording, the second recording, and the mean of both. FMs were classified based on the Prechtl approach to general movement assessment. CP status was reported at 2 years. Nine children developed CP; all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than that from a single recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies. PMID:23343036

  7. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  8. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulations, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  9. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate nativelike structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown, and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807

  10. Can a scoring system based on computed tomography metric parameters accurately predict shock wave lithotripsy stone-free rates and aid in the development of treatment strategies?

    PubMed Central

    Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim

    2016-01-01

    Objective: To determine how well a combination of computed tomography-based metric parameters predicts the success of shock wave lithotripsy (SWL), in order to improve treatment planning. Patients and Methods: 180 consecutive patients with symptomatic upper urinary tract calculi of 20 mm or less who underwent extracorporeal SWL were enrolled and divided into two main groups according to stone size: Group A (92 patients with stones ≤10 mm) and Group B (88 patients with stones >10 mm). Both groups were evaluated according to skin-to-stone distance (SSD) and Hounsfield units (≤500, 500–1000, and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. About 92.3% of Group A were rendered stone-free, versus 77.2% in Group B (P = 0.001). Furthermore, in both groups SWL success rates were significantly higher for stones with attenuation <830 HU than for stones >830 HU (P < 0.034). SSD also made a statistically significant difference in SWL outcome (P < 0.02). Considering the three parameters simultaneously (stone size, stone attenuation value, and SSD), we found that the stone-free rate (SFR) was 100% for stones with attenuation <830 HU, whether <10 mm or >10 mm, although the total number of SWL sessions and shock waves required for the larger stone group was higher than for the smaller group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stones <10 mm with mean HU >830 at SSD <90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stones >10 mm with mean HU >830 at SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying a combined score helps to augment the predictive power of SWL, reduce cost, and improve treatment strategies. PMID:27141192
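
    A minimal sketch of a three-parameter triage score using the cutoffs reported above (10 mm stone size, 830 HU attenuation, 90/120 mm SSD) follows; the point scheme itself is a hypothetical composite, not a score validated by the authors.

    ```python
    # Combine the three CT-derived predictors into a simple 0-3 score.
    # Cutoffs come from the abstract; the scoring scheme is illustrative.
    def swl_score(size_mm, hu, ssd_mm):
        score = 0
        score += 1 if size_mm <= 10 else 0   # small stones respond better
        score += 1 if hu < 830 else 0        # low attenuation responds better
        score += 1 if ssd_mm < 90 else 0     # short skin-to-stone distance
        return score                         # 3 = most favorable for SWL

    print(swl_score(8, 600, 85))    # 3: high predicted stone-free likelihood
    print(swl_score(14, 950, 125))  # 0: consider alternative treatment
    ```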

  11. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
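
    The task-system half of the model can be sketched directly: in a series-parallel graph, serial sections add and parallel branches are bounded by their slowest member. This toy version assumes fixed task times and omits the queuing-network model of resource contention.

    ```python
    # Expected execution time of a series-parallel task graph (toy version).
    # Queueing delays at service centers are not modeled here.
    def exec_time(node):
        kind = node[0]
        if kind == "task":      # ("task", seconds)
            return node[1]
        if kind == "series":    # ("series", [children]) -- times add
            return sum(exec_time(c) for c in node[1])
        if kind == "parallel":  # ("parallel", [children]) -- slowest dominates
            return max(exec_time(c) for c in node[1])
        raise ValueError(kind)

    graph = ("series", [
        ("task", 1.0),
        ("parallel", [("task", 2.0), ("task", 3.5), ("task", 2.5)]),
        ("task", 0.5),
    ])
    print(exec_time(graph))  # 1.0 + 3.5 + 0.5 = 5.0 time units
    ```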

  12. Predicting accurate probabilities with a ranking loss

    PubMed Central

    Menon, Aditya Krishna; Jiang, Xiaoqian J; Vembu, Shankar; Elkan, Charles; Ohno-Machado, Lucila

    2013-01-01

    In many real-world applications of machine learning classifiers, it is essential to predict the probability of an example belonging to a particular class. This paper proposes a simple technique for predicting probabilities based on optimizing a ranking loss, followed by isotonic regression. This semi-parametric technique offers both good ranking and regression performance, and models a richer set of probability distributions than statistical workhorses such as logistic regression. We provide experimental results that show the effectiveness of this technique on real-world applications of probability prediction. PMID:25285328
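
    A minimal sketch of the two-stage recipe, using scikit-learn's isotonic regression on synthetic ranking scores (the ranking model itself is abstracted away):

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    # Stage 1 (abstracted): any model producing ranking scores.
    # Stage 2: isotonic regression maps scores to calibrated probabilities
    # while preserving the ranking.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=500)                 # stand-in ranking scores
    labels = (rng.random(500) < 1 / (1 + np.exp(-2 * scores))).astype(float)

    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    probs = iso.fit_transform(scores, labels)     # monotone map: score -> P(y=1)
    print(probs[np.argsort(scores)][[0, -1]])     # low score -> low probability
    ```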

  13. You Can Accurately Predict Land Acquisition Costs.

    ERIC Educational Resources Information Center

    Garrigan, Richard

    1967-01-01

    Land acquisition costs were tested for predictability based upon the 1962 assessed valuations of privately held land acquired for campus expansion by the University of Wisconsin from 1963-1965. By correlating the land acquisition costs of 108 properties acquired during the 3-year period with (1) the assessed value of the land, (2) the assessed…

  14. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES (Beta)

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  15. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
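
    The core idea, ordinal regression on simple fragment features, can be sketched as a linear score compared against ordered thresholds. The weights, thresholds, and two-feature set below are hypothetical placeholders, not Basophile's fitted parameters.

    ```python
    import numpy as np

    # Ordinal prediction: count how many ordered thresholds the score passes.
    WEIGHTS = np.array([0.08, 1.2])     # [fragment length, basic residue count]
    THRESHOLDS = np.array([1.0, 3.0])   # cut-points between charge 1|2 and 2|3

    def predict_fragment_charge(length, n_basic):
        score = WEIGHTS @ np.array([length, n_basic])
        return 1 + int(np.sum(score > THRESHOLDS))

    print(predict_fragment_charge(length=6, n_basic=0))   # short, no Arg/Lys/His
    print(predict_fragment_charge(length=18, n_basic=2))  # long, basic -> higher
    ```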

  16. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

    Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models are studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, compressibility correction, pressure dilatation, and low Reynolds number correction are considered. The influence of these corrections on flow properties is discussed by comparison with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.

  17. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
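
    For a single isolated site, the proton-binding equilibrium the method generalizes reduces to the Henderson-Hasselbalch relation, sketched below; the full method couples many interacting sites, which this toy version omits.

    ```python
    import numpy as np

    # Protonated fraction of one acidic site versus pH (single-site limit).
    def protonated_fraction(pH, pK):
        return 1.0 / (1.0 + 10.0 ** (pH - pK))

    pH = np.linspace(2, 12, 6)
    print(protonated_fraction(pH, pK=4.5))  # titration curve for a pK 4.5 site
    ```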

  18. Accurately Predicting Complex Reaction Kinetics from First Principles

    NASA Astrophysics Data System (ADS)

    Green, William

    Many important systems contain a multitude of reactive chemical species, some of which react on a timescale faster than collisional thermalization, i.e. they never achieve a Boltzmann energy distribution. Usually it is impossible to fully elucidate the processes by experiments alone. Here we report recent progress toward predicting the time-evolving composition of these systems a priori: how unexpected reactions can be discovered on the computer, how reaction rates are computed from first principles, and how the many individual reactions are efficiently combined into a predictive simulation for the whole system. Some experimental tests of the a priori predictions are also presented.

  19. PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations

    PubMed Central

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier PredictSNP, resulting in significantly improved prediction performance while at the same time returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
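
    A minimal sketch of consensus classification over per-tool calls follows (+1 deleterious, -1 neutral); PredictSNP weights the tools by their observed confidence, so the unweighted majority vote below is a simplification.

    ```python
    # Unweighted majority vote over the six combined tools.
    def consensus(calls):
        vote = sum(calls.values())
        return "deleterious" if vote > 0 else "neutral"

    calls = {"MAPP": +1, "PhD-SNP": +1, "PolyPhen-1": -1,
             "PolyPhen-2": +1, "SIFT": +1, "SNAP": -1}
    print(consensus(calls))  # 4 of 6 tools agree -> deleterious
    ```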

  20. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D1B, D2SD3, D4C, where Di, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H1,K)-SVD of S.

  1. Towards Accurate Ab Initio Predictions of the Spectrum of Methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  2. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    NASA Astrophysics Data System (ADS)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High-efficiency, high-concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site-specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time-sequence model that accurately accounts for these effects and predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.

  3. Passive samplers accurately predict PAH levels in resident crayfish.

    PubMed

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time- and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low-density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
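
    The final regression step lends itself to a short sketch: tissue concentrations are regressed on freely-dissolved water concentrations. The log-log form and the data values below are illustrative assumptions, not the study's fitted model.

    ```python
    import numpy as np

    # Fit log(tissue) against log(Cfree) and predict at a new water level.
    cfree = np.array([0.5, 1.2, 3.4, 8.0, 15.0])      # ng/L in water (synthetic)
    tissue = np.array([2.1, 4.0, 9.5, 18.0, 33.0])    # ng/g in viscera (synthetic)

    slope, intercept = np.polyfit(np.log10(cfree), np.log10(tissue), 1)
    predict = lambda c: 10 ** (intercept + slope * np.log10(c))
    print(predict(5.0))  # predicted tissue level at Cfree = 5 ng/L
    ```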

  4. A new generalized correlation for accurate vapor pressure prediction

    NASA Astrophysics Data System (ADS)

    An, Hui; Yang, Wenming

    2012-08-01

    Accurate knowledge of the vapor pressure of organic liquids is very important for oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed based on the Lee-Kesler method, in which a fuel-dependent parameter 'A' is introduced. The proposed method requires only the critical temperature, normal boiling temperature, and acentric factor of the fluid as input. With this method, vapor pressures have been calculated and compared with data compilations for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
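
    The Lee-Kesler base correlation the Letter builds on can be sketched directly; it needs only the critical constants and acentric factor. The fuel-dependent parameter 'A' introduced in the Letter is not reproduced here.

    ```python
    import math

    # Standard Lee-Kesler vapor-pressure correlation: ln(P/Pc) = f0 + omega*f1.
    def lee_kesler_pvap(T, Tc, Pc, omega):
        Tr = T / Tc
        f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr**6
        f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr**6
        return Pc * math.exp(f0 + omega * f1)

    # n-hexane near its normal boiling point: Tc = 507.6 K, Pc = 30.25 bar,
    # acentric factor 0.301; the result should be close to 1 bar.
    print(lee_kesler_pvap(341.9, 507.6, 30.25, 0.301))
    ```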

  5. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  6. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  7. Is Three-Dimensional Soft Tissue Prediction by Software Accurate?

    PubMed

    Nam, Ki-Uk; Hong, Jongrak

    2015-11-01

    The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients who had undergone orthognathic surgery were obtained and analyzed using the Simplant Pro software. The program produced a predicted soft tissue image (A) based on presurgical CT data. After the operation, actual postoperative CT data were obtained and an actual soft tissue image (B) was generated. Finally, the two images (A and B) were superimposed and the differences between them were analyzed. Results were grouped into two classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In the vector values, predictions in the right-left direction had a leftward tendency, the superior-inferior direction a superior tendency, and the anterior-posterior direction an anterior tendency. As a result, with this program, predicted point positions tended to lie more to the left, anterior, and superior than in the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in the anterior-posterior direction. PMID:26594988

  8. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of supermirror stacks, using accurate formulas derived from classical optics. Approximations are introduced into the computation, but at a later stage than in existing theories, providing a more rigorous treatment of the problem. The final result is a continuous-thickness stack whose properties can be determined at the outset of the design. We find that the well-known fourth-power dependence of the number of layers on the maximum angle is (of course) asymptotically correct. We find a formula directly relating desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the neutron world, and begs forgiveness for any shortcomings, erroneous assumptions, and/or misinterpretation of previous authors' work on the subject.

  9. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications: the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
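
    The core issue can be demonstrated in a few lines: compute the log-rank z-statistic for a highly unbalanced split, then compare its normal-approximation tail with a resampling estimate. ExaLT computes the exact distribution directly; the Monte Carlo permutation below is only a simple stand-in for it.

    ```python
    import numpy as np

    def logrank_z(time, event, group):
        """Standardized log-rank statistic for group vs. rest."""
        obs = exp = var = 0.0
        for t in np.unique(time[event]):
            at_risk = time >= t
            d = np.sum((time == t) & event)              # deaths at time t
            n, n1 = at_risk.sum(), (at_risk & group).sum()
            obs += np.sum((time == t) & event & group)
            exp += d * n1 / n
            if n > 1:
                var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return (obs - exp) / np.sqrt(var)

    rng = np.random.default_rng(1)
    time = rng.exponential(10.0, 200)
    event = rng.random(200) < 0.7
    group = np.zeros(200, dtype=bool)
    group[:5] = True                                     # tiny mutated population

    z = logrank_z(time, event, group)
    perm = [logrank_z(time, event, rng.permutation(group)) for _ in range(1000)]
    print("asymptotic |z|:", abs(z))
    print("permutation p:", np.mean(np.abs(perm) >= abs(z)))
    ```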

  10. Cas9-chromatin binding information enables more accurate CRISPR off-target prediction

    PubMed Central

    Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar

    2015-01-01

    The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remain significant goals. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that use DNA sequence information alone, CROP-IT integrates whole-genome biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding and cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets that enables improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770

  11. Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter.

    PubMed

    Samsudin, Firdaus; Parker, Joanne L; Sansom, Mark S P; Newstead, Simon; Fowler, Philip W

    2016-02-18

    Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887

  12. Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter

    PubMed Central

    Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.

    2016-01-01

    Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887

  13. Accurate charge capture and cost allocation: cost justification for bedside computing.

    PubMed Central

    Grewal, R.; Reed, R. L.

    1993-01-01

    This paper shows that cost justification for bedside clinical computing can be made by recouping charges through accurate charge capture. Twelve months' worth of professional charges for a sixteen-bed surgical intensive care unit were computed from charted data in a bedside clinical database and compared to the professional charges actually billed by the unit. A substantial difference between predicted and billed charges was found. This paper also discusses the concept of appropriate cost allocation in the inpatient environment and the feasibility of appropriate allocation as a by-product of bedside computing. PMID:8130444

  14. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
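
    The linear-response solve at the heart of a fluctuating-charge model fits in a few lines: minimize the quadratic charge energy subject to total-charge conservation via one Lagrange multiplier. The hardness matrix and electronegativities below are placeholder numbers; ACKS2 computes its parameters from the wavefunction rather than calibrating them.

    ```python
    import numpy as np

    # Minimize E(q) = chi.q + 0.5*q.H.q  subject to  sum(q) = 0.
    chi = np.array([5.0, 7.5, 7.5])        # per-atom electronegativities (eV)
    H = np.array([[10.0, 3.0, 3.0],        # hardness/interaction matrix (eV/e^2)
                  [ 3.0, 12.0, 2.0],
                  [ 3.0, 2.0, 12.0]])

    n = len(chi)
    A = np.block([[H, np.ones((n, 1))],    # stationarity rows + constraint row
                  [np.ones((1, n)), np.zeros((1, 1))]])
    b = np.concatenate([-chi, [0.0]])
    q = np.linalg.solve(A, b)[:n]
    print(q, q.sum())                      # partial charges; sum is zero
    ```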

  15. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  16. Accurate Prediction of Binding Thermodynamics for DNA on Surfaces

    PubMed Central

    Vainrub, Arnold; Pettitt, B. Montgomery

    2011-01-01

    For DNA mounted on surfaces for microarrays, microbeads and nanoparticles, the nature of the random attachment of oligonucleotide probes to an amorphous surface gives rise to a locally inhomogeneous probe density. These fluctuations of the probe surface density are inherent to all common surface or bead platforms, regardless if they exploit either an attachment of pre-synthesized probes or probes synthesized in situ on the surface. Here, we demonstrate for the first time the crucial role of the probe surface density fluctuations in performance of DNA arrays. We account for the density fluctuations with a disordered two-dimensional surface model and derive the corresponding array hybridization isotherm that includes a counter-ion screened electrostatic repulsion between the assayed DNA and probe array. The calculated melting curves are in excellent agreement with published experimental results for arrays with both pre-synthesized and in-situ synthesized oligonucleotide probes. The approach developed allows one to accurately predict the melting curves of DNA arrays using only the known sequence dependent hybridization enthalpy and entropy in solution and the experimental macroscopic surface density of probes. This opens the way to high precision theoretical design and optimization of probes and primers in widely used DNA array-based high-throughput technologies for gene expression, genotyping, next-generation sequencing, and surface polymerase extension. PMID:21972932
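
    The solution-thermodynamics input the model starts from can be sketched with a two-state melting curve computed from hybridization enthalpy and entropy; the surface electrostatic correction derived in the paper is omitted, and the duplex parameters below are placeholders.

    ```python
    import numpy as np

    R = 1.987e-3  # gas constant, kcal/(mol K)

    # Two-state duplex formation: K = exp(-(dH - T*dS)/RT), theta = Kc/(1+Kc).
    def hybridized_fraction(T, dH, dS, c_target=1e-9):
        K = np.exp(-(dH - T * dS) / (R * T))
        return K * c_target / (1.0 + K * c_target)

    T = np.linspace(300.0, 360.0, 7)
    print(hybridized_fraction(T, dH=-120.0, dS=-0.32))  # melts as T rises
    ```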

  17. Accurate indel prediction using paired-end short reads

    PubMed Central

    2013-01-01

    Background: One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results: Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project (http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion: In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375

  18. Computationally efficient prediction of area per lipid

    NASA Astrophysics Data System (ADS)

    Chaban, Vitaly

    2014-11-01

    Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL, and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The rate of the simulated dynamics increases exponentially with temperature, while the dependence of APL on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest. Thus, sampling times to predict accurate APL are reduced by a factor of ∼10.
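
    The extrapolation scheme described above amounts to a linear fit, sketched below with placeholder APL values standing in for simulation output.

    ```python
    import numpy as np

    # Fit the linear APL(T) trend at cheap-to-sample elevated temperatures,
    # then evaluate at the physiological temperature of interest.
    T_hot = np.array([360.0, 380.0, 400.0, 420.0])   # simulated temperatures (K)
    apl = np.array([0.655, 0.672, 0.690, 0.707])     # mean APL (nm^2), synthetic

    slope, intercept = np.polyfit(T_hot, apl, 1)
    print(intercept + slope * 310.0)                 # extrapolated APL at 310 K
    ```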

  19. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2(+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  20. Fast and accurate predictions of covalent bonds in chemical space

    NASA Astrophysics Data System (ADS)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2(+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
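
    The first-order estimate assessed in this work can be illustrated on a toy property curve: predict E at lambda = 1 from the value and slope at lambda = 0. The quadratic E(lambda) below is a stand-in for a bonding potential from electronic-structure calculations.

    ```python
    # First-order alchemical Taylor estimate on a toy E(lambda).
    E = lambda lam: 100.0 - 20.0 * lam + 1.5 * lam**2   # kcal/mol (toy)

    h = 1e-4
    dE0 = (E(h) - E(-h)) / (2 * h)       # finite-difference slope at lambda = 0
    first_order = E(0.0) + dE0 * 1.0     # Taylor prediction at lambda = 1
    print(first_order, E(1.0))           # 80.0 vs 81.5: error is the curvature
    ```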

  1. IRIS: Towards an Accurate and Fast Stage Weight Prediction Method

    NASA Astrophysics Data System (ADS)

    Taponier, V.; Balu, A.

    2002-01-01

    The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages; the need for it is increased by the quick evolution of space programs and by the necessity of their adaptation to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, consolidation can be acquired through a specific analysis activity involving several techniques, implying additional effort and time. The present empirical approach thus yields approximate values (i.e., not necessarily accurate or consistent), inducing some inaccuracy in results and, consequently, difficulties in ranking the performance of multiple options, as well as longer processing times. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator

  2. Computer-based personality judgments are more accurate than those made by humans

    PubMed Central

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  3. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  4. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  5. Accurate rotor loads prediction using the FLAP (Force and Loads Analysis Program) dynamics code

    SciTech Connect

    Wright, A.D.; Thresher, R.W.

    1987-10-01

    Accurately predicting wind turbine blade loads and response is very important in predicting the fatigue life of wind turbines. There is a clear need in the wind turbine community for validated and user-friendly structural dynamics codes for predicting blade loads and response. At the Solar Energy Research Institute (SERI), a Force and Loads Analysis Program (FLAP) has been refined and validated and is ready for general use. Currently, FLAP is operational on an IBM-PC compatible computer and can be used to analyze both rigid- and teetering-hub configurations. The results of this paper show that FLAP can be used to accurately predict the deterministic loads for rigid-hub rotors. This paper compares analytical predictions to field test measurements for a three-bladed, upwind turbine with a rigid-hub configuration. The deterministic loads predicted by FLAP are compared with 10-min azimuth averages of blade root flapwise bending moments for different wind speeds. 6 refs., 12 figs., 3 tabs.

  6. Accurate prediction of protein–protein interactions from sequence alignments using a Bayesian method

    PubMed Central

    Burger, Lukas; van Nimwegen, Erik

    2008-01-01

    Accurate and large-scale prediction of protein–protein interactions directly from amino-acid sequences is one of the great challenges in computational biology. Here we present a new Bayesian network method that predicts interaction partners using only multiple alignments of amino-acid sequences of interacting protein domains, without tunable parameters, and without the need for any training examples. We first apply the method to bacterial two-component systems and comprehensively reconstruct two-component signaling networks across all sequenced bacteria. Comparisons of our predictions with known interactions show that our method infers interaction partners genome-wide with high accuracy. To demonstrate the general applicability of our method we show that it also accurately predicts interaction partners in a recent dataset of polyketide synthases. Analysis of the predicted genome-wide two-component signaling networks shows that cognates (interacting kinase/regulator pairs, which lie adjacent on the genome) and orphans (which lie isolated) form two relatively independent components of the signaling network in each genome. In addition, while most genes are predicted to have only a small number of interaction partners, we find that 10% of orphans form a separate class of 'hub' nodes that distribute and integrate signals to and from up to tens of different interaction partners. PMID:18277381

  7. Accurate similarity index based on activity and connectivity of node for link prediction

    NASA Astrophysics Data System (ADS)

    Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun

    2015-05-01

    Recent years have witnessed an increase in available network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve better prediction results. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
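
    The abstract describes SAC only qualitatively, so the following is a loose, hypothetical sketch of the common-neighborhood family of indices it belongs to: node degree stands in for "activity" and unweighted adjacency for "connectivity", both of which are assumptions rather than the paper's definitions (Python, using networkx).

      import networkx as nx

      def sac_similarity(G, x, y):
          """Hypothetical SAC-style score for an unconnected pair (x, y)."""
          common = set(G[x]) & set(G[y])
          if not common:
              return 0.0
          # "activity": average degree of the two endpoints (assumption)
          activity = 0.5 * (G.degree(x) + G.degree(y))
          # "connectivity": down-weight common neighbors that are hubs (assumption)
          connectivity = sum(1.0 / G.degree(z) for z in common)
          return activity * connectivity

      G = nx.karate_club_graph()
      pairs = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
      pairs.sort(key=lambda p: sac_similarity(G, *p), reverse=True)
      print(pairs[:5])  # top-5 predicted missing links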

  8. Unsteady jet flow computation towards noise prediction

    NASA Technical Reports Server (NTRS)

    Soh, Woo-Yung

    1994-01-01

    An attempt has been made to combine a wave solution method and an unsteady flow computation to produce an integrated aeroacoustic code to predict far-field jet noise. An axisymmetric subsonic jet is considered for this purpose. A fourth-order space-accurate Padé compact scheme is used for the unsteady Navier-Stokes solution. A Kirchhoff surface integral for the wave equation is employed through the use of an imaginary surface which is a circular cylinder enclosing the jet at a distance. Information such as pressure and its time and normal derivatives is provided on the surface. The sound prediction is performed side by side with the jet flow computation. Retarded time is also taken into consideration since the cylinder body is not acoustically compact. The far-field sound pressure directivity and spectra show that low-frequency peaks shift toward the higher-frequency region as the observation angle increases from the jet flow axis.

  9. Computer loss experience and predictions

    NASA Astrophysics Data System (ADS)

    Parker, Donn B.

    1996-03-01

    The types of losses organizations must anticipate have become more difficult to predict because of the eclectic nature of computers and data communications and the decrease in news media reporting of computer-related losses as they become commonplace. Total business crime is conjectured to be decreasing in frequency and increasing in loss per case as a result of increasing computer use. Computer crimes are probably increasing, however, as their share of the decreasing business crime rate grows. Ultimately all business crime will involve computers in some way, and we could see a decline of both together. The important information security measures in high-loss business crime generally concern controls over authorized people engaged in unauthorized activities. Such controls include authentication of users, analysis of detailed audit records, unannounced audits, segregation of development and production systems and duties, shielding the viewing of screens, and security awareness and motivation controls in high-value transaction areas. Computer crimes that involve highly publicized intriguing computer misuse methods, such as privacy violations, radio frequency emanations eavesdropping, and computer viruses, have been reported in waves that periodically have saturated the news media during the past 20 years. We must be able to anticipate such highly publicized crimes and reduce the impact and embarrassment they cause. On the basis of our most recent experience, I propose nine new types of computer crime to be aware of: computer larceny (theft and burglary of small computers), automated hacking (use of computer programs to intrude), electronic data interchange fraud (business transaction fraud), Trojan bomb extortion and sabotage (code secretly inserted into others' systems that can be triggered to cause damage), LANarchy (unknown equipment in use), desktop forgery (computerized forgery and counterfeiting of documents), information anarchy (indiscriminate use of

  10. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions.

    PubMed

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-01-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229

  11. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    PubMed Central

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-01-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229

  12. Computational approaches for predicting mutant protein stability.

    PubMed

    Kulshreshtha, Shweta; Chaudhary, Vigi; Goswami, Girish K; Mathur, Nidhi

    2016-05-01

    Mutations in a protein affect not only its structure but also its function and stability. Accurate prediction of mutant protein stability is desired for uncovering the molecular aspects of diseases and for the design of novel proteins. Many advanced computational approaches have been developed over the years to predict the stability and function of a mutated protein. These approaches, based on structure features, sequence features, or combined features (both structure and sequence), provide reasonably accurate estimates of the impact of an amino acid substitution on the stability and function of a protein. Recently, consensus tools have been developed that incorporate many tools together and provide single-window results for comparison purposes. In this review, a useful guide is provided for the selection of tools that can be employed in predicting mutated proteins' stability and disease-causing capability. PMID:27160393

  13. A review of the kinetic detail required for accurate predictions of normal shock waves

    NASA Technical Reports Server (NTRS)

    Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.

    1991-01-01

    Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.

  14. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  15. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues.

    PubMed

    Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan

    2015-02-23

    We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point-source algorithm and a holographic-stereogram-based algorithm to reconstruct three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that our proposed algorithm can perform quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429

  16. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. The sampling is managed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that reduction values of the aperture differ from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
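
    As a minimal illustration of the spectral-analysis step, the snippet below runs an FFT over a synthetic roughness profile sampled at the 2.5 μm spacing quoted above; only the sampling interval is taken from the abstract, and the profile itself is fabricated.

      import numpy as np

      dx = 2.5e-6                      # sampling interval along a scan line (m)
      n = 4096
      x = np.arange(n) * dx
      rng = np.random.default_rng(0)
      # synthetic profile: one long-wavelength undulation plus fine-scale noise
      profile = 5e-6 * np.sin(2 * np.pi * x / 1e-3) + 1e-6 * rng.standard_normal(n)

      spectrum = np.fft.rfft(profile - profile.mean())
      freq = np.fft.rfftfreq(n, d=dx)  # spatial frequency (cycles/m)
      power = np.abs(spectrum) ** 2 / n

      # the dominant components sit at low spatial frequencies, as reported
      print(freq[np.argsort(power)[-3:]])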

  17. Highly Accurate Structure-Based Prediction of HIV-1 Coreceptor Usage Suggests Intermolecular Interactions Driving Tropism

    PubMed Central

    Kieslich, Chris A.; Tamamis, Phanourios; Guzman, Yannis A.; Onel, Melis; Floudas, Christodoulos A.

    2016-01-01

    HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produced a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/. PMID:26859389
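
    For readers unfamiliar with the setup, here is a toy version of such a classifier: interaction-energy-like features feeding a nonlinear (RBF-kernel) support vector machine, scored by cross-validated AUC. The features and labels below are synthetic stand-ins, not the paper's V3-loop energies.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      n, p = 400, 30                    # sequences x interaction-energy features
      X = rng.standard_normal((n, p))
      # synthetic label: pretend the first five features drive tropism
      y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
      print(f"median AUC over 10 folds: {np.median(auc):.3f}")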

  18. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions were not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. PMID:24524947

  19. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    PubMed Central

    Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias

    2016-01-01

    Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516

  20. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cues are crucial issues in the calculation of computer-generated holograms (CGHs) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as layer-corresponding sub-holograms based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
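
    The core operation of any angular-spectrum method is FFT-based free-space propagation with the exact, non-paraxial transfer function. A bare-bones sketch follows; the grid size, wavelength, pixel pitch and layer depth are arbitrary illustrative choices, not values from the paper.

      import numpy as np

      def angular_spectrum_propagate(field, wavelength, pitch, z):
          """Propagate a sampled complex field by distance z (non-paraxial)."""
          n = field.shape[0]
          f = np.fft.fftfreq(n, d=pitch)
          fx, fy = np.meshgrid(f, f)
          arg = 1.0 / wavelength**2 - fx**2 - fy**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
          h = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
          return np.fft.ifft2(np.fft.fft2(field) * h)

      layer = np.zeros((512, 512), dtype=complex)
      layer[240:272, 240:272] = 1.0             # square aperture on one depth layer
      sub_hologram = angular_spectrum_propagate(layer, 532e-9, 8e-6, 0.05)
      print(np.abs(sub_hologram).max())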

  1. How Accurately Can We Predict Eclipses for Algol? (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2016-06-01

    (Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
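
    The arithmetic behind published predictions is a linear ephemeris, T_n = T0 + nP, and the O-C residuals of observed minima measure how far reality has drifted from it. The epoch and period below are approximate literature values, so the output is illustrative only.

      import math

      T0 = 2445641.554   # HJD of a reference primary minimum (approximate)
      P = 2.8673         # orbital period in days (approximate)

      def next_minima(jd_start, count=5):
          n = math.ceil((jd_start - T0) / P)
          return [T0 + (n + k) * P for k in range(count)]

      for t in next_minima(2460000.0):
          print(f"predicted primary minimum near HJD {t:.4f}")

      # an observed minimum at T_obs then yields an O-C residual:
      #   O - C = T_obs - (T0 + round((T_obs - T0) / P) * P)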

  2. Accurate Structure Prediction and Conformational Analysis of Cyclic Peptides with Residue-Specific Force Fields.

    PubMed

    Geng, Hao; Jiang, Fan; Wu, Yun-Dong

    2016-05-19

    Cyclic peptides (CPs) are promising candidates for drugs, chemical biology tools, and self-assembling nanomaterials. However, the development of reliable and accurate computational methods for their structure prediction has been challenging. Here, 20 all-trans CPs of 5-12 residues selected from the Cambridge Structure Database have been simulated using replica-exchange molecular dynamics with four different force fields. Our recently developed residue-specific force fields RSFF1 and RSFF2 can correctly identify the crystal-like conformations of more than half of the CPs as the most populated conformation. RSFF2 performs best, consistently predicting the crystal structures of 17 out of 20 CPs with rmsd < 1.1 Å. We also compared the backbone (ϕ, ψ) sampling of residues in CPs with those in short linear peptides and in globular proteins. In general, unlike linear peptides, CPs have local conformational free energies and entropies quite similar to globular proteins. PMID:27128113

  3. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518

  4. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
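
    The final classifier described is a plain logistic regression over three clone-level features. The sketch below mimics that setup on synthetic data; the feature values and the labeling rule are fabrications for illustration, whereas in the study the inputs are MAF bias- and error-corrected Ig-seq measurements.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 200
      frequency = rng.lognormal(-6, 1, n)   # clonal frequency
      shm = rng.poisson(5, n)               # somatic hypermutation count
      idi = rng.poisson(3, n)               # intraclonal diversity index
      X = np.column_stack([np.log10(frequency), shm, idi])
      # synthetic label: expanded clones tend to have high frequency and IDI
      y = (np.log10(frequency) + 0.3 * idi + rng.normal(0, 1, n) > -5).astype(int)

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba(X[:3]))     # per-clone class probabilities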

  5. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264

  6. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    SciTech Connect

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  7. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGESBeta

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  8. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. PMID:9096089

  9. Accurate predictions for the production of vaporized water

    SciTech Connect

    Morin, E.; Montel, F.

    1995-12-31

    The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone that is formed is the key parameter controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of water between the hydrocarbon phases and the aqueous phase in the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.

  10. Can Contemporary Density Functional Theory Predict Energy Spans in Molecular Catalysis Accurately Enough To Be Applicable for in Silico Catalyst Design? A Computational/Experimental Case Study for the Ruthenium-Catalyzed Hydrogenation of Olefins.

    PubMed

    Rohmann, Kai; Hölscher, Markus; Leitner, Walter

    2016-01-13

    The catalytic hydrogenation of cyclohexene and 1-methylcyclohexene is investigated experimentally and by means of density functional theory (DFT) computations using novel ruthenium Xantphos(Ph) (4,5-bis(diphenylphosphino)-9,9-dimethylxanthene) and Xantphos(Cy) (4,5-bis(dicyclohexylphosphino)-9,9-dimethylxanthene) precatalysts [Ru(Xantphos(Ph))(PhCO2)(Cl)] (1) and [Ru(Xantphos(Cy))(PhCO2)(Cl)] (2), the synthesis, characterization, and crystal structures of which are reported. The intention of this work is to (i) understand the reaction mechanisms on the microscopic level and (ii) compare experimentally observed activation barriers with computed barriers. The Gibbs free activation energy ΔG(⧧) was obtained experimentally with precatalyst 1 from Eyring plots for the hydrogenation of cyclohexene (ΔG(⧧) = 17.2 ± 1.0 kcal/mol) and 1-methylcyclohexene (ΔG(⧧) = 18.8 ± 2.4 kcal/mol), while the Gibbs free activation energy ΔG(⧧) for the hydrogenation of cyclohexene with precatalyst 2 was determined to be 21.1 ± 2.3 kcal/mol. Plausible activation pathways and catalytic cycles were computed in the gas phase (M06-L/def2-SVP). A variety of popular density functionals (ωB97X-D, LC-ωPBE, CAM-B3LYP, B3LYP, B97-D3BJ, B3LYP-D3, BP86-D3, PBE0-D3, M06-L, MN12-L) were used to reoptimize the turnover determining states in the solvent phase (DF/def2-TZVP; IEF-PCM and/or SMD) to investigate how well the experimentally obtained activation barriers can be reproduced by the calculations. The density functionals B97-D3BJ, MN12-L, M06-L, B3LYP-D3, and CAM-B3LYP reproduce the experimentally observed activation barriers for both olefins very well, with very small (0.1 kcal/mol) to moderate (3.0 kcal/mol) mean deviations from the experimental values, indicating that, for the field of hydrogenation catalysis, most of these functionals are useful for in silico catalyst design prior to experimental work. PMID:26713773
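
    The experimental barriers quoted above come from Eyring analysis. For reference, a measured rate constant k at temperature T converts to a barrier via ΔG‡ = RT ln(kB·T/(h·k)); the rate constant in the example below is hypothetical.

      import math

      kB = 1.380649e-23    # Boltzmann constant, J/K
      h = 6.62607015e-34   # Planck constant, J*s
      R = 8.314462618      # gas constant, J/(mol*K)

      def dG_activation(k, T):
          """Activation free energy (kcal/mol) from the Eyring equation."""
          return R * T * math.log(kB * T / (h * k)) / 4184.0

      # a hypothetical turnover rate of 1 s^-1 at 298.15 K:
      print(f"{dG_activation(1.0, 298.15):.1f} kcal/mol")   # about 17.4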

  11. Change in BMI Accurately Predicted by Social Exposure to Acquaintances

    PubMed Central

    Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, along with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social exposure and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as close friends. PMID
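
    The modeling pipeline described, LASSO for variable selection followed by an ordinary linear model on the surviving predictors, looks roughly like the sketch below. The data are synthetic stand-ins for the 42-participant cohort, and the feature set is invented.

      import numpy as np
      from sklearn.linear_model import LassoCV, LinearRegression

      rng = np.random.default_rng(2)
      n, p = 42, 10                     # 42 participants, 10 candidate exposures
      X = rng.standard_normal((n, p))
      true_beta = np.zeros(p)
      true_beta[:3] = [0.8, -0.5, 0.4]  # only three exposures actually matter
      y = X @ true_beta + 0.3 * rng.standard_normal(n)   # change in BMI

      lasso = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(lasso.coef_ != 0)
      print("selected predictors:", selected)

      ols = LinearRegression().fit(X[:, selected], y)
      print("R^2 on selected variables:", ols.score(X[:, selected], y))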

  12. Assessment of computational prediction of tail buffeting

    NASA Technical Reports Server (NTRS)

    Edwards, John W.

    1990-01-01

    Assessments of the viability of computational methods and the computer resource requirements for the prediction of tail buffeting are made. Issues involved in the use of Euler and Navier-Stokes equations in modeling vortex-dominated and buffet flows are discussed and the requirement for sufficient grid density to allow accurate, converged calculations is stressed. Areas in need of basic fluid dynamics research are highlighted: vorticity convection, vortex breakdown, dynamic turbulence modeling for free shear layers, unsteady flow separation for moderately swept, rounded leading-edge wings, vortex flows about wings at high subsonic speeds. An estimate of the computer run time for a buffeting response calculation for a full span F-15 aircraft indicates that an improvement in computer and/or algorithm efficiency of three orders of magnitude is needed to enable routine use of such methods. Attention is also drawn to significant uncertainties in the estimates, in particular with regard to nonlinearities contained within the modeling and the question of the repeatability or randomness of buffeting response.

  13. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. The high order schemes have now been extended to generalized coordinates, including spherical grids, and also to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.

  14. An accurate and computationally efficient model for membrane-type circular-symmetric micro-hotplates.

    PubMed

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  15. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
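
    The mathematical kernel of such a model is that, in an annular membrane region with uniform heat exchange to the surroundings, the steady temperature excess obeys a modified Bessel equation, so the radial profile is a combination of I0(mr) and K0(mr). The sketch below solves one such region; the geometry, boundary temperatures and decay constant m are assumed values, not the paper's.

      import numpy as np
      from scipy.special import i0, k0

      m = 2.0e3                  # 1/m, roughly sqrt(h/(k*t)) for a thin membrane
      r1, r2 = 100e-6, 500e-6    # heater edge and membrane edge radii (m)
      T1, T2 = 800.0, 50.0       # temperature excess over ambient at r1, r2 (K)

      # solve A*I0(m*r) + B*K0(m*r) = T at the two boundaries
      M = np.array([[i0(m * r1), k0(m * r1)],
                    [i0(m * r2), k0(m * r2)]])
      A, B = np.linalg.solve(M, np.array([T1, T2]))

      r = np.linspace(r1, r2, 5)
      print(A * i0(m * r) + B * k0(m * r))   # radial temperature profile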

  16. NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.

    PubMed

    Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua

    2013-01-01

    A shape string is a structural sequence and an extremely important representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts show a strong correlation with the local protein structure, and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as the classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight-state accuracy) and 87.8% for S3 (the three-state accuracy). This is higher than using only chemical shifts or only sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and that their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp. PMID:24376713

  17. NMRDSP: An Accurate Prediction of Protein Shape Strings from NMR Chemical Shifts and Sequence Data

    PubMed Central

    Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua

    2013-01-01

    A shape string is a structural sequence and an extremely important representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts show a strong correlation with the local protein structure, and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as the classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight-state accuracy) and 87.8% for S3 (the three-state accuracy). This is higher than using only chemical shifts or only sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and that their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp. PMID:24376713

  18. Computing accurate age and distance factors in cosmology

    NASA Astrophysics Data System (ADS)

    Christiansen, Jodi L.; Siver, Andrew

    2012-05-01

    As the universe expands, astronomical observables such as brightness and angular size on the sky change in ways that differ from our simple Cartesian expectation. We show how observed quantities depend on the expansion of space and demonstrate how to calculate such quantities using the Friedmann equations. The general solution to the Friedmann equations requires a numerical solution, which is easily coded in any computing language (including Excel). We use these numerical calculations in four projects that help students build their understanding of high-redshift phenomena and cosmology. Instructions for these projects are available as supplementary materials.
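
    A minimal version of the suggested exercise, a flat Lambda-CDM universe with illustrative round-number parameters (an assumption, not the paper's choice), can be coded in a few lines; Python is used here in place of a spreadsheet.

      import numpy as np
      from scipy.integrate import quad

      H0 = 70.0                  # Hubble constant, km/s/Mpc
      Om, OL = 0.3, 0.7          # matter and dark-energy density parameters
      c = 299792.458             # speed of light, km/s

      def E(z):                  # dimensionless Hubble rate H(z)/H0
          return np.sqrt(Om * (1 + z)**3 + OL)

      def comoving_distance(z):  # Mpc
          return (c / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

      def age(z):                # Gyr, from t = integral of da / (a H)
          Mpc_km = 3.0857e19     # km per Mpc
          integral = quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), z, np.inf)[0]
          return integral / H0 * Mpc_km / 3.156e16   # seconds -> Gyr

      print(f"comoving distance to z=1: {comoving_distance(1.0):.0f} Mpc")
      print(f"age of the universe:      {age(0.0):.2f} Gyr")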

  19. Predictive models and computational toxicology.

    PubMed

    Knudsen, Thomas; Martin, Matthew; Chandler, Kelly; Kleinstreuer, Nicole; Judson, Richard; Sipes, Nisha

    2013-01-01

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was launched by EPA in 2007 and is part of the federal Tox21 consortium to develop a cost-effective approach for efficiently prioritizing the toxicity testing of thousands of chemicals and the application of this information to assessing human toxicology. ToxCast addresses this problem through an integrated workflow using high-throughput screening (HTS) of chemical libraries across more than 650 in vitro assays including biochemical assays, human cells and cell lines, and alternative models such as mouse embryonic stem cells and zebrafish embryo development. The initial phase of ToxCast profiled a library of 309 environmental chemicals, mostly pesticidal actives having rich in vivo data from guideline studies that include chronic/cancer bioassays in mice and rats, multigenerational reproductive studies in rats, and prenatal developmental toxicity endpoints in rats and rabbits. The first phase of ToxCast was used to build models that aim to determine how well in vivo animal effects can be predicted solely from the in vitro data. Phase I is now complete and both the in vitro data (ToxCast) and anchoring in vivo database (ToxRefDB) have been made available to the public (http://actor.epa.gov/). As Phase II of ToxCast is now underway, the purpose of this chapter is to review progress to date with ToxCast predictive modeling, using specific examples on developmental and reproductive effects in rats and rabbits with lessons learned during Phase I. PMID:23138916

  20. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is very accurate and computationally inexpensive. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. These capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  1. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
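
    A minimal sketch of the surrogate-modeling workflow described above: build a reduced basis from training data, fit the basis coefficients as smooth functions over parameter space, then evaluate cheaply at new parameters. The "waveforms" below are toy damped chirps in a mass-ratio-like parameter q, purely illustrative stand-ins for NR data.

      import numpy as np

      # Toy stand-ins for training waveforms: damped chirps parameterized by q.
      t = np.linspace(0.0, 1.0, 500)
      q_train = np.linspace(1.0, 10.0, 50)
      def waveform(q):
          return np.sin(2 * np.pi * (5.0 + q) * t**2) * np.exp(-2.0 * t)
      train = np.array([waveform(q) for q in q_train])

      # 1) Reduced basis from an SVD of the training set.
      U, s, Vt = np.linalg.svd(train, full_matrices=False)
      rank = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1.0 - 1e-10)) + 1
      basis = Vt[:rank]                      # orthonormal basis waveforms

      # 2) Basis coefficients of each training waveform, modeled here as a
      #    smooth function of q via simple interpolation.
      coeffs = train @ basis.T               # shape (n_train, rank)

      def surrogate(q):
          """Fast evaluation: interpolate the coefficients, sum the basis."""
          c = np.array([np.interp(q, q_train, coeffs[:, k]) for k in range(rank)])
          return c @ basis

      q_test = 4.3                           # an unseen parameter value
      rel_err = (np.linalg.norm(surrogate(q_test) - waveform(q_test))
                 / np.linalg.norm(waveform(q_test)))
      print(f"relative surrogate error at q={q_test}: {rel_err:.1e}")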

  2. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications 1,2,3. For example, to analyze exoplanets, atmospheric models have been developed, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data could be interpreted only by reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first one is based on empirically-fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach the spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions including resonance couplings in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical convergence.

  3. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a (widespread) misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computation techniques result in estimates which are much more accurate. We overview such advanced interval computation techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computation techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
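
    The misunderstanding the authors describe is easy to reproduce. The sketch below evaluates f(x) = x(1 - x) over the α-cut [0, 1]: straightforward operation-by-operation interval arithmetic returns [0, 1], far wider than the true range [0, 0.25], while even a simple refinement (subdividing the input interval, one of many possible improvements) tightens the estimate considerably. The tiny Interval class is a minimal illustration, not a library.

      # Minimal interval arithmetic, assuming f(x) = x*(1 - x) on the alpha-cut [0, 1].
      class Interval:
          def __init__(self, lo, hi): self.lo, self.hi = lo, hi
          def __rsub__(self, c): return Interval(c - self.hi, c - self.lo)  # c - I
          def __mul__(self, o):
              ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
              return Interval(min(ps), max(ps))
          def __repr__(self): return f"[{self.lo:.4f}, {self.hi:.4f}]"

      def f(x):                      # straightforward interval evaluation
          return x * (1 - x)

      print("naive:", f(Interval(0.0, 1.0)))   # -> [0, 1], far too wide

      # One standard refinement: subdivide the input interval, take the union.
      def refined(lo, hi, n=64):
          w = (hi - lo) / n
          pieces = [f(Interval(lo + k * w, lo + (k + 1) * w)) for k in range(n)]
          return Interval(min(p.lo for p in pieces), max(p.hi for p in pieces))

      print("subdivided:", refined(0.0, 1.0))  # close to the true range [0, 0.25]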

  4. Computational nanotoxicology: Predicting toxicity of nanoparticles

    NASA Astrophysics Data System (ADS)

    Burello, Enrico; Worth, Andrew

    2011-03-01

    A statistical model based on a quantitative structure-activity relationship accurately predicts the cytotoxicity of various metal oxide nanoparticles, thus offering a way to rapidly screen nanomaterials and prioritize testing.

  5. Computational materials science: Predictions of pinning

    NASA Astrophysics Data System (ADS)

    Paruch, Patrycja; Ghosez, Philippe

    2016-06-01

    A multiscale model has been implemented that provides accurate predictions of the behaviour of ferroelectric materials in electric fields, and might aid efforts to design devices such as sensors and digital memory. See Letter p.360

  6. A hierarchical approach to accurate predictions of macroscopic thermodynamic behavior from quantum mechanics and molecular simulations

    NASA Astrophysics Data System (ADS)

    Garrison, Stephen L.

    2005-07-01

    The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations with this initial potential. The initial simulation results also indicate that a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density

  7. Towards the computations of accurate spectroscopic parameters and vibrational spectra for organic compounds

    NASA Astrophysics Data System (ADS)

    Hochlaf, M.; Puzzarini, C.; Senent, M. L.

    2015-07-01

    We present multi-component computations for rotational constants, vibrational and torsional levels of medium-sized molecules. Through the treatment of two organic sulphur molecules, ethyl mercaptan and dimethyl sulphide, which are relevant for atmospheric and astrophysical media, we point out the outstanding capabilities of the explicitly correlated coupled-cluster (CCSD(T)-F12) method in conjunction with the cc-pVTZ-F12 basis set for the accurate prediction of such quantities. Indeed, we show that the CCSD(T)-F12/cc-pVTZ-F12 equilibrium rotational constants are in good agreement with those obtained by means of a composite scheme based on CCSD(T) calculations that accounts for the extrapolation to the complete basis set (CBS) limit and core-correlation effects [CCSD(T)/CBS+CV], thus leading to values of ground-state rotational constants rather close to the corresponding experimental data. For vibrational and torsional levels, our analysis reveals that the anharmonic frequencies derived from CCSD(T)-F12/cc-pVTZ-F12 harmonic frequencies and anharmonic corrections (Δν = ω - ν) at the CCSD/cc-pVTZ level closely agree with experimental results. The pattern of the torsional transitions and the shape of the potential energy surfaces along the torsional modes are also well reproduced using the CCSD(T)-F12/cc-pVTZ-F12 energies. Interestingly, this good accuracy is accompanied by a strong reduction of the computational costs. This makes the procedures proposed here schemes of choice for effective and accurate prediction of spectroscopic properties of organic compounds. Finally, popular density functional approaches are compared with the coupled cluster (CC) methodologies in torsional studies. The long-range CAM-B3LYP functional of Handy and co-workers is recommended for large systems.
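
    In the abstract's own notation, the hybrid scheme for anharmonic fundamentals can be restated compactly; the level labels are added here as superscripts for clarity and are not part of the original notation:

      % Anharmonic correction evaluated at the cheaper level, as described above:
      \Delta\nu^{\mathrm{CCSD}} = \omega^{\mathrm{CCSD}} - \nu^{\mathrm{CCSD}}
      % then applied to the high-level harmonic frequency:
      \nu \approx \omega^{\mathrm{CCSD(T)-F12}} - \Delta\nu^{\mathrm{CCSD}}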

  8. Predicting Computer System Failures Using Support Vector Machines

    SciTech Connect

    Fulp, Errin W.; Fink, Glenn A.; Haack, Jereme N.

    2008-12-07

    Mitigating the impact of computer failure is possible if accurate failure predictions are provided. Resources, applications, and services can be scheduled around predicted failures to limit the impact. Such strategies are especially important for multi-computer systems, such as compute clusters, that experience a higher rate of failure due to the large number of components. However, providing accurate predictions with sufficient lead time remains a challenging problem. This paper describes a new spectrum-kernel Support Vector Machine (SVM) approach to predict failure events based on system log files. These files contain messages that represent a change of system state. While a single message in the file may not be sufficient for predicting failure, a sequence or pattern of messages may be. The approach described in this paper uses a sliding window (sub-sequence) of messages to predict the likelihood of failure. The frequency representation of the observed message sub-sequences is then used as input to the SVM, which assigns the messages to a failed or non-failed system class. Experimental results using actual system log files from a Linux-based compute cluster indicate the proposed SVM approach can predict hard disk failure with an accuracy of 76% one day in advance.
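
    A minimal sketch of the pipeline just described, assuming log messages have already been mapped to a small alphabet of message-type IDs; the failure pattern, window size, and data are all synthetic stand-ins, not the paper's setup.

      from collections import Counter
      from itertools import product
      import numpy as np
      from sklearn.svm import LinearSVC

      ALPHABET = "ABCD"                      # message-type IDs (toy alphabet)
      K, WINDOW = 2, 8                       # k-mer length, sliding-window size
      KMERS = ["".join(p) for p in product(ALPHABET, repeat=K)]

      def spectrum(window):
          """k-mer frequency vector of a message window (the spectrum-kernel map)."""
          counts = Counter(window[i:i+K] for i in range(len(window) - K + 1))
          return np.array([counts[k] for k in KMERS], dtype=float)

      rng = np.random.default_rng(0)
      def sample(failing):
          # Failing windows are enriched in a hypothetical pattern "CD".
          base = rng.choice(list(ALPHABET), size=WINDOW)
          if failing:
              i = rng.integers(0, WINDOW - 1)
              base[i], base[i + 1] = "C", "D"
          return "".join(base)

      y = np.array([0] * 200 + [1] * 200)
      X = np.array([spectrum(sample(label)) for label in y])

      clf = LinearSVC(C=1.0).fit(X, y)       # linear SVM on spectrum features
      print("training accuracy:", clf.score(X, y))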

  9. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI network, and further, those approaches have inherent disadvantages, such as being time-consuming and expensive and having a high false positive rate. Therefore, it is imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel feature extraction method mixing physicochemical and evolutionary information for predicting PPIs, used with our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method lie mainly in an effective feature extraction step that captures discriminative features from evolutionary information and physicochemical characteristics, combined with a powerful and robust DVM classifier. To the best of our knowledge, it is the first time that a DVM model is applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061

  10. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
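
    A compact sketch of the linear-nonlinear structure described above, on synthetic data: a linear stage projects each stimulation pattern onto a low-dimensional subspace, and a fitted nonlinearity maps the projection to spiking probability. For simplicity the subspace here is the one-dimensional spike-triggered average rather than the paper's PCA-derived ERF; all data and parameters are illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Toy stand-in for RGC recordings: rows are stimulation amplitude
      # patterns across 20 electrodes; spiking depends on one linear filter.
      rng = np.random.default_rng(1)
      n_elec, n_trials = 20, 5000
      stim = rng.normal(size=(n_trials, n_elec))
      erf_true = np.zeros(n_elec); erf_true[3:6] = [0.8, 1.0, 0.6]  # hypothetical ERF
      p_spike = 1 / (1 + np.exp(-(stim @ erf_true - 1.0)))          # LN ground truth
      spikes = rng.random(n_trials) < p_spike

      # 1) Linear stage: estimate the subspace. The paper uses PCA; the
      #    spike-triggered average here is a simple one-dimensional analogue.
      sta = stim[spikes].mean(axis=0)
      sta /= np.linalg.norm(sta)
      z = stim @ sta                                                # projection

      # 2) Nonlinear stage: map the projection to spiking probability.
      lnl = LogisticRegression().fit(z.reshape(-1, 1), spikes)
      new_pattern = rng.normal(size=(1, n_elec))
      p = lnl.predict_proba(new_pattern @ sta.reshape(-1, 1))[0, 1]
      print(f"predicted spike probability for a new pattern: {p:.2f}")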

  11. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  12. Predictive Models and Computational Toxicology

    EPA Science Inventory

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...

  13. PREDICTING CHEMICAL REACTIVITY BY COMPUTER

    EPA Science Inventory

    Mathematical models for predicting the fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships have been developed that allow estimation of s...

  14. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  15. Computational flow predictions for hypersonic drag devices

    NASA Technical Reports Server (NTRS)

    Tokarcik, Susan A.; Venkatapathy, Ethiraj

    1993-01-01

    The effectiveness of two types of hypersonic decelerators is examined: mechanically deployable flares and inflatable ballutes. Computational fluid dynamics (CFD) is used to predict the flowfield around a solid rocket motor (SRM) with a deployed decelerator. The computations are performed with an ideal gas solver using an effective specific heat ratio of 1.15. The results from the ideal gas solver are compared to computational results from a thermochemical nonequilibrium solver. The surface pressure coefficient, the drag, and the extent of the compression corner separation zone predicted by the ideal gas solver compare well with those predicted by the nonequilibrium solver. The ideal gas solver is computationally inexpensive and is shown to be well suited for preliminary design studies. The computed solutions are used to determine the size and shape of the decelerator that are required to achieve a drag coefficient of 5. Heat transfer rates to the SRM and the decelerators are predicted to estimate the amount of thermal protection required.

  16. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  17. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermochemical database, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.
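
    The table-lookup idea is simple to sketch: tabulate the property once, then answer queries with a vectorized search plus a local interpolant whose derivative is also available. The property curve below is a made-up stand-in, and the cubic Hermite form is just one reasonable choice, not the paper's scheme.

      import numpy as np

      T_grid = np.linspace(300.0, 6000.0, 58)          # table abscissae, K
      h_grid = 1000.0 * np.log(T_grid) + 0.2 * T_grid  # stand-in property values
      dh_grid = 1000.0 / T_grid + 0.2                  # its exact derivative

      def lookup(T):
          """Vectorized bracket search + cubic Hermite evaluation of h(T)."""
          T = np.atleast_1d(T)
          i = np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2)
          dT = T_grid[i + 1] - T_grid[i]
          s = (T - T_grid[i]) / dT                     # local coordinate in [0, 1]
          h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s
          h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
          return (h00 * h_grid[i] + h10 * dT * dh_grid[i]
                  + h01 * h_grid[i + 1] + h11 * dT * dh_grid[i + 1])

      print(lookup([350.0, 2500.0]))                   # interpolated h values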

  18. Accurate computation and interpretation of spin-dependent properties in metalloproteins

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jorge

    2006-03-01

    Nature uses the properties of open-shell transition metal ions to carry out a variety of functions associated with vital life processes. Mononuclear and binuclear iron centers, in particular, are intriguing structural motifs present in many heme and non-heme proteins. Hemerythrin and methane monooxygenase, for example, are members of the latter class whose diiron active sites display magnetic ordering. We have developed a computational protocol based on spin density functional theory (SDFT) to accurately predict physico-chemical parameters of metal sites in proteins and bioinorganic complexes which traditionally had only been determined from experiment. We have used this new methodology to perform a comprehensive study of the electronic structure and magnetic properties of heme and non-heme iron proteins and related model compounds. We have been able to predict with a high degree of accuracy spectroscopic (Mössbauer, EPR, UV-vis, Raman) and magnetization parameters of iron proteins and, at the same time, gained unprecedented microscopic understanding of their physico-chemical properties. Our results have allowed us to establish important correlations between the electronic structure, geometry, spectroscopic data, and biochemical function of heme and non-heme iron proteins.

  19. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantized in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
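
    Richardson extrapolation, used above to isolate truncation error, is easy to demonstrate in a few lines: combining two resolutions of a p-th order approximation cancels the leading error term. The sketch below applies it to a second-order centered difference; it is illustrative only and unrelated to the paper's finite element code.

      import numpy as np

      def d_central(f, x, h):
          """Centered difference, second-order accurate: error ~ O(h^2)."""
          return (f(x + h) - f(x - h)) / (2 * h)

      f, x, h = np.sin, 1.0, 0.1
      d_h, d_h2 = d_central(f, x, h), d_central(f, x, h / 2)
      d_rich = (4 * d_h2 - d_h) / 3        # cancels the O(h^2) term -> O(h^4)

      exact = np.cos(x)
      print("h error:          ", abs(d_h - exact))
      print("h/2 error:        ", abs(d_h2 - exact))
      print("Richardson error: ", abs(d_rich - exact))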

  20. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  1. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  2. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  3. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  4. Reliability of computer-generated prediction tracing.

    PubMed

    Cangialosi, T J; Chung, J M; Elliott, D F; Meistrell, M E

    1995-01-01

    The reliability of a commercially available computer prediction program (Quick Ceph II) was evaluated using pretreatment and posttreatment cephalograms of 30 patients who were treated during an active period of growth. The computer prediction was compared with the actual treatment result, and the growth forecast with the computer program was compared with the growth forecast using a manual method. Using paired Student's t-tests, predictions for 5 of the 10 variables measured were found to be statistically reliable. Comparing the relative accuracy of growth prediction in terms of absolute values, the computer came closer to the actual result in four of the nine variables, while the manual method came closer in three variables. Predictions for the other two variables were virtually the same using both methods. The manual method of prediction was sufficient to give a reasonably good graphic representation of growth changes to create a VTO. However, the computer offers the added advantages of quicker access to information and somewhat greater accuracy in producing the tracing, as well as its use in patient education. PMID:7486242
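
    For readers unfamiliar with the evaluation, a paired test compares predicted and actual values patient by patient; the sketch below shows the computation on invented numbers for a single cephalometric variable (not data from the study).

      import numpy as np
      from scipy import stats

      # Hypothetical paired measurements for one variable, per patient:
      predicted = np.array([78.1, 80.4, 76.9, 82.2, 79.5, 81.0, 77.8, 80.2])
      actual    = np.array([78.9, 79.8, 77.5, 83.0, 79.1, 81.9, 78.4, 80.6])

      t, p = stats.ttest_rel(predicted, actual)
      # Here p > 0.05, i.e. no detectable systematic prediction bias.
      print(f"t = {t:.2f}, p = {p:.3f}")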

  5. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  6. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better
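
    Once individual labels have been registered to a common surface, building the probabilistic atlas itself is a one-line aggregation; a sketch with synthetic stand-in labels follows (the registration step, the hard part, is assumed already done, and all sizes are invented).

      import numpy as np

      rng = np.random.default_rng(2)
      n_subjects, n_vertices = 10, 20000
      region = slice(1000, 1500)                 # stand-in "true" V1 vertices

      # Per-subject binary V1 labels on the template mesh: mostly shared
      # inside the region, with occasional stray labels elsewhere.
      labels = rng.random((n_subjects, n_vertices)) < 0.001
      labels[:, region] = rng.random((n_subjects, 500)) < 0.9

      atlas = labels.mean(axis=0)                # P(vertex in V1) per vertex
      outside = np.delete(atlas, np.arange(1000, 1500))
      print("mean probability inside region: ", atlas[region].mean())
      print("mean probability outside region:", outside.mean())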

  7. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to balance conservative but robust predictive models against less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  8. Accurate First-Principles Spectra Predictions for Ethylene and its Isotopologues from Full 12D AB Initio Surfaces

    NASA Astrophysics Data System (ADS)

    Delahaye, Thibault; Rey, Michael; Tyuterev, Vladimir; Nikitin, Andrei V.; Szalay, Peter

    2015-06-01

    Hydrocarbons such as ethylene (C_2H_4) and methane (CH_4) are of considerable interest for the modeling of planetary atmospheres and other astrophysical applications. Knowledge of rovibrational transitions of hydrocarbons is of primary importance in many fields but remains a formidable challenge for theory and spectral analysis. Essentially two theoretical approaches for the computation and prediction of spectra exist. The first one is based on empirically-fitted effective spectroscopic models. Several databases aim at collecting the corresponding data, but the information about the C_2H_4 spectrum present in these databases remains limited, only some spectral ranges around 1000, 3000 and 6000 cm-1 being available. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. Although they do not yet reach spectroscopic accuracy, they could provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on two necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components and (ii) efficient computational methods to achieve a good numerical convergence. We report predictions of vibrational and rovibrational energy levels of C_2H_4 using our new ground state potential energy surface obtained from extended ab initio calculations. Additionally, we will introduce line position and line intensity predictions based on a new dipole moment surface for ethylene. These results will be compared with previous works on ethylene and its isotopologues.

  9. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

    BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.

  10. A Single Linear Prediction Filter that Accurately Predicts the AL Index

    NASA Astrophysics Data System (ADS)

    McPherron, R. L.; Chu, X.

    2015-12-01

    The AL index is a measure of the strength of the westward electrojet flowing along the auroral oval. It has two components: one from the global DP-2 current system and a second from the DP-1 current that is more localized near midnight. It is generally believed that the index is a very poor measure of these currents because of its dependence on the distance of stations from the source of the two currents. In fact, over season and solar cycle, the coupling strength, defined as the steady-state ratio of the output AL to the input coupling function, varies by a factor of four. There are four factors that lead to this variation. First is the equinoctial effect, which modulates coupling strength with peaks (strongest coupling) at the equinoxes. Second is the saturation of the polar cap potential, which decreases coupling strength as the strength of the driver increases. Since saturation occurs more frequently at solar maximum, we obtain the result that maximum coupling strength occurs at equinox at solar minimum. A third factor is ionospheric conductivity, with stronger coupling at summer solstice as compared to winter. The fourth factor is the definition of a solar wind coupling function appropriate to a given index. We have developed an optimum coupling function depending on solar wind speed, density, transverse magnetic field, and IMF clock angle which is better than previous functions. Using this we have determined the seasonal variation of coupling strength and developed an inverse function that modulates the optimum coupling function so that all seasonal variation is removed. In a similar manner we have determined the dependence of coupling strength on solar wind driver strength. The inverse of this function is used to scale a linear prediction filter, thus eliminating the dependence on driver strength. Our result is a single linear filter that is adjusted in a nonlinear manner by driver strength and an optimum coupling function that is seasonally modulated. Together this
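
    The scheme described (a single fixed linear filter whose output is scaled nonlinearly by driver strength) can be sketched compactly; the impulse response, gain shape, and data below are all invented placeholders, not the authors' fitted filter.

      import numpy as np

      rng = np.random.default_rng(3)
      cf = np.abs(rng.normal(size=2000))        # stand-in coupling-function series
      h = np.exp(-np.arange(30) / 8.0)          # fixed impulse response, 30 lags
      h /= h.sum()

      def gain(strength, s0=2.0):
          """Saturation-like gain: coupling weakens as the driver strengthens."""
          return 1.0 / (1.0 + strength / s0)

      linear = np.convolve(cf, h)[: len(cf)]    # causal linear prediction
      al_pred = -gain(cf) * linear              # AL is negative by convention
      print(al_pred[:5])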

  11. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has become globally earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree bud break and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
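
    A minimal two-phase (chilling/forcing) model of the kind contrasted above can be sketched in a few lines; all thresholds, rate functions, and the synthetic temperature series are illustrative assumptions, not a published parameterization.

      import numpy as np

      def predict_budburst(tmean, c_crit=80.0, f_crit=150.0, t_base=5.0):
          """tmean: daily mean temperatures from 1 Sep (index 0). Returns the
          day index of predicted budburst, or None if the phases never complete."""
          chill, force, dormancy_broken = 0.0, 0.0, False
          for day, T in enumerate(tmean):
              if not dormancy_broken:
                  chill += 1.0 if 0.0 <= T <= 10.0 else 0.0  # chilling accumulation
                  dormancy_broken = chill >= c_crit          # endodormancy break
              else:
                  force += max(T - t_base, 0.0)              # growing degree days
                  if force >= f_crit:
                      return day                             # budburst
          return None

      # Synthetic season: warm autumn, cold winter, warming spring.
      days = np.arange(365)
      tmean = 10.0 - 12.0 * np.cos(2 * np.pi * (days - 120) / 365)
      print("predicted budburst day-of-season:", predict_budburst(tmean))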

  12. Accurate microRNA target prediction correlates with protein repression levels

    PubMed Central

    Maragkakis, Manolis; Alexiou, Panagiotis; Papadopoulos, Giorgio L; Reczko, Martin; Dalamagas, Theodore; Giannopoulos, George; Goumas, George; Koukis, Evangelos; Kourtis, Kornilios; Simossis, Victor A; Sethupathy, Praveen; Vergoulis, Thanasis; Koziris, Nectarios; Sellis, Timos; Tsanakas, Panagiotis; Hatzigeorgiou, Artemis G

    2009-01-01

    Background MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease. Results DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal-to-noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs, reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user-friendly web server. PMID:19765283

  13. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  14. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  15. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons.

    PubMed

    Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs. PMID:25669533

  16. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  17. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has become earlier by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707

  18. Bicluster Sampled Coherence Metric (BSCM) provides an accurate environmental context for phenotype predictions

    PubMed Central

    2015-01-01

Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition- and cluster-size-specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster-based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257
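
    The core idea of a sampled, condition-specific coherence test can be illustrated as follows; the published BSCM differs in detail, and the expression matrix, gene indices, and sample count here are hypothetical.

        import numpy as np

        def condition_coherence_pvalue(expr, bicluster_genes, condition,
                                       n_samples=1000, seed=None):
            """Empirical p-value that the bicluster's genes are unusually
            coherent (low variance) in one condition of a genes x conditions
            expression matrix, relative to same-size random gene sets."""
            rng = np.random.default_rng(seed)
            observed = np.var(expr[bicluster_genes, condition])
            k = len(bicluster_genes)
            null = np.array([
                np.var(expr[rng.choice(expr.shape[0], size=k, replace=False),
                            condition])
                for _ in range(n_samples)
            ])
            # fraction of random gene sets at least as coherent (low variance)
            return (np.sum(null <= observed) + 1) / (n_samples + 1)

    Conditions with small p-values are the ones in which bicluster membership is supported, which is the decision such a metric feeds back into the biclustering algorithm.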

  19. Highly accurate prediction of emotions surrounding the attacks of September 11, 2001 over 1-, 2-, and 7-year prediction intervals.

    PubMed

    Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N

    2016-06-01

In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. PMID:27100309

  20. Prediction of Accurate Thermochemistry of Medium and Large Sized Radicals Using Connectivity-Based Hierarchy (CBH).

    PubMed

    Sengupta, Arkajyoti; Raghavachari, Krishnan

    2014-10-14

Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to hydrocarbon radicals of modest size. In an alternative approach, a systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We calibrated our method using a test set of 30 medium-sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals with diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium-sized radicals for which the accuracy of available heats of formation is less well established. PMID:26588131
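
    The bookkeeping shared by all such error-canceling schemes, stated generically here rather than as the specific CBH rungs, is Hess's law: if the target radical is converted into reference species with well-established heats of formation,

        \Delta_f H(\mathrm{target}) = \sum_i \nu_i \, \Delta_f H(\mathrm{ref}_i) - \Delta H_{\mathrm{rxn}}^{\mathrm{calc}},

    where the reaction enthalpy \Delta H_{\mathrm{rxn}}^{\mathrm{calc}} is computed at a modest level of theory and benefits from systematic error cancellation because local bonding environments are preserved on both sides of the reaction.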

  1. conSSert: Consensus SVM Model for Accurate Prediction of Ordered Secondary Structure.

    PubMed

    Kieslich, Chris A; Smadbeck, James; Khoury, George A; Floudas, Christodoulos A

    2016-03-28

Accurate prediction of protein secondary structure remains a crucial step in most approaches to the protein-folding problem, yet the prediction of ordered secondary structure, specifically beta-strands, remains a challenge. We developed a consensus secondary structure prediction method, conSSert, which is based on support vector machines (SVM) and provides exceptional accuracy for the prediction of beta-strands, with QE accuracy of over 0.82 and a Q2-EH of 0.86. conSSert uses as input probabilities for the three types of secondary structure (helix, strand, and coil) that are predicted by four top-performing methods: PSSpred, PSIPRED, SPINE-X, and RAPTOR. conSSert was trained/tested using 4261 protein chains from PDBSelect25 and 8632 chains from PISCES. Further validation was performed using targets from CASP9, CASP10, and CASP11. Our data suggest that poor performance in strand prediction is likely a result of training bias and not solely due to the nonlocal nature of beta-sheet contacts. conSSert is freely available for noncommercial use as a web service: http://ares.tamu.edu/conSSert/. PMID:26928531
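
    A compressed sketch of this consensus setup, assuming the per-residue helix/strand/coil probabilities from the four upstream predictors have already been computed and stacked; the file names and SVM settings below are hypothetical, not the published configuration.

        import numpy as np
        from sklearn.svm import SVC

        # X: n_residues x 12 features (4 predictors x 3 class probabilities)
        # y: per-residue labels, e.g. 0 = helix, 1 = strand, 2 = coil
        X = np.load("stacked_probabilities.npy")  # hypothetical feature file
        y = np.load("dssp_labels.npy")            # hypothetical label file

        clf = SVC(kernel="rbf", probability=True)  # consensus SVM
        clf.fit(X, y)
        consensus = clf.predict(X)  # consensus per-residue assignments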

  2. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm demonstrates the reconstruction capability of each moment family, whereas appropriate classification scenarios evaluate the performance of the color moment invariants. PMID:24216719

  3. Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction

    NASA Astrophysics Data System (ADS)

    Junkin, Gary

    2013-04-01

    With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.
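
    The amplitude-constrained core of a plane-to-plane iteration can be sketched as below (the record's full algorithm also blends in HIO and SOR steps, omitted here); the angular-spectrum propagator and all grid parameters are generic assumptions, not the paper's implementation.

        import numpy as np

        def angular_spectrum(field, dz, wavelength, dx):
            """Propagate a sampled complex field a distance dz."""
            nx, ny = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            fxg, fyg = np.meshgrid(fx, fy, indexing="ij")
            arg = (1.0 / wavelength**2 - fxg**2 - fyg**2).astype(complex)
            h = np.exp(2j * np.pi * dz * np.sqrt(arg))  # transfer function
            return np.fft.ifft2(np.fft.fft2(field) * h)

        def ptp_retrieve(amp1, amp2, dz, wavelength, dx, n_iter=200):
            """Recover phase from intensity-only data on two planes by
            repeatedly propagating and re-imposing measured amplitudes."""
            field = amp1.astype(complex)  # flat initial phase
            for _ in range(n_iter):
                fwd = angular_spectrum(field, dz, wavelength, dx)
                fwd = amp2 * np.exp(1j * np.angle(fwd))     # plane-2 amplitude
                back = angular_spectrum(fwd, -dz, wavelength, dx)
                field = amp1 * np.exp(1j * np.angle(back))  # plane-1 amplitude
            return field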

  4. Computational prediction of microRNA genes.

    PubMed

    Hertel, Jana; Langenberger, David; Stadler, Peter F

    2014-01-01

The computational identification of novel microRNA (miRNA) genes is a challenging task in bioinformatics. Massive amounts of data describing unknown functional RNA transcripts have to be analyzed for putative miRNA candidates with automated computational pipelines. Beyond those miRNAs that meet the classical definition, high-throughput sequencing techniques have revealed additional miRNA-like molecules that are derived by alternative biogenesis pathways. Exhaustive bioinformatics analyses on such data involve statistical issues as well as precise sequence and structure inspection, not only of the functional mature part but also of the whole precursor sequence of the putative miRNA. Apart from a considerable number of species-specific miRNAs, the majority of these genes are conserved at least among closely related organisms. Some miRNAs, however, can be traced back to very early points in the evolution of eukaryotic species. Thus, investigating the conservation of newly found miRNA candidates is an important step in the computational annotation of miRNAs. Topics covered in this chapter include a review of the problem of miRNA annotation and family definition, recommended pipelines for computational miRNA annotation or detection, and an overview of current computer tools for the prediction of miRNAs and their limitations. The chapter closes by discussing how these bioinformatic approaches address the problem of faithful miRNA prediction and correct annotation. PMID:24639171

  5. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
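
    The distance feature at the heart of such predictors can be written as a log-likelihood ratio; the class-conditional histograms in this sketch are hypothetical placeholders for the genome-estimated distributions the record describes.

        import numpy as np

        def operon_distance_llr(distance, same_operon_hist, boundary_hist, bins):
            """Log-likelihood ratio that an adjacent same-strand gene pair
            lies in one operon, from P(distance | class) histograms over
            shared bins (small or negative distances favor 'same operon')."""
            i = np.clip(np.digitize(distance, bins) - 1,
                        0, len(same_operon_hist) - 1)
            return (np.log(same_operon_hist[i] + 1e-9)
                    - np.log(boundary_hist[i] + 1e-9))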

  6. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES Beta

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
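
    The regression machinery behind such models is typically kernel ridge regression on a vectorized molecular representation; this generic sketch uses a Laplacian kernel on precomputed feature vectors, with placeholder hyperparameters rather than the published Bag of Bonds settings.

        import numpy as np

        def laplacian_kernel(A, B, sigma):
            """K[i, j] = exp(-|A_i - B_j|_1 / sigma) for row-vector features."""
            d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)
            return np.exp(-d1 / sigma)

        def fit_krr(X, y, sigma=100.0, lam=1e-8):
            """Fit kernel ridge regression; returns a predictor function."""
            K = laplacian_kernel(X, X, sigma)
            alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
            return lambda Q: laplacian_kernel(Q, X, sigma) @ alpha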

  8. Predicting microbial interactions through computational approaches.

    PubMed

    Li, Chenhao; Lim, Kun Ming Kenneth; Chng, Kern Rei; Nagarajan, Niranjan

    2016-06-01

    Microorganisms play a vital role in various ecosystems and characterizing interactions between them is an essential step towards understanding the organization and function of microbial communities. Computational prediction has recently become a widely used approach to investigate microbial interactions. We provide a thorough review of emerging computational methods organized by the type of data they employ. We highlight three major challenges in inferring interactions using metagenomic survey data and discuss the underlying assumptions and mathematics of interaction inference algorithms. In addition, we review interaction prediction methods relying on metabolic pathways, which are increasingly used to reveal mechanisms of interactions. Furthermore, we also emphasize the importance of mining the scientific literature for microbial interactions - a largely overlooked data source for experimentally validated interactions. PMID:27025964

  9. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and affected individuals to take appropriate preemptive measures. In the present report we collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute toward the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  10. Microstructure-Dependent Gas Adsorption: Accurate Predictions of Methane Uptake in Nanoporous Carbons

    SciTech Connect

    Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R

    2014-01-01

    We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
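
    The final bookkeeping step of such a framework is a pore-size-distribution-weighted integral of per-pore-width equilibrium densities; in this sketch the density function is a stand-in for the ab initio-derived one, and the units are illustrative.

        import numpy as np

        def total_uptake(widths, psd, density_of_width):
            """widths: pore widths; psd: pore volume per width bin (cm3/g);
            density_of_width: adsorbed-phase density (mmol/cm3) per width.
            Returns uptake in mmol per gram of adsorbent."""
            rho = np.array([density_of_width(w) for w in widths])
            return np.trapz(rho * psd, widths)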

  11. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance, improving reliability and decreasing machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL using Prognostics and Health Management (PHM) techniques. The proposed method follows a data-driven prognostic approach, exploring the combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network with the Weibull distribution (WD). The WD is used only in the training phase, to fit the measurements and avoid areas of fluctuation in the time domain. SFAM training is based on fitted measurements at the present and previous inspection time points as input, whereas SFAM testing is based on real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM performs well in learning nonlinear time series. As output, seven classes are defined: healthy bearing and six states of bearing degradation. To refine the RUL prediction, a smoothing phase is also proposed. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) from vibration signals, and the prediction approach can be applied to the prognostics of various other mechanical assets.

  12. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than about 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  13. Systematic computational prediction of protein interaction networks.

    PubMed

    Lees, J G; Heriche, J K; Morilla, I; Ranea, J A; Orengo, C A

    2011-06-01

    Determining the network of physical protein associations is an important first step in developing mechanistic evidence for elucidating biological pathways. Despite rapid advances in the field of high throughput experiments to determine protein interactions, the majority of associations remain unknown. Here we describe computational methods for significantly expanding protein association networks. We describe methods for integrating multiple independent sources of evidence to obtain higher quality predictions and we compare the major publicly available resources available for experimentalists to use. PMID:21572181

  14. Predicting aerially applied particle deposition by computer

    NASA Technical Reports Server (NTRS)

    Bilanin, A. J.; Teske, M. E.; Morris, D. J.

    1981-01-01

This paper is a status report on a NASA effort to develop a computer code capable of simulating the deposition of materials in the wake of fixed- or rotary-wing aircraft operating under realistic atmospheric conditions. The deposition code 'Ag Disp' is novel in that the mean particle trajectory, as well as the variance about the mean resulting from fluid fluctuations, are simultaneously predicted. Sample calculations are undertaken to demonstrate the versatility of this code.

  15. Is ""predictability"" in computational sciences a myth?

    SciTech Connect

    Hemez, Francois M

    2011-01-31

    Within the last two decades, Modeling and Simulation (M&S) has become the tool of choice to investigate the behavior of complex phenomena. Successes encountered in 'hard' sciences are prompting interest to apply a similar approach to Computational Social Sciences in support, for example, of national security applications faced by the Intelligence Community (IC). This manuscript attempts to contribute to the debate on the relevance of M&S to IC problems by offering an overview of what it takes to reach 'predictability' in computational sciences. Even though models developed in 'soft' and 'hard' sciences are different, useful analogies can be drawn. The starting point is to view numerical simulations as 'filters' capable to represent information only within specific length, time or energy bandwidths. This simplified view leads to the discussion of resolving versus modeling which motivates the need for sub-scale modeling. The role that modeling assumptions play in 'hiding' our lack-of-knowledge about sub-scale phenomena is explained which leads to discussing uncertainty in simulations. It is argued that the uncertainty caused by resolution and modeling assumptions should be dealt with differently than uncertainty due to randomness or variability. The corollary is that a predictive capability cannot be defined solely as accuracy, or ability of predictions to match the available physical observations. We propose that 'predictability' is the demonstration that predictions from a class of 'equivalent' models are as consistent as possible. Equivalency stems from defining models that share a minimum requirement of accuracy, while being equally robust to the sources of lack-of-knowledge in the problem. Examples in computational physics and engineering are given to illustrate the discussion.

  16. RNA secondary structure prediction using soft computing.

    PubMed

    Ray, Shubhra Sankar; Pal, Sankar K

    2013-01-01

    Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed for more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to get approximate solutions for RNA sequences by considering the issues related with kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques, developed for RNA secondary structure prediction, is presented along with their relevance. The basic concepts of RNA and its different structural elements like helix, bulge, hairpin loop, internal loop, and multiloop are described. These are followed by different methodologies, employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search is also discussed. A relative comparison among different techniques, in predicting 12 known RNA secondary structures, is presented, as an example. Future challenging issues are then mentioned. PMID:23702539

  17. Accurate computation of Stokes flow driven by an open immersed interface

    NASA Astrophysics Data System (ADS)

    Li, Yi; Layton, Anita T.

    2012-06-01

We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  18. Computational Aeroheating Predictions for Mars Lander Configurations

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Alter, Stephen J.

    2003-01-01

The proposed Mars Science Laboratory (MSL) mission is intended to deliver a large rover to the Martian surface within 10 km of the target site. This paper presents computational fluid dynamics (CFD) predictions of forebody heating rates for two MSL entry configurations with fixed aerodynamic trim tabs. Results are compared to heating on a 70-deg sphere-cone reference geometry. All three heatshield geometries are designed to trim hypersonically at a 16 deg angle of attack in order to generate the lift-to-drag ratio (L/D) required for precision landing. Comparisons between CFD and tunnel data are generally in good agreement for each configuration, but the computations predict more flow separation and higher heating on a trim tab inclined 10 deg relative to the surface. CFD solutions at flight conditions were obtained using an 8-species Mars gas in chemical and thermal non-equilibrium. Laminar and Baldwin-Lomax solutions were used to estimate the effects of the trim tabs and turbulence on heating. A tab extending smoothly from the heatshield flank is not predicted to increase laminar or turbulent heating rates above the reference levels. Laminar heating on a tab deflected 10 deg from the conical heatshield is influenced by flow separation and is up to 35% above the baseline heating rate. The turbulent solution on the inclined tab configuration predicts attached flow and a 43% heating increase above the reference level.

  20. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works of collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profiles and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164

  1. Real-time Tsunami Inundation Prediction Using High Performance Computers

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

earthquake occurs took about 2 minutes, which would be sufficient for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and the tsunami wave source analysis preferable for an accurate inundation prediction will also be discussed.

  2. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

Many recent computational efforts in turbulence and acoustics research have used higher-order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second-order accurate in time and fourth-order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second-, third-, and fourth-order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second- and third-order terms in the damping. Use of second- and fourth-order terms reduced the magnitude of dispersion errors but not as effectively as the second- and third-order combination. The program was coded in Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
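
    For reference, the underlying 2-4 update for linear advection on a periodic grid can be sketched as below, following the Gottlieb-Turkel form of the scheme; the Jameson-type damping terms studied in the record are omitted, and the coefficients should be checked against the original references before use.

        import numpy as np

        def maccormack_24_step(u, a, dt, dx):
            """One 2-4 MacCormack step for u_t + a u_x = 0, periodic BCs."""
            lam = a * dt / dx  # CFL number; the scheme is stable below 2/3
            # predictor: one-sided forward differences
            up = u - lam / 6.0 * (-np.roll(u, -2) + 8 * np.roll(u, -1) - 7 * u)
            # corrector: one-sided backward differences on the predicted field
            return 0.5 * (u + up - lam / 6.0 *
                          (7 * up - 8 * np.roll(up, 1) + np.roll(up, 2)))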

  3. Palm computer demonstrates a fast and accurate means of burn data collection.

    PubMed

    Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E

    2000-01-01

Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry into a hand-held computer versus handwritten data followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx, Santa Clara, Calif) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash) spreadsheet. One time the data were recorded by the Palm entry method, and the other time the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The total time for the Palm method of data collection and downloading to a personal computer was 23% faster than hand recording with the personal-computer entry method (P < 0.05), and 58% fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811

  4. Accurate prediction of helix interactions and residue contacts in membrane proteins.

    PubMed

    Hönigschmid, Peter; Frishman, Dmitrij

    2016-04-01

Accurate prediction of intra-molecular interactions from amino acid sequence is an important prerequisite for obtaining high-quality protein models. Over recent years, remarkable progress in this area has been achieved through the application of novel co-variation algorithms, which eliminate transitive evolutionary connections between residues. In this work we present a new contact prediction method for α-helical transmembrane proteins, MemConP, in which evolutionary couplings are combined with a machine learning approach. MemConP achieves substantially improved accuracy (precision: 56.0%, recall: 17.5%, MCC: 0.288) compared to the use of either machine learning or co-evolution methods alone. The method also achieves 91.4% precision, 42.1% recall and an MCC of 0.490 in predicting helix-helix interactions based on predicted contacts. The approach was trained and rigorously benchmarked by cross-validation and independent testing on up-to-date non-redundant datasets of 90 and 30 experimental three-dimensional structures, respectively. MemConP is a standalone tool that can be downloaded together with the associated training data from http://webclu.bio.wzw.tum.de/MemConP. PMID:26851352

  5. Base-resolution methylation patterns accurately predict transcription factor bindings in vivo

    PubMed Central

    Xu, Tianlei; Li, Ben; Zhao, Meng; Szulwach, Keith E.; Street, R. Craig; Lin, Li; Yao, Bing; Zhang, Feiran; Jin, Peng; Wu, Hao; Qin, Zhaohui S.

    2015-01-01

    Detecting in vivo transcription factor (TF) binding is important for understanding gene regulatory circuitries. ChIP-seq is a powerful technique to empirically define TF binding in vivo. However, the multitude of distinct TFs makes genome-wide profiling for them all labor-intensive and costly. Algorithms for in silico prediction of TF binding have been developed, based mostly on histone modification or DNase I hypersensitivity data in conjunction with DNA motif and other genomic features. However, technical limitations of these methods prevent them from being applied broadly, especially in clinical settings. We conducted a comprehensive survey involving multiple cell lines, TFs, and methylation types and found that there are intimate relationships between TF binding and methylation level changes around the binding sites. Exploiting the connection between DNA methylation and TF binding, we proposed a novel supervised learning approach to predict TF–DNA interaction using data from base-resolution whole-genome methylation sequencing experiments. We devised beta-binomial models to characterize methylation data around TF binding sites and the background. Along with other static genomic features, we adopted a random forest framework to predict TF–DNA interaction. After conducting comprehensive tests, we saw that the proposed method accurately predicts TF binding and performs favorably versus competing methods. PMID:25722376
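
    A compressed sketch of this kind of supervised setup: per-site methylation summaries and static genomic features feed a random forest whose training labels come from ChIP-seq peaks. The feature files below are hypothetical stand-ins for the paper's beta-binomial-derived features.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        X = np.load("site_features.npy")   # hypothetical: methylation stats etc.
        y = np.load("chipseq_labels.npy")  # hypothetical: 1 = bound site

        model = RandomForestClassifier(n_estimators=500, oob_score=True)
        model.fit(X, y)
        print("out-of-bag accuracy:", model.oob_score_)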

  6. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    NASA Astrophysics Data System (ADS)

    Bozinoski, Radoslav

Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular-arc bump, a laminar flat plate, a laminar cylinder, and a turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes with no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, a wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  7. Does a More Precise Chemical Description of Protein–Ligand Complexes Lead to More Accurate Prediction of Binding Affinity?

    PubMed Central

    2014-01-01

    Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein–ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein–ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data. PMID:24528282

  8. Accurate and inexpensive prediction of the color optical properties of anthocyanins in solution.

    PubMed

    Ge, Xiaochuan; Timrov, Iurii; Binnie, Simon; Biancardi, Alessandro; Calzolari, Arrigo; Baroni, Stefano

    2015-04-23

    The simulation of the color optical properties of molecular dyes in liquid solution requires the calculation of time evolution of the solute absorption spectra fluctuating in the solvent at finite temperature. Time-averaged spectra can be directly evaluated by combining ab initio Car-Parrinello molecular dynamics and time-dependent density functional theory calculations. The inclusion of hybrid exchange-correlation functionals, necessary for the prediction of the correct transition frequencies, prevents one from using these techniques for the simulation of the optical properties of large realistic systems. Here we present an alternative approach for the prediction of the color of natural dyes in solution with a low computational cost. We applied this approach to representative anthocyanin dyes: the excellent agreement between the simulated and the experimental colors makes this method a straightforward and inexpensive tool for the high-throughput prediction of colors of molecules in liquid solvents. PMID:25830823

  9. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  10. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.

  11. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short-term day-to-day variations and long-term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center

  12. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can in theory be translated to other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals that become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
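
    A Python transcription of the recipe described in this abstract (the authors distribute an R implementation, so details may differ from their code): FDR-adjust the two-sided p-values, then map the adjusted p-values back onto the Z scale with the original signs.

        import numpy as np
        from scipy.stats import norm

        def fiqt(z):
            """Winner's-curse-corrected Z-score noncentralities via FDR
            Inverse Quantile Transformation, per the abstract's recipe."""
            p = 2 * norm.sf(np.abs(z))                    # two-sided p-values
            m = len(p)
            order = np.argsort(p)
            adj = p[order] * m / np.arange(1, m + 1)      # BH adjustment
            adj = np.minimum.accumulate(adj[::-1])[::-1]  # enforce monotonicity
            p_adj = np.empty_like(p)
            p_adj[order] = np.minimum(adj, 1.0)
            return np.sign(z) * norm.isf(p_adj / 2)       # back to the Z scale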

  13. Effects of the inlet conditions and blood models on accurate prediction of hemodynamics in the stented coronary arteries

    NASA Astrophysics Data System (ADS)

    Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua

    2015-05-01

Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. Computational fluid dynamics (CFD) methods have been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for their hemodynamics at steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the non-Newtonian blood model decreased the area of low wall shear stress (WSS) compared with the Newtonian model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.

  14. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances, and/or large angular momentum numbers. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and the corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code to be very efficient, and it should find numerous applications in fields such as strong-field physics.
    Program summary:
    Program title: HContinuumGautchi
    Catalogue identifier: AEHD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1233
    No. of bytes in distributed program, including test data, etc.: 7405
    Distribution format: tar.gz
    Programming language: Fortran90 in fixed format
    Computer: AMD Processors
    Operating system: Linux
    RAM: 20 MBytes
    Classification: 2.7, 4.5
    Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for

  15. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.

  16. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of the loop-forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning, the model accurately predicted the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of type A chromatin to sit at the periphery of the chromosomes.

  17. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water.

    PubMed

    Shvab, I; Sadus, Richard J

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K. PMID:24320337
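
    As a concrete example of how such properties are extracted from a simulation, the isothermal compressibility follows from volume fluctuations in the NPT ensemble via the standard relation kappa_T = (<V^2> - <V>^2)/(k_B T <V>). The sketch below is a generic post-processing estimator built on that relation, with a synthetic volume trace standing in for real MD output; it is not code from the study.

      import numpy as np

      KB = 1.380649e-23  # Boltzmann constant, J/K

      def isothermal_compressibility(volumes_m3, temperature_k):
          """Fluctuation estimate of kappa_T (Pa^-1) from an NPT volume trace."""
          v = np.asarray(volumes_m3, dtype=float)
          return v.var() / (KB * temperature_k * v.mean())

      rng = np.random.default_rng(0)
      vols = 3.0e-26 * (1.0 + 1e-3 * rng.standard_normal(10000))  # synthetic trace
      print(isothermal_compressibility(vols, 298.0))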

  18. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    NASA Astrophysics Data System (ADS)

    Shvab, I.; Sadus, Richard J.

    2013-11-01

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  19. Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.

    2015-12-01

    Accurately predicting the arrival of coronal mass ejections (CMEs) at the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit results of the transit time versus the initial speed. By adopting different drag regimes, i.e., the viscous, aerodynamic, and hybrid regimes, we get similar results, with the hybrid model achieving the smallest mean estimation error of 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag. The drag force model cannot be reliably applied to such events. If we exclude these events from the sample, the prediction accuracy can be improved, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs at the Earth based on the initial parameters with fairly good accuracy. Thus, it provides a method of forecasting space weather 1-5 days following the occurrence of CMEs.
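
    The drag-force picture can be made concrete with the widely used drag-based model, dv/dt = -gamma*(v - w)*|v - w|, where w is the ambient solar wind speed and gamma the drag parameter. The sketch below integrates it numerically from 20 solar radii to 1 AU; all parameter values are illustrative assumptions, not the fitted values of this study.

      AU_KM = 1.496e8       # 1 AU in km
      RSUN_KM = 6.957e5     # solar radius in km

      def transit_time_hours(v0, w=400.0, gamma=0.2e-7, r0=20.0 * RSUN_KM):
          """Integrate dv/dt = -gamma*(v-w)*|v-w|; speeds in km/s, gamma in 1/km."""
          r, v, t, dt = r0, v0, 0.0, 60.0           # 60 s time step
          while r < AU_KM:
              a = -gamma * (v - w) * abs(v - w)     # drag acceleration
              v += a * dt
              r += v * dt
              t += dt
          return t / 3600.0

      print(transit_time_hours(1000.0))   # a fast CME arrives in roughly 2-4 days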

  20. Towards first-principles based prediction of highly accurate electrochemical Pourbaix diagrams

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenhua; Chan, Maria; Greeley, Jeff

    2015-03-01

    Electrochemical Pourbaix diagrams lie at the heart of aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such Pourbaix diagrams, inherent errors in the description of strongly-correlated transition metal (hydr)oxides, together with neglect of weak van der Waals (vdW) interactions, have limited the reliability of the predictions for even the simplest bulk systems; corresponding predictions for more complex alloy or surface structures are even more challenging. Through introduction of a Hubbard U correction, employment of a state-of-the-art van der Waals functional, and use of pure water as a reference state for the calculations, these errors are systematically corrected. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, and binary and ternary oxides, where the corresponding thermodynamics of oxidation and reduction can be accurately described with standard errors of less than 0.04 eV in comparison with experiment.

  1. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    SciTech Connect

    Shvab, I.; Sadus, Richard J.

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  2. Accurate prediction of the ammonia probes of a variable proton-to-electron mass ratio

    NASA Astrophysics Data System (ADS)

    Owens, A.; Yurchenko, S. N.; Thiel, W.; Špirko, V.

    2015-07-01

    A comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of 14NH3, 15NH3, 14ND3 and 15ND3 is carried out variationally using the TROVE approach. Variational calculations are robust and accurate, offering a new way to compute sensitivity coefficients. Particular attention is paid to the Δk = ±3 transitions between the accidentally coinciding rotation-inversion energy levels of the ν2 = 0+, 0-, 1+ and 1- states, and the inversion transitions in the ν4 = 1 state affected by the 'giant' l-type doubling effect. These transitions exhibit highly anomalous sensitivities, thus appearing as promising probes of a possible cosmological variation of the proton-to-electron mass ratio μ. Moreover, a simultaneous comparison of the calculated sensitivities reveals a sizeable isotopic dependence which could aid exclusive ammonia detection.

  3. Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation

    PubMed Central

    Kosmidou, Ioanna; Wooden, Shannon; Jones, Brian; Deering, Thomas; Wickliffe, Andrew; Dan, Dan

    2013-01-01

    Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of the typical left atrial (LA) pressure waveform is observed with recordings of PA pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of the occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion, and it accurately predicts electrical isolation. Utilization of this method results in a significant decrease in fluoroscopy time and contrast volume. PMID:23485956

  4. Direct pressure monitoring accurately predicts pulmonary vein occlusion during cryoballoon ablation.

    PubMed

    Kosmidou, Ioanna; Wooden, Shannon; Jones, Brian; Deering, Thomas; Wickliffe, Andrew; Dan, Dan

    2013-01-01

    Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of the typical left atrial (LA) pressure waveform is observed with recordings of PA pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of the occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion, and it accurately predicts electrical isolation. Utilization of this method results in a significant decrease in fluoroscopy time and contrast volume. PMID:23485956

  5. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable placing the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
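
    The underlying fit is simple: assuming a power-law fall-off B(r) = B0 * r^(-n), the exponent n follows from ordinary least squares in log-log space. A minimal sketch with synthetic dipole-like data (real magnetometer readings would replace it):

      import numpy as np

      def falloff_fit(r, b):
          """Fit B(r) = B0 * r**(-n) in log-log space; returns (n, B0)."""
          slope, intercept = np.polyfit(np.log(r), np.log(b), 1)
          return -slope, np.exp(intercept)

      r = np.array([0.3, 0.5, 0.8, 1.2])   # magnetometer distances (m)
      rng = np.random.default_rng(1)
      b = 2.0e-6 * r**-3 * (1 + 0.02 * rng.standard_normal(r.size))  # dipole + noise
      print(falloff_fit(r, b))             # n close to 3 for a dipole source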

  6. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  7. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  8. A novel approach for accurate prediction of spontaneous passage of ureteral stones: support vector machines.

    PubMed

    Dal Moro, F; Abate, A; Lanckriet, G R G; Arandjelovic, G; Gasparella, P; Bassi, P; Mancini, M; Pagano, F

    2006-01-01

    The objective of this study was to optimally predict the spontaneous passage of ureteral stones in patients with renal colic by applying, for the first time, support vector machines (SVM), an instance of kernel methods, for classification. After reviewing the results found in the literature, we compared the performances obtained with logistic regression (LR) and accurately trained artificial neural networks (ANN) to those obtained with SVM, that is, the standard SVM and the linear programming SVM (LP-SVM); the latter techniques show improved performance. Moreover, we ranked the prediction factors according to their importance using Fisher scores and the LP-SVM feature weights. A data set of 1163 patients affected by renal colic was analyzed and restricted to single out a statistically coherent subset of 402 patients. Nine clinical factors are used as inputs for the classification algorithms to predict one binary output. The algorithms are cross-validated by training and testing on randomly selected train- and test-set partitions of the data and reporting the average performance on the test sets. The SVM-based approaches obtained a sensitivity of 84.5% and a specificity of 86.9%. The feature ranking based on LP-SVM gives the highest importance to stone size, stone position and symptom duration before check-up. We propose a statistically correct way of employing LR, ANN and SVM for the prediction of spontaneous passage of ureteral stones in patients with renal colic. SVM outperformed ANN, as well as LR. This study will soon be translated into a practical software toolbox for actual clinical usage. PMID:16374437
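
    For orientation, a standard RBF-kernel SVM with the kind of cross-validation reported above takes only a few lines with scikit-learn. This is a generic sketch on placeholder random data, not the authors' LP-SVM or their clinical dataset.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(402, 9))       # nine clinical factors per patient
      y = rng.integers(0, 2, size=402)    # 1 = stone passed spontaneously

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      print(cross_val_score(clf, X, y, cv=10).mean())   # 10-fold CV accuracy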

  9. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  10. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues

    PubMed Central

    EL-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  11. Accurate methods for computing inviscid and viscous Kelvin-Helmholtz instability

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.; Forbes, Lawrence K.

    2011-02-01

    The Kelvin-Helmholtz instability is modelled for inviscid and viscous fluids. Here, two bounded fluid layers flow parallel to each other with the interface between them growing in an unstable fashion when subjected to a small perturbation. In the various configurations of this problem, and the related problem of the vortex sheet, there are several phenomena associated with the evolution of the interface; notably the formation of a finite-time curvature singularity and the 'roll-up' of the interface. Two contrasting computational schemes will be presented. A spectral method is used to follow the evolution of the interface in the inviscid version of the problem. This allows the interface shape to be computed up to the time that a curvature singularity forms, with several computational difficulties overcome to reach that point. A weakly compressible viscous version of the problem is studied using finite difference techniques and a vorticity-streamfunction formulation. The two versions have comparable, but not identical, initial conditions and so the results exhibit some differences in timing. By including a small amount of viscosity the interface may be followed to the point that it rolls up into a classic 'cat's-eye' shape. Particular attention was given to computing a consistent initial condition and solving the continuity equation both accurately and efficiently.

  12. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real-time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real-time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
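
    Much of the speed comes from the explicit central-difference time integration, which requires no system solve per step: x(t+dt) = 2x(t) - x(t-dt) + dt^2 * f(x(t))/m. A minimal sketch of that update for a lumped-mass chain of linear springs (a stand-in for a tissue mesh; all constants are illustrative, and the time step must respect the usual explicit stability limit):

      import numpy as np

      def simulate_chain(n=50, k=100.0, m=0.01, dt=1e-4, steps=2000, pull=0.5):
          """Explicit central-difference dynamics for a 1-D chain of springs."""
          rest = 1.0 / (n - 1)                  # natural spring length
          x = np.linspace(0.0, 1.0, n)          # node positions
          x_prev = x.copy()
          for _ in range(steps):
              e = x[1:] - x[:-1] - rest         # spring extensions
              f = np.zeros(n)
              f[1:] += -k * e                   # force on right-hand node
              f[:-1] += k * e                   # equal and opposite on left-hand node
              x_new = 2 * x - x_prev + dt**2 * f / m
              x_new[0], x_new[-1] = 0.0, 1.0 + pull   # fixed end, displaced end
              x, x_prev = x_new, x
          return x

      print(simulate_chain()[-5:])   # node positions near the displaced end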

  13. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    SciTech Connect

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: compared to the traditional Eulerian solvers used by industry, it computes solutions orders of magnitude faster with comparable accuracy, provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We introduced several novel ideas into the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  14. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from being definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation process of novel annotations is a costly procedure, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose the improvement of these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability in predicting novel gene annotations, and the weighting procedures demonstrated to lead to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
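
    A minimal sketch of the Latent Semantic Indexing step named above: treat the gene-by-term annotation matrix like a document-term matrix, take a truncated SVD, and read high entries of the low-rank reconstruction that are absent from the input as candidate novel annotations. The matrix here is random placeholder data, and the rank and threshold are arbitrary assumptions.

      import numpy as np
      from sklearn.decomposition import TruncatedSVD

      rng = np.random.default_rng(0)
      A = (rng.random((200, 60)) < 0.08).astype(float)    # genes x GO terms (0/1)

      svd = TruncatedSVD(n_components=15, random_state=0)
      A_hat = svd.fit_transform(A) @ svd.components_      # rank-15 reconstruction

      candidates = np.argwhere((A_hat > 0.3) & (A == 0))  # predicted new annotations
      print(len(candidates))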

  15. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    PubMed

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-01-01

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity. PMID:27130577

  16. Accurate prediction of lattice energies and structures of molecular crystals with molecular quantum chemistry methods.

    PubMed

    Fang, Tao; Li, Wei; Gu, Fangwei; Li, Shuhua

    2015-01-13

    We extend the generalized energy-based fragmentation (GEBF) approach to molecular crystals under periodic boundary conditions (PBC), and we demonstrate the performance of the method for a variety of molecular crystals. With this approach, the lattice energy of a molecular crystal can be obtained from the energies of a series of embedded subsystems, which can be computed with existing advanced molecular quantum chemistry methods. The use of the field compensation method allows the method to take long-range electrostatic interaction of the infinite crystal environment into account and make the method almost translationally invariant. The computational cost of the present method scales linearly with the number of molecules in the unit cell. Illustrative applications demonstrate that the PBC-GEBF method with explicitly correlated quantum chemistry methods is capable of providing accurate descriptions of the lattice energies and structures for various types of molecular crystals. In addition, this approach can be employed to quantify the contributions of various intermolecular interactions to the theoretical lattice energy. Such qualitative understanding is very useful for rational design of molecular crystals. PMID:26574207

  17. The general AMBER force field (GAFF) can accurately predict thermodynamic and transport properties of many ionic liquids.

    PubMed

    Sprenger, K G; Jaeger, Vance W; Pfaendtner, Jim

    2015-05-01

    We have applied molecular dynamics to calculate thermodynamic and transport properties of a set of 19 room-temperature ionic liquids. Since accurately simulating the thermophysical properties of solvents strongly depends upon the force field of choice, we tested the accuracy of the general AMBER force field, without refinement, for the case of ionic liquids. Electrostatic point charges were developed using ab initio calculations and a charge scaling factor of 0.8 to more accurately predict dynamic properties. The density, heat capacity, molar enthalpy of vaporization, self-diffusivity, and shear viscosity of the ionic liquids were computed and compared to experimentally available data, and good agreement across a wide range of cation and anion types was observed. Results show that, for a wide range of ionic liquids, the general AMBER force field, with no tuning of parameters, can reproduce a variety of thermodynamic and transport properties with similar accuracy to that of other published, often IL-specific, force fields. PMID:25853313

  18. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    SciTech Connect

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  19. Can CO2 assimilation in maize leaves be predicted accurately from chlorophyll fluorescence analysis?

    PubMed

    Edwards, G E; Baker, N R

    1993-08-01

    Analysis is made of the energetics of CO2 fixation, the photochemical quantum requirement per CO2 fixed, and sinks for utilising reductive power in the C4 plant maize. CO2 assimilation is the primary sink for energy derived from photochemistry, whereas photorespiration and nitrogen assimilation are relatively small sinks, particularly in developed leaves. Measurements of O2 exchange by mass spectrometry and CO2 exchange by infrared gas analysis under varying levels of CO2 indicate that there is a very close relationship between the true rate of O2 evolution from PS II and the net rate of CO2 fixation. Consideration is given to measurements of the quantum yields of PS II (φPSII) from fluorescence analysis and of CO2 assimilation (φCO2) in maize over a wide range of conditions. The φPSII/φCO2 ratio was found to remain reasonably constant (ca. 12) over a range of physiological conditions in developed leaves, with varying temperature, CO2 concentrations, light intensities (from 5% to 100% of full sunlight), and following photoinhibition under high light and low temperature. A simple model for predicting CO2 assimilation from fluorescence parameters is presented and evaluated. It is concluded that under a wide range of conditions fluorescence parameters can be used to predict CO2 assimilation rates in maize accurately and rapidly. PMID:24317706

  20. Accurate multimodal probabilistic prediction of conversion to Alzheimer's disease in patients with mild cognitive impairment☆

    PubMed Central

    Young, Jonathan; Modat, Marc; Cardoso, Manuel J.; Mendelson, Alex; Cash, Dave; Ourselin, Sebastien

    2013-01-01

    Accurately identifying the patients with mild cognitive impairment (MCI) who will go on to develop Alzheimer's disease (AD) will become essential as new treatments will require identification of AD patients at earlier stages in the disease process. Most previous work in this area has centred around the same automated techniques used to diagnose AD patients from healthy controls, by coupling high dimensional brain image data or other relevant biomarker data to modern machine learning techniques. Such studies can now distinguish between AD patients and controls as accurately as an experienced clinician. Models trained on patients with AD and control subjects can also distinguish between MCI patients that will convert to AD within a given timeframe (MCI-c) and those that remain stable (MCI-s), although differences between these groups are smaller and thus the corresponding accuracy is lower. The most common type of classifier used in these studies is the support vector machine, which gives categorical class decisions. In this paper, we introduce Gaussian process (GP) classification to the problem. This fully Bayesian method produces naturally probabilistic predictions, which we show correlate well with the actual chances of converting to AD within 3 years in a population of 96 MCI-s and 47 MCI-c subjects. Furthermore, we show that GPs can integrate multimodal data (in this study volumetric MRI, FDG-PET, cerebrospinal fluid, and APOE genotype) with the classification process through the use of a mixed kernel. The GP approach aids combination of different data sources by learning parameters automatically from training data via type-II maximum likelihood, which we compare to a more conventional method based on cross-validation and an SVM classifier. When the resulting probabilities from the GP are dichotomised to produce a binary classification, the results for predicting MCI conversion based on the combination of all three types of data show a balanced accuracy
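
    The probabilistic flavour of the approach is easy to reproduce with an off-the-shelf Gaussian process classifier, whose kernel hyperparameters are likewise fitted by type-II maximum likelihood. The sketch below uses scikit-learn on placeholder data and a single RBF kernel rather than the authors' multimodal mixed kernel.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)
      X = rng.normal(size=(143, 6))      # placeholder features (MRI, PET, CSF, ...)
      y = rng.integers(0, 2, size=143)   # 1 = converted to AD within 3 years

      gpc = GaussianProcessClassifier(kernel=ConstantKernel(1.0) * RBF(1.0))
      gpc.fit(X, y)                      # hyperparameters via type-II max likelihood
      print(gpc.predict_proba(X)[:5, 1]) # per-subject conversion probabilities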

  1. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features

    PubMed Central

    Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang

    2016-01-01

    Background Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. Predicting transposon-derived piRNAs can enrich research on small non-coding RNAs and help to further elucidate the generation mechanism of gametes. Methods In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performances for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performances of prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions Compared with other state-of-the-art methods, our methods can lead to better performances. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
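
    Of the sequence-derived features listed above, the spectrum profile is the simplest: the normalized count of every overlapping k-mer in the sequence. A minimal sketch for RNA, with k = 3 chosen purely for illustration:

      from itertools import product

      import numpy as np

      def spectrum_profile(seq, k=3, alphabet="ACGU"):
          """Normalized overlapping k-mer counts (the 'spectrum profile' feature)."""
          index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
          v = np.zeros(len(index))
          for i in range(len(seq) - k + 1):
              v[index[seq[i:i + k]]] += 1
          return v / max(1, len(seq) - k + 1)

      print(spectrum_profile("UAGCUUAGCAAGGUACGU").shape)   # (64,) for k = 3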

  2. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    SciTech Connect

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-15

    % for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  3. Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning

    PubMed Central

    Abraham, Gad; Tye-Din, Jason A.; Bhalala, Oneil G.; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2014-01-01

    Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity was replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate that a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases. PMID:24550740

  4. Accurate prediction for atomic-level protein design and its application in diversifying the near-optimal sequence space.

    PubMed

    Fromer, Menachem; Yanover, Chen

    2009-05-15

    precisely. Examination of the predicted ensembles indicates that, for each structure, the amino acid identity at a majority of positions must be chosen extremely selectively so as not to incur significant energetic penalties. We investigate this high degree of similarity and demonstrate how more diverse near-optimal sequences can be predicted in order to systematically overcome this bottleneck for computational design. Finally, we exploit this in-depth analysis of a collection of the lowest energy sequences to suggest an explanation for previously observed experimental design results. The novel methodologies introduced here accurately portray the sequence space compatible with a protein structure and further supply a scheme to yield heterogeneous low-energy sequences, thus providing a powerful instrument for future work on protein design. PMID:19003998

  5. Optical computed tomography of radiochromic gels for accurate three-dimensional dosimetry

    NASA Astrophysics Data System (ADS)

    Babic, Steven

    In this thesis, three-dimensional (3-D) radiochromic Ferrous Xylenol-orange (FX) and Leuco Crystal Violet (LCV) micelle gels were imaged by laser and cone-beam (Vista(TM)) optical computed tomography (CT) scanners. The objective was to develop optical CT of radiochromic gels for accurate 3-D dosimetry of intensity-modulated radiation therapy (IMRT) and small field techniques used in modern radiotherapy. First, the cause of a threshold dose response in FX gel dosimeters when scanned with a yellow light source was determined. This effect stems from a spectral sensitivity to the multiple chemical complexes formed between ferric ions and xylenol-orange at different dose levels. To negate the threshold dose, an initial concentration of ferric ions is needed in order to shift the chemical equilibrium so that additional dose results in a linear production of a coloured complex that preferentially absorbs at longer wavelengths. Second, a low-diffusion leuco-based radiochromic gel consisting of Triton X-100 micelles was developed. The diffusion coefficient of the LCV micelle gel was found to be minimal (0.036 ± 0.001 mm² hr⁻¹). Although a dosimetric characterization revealed a reduced sensitivity to radiation, this was offset by a lower auto-oxidation rate and base optical density, a higher melting point and no spectral sensitivity. Third, the Radiological Physics Centre (RPC) head-and-neck IMRT protocol was extended to 3-D dose verification using laser and cone-beam (Vista(TM)) optical CT scans of FX gels. Both optical systems yielded comparable measured dose distributions in high-dose regions and low gradients. The FX gel dosimetry results were cross-checked against independent thermoluminescent dosimeter and GAFChromic EBT film measurements made by the RPC. It was shown that optical CT scanned FX gels can be used for accurate IMRT dose verification in 3-D. Finally, corrections for FX gel diffusion and scattered stray light in the Vista(TM) scanner were developed to

  6. Energy expenditure during level human walking: seeking a simple and accurate predictive solution.

    PubMed

    Ludlow, Lindsay W; Weyand, Peter G

    2016-03-01

    Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇O2rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg⁻¹·min⁻¹, respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇O2-speed model types: linear (∝V^1.0), exponential (∝V^2.0), and exponential/height (∝V^2.0/Ht). We found that the proportion of variance (R²) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg⁻¹·min⁻¹). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇O2total = V̇O2rest + 3.85 + 5.97·V²/Ht (where V is measured in m/s, Ht in meters, and V̇O2 in ml O2·kg⁻¹·min⁻¹). PMID:26679617
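
    Plugging numbers into the final equation is straightforward. The sketch below evaluates it for an illustrative walker; the speed, height, and resting rate are arbitrary example values.

      def vo2_total(v, ht, vo2_rest):
          """Generalized level-walking equation; units: m/s, m, ml O2/kg/min."""
          return vo2_rest + 3.85 + 5.97 * v**2 / ht

      # A 1.75 m walker at 1.4 m/s with a resting rate of 3.5 ml O2/kg/min:
      print(round(vo2_total(1.4, 1.75, 3.5), 1))   # about 14.0 ml O2/kg/min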

  7. Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC

    NASA Technical Reports Server (NTRS)

    Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.

    2001-01-01

    The National Combustion Code (NCC) was used to calculate the steady state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is considered rotationally periodic every ninety degrees. The final computational grid used was approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, but four ninths of the geometry was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well to the LDV data, but differences exist downstream. The comparison is encouraging, and shows that NCC can be used for future injector design studies. To improve the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC, recommendations are given.

  8. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure. PMID:19950907

  9. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications. PMID:26657537

  10. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. Although many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation when close to the source and a plane wave approximation when far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We associate with these local operators a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
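
    To make the fast sweeping idea concrete: a first-order 2-D solver repeatedly sweeps the grid in the four diagonal orderings, updating each traveltime from its smallest upwind neighbours. The sketch below is a basic textbook-style version on a uniform grid, not the hybrid spherical/plane-wave operator set developed in the paper.

      import numpy as np

      def fast_sweep(slowness, h, src, n_passes=4):
          """First-order fast sweeping solution of |grad T| = s on a 2-D grid."""
          ny, nx = slowness.shape
          T = np.full((ny, nx), np.inf)
          T[src] = 0.0
          for _ in range(n_passes):
              for sy in (1, -1):
                  for sx in (1, -1):                       # four sweep orderings
                      for i in range(ny)[::sy]:
                          for j in range(nx)[::sx]:
                              a = min(T[i - 1, j] if i > 0 else np.inf,
                                      T[i + 1, j] if i < ny - 1 else np.inf)
                              b = min(T[i, j - 1] if j > 0 else np.inf,
                                      T[i, j + 1] if j < nx - 1 else np.inf)
                              if np.isinf(a) and np.isinf(b):
                                  continue                 # no upwind value yet
                              f = slowness[i, j] * h
                              if abs(a - b) >= f:          # one-sided update
                                  t = min(a, b) + f
                              else:                        # two-sided update
                                  t = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b)**2))
                              T[i, j] = min(T[i, j], t)
          return T

      T = fast_sweep(np.ones((101, 101)), h=1.0, src=(50, 50))
      print(T[50, 90])   # close to the true distance of 40 for unit slowness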

  11. Computationally efficient and accurate enantioselectivity modeling by clusters of molecular dynamics simulations.

    PubMed

    Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B

    2014-07-28

    Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational cost. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement with experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50-100-fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
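
    The counting step at the heart of this approach can be sketched as below: pool frames from many short runs and compute the fraction that satisfies near-attack-conformation criteria. The angle/distance cutoffs, the synthetic geometry generator, and the ratio-based enantioselectivity estimate are illustrative assumptions standing in for real trajectory analysis.

      import numpy as np

      rng = np.random.default_rng(0)

      def nac_fraction(runs, angle_cut=160.0, dist_cut=3.5):
          """Fraction of frames in near-attack conformation, pooled over many short runs.

          Each run is an (n_frames, 2) array of (attack angle in degrees,
          nucleophile-carbon distance in angstrom)."""
          frames = np.vstack(runs)
          ok = (frames[:, 0] >= angle_cut) & (frames[:, 1] <= dist_cut)
          return ok.mean()

      def fake_runs(mean_angle, n_runs=30, n_frames=100):
          """Placeholder for per-run geometry extraction from 10 ps trajectories."""
          return [np.column_stack([rng.normal(mean_angle, 15, n_frames),
                                   rng.normal(3.3, 0.4, n_frames)])
                  for _ in range(n_runs)]

      fR = nac_fraction(fake_runs(mean_angle=170.0))   # preferred enantiomer
      fS = nac_fraction(fake_runs(mean_angle=140.0))
      print(f"E ~ {fR / max(fS, 1e-9):.1f}")           # crude enantioselectivity estimate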

  12. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    SciTech Connect

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-07-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against measurements by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Agreement to within 2.21% was found. The deviations from the calculated absorbed doses were larger overall (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool for assessing surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.

  13. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  14. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in structural databases of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such large databases have turned out to be the key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient and the pyramid histogram of the oriented gradient, and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226

  15. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval.

    PubMed

    Karim, Rezaul; Aziz, Mohd Momin Al; Shatabda, Swakkhar; Rahman, M Sohel; Mia, Md Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in structural databases of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such large databases have turned out to be the key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient and the pyramid histogram of the oriented gradient, and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
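
    In the spirit of the CoMOGrad feature, the sketch below quantizes the gradient orientations of a 2D matrix representation of a structure, builds a co-occurrence matrix over a fixed pixel offset, and retrieves by Euclidean distance. The bin count, the offset, and the random stand-in "images" are illustrative assumptions, not the authors' settings.

      import numpy as np

      def orientation_comatrix(img, n_bins=8, offset=(1, 1)):
          """Co-occurrence matrix of quantized gradient orientations."""
          gy, gx = np.gradient(img.astype(float))
          theta = np.arctan2(gy, gx)                       # range [-pi, pi]
          q = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
          dy, dx = offset
          a = q[:-dy or None, :-dx or None]                # reference pixels
          b = q[dy:, dx:]                                  # offset neighbors
          com = np.zeros((n_bins, n_bins))
          np.add.at(com, (a.ravel(), b.ravel()), 1)
          return (com / com.sum()).ravel()                 # flattened feature vector

      # Hypothetical "images" (e.g., pairwise C-alpha distance matrices).
      query = np.random.rand(64, 64)
      database = {"prot1": np.random.rand(64, 64),
                  "prot2": query + 0.01 * np.random.rand(64, 64)}

      fq = orientation_comatrix(query)
      hit = min(database,
                key=lambda k: np.linalg.norm(fq - orientation_comatrix(database[k])))
      print("nearest structure:", hit)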

  16. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.

    2014-03-01

    Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.

  17. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
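
    Both MCM and SSM sit on top of the same pore-network flow solve: mass conservation at each pore gives a linear system in the pore pressures. The sketch below solves that system for the simplest possible network, a 1D chain of pores with fixed inlet/outlet pressures and assumed throat conductances; the mixing rules that distinguish MCM from SSM are not modeled here.

      import numpy as np

      # Hypothetical 1D chain of pores: p0 and p(n-1) are fixed boundary pressures.
      n = 6                                   # number of pores
      g = np.full(n - 1, 2.0e-12)             # throat conductances, m^3/(Pa s)
      p_in, p_out = 2.0e5, 1.0e5              # boundary pressures, Pa

      # Mass balance sum_j g_ij (p_i - p_j) = 0 at interior pores -> tridiagonal system.
      A = np.zeros((n - 2, n - 2))
      b = np.zeros(n - 2)
      for k in range(n - 2):
          gl, gr = g[k], g[k + 1]             # left and right throat of pore k+1
          A[k, k] = gl + gr
          if k > 0:
              A[k, k - 1] = -gl
          if k < n - 3:
              A[k, k + 1] = -gr
      b[0] += g[0] * p_in
      b[-1] += g[-1] * p_out

      p = np.linalg.solve(A, b)               # interior pore pressures
      q = g[0] * (p_in - p[0])                # flow rate through the chain
      print("interior pressures:", p, "flow rate:", q)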

  18. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to practical time-step limitations that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  19. nuMap: a web platform for accurate prediction of nucleosome positioning.

    PubMed

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, the YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on the sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. The application allows users to specify a number of options, such as schemes and parameters for the threading calculation, and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, a description of the methodology, and examples are available at the site. PMID:25220945

  20. New consensus definition for acute kidney injury accurately predicts 30-day mortality in cirrhosis with infection

    PubMed Central

    Wong, Florence; O’Leary, Jacqueline G; Reddy, K Rajender; Patton, Heather; Kamath, Patrick S; Fallon, Michael B; Garcia-Tsao, Guadalupe; Subramanian, Ram M.; Malik, Raza; Maliakkal, Benedict; Thacker, Leroy R; Bajaj, Jasmohan S

    2015-01-01

    Background & Aims A consensus conference proposed that cirrhosis-associated acute kidney injury (AKI) be defined as an increase in serum creatinine by >50% from the stable baseline value in <6 months or by ≥0.3 mg/dL in <48 hrs. We prospectively evaluated the ability of these criteria to predict mortality within 30 days among hospitalized patients with cirrhosis and infection. Methods 337 patients with cirrhosis who were admitted with an infection or developed one in hospital (56% men; 56±10 y old; model for end-stage liver disease [MELD] score, 20±8) were followed. We compared data on 30-day mortality, hospital length of stay, and organ failure between patients with and without AKI. Results 166 (49%) developed AKI during hospitalization, based on the consensus criteria. Patients who developed AKI had higher admission Child-Pugh scores (11.0±2.1 vs 9.6±2.1; P<.0001) and MELD scores (23±8 vs 17±7; P<.0001), and lower mean arterial pressure (81±16 mmHg vs 85±15 mmHg; P<.01), than those who did not. Also higher among patients with AKI were mortality in ≤30 days (34% vs 7%), intensive care unit transfer (46% vs 20%), ventilation requirement (27% vs 6%), and shock (31% vs 8%); AKI patients also had longer hospital stays (17.8±19.8 days vs 13.3±31.8 days) (all P<.001). 56% of AKI episodes were transient, 28% persistent, and 16% resulted in dialysis. Mortality was 80% among those without renal recovery, higher compared with partial recovery (40%), complete recovery (15%), or AKI-free patients (7%; P<.0001). Conclusions 30-day mortality is 10-fold higher among infected hospitalized cirrhotic patients with irreversible AKI than those without AKI. The consensus definition of AKI accurately predicts 30-day mortality, length of hospital stay, and organ failure. PMID:23999172
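
    The consensus definition quoted above translates directly into a screening check; the sketch below applies the two thresholds (>50% rise from the stable baseline within 6 months, or a ≥0.3 mg/dL rise within 48 hours) to a series of timed creatinine measurements. The function name and record layout are illustrative.

      def meets_aki_definition(creatinine_series, baseline):
          """creatinine_series: list of (hours since baseline, serum creatinine mg/dL).

          Returns True if the consensus AKI criteria are met: a >50% increase from
          the stable baseline within 6 months, or a rise of >=0.3 mg/dL within 48 h."""
          six_months_h = 183 * 24
          for hours, cr in creatinine_series:
              if hours <= six_months_h and cr > 1.5 * baseline:
                  return True
          # >=0.3 mg/dL rise between any two measurements less than 48 h apart
          pts = sorted(creatinine_series)
          for i, (t0, c0) in enumerate(pts):
              for t1, c1 in pts[i + 1:]:
                  if t1 - t0 < 48 and c1 - c0 >= 0.3:
                      return True
          return False

      print(meets_aki_definition([(0, 1.0), (24, 1.1), (96, 1.3)], baseline=1.0))  # False
      print(meets_aki_definition([(0, 1.0), (24, 1.35)], baseline=1.0))            # True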

  1. INTEGRATING COMPUTATIONAL PROTEIN FUNCTION PREDICTION INTO DRUG DISCOVERY INITIATIVES

    PubMed Central

    Grant, Marianne A.

    2014-01-01

    Pharmaceutical researchers must evaluate vast numbers of protein sequences and formulate innovative strategies for identifying valid targets and discovering leads against them as a way of accelerating drug discovery. The ever-increasing number and diversity of novel protein sequences identified by genomic sequencing projects and the success of worldwide structural genomics initiatives have spurred great interest and impetus in the development of methods for accurate, computationally empowered protein function prediction and active site identification. Previously, in the absence of direct experimental evidence, homology-based protein function annotation remained the gold standard for in silico analysis and prediction of protein function. However, with the continued exponential expansion of sequence databases, this approach is not always applicable, as fewer query protein sequences demonstrate significant homology to protein gene products of known function. As a result, several non-homology-based methods for protein function prediction that are based on sequence features, structure, evolution, and biochemical and genetic knowledge have emerged. Herein, we review current bioinformatic programs and approaches for protein function prediction/annotation and discuss their integration into drug discovery initiatives. The development of such methods to annotate protein functional sites and their application to large protein functional families is crucial to successfully utilizing the vast amounts of genomic sequence information available to drug discovery and development processes. PMID:25530654

  2. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect

  3. How accurately can we predict the melting points of drug-like compounds?

    PubMed

    Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M

    2014-12-22

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that performed calculations to a significantly higher accuracy than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. Three analyzed distance-to-model measures did not allow us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to define the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638. PMID:25489863

  4. How Accurately Can We Predict the Melting Points of Drug-like Compounds?

    PubMed Central

    2014-01-01

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that performed calculations to a significantly higher accuracy than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. Three analyzed distance-to-model measures did not allow us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to define the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638. PMID:25489863
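
    The consensus idea above amounts to averaging the predictions of several individual models before scoring; the sketch below computes the consensus RMSE restricted to the [50, 250] °C window emphasized in the abstract, using hypothetical model outputs in place of the published descriptor-based models.

      import numpy as np

      def consensus_rmse(model_preds, y_true, window=(50.0, 250.0)):
          """RMSE of the consensus (mean over models), restricted to a MP window."""
          consensus = np.mean(model_preds, axis=0)        # average across models
          lo, hi = window
          mask = (y_true >= lo) & (y_true <= hi)
          err = consensus[mask] - y_true[mask]
          return float(np.sqrt(np.mean(err ** 2)))

      rng = np.random.default_rng(1)
      y = rng.uniform(30, 280, 1000)                      # hypothetical measured MPs, degC
      preds = np.stack([y + rng.normal(0, 40, y.size) for _ in range(5)])  # 5 noisy models
      print(f"consensus RMSE in [50,250] degC: {consensus_rmse(preds, y):.1f}")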

  5. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  6. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117

  7. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
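
    The Taylor-expansion scheme can be sketched numerically: evaluate the expensive transfer integral at a few displacements around the reference geometry, extract low-order derivatives by finite differences, and then evaluate the resulting polynomial cheaply across the coordinate range. J_exact below is a hypothetical stand-in for an EOM-CC transfer-integral calculation, not the authors' function.

      import numpy as np

      def J_exact(x):
          """Stand-in for an expensive transfer-integral calculation at displacement x."""
          return 0.10 * np.exp(-0.8 * x) * np.cos(0.5 * x)   # eV, hypothetical

      x0, h = 0.0, 0.05
      # Central finite differences give the Taylor coefficients at x0.
      j0 = J_exact(x0)
      j1 = (J_exact(x0 + h) - J_exact(x0 - h)) / (2 * h)
      j2 = (J_exact(x0 + h) - 2 * j0 + J_exact(x0 - h)) / h ** 2

      def J_taylor(x):
          dx = x - x0
          return j0 + j1 * dx + 0.5 * j2 * dx ** 2

      for x in np.linspace(-0.5, 0.5, 11):
          print(f"x={x:+.2f}  exact={J_exact(x):+.4f}  taylor={J_taylor(x):+.4f}")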

  8. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  9. Computational predictive methods for fracture and fatigue

    NASA Astrophysics Data System (ADS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  10. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  11. Accurate Analysis and Computer Aided Design of Microstrip Dual Mode Resonators and Filters.

    NASA Astrophysics Data System (ADS)

    Grounds, Preston Whitfield, III

    1995-01-01

    Microstrip structures are of interest due to their many applications in microwave circuit design. Their small size and ease of connection to both passive and active components make them well suited for use in systems where size and space are at a premium. These include satellite communication systems, radar systems, satellite navigation systems, cellular phones, and many others. In general, space is always at a premium for any mobile system. Microstrip resonators find particular application in oscillators and filters. In typical filters, each microstrip patch corresponds to one resonator. However, when dual mode patches are employed, each patch acts as two resonators and therefore reduces the amount of space required to build the filter. This dissertation focuses on the accurate electromagnetic analysis of the components of planar dual mode filters. Highly accurate analyses are required so that the resonator-to-resonator coupling and the resonator-to-input/output coupling can be predicted with precision. Hence, filters can be built with a minimum of design iterations and tuning. The analysis used herein is an integral equation formulation in the spectral domain. The analysis is done in the spectral domain since the Green's function can be derived in closed form, and the spatial domain convolution becomes a simple product. The resulting set of equations is solved using the Method of Moments with Galerkin's procedure. The electromagnetic analysis is applied to a range of problems including unloaded dual mode patches, dual mode patches coupled to microstrip feedlines, and complete filter structures. At each step, calculated results are compared to measured results and good agreement is found. The calculated results are also compared to results from the circuit analysis program HP EESOF™ and again good agreement is found. A dual mode elliptic filter is built and good performance is obtained.

  12. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    SciTech Connect

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-07-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.

  13. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  14. Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.

    PubMed

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-06-01

    We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this class of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case and the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows its kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697

  15. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  16. Comparison of two computer programs by predicting turbulent mixing of helium in a ducted supersonic airstream

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.

    1978-01-01

    Two parabolic flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and the programs are then applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as the initial data by the programs; surveys at a downstream station provided a comparison to assess program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate in computation and more efficient in both computer storage and computing time than COMOC.

  17. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
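
    The volume-overlap metrics above are simple to reproduce from binary masks, as sketched below; the voxel volume is an assumed parameter, and the surface-distance metrics, which require a distance transform over the mask boundaries, are omitted.

      import numpy as np

      def volume_metrics(seg, ref, voxel_mm3=0.125):
          """Volume difference (mm^3) and Dice similarity coefficient (%) of two masks."""
          seg, ref = seg.astype(bool), ref.astype(bool)
          vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_mm3
          dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
          return vd, dsc

      # Two overlapping synthetic tooth masks.
      a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
      b = np.zeros_like(a);             b[12:30, 10:30, 10:30] = True
      vd, dsc = volume_metrics(a, b)
      print(f"VD = {vd:.1f} mm^3, DSC = {dsc:.1f}%")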

  18. A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea

    2012-10-01

    New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.

  19. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a

  20. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a
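
    The winning linking hypothesis above is literally a learned weighted sum of trial-averaged firing rates. The sketch below learns such weights by closed-form ridge regression on synthetic "IT-like" population responses; the data generator, regularization strength, and binary task are illustrative assumptions, not the authors' recordings or protocol.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic stand-in for trial-averaged IT firing rates: trials x neurons.
      n_trials, n_neurons = 400, 500
      X = rng.normal(0, 1, (n_trials, n_neurons))
      w_true = rng.normal(0, 1, n_neurons) / np.sqrt(n_neurons)
      y = np.sign(X @ w_true + rng.normal(0, 0.5, n_trials))   # +/-1 object labels

      # Learn the weighted sum by ridge regression (closed form).
      lam = 10.0
      w = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ y)

      # Held-out evaluation of the learned weighted sum.
      Xte = rng.normal(0, 1, (200, n_neurons))
      yte = np.sign(Xte @ w_true + rng.normal(0, 0.5, 200))
      acc = np.mean(np.sign(Xte @ w) == yte)
      print(f"decoder accuracy: {acc:.2f}")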

  1. Computational prediction of isolated performance of an axisymmetric nozzle at Mach number 0.90

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1994-01-01

    An improved ability to predict external propulsive performance was incorporated into the three-dimensional Navier-Stokes code PAB3D. The improvements are the ability to account for skin friction and external pressure forces. Performance parameters for two axisymmetric supersonic cruise nozzle configurations were calculated to test the improved methodology. Internal and external flow-field regions were computed using a two-equation kappa-epsilon turbulent viscous-stress model. The computed thrust-minus-drag ratios were within 1 percent of the absolute level of experimental data and the trends of data were predicted accurately. The predicted trend of integrated nozzle pressure drag matched the trend of the integrated experimental pressure drag over a range of nozzle pressure ratios, but absolute drag levels were not accurately predicted.

  2. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a "family of secular functions" that we herein call "adaptive mode observers" is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of a "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  3. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-06-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions" that we herein call "adaptive mode observers" is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, a reconsideration entailed by the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
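
    The root-searching idea behind such mode computations can be sketched as a scan-and-bisect loop over phase velocity; the secular (dispersion) function below is a toy stand-in, not the generalized reflection/transmission formulation itself:

        import math

        def find_modes(secular, c_min, c_max, n_scan=2000, tol=1e-10):
            """Scan [c_min, c_max] for sign changes of the secular function,
            then refine each bracketed root by bisection. A real solver would
            substitute the paper's 'adaptive mode observers' for the single
            secular function used here."""
            cs = [c_min + i * (c_max - c_min) / n_scan for i in range(n_scan + 1)]
            roots = []
            for a, b in zip(cs, cs[1:]):
                fa, fb = secular(a), secular(b)
                if fa == 0.0:
                    roots.append(a)
                elif fa * fb < 0.0:            # sign change brackets a root
                    lo, hi = a, b
                    while hi - lo > tol:
                        mid = 0.5 * (lo + hi)
                        if secular(lo) * secular(mid) <= 0.0:
                            hi = mid
                        else:
                            lo = mid
                    roots.append(0.5 * (lo + hi))
            return roots

        # Toy dispersion function with known roots at pi/3, 2*pi/3, pi:
        print(find_modes(lambda c: math.sin(3.0 * c), 0.1, 4.0))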

  4. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, resulting in upward flow which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  5. Accurate prediction model of bead geometry in crimping butt of the laser brazing using generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.

    2015-12-01

    Little research has concentrated on predicting the bead geometry for laser brazing with crimping butt. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, a GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with other published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those predicted by BPNN (14.28% and 0.0832). The prediction accuracy was thus improved by at least a factor of two, and the stability was also increased considerably.
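
    A GRNN is essentially Nadaraya-Watson kernel regression, so both the predictor and the quoted error measures fit in a few lines of numpy; the training pairs and smoothing width below are placeholders, not the paper's data:

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.3):
            """Specht-style GRNN: a Gaussian-kernel-weighted average of the
            training targets; sigma controls the smoothing."""
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * sigma**2))
            return (w @ y_train) / w.sum(axis=1)

        def are_mse_rmse(y_true, y_pred):
            """The three error measures quoted in the abstract."""
            are = np.mean(np.abs((y_true - y_pred) / y_true))
            mse = np.mean((y_true - y_pred) ** 2)
            return are, mse, np.sqrt(mse)

        # Placeholder process/response pairs (e.g. power, speed -> bead width):
        X = np.array([[1.2, 0.8], [1.5, 1.0], [1.8, 1.3]])
        y = np.array([0.42, 0.55, 0.70])
        print(grnn_predict(X, y, np.array([[1.6, 1.1]])))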

  6. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market becomes possible and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two research projects in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used to estimate the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  7. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances with those predicted by the virtual model. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
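
    One plausible way to turn the measured model-versus-machine discrepancies into probability-indexed buffers is a one-sided Gaussian tail construction; the paper's exact statistical treatment is not given here, so the following is an assumption:

        import numpy as np
        from scipy.stats import norm

        def safety_buffer(discrepancies_cm, collision_prob=0.001):
            """Buffer distance exceeded by a (Gaussian-modeled) discrepancy
            only with probability `collision_prob`."""
            mu = np.mean(discrepancies_cm)
            sd = np.std(discrepancies_cm, ddof=1)
            return mu + sd * norm.ppf(1.0 - collision_prob)

        # Invented discrepancy sample (cm):
        print(safety_buffer([0.3, 0.8, 1.2, 0.5, 2.0, 1.1, 0.9]))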

  8. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction

    PubMed Central

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F.

    2015-01-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs. PMID:26713437

  9. Procedure for computer-controlled milling of accurate surfaces of revolution for millimeter and far-infrared mirrors

    NASA Technical Reports Server (NTRS)

    Emmons, Louisa; De Zafra, Robert

    1991-01-01

    A simple method for milling accurate off-axis parabolic mirrors with a computer-controlled milling machine is discussed. For machines with a built-in circle-cutting routine, an exact paraboloid can be milled with few computer commands and without the use of the spherical or linear approximations. The proposed method can be adapted easily to cut off-axis sections of elliptical or spherical mirrors.
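
    The key identity for the on-axis case is that every constant-depth section of a paraboloid z = r^2/(4f) is an exact circle, so a built-in circle-cutting routine only needs the radius at each depth; a minimal sketch (the paper's off-axis sections require additional geometry):

        import math

        def circle_radii(focal_length, depth_step, n_cuts):
            """Exact circular cut radius r = sqrt(4 f z) at each milling
            depth z for an on-axis paraboloid z = r**2 / (4 * f)."""
            return [(i * depth_step,
                     math.sqrt(4.0 * focal_length * i * depth_step))
                    for i in range(1, n_cuts + 1)]

        for z, r in circle_radii(focal_length=200.0, depth_step=0.5, n_cuts=4):
            print(f"depth {z:5.2f} mm -> circle radius {r:7.2f} mm")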

  10. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be

  11. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors.

    PubMed

    Carlson, Joel N K; Park, Jong Min; Park, So-Yeon; Park, Jong In; Choi, Yunseok; Ye, Sung-Joon

    2016-03-21

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
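
    A stripped-down version of the idea — derive per-leaf motion features from the plan and regress the delivered-minus-planned error on them — might look as follows; the paper's feature set and learner are far richer, so the three features and the linear model here are illustrative only:

        import numpy as np

        def leaf_features(planned_pos, dt):
            """Per-control-point features: position, velocity, and a flag for
            motion toward the isocenter (position and velocity of opposite sign)."""
            vel = np.gradient(planned_pos, dt)
            toward_iso = -np.sign(planned_pos) * np.sign(vel)
            return np.column_stack([planned_pos, vel, toward_iso])

        def fit_error_model(planned_pos, delivered_pos, dt):
            """Least-squares model of the (delivered - planned) leaf error."""
            X = leaf_features(planned_pos, dt)
            X = np.column_stack([np.ones(len(X)), X])      # intercept column
            err = np.asarray(delivered_pos) - np.asarray(planned_pos)
            coef, *_ = np.linalg.lstsq(X, err, rcond=None)
            return coef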

  12. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  13. LOCUSTRA: accurate prediction of local protein structure using a two-layer support vector machine approach.

    PubMed

    Zimmermann, Olav; Hansmann, Ulrich H E

    2008-09-01

    Constraint generation for 3D structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVM). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16=61.0% for the 16 classes of the structural alphabet and Q3=79.2% for a simple mapping to the three secondary classes helix, sheet, and coil. We achieve a mean phi(psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the (dihedral) angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837
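
    The Q16 and Q3 scores are plain per-residue accuracies, with Q3 obtained by collapsing the 16-letter alphabet onto helix/sheet/coil; the partial mapping below is hypothetical (the real table is defined by de Brevern et al.):

        def q_accuracy(predicted, observed):
            """Fraction of residues whose class label is predicted correctly."""
            return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

        # Hypothetical letter -> {H, E, C} mapping; one entry per structural letter.
        TO_THREE = {"a": "H", "b": "H", "c": "H", "d": "E",
                    "e": "E", "f": "C", "g": "C"}

        def q3_from_q16(predicted16, observed16, mapping=TO_THREE):
            p3 = [mapping[x] for x in predicted16]
            o3 = [mapping[x] for x in observed16]
            return q_accuracy(p3, o3)

        print(q3_from_q16("abcde", "abdde"))   # 0.8 on this toy example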

  14. Accurate technique for complete geometric calibration of cone-beam computed tomography systems.

    PubMed

    Cho, Youngbin; Moseley, Douglas J; Siewerdsen, Jeffrey H; Jaffray, David A

    2005-04-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach, combining a clinical linear accelerator (Elekta Synergy RP) with an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg intervals in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates geometric parameters with a level of accuracy such that the quality of CT reconstruction is not degraded by the estimation error. Sensitivity analysis shows uncertainty of 0.01 degrees (around the beam direction) to 0.3 degrees (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (along the beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0

  15. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    PubMed Central

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S.

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. PMID:22163414
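
    Dempster's rule of combination, the core of the approach, is compact enough to state directly; the two sensor mass assignments below are invented numbers over a {cloud, clear} frame:

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions (dicts mapping frozenset hypotheses
            to mass). Mass landing on the empty intersection is the conflict
            and is renormalized away."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            return {h: w / (1.0 - conflict) for h, w in combined.items()}

        CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
        EITHER = CLOUD | CLEAR                    # total ignorance
        sensor1 = {CLOUD: 0.6, CLEAR: 0.1, EITHER: 0.3}
        sensor2 = {CLOUD: 0.5, CLEAR: 0.2, EITHER: 0.3}
        print(dempster_combine(sensor1, sensor2))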

  16. DISPLAR: an accurate method for predicting DNA-binding sites on protein surfaces

    PubMed Central

    Tjong, Harianto; Zhou, Huan-Xiang

    2007-01-01

    Structural and physical properties of DNA provide important constraints on the binding sites formed on surfaces of DNA-targeting proteins. Characteristics of such binding sites may form the basis for predicting DNA-binding sites from the structures of proteins alone. Such an approach has been successfully developed for predicting protein–protein interfaces. Here this approach is adapted for predicting DNA-binding sites. We used a representative set of 264 protein–DNA complexes from the Protein Data Bank to analyze characteristics and to train and test a neural network predictor of DNA-binding sites. The input to the predictor consisted of PSI-BLAST sequence profiles and solvent accessibilities of each surface residue and 14 of its closest neighboring residues. Predicted DNA-contacting residues cover 60% of actual DNA-contacting residues and have an accuracy of 76%. This method significantly outperforms previous attempts of DNA-binding site predictions. Its application to the prion protein yielded a DNA-binding site that is consistent with recent NMR chemical shift perturbation data, suggesting that it can complement experimental techniques in characterizing protein–DNA interfaces. PMID:17284455

  17. Using complete genome comparisons to identify sequences whose presence accurately predicts clinically important phenotypes.

    PubMed

    Hall, Barry G; Cardenas, Heliodoro; Barlow, Miriam

    2013-01-01

    In clinical settings it is often important to know not just the identity of a microorganism, but also the danger posed by that particular strain. For instance, Escherichia coli can range from being a harmless commensal to being a very dangerous enterohemorrhagic (EHEC) strain. Determining pathogenic phenotypes can be both time consuming and expensive. Here we propose a simple, rapid, and inexpensive method of predicting pathogenic phenotypes on the basis of the presence or absence of short homologous DNA segments in an isolate. Our method compares completely sequenced genomes, without the need for genome alignments, to identify the presence or absence of the segments and to produce an automatic alignment of the binary strings that describe each genome. Analysis of the segment alignment allows identification of those segments whose presence strongly predicts a phenotype. Clinical application of the method requires nothing more than PCR amplification of each of the set of predictive segments. Here we apply the method to identifying EHEC strains of E. coli and to distinguishing E. coli from Shigella. We show in silico that with as few as 8 predictive sequences, if even three of those predictive sequences are amplified, the probability of being EHEC or Shigella is >0.99. The method is thus very robust to the occasional amplification failure for spurious reasons. Experimentally, we apply the method to screen a set of 98 isolates, distinguishing E. coli from Shigella and EHEC from non-EHEC E. coli strains, and show that all isolates are correctly identified. PMID:23935901
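
    The clinical decision rule reduces to counting amplified predictive segments; a minimal sketch using the three-of-eight threshold quoted in the abstract:

        def call_phenotype(amplified, n_required=3):
            """Call the target phenotype (e.g. EHEC) if at least `n_required`
            of the predictive PCR segments amplified; tolerant of occasional
            spurious amplification failures."""
            return sum(amplified) >= n_required

        # Eight predictive segments, one failed amplification, still a clear call:
        print(call_phenotype([1, 1, 0, 1, 1, 1, 1, 1]))   # True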

  18. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨u⟩/2 (⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be quantitatively calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, its use makes the computation of the HFE efficient even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
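
    Schematically, the proposed decomposition is HFE ≈ ⟨u⟩/2 + (c1·V + c2·A + c3·C + c4·X), where V, A, C and X are the four morphometric measures (excluded volume, accessible surface area, and integrated mean and Gaussian curvatures); a sketch with placeholder coefficients (the real ones come from the energy-representation fits):

        def hydration_free_energy(mean_pair_energy, measures, coeffs):
            """HFE = <u>/2 + morphometric water-reorganization term.
            `measures` = (V, A, C, X) geometric measures of the solute;
            `coeffs` are the ER-fitted coefficients (placeholders here)."""
            v, a, c, x = measures
            c1, c2, c3, c4 = coeffs
            return 0.5 * mean_pair_energy + (c1 * v + c2 * a + c3 * c + c4 * x)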

  19. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  20. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    PubMed Central

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113

  1. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background: Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings: In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to a linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion: Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
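
    The Jaccard coefficient singled out by the analysis is simply intersection over union of two ADR profiles; for concreteness:

        def jaccard(adrs_a, adrs_b):
            """Jaccard similarity between two drugs' known ADR sets."""
            a, b = set(adrs_a), set(adrs_b)
            return len(a & b) / len(a | b) if (a | b) else 0.0

        print(jaccard({"nausea", "rash", "headache"},
                      {"rash", "dizziness"}))          # 1/4 = 0.25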

  2. Predicting Antimicrobial Resistance Prevalence and Incidence from Indicators of Antimicrobial Use: What Is the Most Accurate Indicator for Surveillance in Intensive Care Units?

    PubMed Central

    Fortin, Élise; Platt, Robert W.; Fontela, Patricia S.; Buckeridge, David L.; Quach, Caroline

    2015-01-01

    Objective: The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance, is still unclear. Using respiratory isolates and antimicrobial prescriptions of nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Methods: Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance/antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, as well as no time lag and a one 4-week-period time lag. For each model, the mean absolute error (MAE) in the prediction of resistance was computed. The most accurate indicator was compared to other indicators using t-tests. Results: Results for all indicators were equivalent, except for 1 of the 20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted with carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). Conclusions: A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use. PMID:26710322

  3. Accurate ab initio prediction of NMR chemical shifts of nucleic acids and nucleic acids/protein complexes

    PubMed Central

    Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.

    2014-01-01

    NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135

  4. Accurate ab initio prediction of NMR chemical shifts of nucleic acids and nucleic acids/protein complexes.

    PubMed

    Victora, Andrea; Möller, Heiko M; Exner, Thomas E

    2014-12-16

    NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3-0.6 ppm and correlation coefficients (r(2)) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135

  5. SnowyOwl: accurate prediction of fungal genes by using RNA-Seq and homology information to select among ab initio models

    PubMed Central

    2014-01-01

    Background: Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information, but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results: SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions: SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) are freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894

  6. An Accurate, Clinically Feasible Multi-Gene Expression Assay for Predicting Metastasis in Uveal Melanoma

    PubMed Central

    Onken, Michael D.; Worley, Lori A.; Tuscan, Meghan D.; Harbour, J. William

    2010-01-01

    Uveal (ocular) melanoma is an aggressive cancer that often forms undetectable micrometastases before diagnosis of the primary tumor. These micrometastases later multiply to generate metastatic tumors that are resistant to therapy and are uniformly fatal. We have previously identified a gene expression profile derived from the primary tumor that is extremely accurate for identifying patients at high risk of metastatic disease. Development of a practical clinically feasible platform for analyzing this expression profile would benefit high-risk patients through intensified metastatic surveillance, earlier intervention for metastasis, and stratification for entry into clinical trials of adjuvant therapy. Here, we migrate the expression profile from a hybridization-based microarray platform to a robust, clinically practical, PCR-based 15-gene assay comprising 12 discriminating genes and three endogenous control genes. We analyze the technical performance of the assay in a prospective study of 609 tumor samples, including 421 samples sent from distant locations. We show that the assay can be performed accurately on fine needle aspirate biopsy samples, even when the quantity of RNA is below detectable limits. Preliminary outcome data from the prospective study affirm the prognostic accuracy of the assay. This prognostic assay provides an important addition to the armamentarium for managing patients with uveal melanoma, and it provides a proof of principle for the development of similar assays for other cancers. PMID:20413675

  7. Computational Methods to Predict Protein Interaction Partners

    NASA Astrophysics Data System (ADS)

    Valencia, Alfonso; Pazos, Florencio

    In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related to protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (e.g., structural information or even experimental data) or are based on training with examples.

  8. A gene expression biomarker accurately predicts estrogen receptor α modulation in a human gene expression compendium

    EPA Science Inventory

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...

  9. Computer program to predict aircraft noise levels

    NASA Technical Reports Server (NTRS)

    Clark, B. J.

    1981-01-01

    Methods developed at the NASA Lewis Research Center for predicting the noise contributions from various aircraft noise sources were programmed to predict aircraft noise levels either in flight or in ground tests. The noise sources include fan inlet and exhaust, jet, flap (for powered lift), core (combustor), turbine, and airframe. Noise propagation corrections are available for atmospheric attenuation, ground reflections, extra ground attenuation, and shielding. Outputs can include spectra, overall sound pressure level, perceived noise level, tone-weighted perceived noise level, and effective perceived noise level at locations specified by the user. Footprint contour coordinates and approximate footprint areas can also be calculated. Inputs and outputs can be in either System International or U.S. customary units. The subroutines for each noise source and propagation correction are described. A complete listing is given.
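
    Whatever the individual source models, combining component levels into an overall sound pressure level is energy (mean-square pressure) addition, not dB addition; a small sketch, not taken from the program itself:

        import math

        def overall_spl(component_levels_db):
            """Overall SPL of incoherent sources (fan, jet, core, airframe, ...):
            sum the mean-square pressures, then convert back to decibels."""
            energy = sum(10.0 ** (level / 10.0) for level in component_levels_db)
            return 10.0 * math.log10(energy)

        print(overall_spl([100.0, 100.0]))   # ~103 dB, not 200 dB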

  10. Limited rotational and rovibrational line lists computed with highly accurate quartic force fields and ab initio dipole surfaces.

    PubMed

    Fortenberry, Ryan C; Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2014-02-01

    In this work, computational procedures are employed to compute the rotational and rovibrational spectra and line lists for H2O, CO2, and SO2. Building on the established use of quartic force fields, MP2 and CCSD(T) Dipole Moment Surfaces (DMSs) are computed for each system of study in order to produce line intensities as well as the transition energies. The computed results exhibit a clear correlation to reference data available in the HITRAN database. Additionally, even though CCSD(T) DMSs produce more accurate intensities as compared to experiment, the use of MP2 DMSs results in reliable line lists that are still comparable to experiment. The use of the less computationally costly MP2 method is beneficial in the study of larger systems where use of CCSD(T) would be more costly. PMID:23692860

  11. Computer Program Predicts Turbine-Stage Performance

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Haas, Jeffrey E.; Katsanis, Theodore

    1988-01-01

    MTSBL is an updated version of the flow-analysis programs MERIDL and TSONIC coupled to the boundary-layer program BLAYER. The method uses a quasi-three-dimensional, inviscid, stream-function flow analysis iteratively coupled to calculated losses, so that changes in losses result in changes in the flow distribution. In this manner, both the effect of configuration on flow distribution and the effect of flow distribution on losses are taken into account in predicting the performance of the stage. Written in FORTRAN IV.

  12. Comparisons of AEROX computer program predictions of lift and induced drag with flight test data

    NASA Technical Reports Server (NTRS)

    Axelson, J.; Hill, G. C.

    1981-01-01

    The AEROX aerodynamic computer program, which provides accurate predictions of induced drag and trim drag for the full angle-of-attack range and for Mach numbers from 0.4 to 3.0, is described. This capability is demonstrated by comparing flight test data and AEROX predictions for 17 different tactical aircraft. Values of minimum (skin friction, pressure, and zero-lift wave) drag coefficients and lift coefficient offset due to camber (when required) were input from the flight test data to produce total lift and drag curves. The comparisons of trimmed lift-drag polars show excellent agreement between the AEROX predictions and the in-flight measurements.
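
    In the low-speed limit any such prediction must reduce to the classic finite-wing drag polar; a sketch with the Oswald efficiency factor as an assumed input (AEROX's internal method covers high angles of attack and supersonic speeds, which this does not):

        import math

        def induced_drag_coeff(cl, aspect_ratio, oswald_e=0.8):
            """Classic finite-wing relation CD_i = CL^2 / (pi * e * AR)."""
            return cl**2 / (math.pi * oswald_e * aspect_ratio)

        def total_drag_coeff(cd_min, cl, aspect_ratio, oswald_e=0.8):
            """Drag polar: minimum drag plus lift-induced drag."""
            return cd_min + induced_drag_coeff(cl, aspect_ratio, oswald_e)

        print(total_drag_coeff(cd_min=0.02, cl=0.5, aspect_ratio=3.0))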

  13. Viewing men's faces does not lead to accurate predictions of trustworthiness

    PubMed Central

    Efferson, Charles; Vogt, Sonja

    2013-01-01

    The evolution of cooperation requires some mechanism that reduces the risk of exploitation for cooperative individuals. Recent studies have shown that men with wide faces are anti-social, and they are perceived that way by others. This suggests that people could use facial width to identify anti-social men and thus limit the risk of exploitation. To see if people can make accurate inferences like this, we conducted a two-part experiment. First, males played a sequential social dilemma, and we took photographs of their faces. Second, raters then viewed these photographs and guessed how second movers behaved. Raters achieved significant accuracy by guessing that second movers exhibited reciprocal behaviour. Raters were not able to use the photographs to further improve accuracy. Indeed, some raters used the photographs to their detriment; they could have potentially achieved greater accuracy and earned more money by ignoring the photographs and assuming all second movers reciprocate. PMID:23308340

  14. Accurate Prediction of the Statistics of Repetitions in Random Sequences: A Case Study in Archaea Genomes

    PubMed Central

    Régnier, Mireille; Chassignet, Philippe

    2016-01-01

    Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences. PMID:27376057

  15. Accurate Prediction of the Statistics of Repetitions in Random Sequences: A Case Study in Archaea Genomes.

    PubMed

    Régnier, Mireille; Chassignet, Philippe

    2016-01-01

    Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences. PMID:27376057
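
    The flavor of such results can be checked numerically: for an i.i.d. sequence over an alphabet of size sigma, the longest repeated substring is expected to have length on the order of 2 log_sigma n (the standard heuristic constant, not necessarily the paper's exact formula); a brute-force check that is fine for small n:

        import math
        import random

        def longest_repeat(s):
            """Length of the longest substring occurring at least twice, via
            longest common prefixes of adjacent sorted suffixes (O(n^2))."""
            suffixes = sorted(s[i:] for i in range(len(s)))
            best = 0
            for a, b in zip(suffixes, suffixes[1:]):
                k = 0
                while k < min(len(a), len(b)) and a[k] == b[k]:
                    k += 1
                best = max(best, k)
            return best

        n, sigma = 5000, 4
        seq = "".join(random.choice("ACGT") for _ in range(n))
        print(longest_repeat(seq), 2.0 * math.log(n, sigma))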

  16. Accurate, conformation-dependent predictions of solvent effects on protein ionization constants

    PubMed Central

    Barth, P.; Alber, T.; Harbury, P. B.

    2007-01-01

    Predicting how aqueous solvent modulates the conformational transitions and influences the pKa values that regulate the biological functions of biomolecules remains an unsolved challenge. To address this problem, we developed FDPB_MF, a rotamer repacking method that exhaustively samples side chain conformational space and rigorously calculates multibody protein–solvent interactions. FDPB_MF predicts the effects on pKa values of various solvent exposures, large ionic strength variations, strong energetic couplings, structural reorganizations and sequence mutations. The method achieves high accuracy, with root mean square deviations within 0.3 pH unit of the experimental values measured for turkey ovomucoid third domain, hen lysozyme, Bacillus circulans xylanase, and human and Escherichia coli thioredoxins. FDPB_MF provides a faithful, quantitative assessment of electrostatic interactions in biological macromolecules. PMID:17360348
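
    The conversion at the heart of any such calculation is from an electrostatic free-energy shift to a pKa shift, ΔpKa = ΔΔG / (RT ln 10); a one-line sketch treating the FDPB-computed ΔΔG as an input:

        import math

        R_KJ = 8.314462618e-3     # gas constant, kJ/(mol K)

        def delta_pka(ddg_kj_per_mol, temperature_k=298.15):
            """pKa shift implied by a protonation free-energy change ddG."""
            return ddg_kj_per_mol / (R_KJ * temperature_k * math.log(10.0))

        print(delta_pka(5.7))     # ~1 pKa unit per 5.7 kJ/mol at 298 K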

  17. Computational Aeroheating Predictions for X-34

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; Gnoffo, Peter A.; Alter, Stephen J.

    1998-01-01

    Radiative equilibrium surface temperatures, heating rates, streamlines, surface pressures, and flow-field features as predicted by the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) are presented for the X-34 Technology Demonstrator. Results for two trajectory points corresponding to entry peak heating and two control surface deflections are discussed. This data is also discussed in the context of Thermal Protection System (TPS) design issues. The work presented in this report is part of a larger effort to define the X-34 aerothermal environment, including the application of engineering codes and wind-tunnel studies.

  18. Computational Aeroheating Predictions for X-34

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; Gnoffo, Peter A.

    1998-01-01

    Radiative equilibrium surface temperatures, heating rates, streamlines, surface pressures, and flow-field features as predicted by the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) are presented for the X-34 Technology Demonstrator. Results for two trajectory points corresponding to entry peak heating and two control surface deflections are discussed. This data is also discussed in the context of Thermal Protection System (TPS) design issues. The work presented in this report is part of a larger effort to define the X-34 aerothermal environment, including the application of engineering codes and wind-tunnel studies.
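
    The radiative equilibrium surface temperature reported by such codes is the wall temperature at which surface re-radiation balances the convective heating rate, q = εσT⁴; a sketch with an assumed emissivity:

        SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

        def radiative_equilibrium_temp(q_wall_w_m2, emissivity=0.8):
            """Wall temperature solving q = eps * sigma * T**4."""
            return (q_wall_w_m2 / (emissivity * SIGMA_SB)) ** 0.25

        print(radiative_equilibrium_temp(5.0e5))   # ~1820 K at 50 W/cm^2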

  19. Accurate Fault Prediction of BlueGene/P RAS Logs Via Geometric Reduction

    SciTech Connect

    Jones, Terry R; Kirby, Michael; Ladd, Joshua S; Dreisigmeyer, David; Thompson, Joshua

    2010-01-01

    The authors are building two algorithms for fault prediction using raw system-log data. This work is preliminary and has only been applied to a limited dataset; however, the results seem promising. The conclusions are that: (1) obtaining useful data from RAS logs is challenging; (2) extracting concentrated information improves efficiency and accuracy; and (3) function evaluation algorithms are fast and lend themselves well to scaling.

  20. Accurate single-sequence prediction of solvent accessible surface area using local and global features

    PubMed Central

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-01-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636

  1. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.

    PubMed

    Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L

    2015-01-01

    Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  2. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    PubMed

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636

  3. Accurate Prediction of Drug-Induced Liver Injury Using Stem Cell-Derived Populations

    PubMed Central

    Szkolnicka, Dagmara; Farnworth, Sarah L.; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P.; Flint, Oliver

    2014-01-01

    Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies. PMID:24375539

  4. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, these symptoms are nonspecific, and their prediction horizon is unknown and highly variable; hence, they are of little use for prediction and cannot be relied on to advance drug intake so that the medication takes effect in time to neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against sensor noise and failures. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
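
    To make the forecasting task concrete, the sketch below fits a simple autoregressive model to a monitored hemodynamic variable and rolls it forward to produce a forecast window; this is a deliberately simplified stand-in for the paper's N4SID subspace state-space identification, and the model order and horizon are arbitrary choices.

        import numpy as np

        def fit_ar(y, p=5):
            # least-squares fit of y[t] = a . y[t-p:t]
            X = np.array([y[t - p:t] for t in range(p, len(y))])
            a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
            return a

        def forecast(y, a, steps):
            hist = list(y[-len(a):])              # most recent p samples
            out = []
            for _ in range(steps):
                out.append(float(np.dot(a, hist)))
                hist = hist[1:] + [out[-1]]       # roll the window forward
            return np.array(out)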

  5. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, for which the protocol predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were predicted to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  6. Combining Theory and Experiment to Compute Highly Accurate Line Lists for Stable Molecules, and Purely AB Initio Theory to Compute Accurate Rotational and Rovibrational Line Lists for Transient Molecules

    NASA Astrophysics Data System (ADS)

    Lee, Timothy J.; Huang, Xinchuan; Fortenberry, Ryan C.; Schwenke, David W.

    2013-06-01

    Theoretical chemists have been computing vibrational and rovibrational spectra of small molecules for more than 40 years, but over the last decade interest in this application has grown significantly. The increased interest in computing accurate rotational and rovibrational spectra for small molecules could not come at a better time, as NASA and ESA have begun to acquire a mountain of high-resolution spectra from the Herschel mission, and soon will from the SOFIA and JWST missions. In addition, the ground-based telescope ALMA has begun to acquire high-resolution spectra in the same time frame. Hence the need for highly accurate line lists for many small molecules, including their minor isotopologues, will only continue to increase. I will present the latest developments from our group on using the "Best Theory + High-Resolution Experimental Data" strategy to compute highly accurate rotational and rovibrational spectra for small molecules, including NH3, CO2, and SO2. I will also present the latest work from our group in producing purely ab initio line lists and spectroscopic constants for small molecules thought to exist in various astrophysical environments, but for which there is either limited or no high-resolution experimental data available. These more limited line lists include purely rotational transitions as well as rovibrational transitions up through a few combination and overtone bands.

  7. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using the predicted residues to constrain the domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of the domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of the interactions. PMID:24375512
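
    A minimal sketch of the classification step: per-residue feature vectors labelled as interface or non-interface feed a Gaussian naïve Bayes model. The feature names and synthetic data below are placeholders, not the authors' actual evolutionary features.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))      # e.g., conservation, accessibility, ...
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = interface residue

        clf = GaussianNB().fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))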

  8. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate in the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
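
    The flavor of such a scheme can be seen on the 1-D scalar wave equation u_tt = c^2 u_xx: a five-point, fourth-order stencil in space combined with second-order leapfrog in time. This is a simplified scalar analogue for illustration, not the paper's first-order velocity-stress elastic system.

        import numpy as np

        c, nx = 1.0, 201
        dx = 1.0 / (nx - 1)
        dt = 0.4 * dx / c                          # CFL-limited time step
        x = np.linspace(0.0, 1.0, nx)
        u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # Gaussian pulse
        u = u_prev.copy()

        for _ in range(200):
            uxx = np.zeros_like(u)                 # 4th-order second derivative
            uxx[2:-2] = (-u[:-4] + 16*u[1:-3] - 30*u[2:-2]
                         + 16*u[3:-1] - u[4:]) / (12 * dx**2)
            u, u_prev = 2*u - u_prev + (c*dt)**2 * uxx, u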

  9. More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Dennis A.

    1989-01-01

    Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well-established fact about turbulence is that it is slow to respond to changes in the mean strain field. The 'equilibrium' algebraic turbulence models make no attempt to model this characteristic, and as a consequence they exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. Hence, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, this turbulence model has since been further improved; it now appears to perform at least as well as the 'equilibrium' models for weak interaction cases and to represent a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.

  10. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    NASA Astrophysics Data System (ADS)

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-02-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
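
    The essence of the chemical kinetic model -- two-state folding kinetics evolved over the dwell time of each codon once the domain has emerged from the ribosome -- can be sketched as follows; the rate constants and emergence point are illustrative inputs, not values from the paper.

        import numpy as np

        def folding_curve(codon_rates, k_f, k_u, emerge):
            # P(folded) after each codon; folding starts once the domain emerges
            p, k_tot = 0.0, k_f + k_u
            p_eq, curve = k_f / k_tot, []
            for i, k_c in enumerate(codon_rates):
                dwell = 1.0 / k_c                  # mean time on this codon
                if i >= emerge:                    # exact two-state relaxation
                    p = p_eq + (p - p_eq) * np.exp(-k_tot * dwell)
                curve.append(p)
            return np.array(curve)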

  11. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach

    PubMed Central

    Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng

    2015-01-01

    The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined as fragments located at either end of an ordered region that adjoins a disordered region. The multi-class conservative score is then generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions a residue is determined to be ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established publicly available disordered-region predictors. The results also indicate that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958

  12. Accurate predictions of dielectrophoretic force and torque on particles with strong mutual field, particle, and wall interactions

    NASA Astrophysics Data System (ADS)

    Liu, Qianlong; Reifsnider, Kenneth

    2012-11-01

    The basis of dielectrophoresis (DEP) is the prediction of the force and torque on particles. The classical approach to this prediction is based on the effective moment method, which, however, is an approximation that assumes infinitesimal particles. It is therefore well known that for finite-sized particles the DEP approximation becomes inaccurate as the mutual field, particle, and wall interactions become strong, a situation presently attracting extensive research because of its practically significant applications. In the present talk, we provide accurate calculations of the force and torque on the particles from first principles, by directly resolving the local geometry and properties and accurately accounting for the mutual interactions of finite-sized particles with both dielectric polarization and conduction in a sinusoidally steady-state electric field. Because the approach has a significant advantage over other numerical methods in efficiently simulating many closely packed particles, it provides an important, unique, and accurate technique for investigating complex DEP phenomena, for example heterogeneous mixtures containing particle chains, nanoparticle assembly, biological cells, non-spherical effects, etc. This study was supported by the Department of Energy under funding for an EFRC (the HeteroFoaM Center), grant no. DE-SC0001061.

  13. PSI: a comprehensive and integrative approach for accurate plant subcellular localization prediction.

    PubMed

    Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming

    2013-01-01

    Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while performing poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint approach of group decision-making strategies and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than the best individual predictor (CELLO) by ~10.7%. The precision of each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827

  14. The Compensatory Reserve For Early and Accurate Prediction Of Hemodynamic Compromise: A Review of the Underlying Physiology.

    PubMed

    Convertino, Victor A; Wirt, Michael D; Glenn, John F; Lein, Brian C

    2016-06-01

    Shock is deadly and unpredictable if it is not recognized and treated in early stages of hemorrhage. Unfortunately, measurements of standard vital signs that are displayed on current medical monitors fail to provide accurate or early indicators of shock because of physiological mechanisms that effectively compensate for blood loss. As a result of new insights provided by the latest research on the physiology of shock using human experimental models of controlled hemorrhage, it is now recognized that measurement of the body's reserve to compensate for reduced circulating blood volume is the single most important indicator for early and accurate assessment of shock. We have called this function the "compensatory reserve," which can be accurately assessed by real-time measurements of changes in the features of the arterial waveform. In this paper, the physiology underlying the development and evaluation of a new noninvasive technology that allows for real-time measurement of the compensatory reserve will be reviewed, with its clinical implications for earlier and more accurate prediction of shock. PMID:26950588

  15. Predicting Computer Science Ph.D. Completion: A Case Study

    ERIC Educational Resources Information Center

    Cox, G. W.; Hughes, W. E., Jr.; Etzkorn, L. H.; Weisskopf, M. E.

    2009-01-01

    This paper presents the results of an analysis of indicators that can be used to predict whether a student will succeed in a Computer Science Ph.D. program. The analysis was conducted by studying the records of 75 students who have been in the Computer Science Ph.D. program of the University of Alabama in Huntsville. Seventy-seven variables were…

  16. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients.

    PubMed

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, the mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to all nine preceding VFs (VF1-9). Then an 'intraocular pressure (IOP)-integrated VF trend analysis' was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also pointwise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The pointwise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553

  17. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients

    PubMed Central

    Asaoka, Ryo; Fujino, Yuri; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, the mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to all nine preceding VFs (VF1-9). Then an ‘intraocular pressure (IOP)-integrated VF trend analysis’ was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also pointwise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The pointwise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
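
    The IOP-integrated analysis reduces to an ordinary least-squares fit with time multiplied by IOP as the regressor, as in this sketch (illustrative numbers; the authors' exact handling of repeated IOP measurements may differ):

        import numpy as np

        t   = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # years of VF1-5
        iop = np.array([18.0, 17.5, 19.0, 18.5, 18.0])   # mmHg at each visit
        mtd = np.array([-2.0, -2.3, -2.8, -3.0, -3.4])   # dB

        slope, icept = np.polyfit(t * iop, mtd, 1)       # IOP-integrated trend
        print("predicted 10th-field mTD:", slope * (4.5 * 18.0) + icept)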

  18. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  19. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. A combined method, based on multiple regression together with principal component analysis (PCA) modelling, was used to predict columnar ozone. This combined approach was utilized to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in the values of AST, SSKT, AT, and CO and with a drop in the levels of CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
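
    A compact sketch of the combined approach -- principal components guide variable selection, then multiple regression fits columnar ozone on the retained predictors. The unrotated-loading criterion and synthetic data are stand-ins for the paper's varimax-rotated procedure and AIRS retrievals.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 8))              # the 8 atmospheric predictors
        y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)  # columnar O3

        pca = PCA(n_components=3).fit(X)
        keep = np.unique(np.abs(pca.components_).argmax(axis=1))  # high loadings
        model = LinearRegression().fit(X[:, keep], y)
        print("kept predictors:", keep, " R^2:", model.score(X[:, keep], y))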

  20. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER- patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  1. Accurate prediction of electron-paramagnetic-resonance tensors for spin probes dissolved in liquid crystals.

    PubMed

    Benzi, Caterina; Cossi, Maurizio; Barone, Vincenzo

    2005-11-15

    High-level ab initio g and A tensor components have been calculated for PD-tempone and tempo-palmitate (TP) radical spin probes dissolved in n-pentyl and n-hexyl cyanobiphenyl liquid crystals. Solvent effects have been included in the proposed approach by means of the polarizable continuum model, allowing for solvent anisotropy. An in-depth analysis of the electronic structure of probes was performed to choose a suitable model for TP and make the calculations more accessible. Computed magnetic tensor components have been compared with corresponding values measured in the rigid limit. The quality of the results suggests the use of quantum-mechanical data to determine the order parameter of the nematic from experimental electron-spin resonance measurements. PMID:16321115

  2. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.

  3. Sequence features accurately predict genome-wide MeCP2 binding in vivo.

    PubMed

    Rube, H Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H; Hess, John F; LaSalle, Janine M; Song, Jun S; Gong, Qizhi

    2016-01-01

    Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915

  4. Sequence features accurately predict genome-wide MeCP2 binding in vivo

    PubMed Central

    Rube, H. Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H.; Hess, John F.; LaSalle, Janine M.; Song, Jun S.; Gong, Qizhi

    2016-01-01

    Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915

  5. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. PMID:26708965

  6. An accurate and efficient method for prediction of the long-term evolution of space debris in the geosynchronous region

    NASA Astrophysics Data System (ADS)

    McNamara, Roger P.; Eagle, C. D.

    1992-08-01

    The Planetary Observer High Accuracy Orbit Prediction Program (POHOP), an existing numerical integrator, was modified with the solar and lunar formulae developed by T.C. Van Flandern and K.F. Pulkkinen to provide the accuracy required to evaluate long-term orbit characteristics of objects in the geosynchronous region. The orbit of a 1000 kg class spacecraft is numerically integrated over 50 years using both the original and the more accurate solar and lunar ephemerides methods. Results of this study demonstrate that, over the long term, for an object located in the geosynchronous region, the effects of the more accurate solar and lunar ephemerides on the object's position are significantly different from those computed with the current POHOP ephemeris.

  7. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
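
    For reference, the channelized Hotelling observer statistic itself is simple to compute once sample reconstructions are available; the sketch below follows the standard CHO recipe (channelize, pool covariance, Hotelling template), with the channel matrix left as an input and nothing taken from the paper's MAP-specific approximation.

        import numpy as np

        def cho_detectability(g1, g0, U):
            # g1/g0: (n_images, n_pixels) recons with/without signal
            # U: (n_pixels, n_channels) channel matrix, e.g. difference-of-Gaussians
            v1, v0 = g1 @ U, g0 @ U                  # channel outputs
            dv = v1.mean(0) - v0.mean(0)
            S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))  # pooled channel covariance
            w = np.linalg.solve(S, dv)               # Hotelling template
            return np.sqrt(dv @ w)                   # observer SNR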

  8. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  9. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, Christoper L.; Schaeffler, Norman W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2005-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are outlined. Results in both cases are compared to experiment.

  10. Predicting cancer risks from dental computed tomography.

    PubMed

    Wu, T-H; Lin, W-C; Chen, W-K; Chang, Y-C; Hwang, J-J

    2015-01-01

    Dental computed tomography (CT) has become a common tool when carrying out dental implants, yet there is little information available on its associated cancer risk. The objective of this study was to estimate the lifetime-attributable risk (LAR) of cancer incidence that is associated with the radiation dose from dental CT scans and to evaluate the effect of scan position, sex, and age on the cancer risk. This retrospective cohort study involved 505 participants who underwent CT scans. The mean effective doses for male and female patients in the maxilla group were 408 and 389 µSv (P = 0.055), respectively, whereas the mean effective doses for male and female patients in the mandible groups were 475 and 450 µSv (P < 0.001), respectively. The LAR for cancer incidence after mandible CT scanning varied from 1 in 16,196 for a 30-y-old woman to 1 in 114,680 for a 70-y-old man. The organ-specific cancer risks for thyroid cancer, other cancers, leukemia, and lung cancer account for 99% of the LAR. Among patients of all ages, the estimated LAR of a mandible scan was higher than that of a maxilla scan. Furthermore, the LAR for female thyroid cancer had a peak before age 45 y. The risk for a woman aged 30 y is roughly 8 times higher than that of a woman aged 50 y. After undergoing a dental CT scan, the possible cancer risks related to sex and age across various different anatomical regions are not similar. The greatest risk due to a dental CT scan is for a mandible scan when the woman is younger than 45 y. Given the limits of the sample size, machine parameters, and the retrospective nature of this study, the results need to be interpreted within the context of this patient population. Future studies will be of value to corroborate these findings. PMID:25359782

  11. Outer packet sets and feature prediction of computer virus

    NASA Astrophysics Data System (ADS)

    Zhang, Ling

    2014-10-01

    The packet sets model was proposed by Prof. Shi in 2008. A packet set is a set pair composed of an internal and an outer packet set, and it has dynamic characteristics. Using packet sets theory, this paper develops feature prediction of computer viruses based on outer packet sets. The concept of virus screening-filtering is given; furthermore, the virus screening-filtering order theorem, the composite virus screening-filtering theorem and the virus screening-filtering rule are presented. A method for predicting computer virus features is given based on these results. The outer packet set is a new tool in the study of dynamic virus feature prediction.

  12. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Collier, G.

    1967-01-01

    The computer program ISUDS (Iterative Scheme Using a Direct Solution) obtains double-precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX = B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
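
    The idea -- a single-precision direct solve refined with double-precision residuals -- is classic iterative refinement, sketched here in Python rather than the original FORTRAN; this follows the general technique, not the ISUDS source.

        import numpy as np

        def refine_solve(A, b, iters=5):
            A32 = A.astype(np.float32)                 # single-precision solves
            x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
            for _ in range(iters):
                r = b - A @ x                          # residual in double precision
                x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
            return x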

  13. Time-Accurate Computation of Viscous Flow Around Deforming Bodies Using Overset Grids

    SciTech Connect

    Fast, P; Henshaw, W D

    2001-04-02

    Dynamically evolving boundaries and deforming bodies interacting with a flow are commonly encountered in fluid dynamics. However, the numerical simulation of flows with dynamic boundaries is difficult with current methods. We propose a new method for studying such problems. The key idea is to use the overset grid method with a thin, body-fitted grid near the deforming boundary, while using fixed Cartesian grids to cover most of the computational domain. Our approach combines the strengths of earlier moving overset grid methods for rigid body motion, and unstructured grid methods for flow-structure interactions. Large scale deformation of the flow boundaries can be handled without a global regridding, and in a computationally efficient way. In terms of computational cost, even a full overset grid regridding is significantly cheaper than a full regridding of an unstructured grid for the same domain, especially in three dimensions. Numerical studies are used to verify accuracy and convergence of our flow solver. As a computational example, we consider two-dimensional incompressible flow past a flexible filament with prescribed dynamics.

  14. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each cis-regulatory element candidate. We conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group compared to its occurrence within the genome. (c) We introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we discuss our newly devised protein design framework. With this framework we have developed

  15. BgN-Score and BsN-Score: Bagging and boosting based ensemble neural networks scoring functions for accurate binding affinity prediction of protein-ligand complexes

    PubMed Central

    2015-01-01

    Background Accurately predicting the binding affinities of large sets of protein-ligand complexes is a key challenge in computational biomolecular science, with applications in drug discovery, chemical biology, and structural biology. Since a scoring function (SF) is used to score, rank, and identify drug leads, the fidelity with which it predicts the affinity of a ligand candidate for a protein's binding site has a significant bearing on the accuracy of virtual screening. Despite intense efforts in developing conventional SFs, which are either force-field based, knowledge-based, or empirical, their limited predictive power has been a major roadblock toward cost-effective drug discovery. Therefore, in this work, we present novel SFs employing a large ensemble of neural networks (NN) in conjunction with a diverse set of physicochemical and geometrical features characterizing protein-ligand complexes to predict binding affinity. Results We assess the scoring accuracies of two new ensemble NN SFs based on bagging (BgN-Score) and boosting (BsN-Score), as well as those of conventional SFs in the context of the 2007 PDBbind benchmark that encompasses a diverse set of high-quality protein families. We find that BgN-Score and BsN-Score have more than 25% better Pearson's correlation coefficient (0.804 and 0.816 vs. 0.644) between predicted and measured binding affinities compared to that achieved by a state-of-the-art conventional SF. In addition, these ensemble NN SFs are also at least 19% more accurate (0.804 and 0.816 vs. 0.675) than SFs based on a single neural network that has been traditionally used in drug discovery applications. We further find that ensemble models based on NNs surpass SFs based on the decision-tree ensemble technique Random Forests. Conclusions Ensemble neural networks SFs, BgN-Score and BsN-Score, are the most accurate in predicting binding affinity of protein-ligand complexes among the considered SFs. Moreover, their accuracies are even higher
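
    The bagging half of the idea maps directly onto off-the-shelf tools, as in this sketch; the feature matrix is synthetic and the network size arbitrary, standing in for the paper's physicochemical and geometrical descriptors and its specific ensemble configuration.

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 20))                 # complex descriptors
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=300)  # binding affinity

        bgn = BaggingRegressor(
            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
            n_estimators=25, random_state=0).fit(X, y)  # ensemble of NNs
        print("training R^2:", bgn.score(X, y))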

  16. Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes.

    PubMed

    Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G; Qin, Jun; Chen, Rui

    2016-05-01

    Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414

  17. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.; Lomakina, Ekaterina I.

    2009-08-01

    An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing with the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. Optimization of the neural network structure, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6±0.2 kcal mol-1. In addition, a comparison of the different density functionals and basis sets and a comparison with multiple linear regression results were also provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.

  18. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    PubMed

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059

  19. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
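
    The core of the model is the Lorentz-Lorenz relation, which can be inverted for the refractive index once the polarizability and number density are known; the function below is a minimal sketch, assuming consistent (e.g., CGS) units for both inputs.

        import numpy as np

        def refractive_index(alpha, number_density):
            # Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha
            L = (4.0 * np.pi / 3.0) * number_density * alpha
            return np.sqrt((1.0 + 2.0 * L) / (1.0 - L))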

  20. How Accurate Are the Anthropometry Equations in Iranian Military Men in Predicting Body Composition?

    PubMed Central

    Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza

    2015-01-01

    Background: Body composition varies according to life style (i.e., caloric intake and caloric expenditure). Therefore, it is wise to record military personnel’s body composition periodically and encourage those who abide by the regulations. Different methods, invasive and non-invasive, have been introduced for body composition assessment; amongst them, the Jackson and Pollock equations are the most popular. Objectives: The recommended anthropometric prediction equations for assessing men’s body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23 - 52 years, with a mean age of 35.5 years, were recruited and evaluated in the present study (average height, 173.9 cm; weight, 81.5 kg). The body-fat percentages of subjects were assessed both anthropometrically and with DEXA scans. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three Jackson and Pollock equations was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site analysis is simpler and faster than the other methods. PMID:26715964
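
    For concreteness, the widely quoted form of the Jackson-Pollock 3-site equation for men (chest, abdomen, thigh skinfolds) with Siri's conversion to percent body fat is sketched below; the study's own modified coefficients are not reproduced here, so treat this as the textbook baseline rather than the paper's result.

        def jp3_percent_fat_men(chest_mm, abdomen_mm, thigh_mm, age_yr):
            s = chest_mm + abdomen_mm + thigh_mm       # skinfold sum, mm
            density = (1.10938 - 0.0008267 * s
                       + 0.0000016 * s**2 - 0.0002574 * age_yr)
            return 495.0 / density - 450.0             # Siri equation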

  1. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  2. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
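
    The principle of digital partitioning -- splitting each operand into low-precision digits, forming partial matrix-vector products (the step an analog optical processor would perform), and recombining them digitally with powers of the base -- can be verified numerically; the base and digit count below are arbitrary illustrative choices.

        import numpy as np

        def partitioned_matvec(A, x, base=4, digits=4):
            A_d = [(A // base**i) % base for i in range(digits)]  # digit planes
            x_d = [(x // base**i) % base for i in range(digits)]
            y = np.zeros(A.shape[0], dtype=np.int64)
            for i in range(digits):
                for j in range(digits):
                    y += base**(i + j) * (A_d[i] @ x_d[j])  # low-accuracy products
            return y

        A = np.array([[123, 45], [67, 89]]); x = np.array([210, 37])
        assert np.array_equal(partitioned_matvec(A, x), A @ x)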

  3. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
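
    The unscented-transform step is small enough to sketch directly: sigma points of the state estimate are pushed through the EOL simulation and reweighted to give the EOL mean and variance. The weighting parameter kappa and the eol_sim callable are generic placeholders, not values from the paper's solenoid-valve case study.

        import numpy as np

        def unscented_eol(mean, cov, eol_sim, kappa=0.5):
            n = len(mean)
            S = np.linalg.cholesky((n + kappa) * cov)      # matrix square root
            pts = [mean] + [mean + S[:, i] for i in range(n)] \
                         + [mean - S[:, i] for i in range(n)]
            w = np.full(2 * n + 1, 0.5 / (n + kappa))      # sigma-point weights
            w[0] = kappa / (n + kappa)
            eols = np.array([eol_sim(p) for p in pts])     # minimal simulations
            m = w @ eols
            return m, w @ (eols - m) ** 2                  # EOL mean, variance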

  4. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for a power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
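
    The numerical-STM idea can be sketched for the plain two-body problem (rather than the full ephemeris and hardware models above) by integrating the variational equations alongside the state; SciPy's DOP853 is an adaptive eighth-order Dormand-Prince integrator of the kind the abstract describes.

        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 1.0  # gravitational parameter, nondimensional units

        def dynamics_and_stm(t, z):
            """Planar two-body dynamics plus dPhi/dt = A(t) Phi."""
            x, y, vx, vy = z[:4]
            Phi = z[4:].reshape(4, 4)
            r = np.hypot(x, y)
            ax, ay = -mu * x / r ** 3, -mu * y / r ** 3
            A = np.zeros((4, 4))                   # Jacobian of the dynamics
            A[0, 2] = A[1, 3] = 1.0
            A[2, 0] = mu * (3 * x * x / r ** 5 - 1 / r ** 3)
            A[2, 1] = A[3, 0] = 3 * mu * x * y / r ** 5
            A[3, 1] = mu * (3 * y * y / r ** 5 - 1 / r ** 3)
            return np.concatenate([[vx, vy, ax, ay], (A @ Phi).ravel()])

        z0 = np.concatenate([[1.0, 0.0, 0.0, 1.0], np.eye(4).ravel()])
        sol = solve_ivp(dynamics_and_stm, (0.0, np.pi), z0,
                        method="DOP853", rtol=1e-12, atol=1e-12)
        print("STM after half an orbit:\n", sol.y[4:, -1].reshape(4, 4))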

  5. Iofetamine I 123 single photon emission computed tomography is accurate in the diagnosis of Alzheimer's disease

    SciTech Connect

    Johnson, K.A.; Holman, B.L.; Rosen, T.J.; Nagel, J.S.; English, R.J.; Growdon, J.H.

    1990-04-01

    To determine the diagnostic accuracy of iofetamine hydrochloride I 123 (IMP) with single photon emission computed tomography in Alzheimer's disease, we studied 58 patients with AD and 15 age-matched healthy control subjects. We used a qualitative method to assess regional IMP uptake in the entire brain and to rate image data sets as normal or abnormal without knowledge of the subjects' clinical classification. The sensitivity and specificity of IMP with single photon emission computed tomography in AD were 88% and 87%, respectively. In 15 patients with mild cognitive deficits (Blessed Dementia Scale score, less than or equal to 10), sensitivity was 80%. With the use of a semiquantitative measure of regional cortical IMP uptake, the parietal lobes were the most functionally impaired in AD and the most strongly associated with the patients' Blessed Dementia Scale scores. These results indicated that IMP with single photon emission computed tomography may be a useful adjunct in the clinical diagnosis of AD in early, mild disease.

  6. Accurate prediction of interference minima in linear molecular harmonic spectra by a modified two-center model

    NASA Astrophysics Data System (ADS)

    Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang

    2016-03-01

    We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).

  7. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  8. Evaluating Mesoscale Numerical Weather Predictions and Spatially Distributed Meteorologic Forcing Data for Developing Accurate SWE Forecasts over Large Mountain Basins

    NASA Astrophysics Data System (ADS)

    Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.

    2014-12-01

    The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2150 km2 Boise River Basin for water years 2012 - 2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorologic fields developed to force Isnobal, with the hopes of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.

  9. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State 17O NMR Chemical Shifts in Various Biologically Relevant Oxygen-containing Compounds

    PubMed Central

    Rorick, Amber; Michael, Matthew A.; Yang, Liu; Zhang, Yong

    2015-01-01

    Oxygen is an important element in most biologically significant molecules, and experimental solid-state 17O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state 17O NMR chemical shift tensor properties are still challenging in many cases, and in particular prior computational work has essentially been limited to one type of oxygen-containing system at a time. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and non-metal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of X-O bonding groups, X = H, C, N, P, and metal. The experimental shifts studied span 1455 ppm, a major part of the reported 17O NMR chemical shifts in organic and organometallic compounds. A number of computational factors towards relatively general and accurate predictions of 17O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the various kinds of oxygen-containing compounds studied, the best computational approach results in a theory-versus-experiment correlation coefficient R2 of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts, and an R2 of 0.9926 for all shift tensor properties. These results shall facilitate future computational studies of 17O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins. PMID:26274812

  10. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10⁻⁴ Ha/Bohr.

  11. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    SciTech Connect

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10⁻⁴ Ha/Bohr.

  12. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses that of existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
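
    A toy illustration of the predict-and-gate idea on a 1-D position track, with a simple least-squares AR model standing in for the paper's bank of ARMA models; the grid size plays the role of the admissible deviation between prediction and measurement.

        import numpy as np

        def ar_predict(history, order=3):
            """One-step AR(order) prediction fitted by least squares."""
            h = np.asarray(history, dtype=float)
            X = np.column_stack([h[i:len(h) - order + i] for i in range(order)])
            coef, *_ = np.linalg.lstsq(X, h[order:], rcond=None)
            return h[-order:] @ coef

        def fuse(history, gps_fix, grid_size=0.5):
            """Gate a new GPS fix against the model prediction; reject it as a
            multipath outlier if it deviates by more than the grid size."""
            pred = ar_predict(history)
            if abs(gps_fix - pred) > grid_size:
                return pred                        # fall back on the model
            return 0.5 * (pred + gps_fix)          # stand-in fusion: averaging

        track = [0.0, 0.11, 0.19, 0.32, 0.41, 0.52, 0.58, 0.71]
        print(fuse(track, 0.83))   # consistent fix -> fused
        print(fuse(track, 5.00))   # multipath jump -> rejected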

  13. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses that of existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108

  14. iTagPlot: an accurate computation and interactive drawing tool for tag density plot

    PubMed Central

    Kim, Sung-Hwan; Ezenwoye, Onyeka; Cho, Hwan-Gue; Robertson, Keith D.; Choi, Jeong-Hyeon

    2015-01-01

    Motivation: Tag density plots are very important to intuitively reveal biological phenomena from capture-based sequencing data by visualizing the normalized read depth in a region. Results: We have developed iTagPlot to compute tag density across functional features in parallel using multicores and a grid engine and to interactively explore it in a graphical user interface. It allows us to stratify features by defining groups based on biological function and measurement, summary statistics and unsupervised clustering. Availability and implementation: http://sourceforge.net/projects/itagplot/. Contact: jechoi@gru.edu and jeochoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25792550

  15. A hyperspectral imaging system for an accurate prediction of the above-ground biomass of individual rice plants.

    PubMed

    Feng, Hui; Jiang, Ni; Huang, Chenglong; Fang, Wei; Yang, Wanneng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian

    2013-09-01

    Biomass is an important component of plant phenomics, and existing methods for estimating the biomass of individual plants are either destructive or lack accuracy. In this study, a hyperspectral imaging system was developed for the accurate prediction of the above-ground biomass of individual rice plants in the visible and near-infrared spectral region. First, the structure of the system and the influence of various parameters on the camera acquisition speed were established. Then the system was used to image 152 rice plants, which were selected from the rice mini-core collection, at two stages: the tillering-to-elongation (T-E) stage and the booting-to-heading (B-H) stage. Several variables were extracted from the images. Next, linear stepwise regression analysis and 5-fold cross-validation were used to select effective variables for model construction and to test the stability of the model, respectively. For the T-E stage, the R(2) value was 0.940 for the fresh weight (FW) and 0.935 for the dry weight (DW). For the B-H stage, the R(2) value was 0.891 for the FW and 0.783 for the DW. Moreover, estimates of the biomass using visible light images were also calculated. These comparisons showed that hyperspectral imaging performed better than visible light imaging. Therefore, this study provides not only a stable hyperspectral imaging platform but also an accurate and nondestructive method for the prediction of the biomass of individual rice plants. PMID:24089866
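
    A schematic of the model-building step on synthetic data, with ordinary least squares standing in for the stepwise-selected regression; only the 5-fold cross-validation structure mirrors the study.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(42)
        X = rng.normal(size=(152, 10))       # 152 "plants" x 10 image variables
        w = np.array([2.0, 1.5, 0, 0.8, 0, 0, 0.3, 0, 0, 0])
        y = X @ w + rng.normal(scale=0.5, size=152)   # synthetic "fresh weight"

        r2s = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = LinearRegression().fit(X[train], y[train])
            r2s.append(r2_score(y[test], model.predict(X[test])))
        print("5-fold R2:", np.round(r2s, 3), "mean:", round(float(np.mean(r2s)), 3))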

  16. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
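
    The model's functional form can be fitted directly. The sketch below fits the cumulative Weibull curve y(t) = y_max * (1 - exp(-(t/λ)^n)) to an invented hydrolysis time course (not real data) and reports λ, the characteristic time used above to rank saccharification performance.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_yield(t, y_max, lam, n):
            """Cumulative Weibull form for saccharification yield."""
            return y_max * (1.0 - np.exp(-(t / lam) ** n))

        t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)    # hours
        y = np.array([8, 15, 27, 35, 52, 68, 74], dtype=float)  # % glucose yield

        popt, _ = curve_fit(weibull_yield, t, y, p0=[80.0, 24.0, 1.0])
        y_max, lam, n = popt
        print(f"y_max = {y_max:.1f}%, lambda = {lam:.1f} h, n = {n:.2f}")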

  17. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motion. Addendum to computer program description

    NASA Technical Reports Server (NTRS)

    Rowe, W. S.; Petrarca, J. R.

    1980-01-01

    Changes are described that provide increased accuracy and increased user flexibility in the prediction of unsteady loadings caused by control surface motions. Analysis flexibility is increased by reducing the restrictions on the location of the downwash stations relative to the leading edge and the edges of the control surface boundaries. Analysis accuracy is increased in predicting unsteady loadings for high Mach number analysis conditions through the use of additional chordwise downwash stations. User guidelines are presented to extend the analysis capabilities to unusual wing/control-surface configurations. Comparative results indicate that the revised procedures provide accurate predictions of unsteady loadings as well as reductions of 40 to 75 percent in the computer usage cost required by previous versions of this program.

  18. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    NASA Astrophysics Data System (ADS)

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS which generate associated currents when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles that are measured by the high quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO are presented. The full measured rotation profiles are included in the simulation. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored and simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  19. Gravitational Focusing and the Computation of an Accurate Moon/Mars Cratering Ratio

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.

    2006-01-01

    There have been a number of attempts to use asteroid populations to simultaneously compute cratering rates on the Moon and bodies elsewhere in the Solar System to establish the cratering ratio (e.g., [1],[2]). These works use current asteroid orbit population databases combined with collision rate calculations based on orbit intersections alone. As recent work on meteoroid fluxes [3] have highlighted, however, collision rates alone are insufficient to describe the cratering rates on planetary surfaces - especially planets with stronger gravitational fields than the Moon, such as Earth and Mars. Such calculations also need to include the effects of gravitational focusing, whereby the spatial density of the slower-moving impactors is preferentially "focused" by the gravity of the body. This leads overall to higher fluxes and cratering rates, and is highly dependent on the detailed velocity distributions of the impactors. In this paper, a comprehensive gravitational focusing algorithm originally developed to describe fluxes of interplanetary meteoroids [3] is applied to the collision rates and cratering rates of populations of asteroids and long-period comets to compute better cratering ratios for terrestrial bodies in the Solar System. These results are compared to the calculations of other researchers.
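
    The focusing correction itself is compact: the flux enhancement for an impactor with hyperbolic excess speed v_inf scales as 1 + (v_esc/v_inf)^2, so slow impactors are focused far more strongly and the planet-to-planet ratio hinges on the assumed velocity distribution. The velocities and weights below are placeholders, not the asteroid and comet populations used in the paper.

        import numpy as np

        def focusing_factor(v_inf, v_esc):
            """Gravitational focusing enhancement of impactor flux."""
            return 1.0 + (v_esc / v_inf) ** 2

        v_inf = np.array([5.0, 10.0, 15.0, 20.0, 30.0])   # km/s, sample speeds
        weights = np.array([0.1, 0.3, 0.3, 0.2, 0.1])     # assumed distribution

        v_esc_moon, v_esc_mars = 2.38, 5.03               # km/s
        flux_moon = weights @ focusing_factor(v_inf, v_esc_moon)
        flux_mars = weights @ focusing_factor(v_inf, v_esc_mars)
        print("Mars/Moon flux-enhancement ratio:", flux_mars / flux_moon)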

  20. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
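
    The Green-Kubo step being accelerated can be stated compactly: κ is proportional to the time integral of the heat-flux autocorrelation. The sketch below evaluates it for a synthetic flux signal with placeholder volume and temperature; a real calculation would use the MD heat flux and carefully converged correlation times.

        import numpy as np

        def green_kubo_kappa(J, dt, volume, temperature, max_lag, kB=1.380649e-23):
            """kappa = V / (kB T^2) * integral of <J(0) J(t)> dt (rectangle rule)."""
            n = len(J)
            acf = np.array([np.mean(J[:n - k] * J[k:]) for k in range(max_lag)])
            return volume / (kB * temperature ** 2) * acf.sum() * dt

        rng = np.random.default_rng(1)
        J = rng.normal(size=50000)   # white noise standing in for an MD heat flux
        print(green_kubo_kappa(J, dt=1e-15, volume=1e-26,
                               temperature=300.0, max_lag=500))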

  1. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions for the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  2. Moving Toward Integrating Gene Expression Profiling Into High-Throughput Testing: A Gene Expression Biomarker Accurately Predicts Estrogen Receptor α Modulation in a Microarray Compendium.

    PubMed

    Ryan, Natalia; Chorley, Brian; Tice, Raymond R; Judson, Richard; Corton, J Christopher

    2016-05-01

    Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including "very weak" agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. PMID:26865669

  3. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380

  4. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.

  5. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  6. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  7. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  8. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    PubMed Central

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard for identifying rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires considerable expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in the assessment of cardiac transplant rejection to evaluate its efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level, while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912

  9. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Accurate computation of the radiation from simple antennas using the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.

    1990-07-01

    Two antennas are considered, a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements for both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
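
    For readers unfamiliar with the method, a minimal 1-D sketch shows the staggered leapfrog update at the heart of FDTD; the antenna problems above are, of course, full 3-D computations with coaxial feeds and image planes.

        import numpy as np

        nz, nt = 400, 600
        ez = np.zeros(nz)          # electric field at integer grid points
        hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
        c = 0.5                    # Courant number c0*dt/dz (<= 1 for stability)

        for n in range(nt):
            hy += c * (ez[1:] - ez[:-1])             # update H from the curl of E
            ez[1:-1] += c * (hy[1:] - hy[:-1])       # update E from the curl of H
            ez[nz // 4] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source
            # ez[0] and ez[-1] stay zero: perfectly conducting (reflecting) walls

        print("peak |Ez| at the final step:", np.abs(ez).max())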

  11. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  12. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas including obtaining MHD spectra from time-domain simulations.
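
    The post-processing ingredient, least squares with Tikhonov regularization computed through the SVD, fits in a few lines; the test problem below (two nearly degenerate sine modes) is an invented stand-in for the cavity-mode extraction.

        import numpy as np

        def tikhonov_solve(A, b, alpha):
            """Minimize ||A x - b||^2 + alpha^2 ||x||^2 via SVD filter factors."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            filt = s / (s ** 2 + alpha ** 2)   # damps the small singular values
            return Vt.T @ (filt * (U.T @ b))

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 1.0, 200)
        A = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1.00, 1.01, 3.0)])
        x_true = np.array([1.0, -0.5, 0.2])
        b = A @ x_true + 1e-3 * rng.normal(size=t.size)
        print("regularized amplitudes:", tikhonov_solve(A, b, alpha=1e-2))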

  13. A Survey of Computational Intelligence Techniques in Protein Function Prediction

    PubMed Central

    Tiwari, Arvind Kumar; Srivastava, Rajeev

    2014-01-01

    In the past several years, there has been massive growth in the number of proteins of unknown function with the advancement of high-throughput microarray technologies. Protein function prediction is among the most challenging problems in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar from previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, as used in wide areas of application such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers in solving these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction. PMID:25574395

  14. Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation

    PubMed Central

    Technow, Frank; Messina, Carlos D.; Totir, L. Radu; Cooper, Mark

    2015-01-01

    Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E), continues to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics. PMID:26121133
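
    A minimal ABC rejection sampler, with an invented one-parameter "crop growth model" standing in for a real CGM: parameter draws from the prior are kept whenever the simulated phenotype falls within a tolerance of the observation, giving an approximate posterior without ever writing down a likelihood.

        import numpy as np

        def crop_growth_model(theta, env):
            """Toy stand-in for a CGM: yield responds nonlinearly to a genetic
            parameter theta and an environmental covariate env."""
            return theta * env / (1.0 + 0.1 * theta * env)

        def abc_rejection(y_obs, env, n_draws=100_000, tol=0.05):
            rng = np.random.default_rng(0)
            theta = rng.uniform(0.0, 5.0, size=n_draws)   # prior draws
            y_sim = crop_growth_model(theta, env)
            return theta[np.abs(y_sim - y_obs) < tol]     # accepted draws

        posterior = abc_rejection(y_obs=2.0, env=1.5)
        print("posterior mean:", posterior.mean(), "accepted:", posterior.size)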

  15. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  16. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). PMID:24274986
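
    A bare-bones kriging (Gaussian-process) predictor with an RBF kernel suggests how a quantity can be interpolated over nuclear coordinates; the kernel, length scale and toy target function are assumptions, not the paper's kriging setup.

        import numpy as np

        def kriging_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
            """Simple kriging: RBF kernel, exact interpolation up to jitter."""
            def kern(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * d2 / length ** 2)
            K = kern(X_train, X_train) + noise * np.eye(len(X_train))
            w = np.linalg.solve(K, y_train)
            return kern(X_test, X_train) @ w

        rng = np.random.default_rng(7)
        X = rng.uniform(-2, 2, size=(40, 2))        # toy "nuclear coordinates"
        y = np.sin(X[:, 0]) * np.cos(X[:, 1])       # toy "moment" to learn
        Xq = np.array([[0.3, -0.4]])
        print("kriged:", kriging_predict(X, y, Xq),
              "true:", np.sin(0.3) * np.cos(-0.4))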

  17. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding.

    PubMed

    Nissley, Daniel A; Sharma, Ajeet K; Ahmed, Nabeel; Friedrich, Ulrike A; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
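
    The kinetic scheme admits a compact sketch: while the ribosome dwells on each codon, the folded-state probability relaxes exactly toward its two-state equilibrium at rate kf + ku, and folding is only possible once enough of the chain has emerged. All rates and the emergence length below are illustrative, not fitted values.

        import numpy as np

        def cotranslational_folding(rates, kf, ku, foldable_at):
            """P(folded) after each codon from bulk folding/unfolding rates
            (kf, ku, per s) and per-codon translation rates (codons per s)."""
            P, curve = 0.0, []
            for i, rate in enumerate(rates):
                dwell = 1.0 / rate                   # time spent on this codon
                if i + 1 >= foldable_at:             # domain has fully emerged
                    P_eq = kf / (kf + ku)
                    P = P_eq + (P - P_eq) * np.exp(-(kf + ku) * dwell)
                curve.append(P)
            return np.array(curve)

        rates = np.full(300, 10.0)     # codons/s; a synonymous substitution...
        rates[150:170] = 2.0           # ...slows this stretch, buying fold time
        curve = cotranslational_folding(rates, kf=1.0, ku=0.05, foldable_at=120)
        print("P(folded) at release:", curve[-1])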

  18. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    PubMed Central

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592

  19. Sampling strategies for accurate computational inferences of gametic phase across highly polymorphic major histocompatibility complex loci

    PubMed Central

    2011-01-01

    Background: Genes of the Major Histocompatibility Complex (MHC) are very popular genetic markers among evolutionary biologists because of their potential role in pathogen confrontation and sexual selection. However, MHC genotyping still remains challenging and time-consuming in spite of substantial methodological advances. Although computational haplotype inference has brought into focus interesting alternatives, high heterozygosity, extensive genetic variation and population admixture are known to cause inaccuracies. We have investigated the role of sample size, genetic polymorphism and genetic structuring on the performance of the popular Bayesian PHASE algorithm. To cover this aim, we took advantage of a large database of known genotypes (using traditional laboratory-based techniques) at single MHC class I (N = 56 individuals and 50 alleles) and MHC class II B (N = 103 individuals and 62 alleles) loci in the lesser kestrel Falco naumanni. Findings: Analyses carried out over real MHC genotypes showed that the accuracy of gametic phase reconstruction improved with sample size as a result of the reduction in the allele-to-individual ratio. We then simulated different data sets introducing variations in this parameter to define an optimal ratio. Conclusions: Our results demonstrate a critical influence of the allele-to-individual ratio on PHASE performance. We found that a minimum allele-to-individual ratio (1:2) yielded 100% accuracy for both MHC loci. Sampling effort is therefore a crucial step to obtain reliable MHC haplotype reconstructions and must be scaled according to the degree of MHC polymorphism. We expect our findings to provide a foothold for the design of straightforward and cost-effective genotyping strategies for those MHC loci for which locus-specific primers are available. PMID:21615903

  20. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(ii)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understand the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609

  1. Accurate micro-computed tomography imaging of pore spaces in collagen-based scaffold.

    PubMed

    Zidek, Jan; Vojtova, Lucy; Abdel-Mohsen, A M; Chmelik, Jiri; Zikmund, Tomas; Brtnikova, Jana; Jakubicek, Roman; Zubal, Lukas; Jan, Jiri; Kaiser, Jozef

    2016-06-01

    In this work we have used X-ray micro-computed tomography (μCT) as a method to observe the morphology of 3D porous pure collagen and collagen-composite scaffolds useful in tissue engineering. Two aspects of visualization were taken into consideration: improvement of the scan and investigation of its sensitivity to the scan parameters. Due to the low material density, some parts of collagen scaffolds are invisible in a μCT scan. Therefore, here we present different contrast agents, which increase the contrast of the scanned biopolymeric sample for μCT visualization. The increase of contrast of collagenous scaffolds was performed with ceramic hydroxyapatite microparticles (HAp), silver ions (Ag(+)) and silver nanoparticles (Ag-NPs). Since a relatively small change in imaging parameters (e.g. in 3D volume rendering, threshold value and μCT acquisition conditions) leads to a completely different visualized pattern, we have optimized these parameters to obtain the most realistic picture for visual and qualitative evaluation of the biopolymeric scaffold. Moreover, scaffold images were stereoscopically visualized in order to better see the 3D biopolymer composite scaffold morphology. However, the optimized visualization has some discontinuities in zoomed view, which can be problematic for further analysis of interconnected pores by commonly used numerical methods. Therefore, we applied a locally adaptive method to solve the discontinuity issue. The combination of contrast agents and imaging techniques presented in this paper helps us to better understand the structure and morphology of the biopolymeric scaffold, which is crucial in the design of new biomaterials useful in tissue engineering. PMID:27153826

  2. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Astrophysics Data System (ADS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  3. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  4. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting the upwinding strategy.
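
    For context, the classical structured-grid DRP construction of Tam and Webb chooses finite-difference coefficients in wavenumber space; the unstructured-grid scheme developed in this project generalizes that idea, so the following is only the standard starting point. For a symmetric (2N+1)-point first-derivative stencil,

      \bar{k}\,\Delta x = 2\sum_{j=1}^{N} a_j \sin(j\,k\,\Delta x),
      \qquad
      \{a_j\} = \arg\min_{\{a_j\}} \int_{-\eta}^{\eta} \bigl|\bar{k}\,\Delta x - k\,\Delta x\bigr|^{2}\,\mathrm{d}(k\,\Delta x),

    where \bar{k} is the effective wavenumber of the scheme and the minimization (subject to order-of-accuracy constraints) keeps \bar{k} close to k over the resolved band |k\,\Delta x| \le \eta, which is what suppresses dispersion error.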

  5. A low computation cost method for seizure prediction.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as a feature to classify the patient's preictal or interictal state, with Bayesian linear discriminant analysis (BLDA) as the classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and the final prediction results are then produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and average prediction times of 24.47 min and 39.39 min, respectively. The results confirm that the changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both the HFD and the BLDA classifier have low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. PMID:25062892
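
    Since the feature extraction step is fully specified (Higuchi's algorithm), a compact reference implementation is sketched below; kmax is a tuning choice, and the function is illustrative rather than the authors' code.

      import numpy as np

      def higuchi_fd(x: np.ndarray, kmax: int = 8) -> float:
          """Higuchi fractal dimension of a 1-D signal (e.g. one EEG epoch)."""
          N = len(x)
          ln_inv_k, ln_L = [], []
          for k in range(1, kmax + 1):
              curve_lengths = []
              for m in range(k):
                  idx = np.arange(m, N, k)               # subsampled series x[m], x[m+k], ...
                  if len(idx) < 2:
                      continue
                  dist = np.abs(np.diff(x[idx])).sum()
                  norm = (N - 1) / ((len(idx) - 1) * k)  # Higuchi normalization factor
                  curve_lengths.append(dist * norm / k)
              ln_inv_k.append(np.log(1.0 / k))
              ln_L.append(np.log(np.mean(curve_lengths)))
          slope, _ = np.polyfit(ln_inv_k, ln_L, 1)       # HFD = slope of ln L(k) vs ln(1/k)
          return slope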

  6. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales

    PubMed Central

    Steele, Mark A.; Forrester, Graham E.

    2005-01-01

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721

  7. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    PubMed

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721

  8. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    NASA Astrophysics Data System (ADS)

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged Van der Waals interaction in classical molecular dynamics simulations, it has been a long-lasting issue to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we briefly review the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey concludes with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304), the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
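
    As a concrete anchor for the Ewald-type family discussed above, the standard Ewald decomposition (written here for a charge-neutral periodic system in Gaussian units with conducting boundary conditions) splits the conditionally convergent Coulomb sum into two rapidly convergent parts plus a self-energy term:

      E = \frac{1}{2}\sum_{i\neq j} q_i q_j\,\frac{\operatorname{erfc}(\alpha r_{ij})}{r_{ij}}
        + \frac{2\pi}{V}\sum_{\mathbf{k}\neq 0}\frac{e^{-k^{2}/4\alpha^{2}}}{k^{2}}
          \Bigl|\sum_{j} q_j\,e^{i\mathbf{k}\cdot\mathbf{r}_j}\Bigr|^{2}
        - \frac{\alpha}{\sqrt{\pi}}\sum_{i} q_i^{2},

    where the splitting parameter \alpha trades work between the real-space and reciprocal-space sums.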

  9. Accurate guidance for percutaneous access to a specific target in soft tissues: preclinical study of computer-assisted pericardiocentesis.

    PubMed

    Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D

    1999-06-01

    In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. Required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart, by determining a region that remains stable throughout the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was carried out on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application. PMID:10414543

  10. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
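
    The exact PODMM formulation is in the paper; the sketch below only illustrates the core ingredients: a POD basis extracted from fine-resolution training snapshots via SVD, and a least-squares map from coarse solutions to the fine POD coefficients. Shapes and the retained-mode count are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      X_coarse = rng.standard_normal((200, 50))    # coarse snapshots (n_coarse x n_train)
      X_fine = rng.standard_normal((5000, 50))     # matching fine snapshots

      # POD basis of the fine-resolution field via truncated SVD.
      U, _, _ = np.linalg.svd(X_fine, full_matrices=False)
      Ur = U[:, :10]                               # retain 10 modes

      # Fine POD coefficients of the training snapshots, and a linear
      # least-squares map from coarse snapshots to those coefficients.
      A = Ur.T @ X_fine
      M, *_ = np.linalg.lstsq(X_coarse.T, A.T, rcond=None)

      # Downscale a new coarse-resolution solution.
      x_coarse_new = rng.standard_normal(200)
      x_fine_approx = Ur @ (M.T @ x_coarse_new)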

  11. Computational predictions of substituted benzyne and indolyne regioselectivities

    PubMed Central

    Picazo, Elias; Houk, K. N.; Garg, Neil K.

    2015-01-01

    A computational study using DFT methods was performed for an array of mono- and disubstituted benzynes and indolynes. The inherent distortion present in the geometry-optimized structures predicts the regioselectivity of aryne trapping by nucleophiles or cycloaddition partners. These studies will serve to enable the further use of unsymmetrical arynes in organic synthesis. PMID:26034336

  12. Computer program for predicting creep behavior of bodies of revolution

    NASA Technical Reports Server (NTRS)

    Adams, R.; Greenbaum, G.

    1971-01-01

    The computer program CRAB uses the finite-element method to calculate creep behavior and predict steady-state stresses in an arbitrary body of revolution subjected to a time-dependent axisymmetric load. Creep strains follow a time-hardening law and a Prandtl-Reuss stress-strain relationship.
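
    A minimal sketch of the constitutive relations named in the abstract, in a common textbook form (the report's exact constants and notation may differ): a time-hardening creep law combined with the Prandtl-Reuss flow rule that distributes the effective creep strain over the stress components,

      \dot{\varepsilon}^{\,c} = A\,\sigma_e^{\,n}\,t^{\,m},
      \qquad
      \mathrm{d}\varepsilon^{\,c}_{ij} = \frac{3}{2}\,\frac{\mathrm{d}\varepsilon^{\,c}}{\sigma_e}\,s_{ij},

    where \sigma_e is the von Mises effective stress, s_{ij} the deviatoric stress, and A, n, m material constants.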

  13. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218

  14. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
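
    The payoff of choosing density and internal energy as independent variables follows from a standard identity (not specific to this paper): with specific volume v = 1/\rho, the Gibbs relation for the entropy s(u, v) yields pressure and temperature by differentiation alone, with no iterative inversion:

      \mathrm{d}s = \frac{1}{T}\,\mathrm{d}u + \frac{p}{T}\,\mathrm{d}v
      \quad\Longrightarrow\quad
      \frac{1}{T} = \Bigl(\frac{\partial s}{\partial u}\Bigr)_{v},
      \qquad
      \frac{p}{T} = \Bigl(\frac{\partial s}{\partial v}\Bigr)_{u}.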

  15. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is solvable, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules with behaviors that are qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  16. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  17. Algorithms for Target Prediction for Computer Users with Athetosis

    PubMed Central

    Rodriguez, Sergio Peral; Ding, Dan; Riviere, Cameron N.

    2012-01-01

    Athetosis is a movement disorder that afflicts numerous persons with cerebral palsy, resulting in significant problems in their control of computer interfaces. As a step toward increasing the efficiency of icon selection by computer users with athetosis, we have implemented three techniques to reduce the time of target acquisition: transition assistance via directional gain variation based on target prediction during initial movement toward the target, settling assistance via gain reduction when in the vicinity of a predicted target, and expansion of the predicted target as the cursor approaches it. The paper describes each method, and presents results from evaluation of each method using a closed-loop model of a human subject with athetosis, trained using recorded data, at three different severity levels. PMID:21096307
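
    The three assistance techniques lend themselves to a simple sketch: boost gain when the cursor heads toward the predicted target and damp it once inside a settling zone. All parameters and function names below are hypothetical; the paper's gain schedules are model-based.

      import numpy as np

      def assisted_gain(cursor: np.ndarray, velocity: np.ndarray,
                        target: np.ndarray, base_gain: float = 1.0,
                        settle_radius: float = 40.0) -> float:
          """Illustrative gain schedule for target-prediction assistance."""
          to_target = target - cursor
          dist = np.linalg.norm(to_target)
          if dist < settle_radius:
              return 0.4 * base_gain            # settling assistance: damp near target
          speed = np.linalg.norm(velocity)
          if speed == 0.0:
              return base_gain
          heading = float(velocity @ to_target) / (speed * dist)  # cos of heading error
          return base_gain * (1.0 + 0.5 * max(heading, 0.0))      # transition assistance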

  18. A large-scale evaluation of computational protein function prediction.

    PubMed

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools. PMID:23353650

  19. IGS-global ionospheric maps for accurate computation of GPS single-frequency ionospheric delay: simulation study

    NASA Astrophysics Data System (ADS)

    Farah, A.

    The Ionospheric delay is still one of the largest sources of error that affect the positioning accuracy of any satellite positioning system. Due to the dispersive nature of the Ionosphere, this problem can be solved by combining simultaneous measurements of signals at two different frequencies, but it remains for single-frequency users. Much effort has been made in establishing models for single-frequency users to make this effect as small as possible. These models vary in accuracy, input data and computational complexity, so the choice between the different models depends on the individual circumstances of the user. From the simulation point of view, the model needed should be accurate, with global coverage and a good description of the Ionosphere's variable nature in both time and location. The author reviews some of these established models, starting with the BENT model, the Klobuchar model and the IRI (International Reference Ionosphere) model. The Klobuchar model has long been the most widely used model in this field, due to its simplicity and low computational cost. Any GPS user can find the Klobuchar model's coefficients in the broadcast navigation message. CODE, the Centre for Orbit Determination in Europe, provides a new set of coefficients for the Klobuchar model, which gives more accurate results for the Ionospheric delay computation. IGS (International GPS Service) provides the GPS community with global Ionospheric maps in IONEX format (IONosphere map EXchange format), which enable the computation of the Ionospheric delay at the desired location and time. The study was undertaken from the GPS-data simulation point of view. The aim was to select a model for the simulation of GPS data that gives a good description of the Ionosphere's nature with a high degree of accuracy in computing the Ionospheric delay, yielding better-simulated data. A new model was developed by the author based on IGS global Ionospheric maps. A comparison
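
    The delay computation referred to above is standard: the first-order group delay is 40.3 · TEC / f² meters, with a vertical TEC value (for instance interpolated from an IONEX map) mapped onto the slant path. A minimal sketch with an assumed single-layer mapping function; the layer height is a common convention, not a value from this record.

      import numpy as np

      C_ION = 40.3          # m^3/s^2, first-order ionospheric constant
      F_L1 = 1575.42e6      # GPS L1 carrier frequency, Hz
      R_E = 6371e3          # mean Earth radius, m
      H_ION = 450e3         # assumed single-layer ionosphere height, m

      def slant_delay_m(vtec_tecu: float, elevation_rad: float) -> float:
          """L1 group delay (m) from vertical TEC in TEC units (1e16 el/m^2)."""
          z = np.pi / 2 - elevation_rad                # zenith angle at the receiver
          sin_zp = R_E / (R_E + H_ION) * np.sin(z)     # zenith angle at the pierce point
          mapping = 1.0 / np.sqrt(1.0 - sin_zp**2)     # vertical -> slant factor
          return C_ION * (vtec_tecu * 1e16 * mapping) / F_L1**2

      print(f"{slant_delay_m(20.0, np.radians(30.0)):.2f} m")  # typically a few meters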

  20. A computational approach for prediction of donor splice sites with improved accuracy.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input to an artificial neural network (ANN), a support vector machine (SVM) and a random forest (RF) for prediction. ANN and SVM were found to perform equally well and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed using an independent test set of 50 genes, and the prediction accuracy of ANN was found to be higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP on the independent test set. We have also developed an online prediction server (PreDOSS), available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
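
    The encoding-plus-classifier pipeline can be sketched in a few lines: one-hot encode fixed-length windows around candidate donor sites and cross-validate a classifier. The toy sequences below are placeholders, not records from HS3D or NN269.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      BASES = "ACGT"

      def one_hot(seq: str) -> np.ndarray:
          """Encode a fixed-length window around a candidate donor site."""
          v = np.zeros(4 * len(seq))
          for i, b in enumerate(seq):
              if b in BASES:
                  v[4 * i + BASES.index(b)] = 1.0
          return v

      seqs = ["CAGGTAAGT", "TTGGTGAGT", "ACCATTGCA", "GGTACCTTA"]  # toy windows
      y = np.array([1, 1, 0, 0])                                   # true/false site labels
      X = np.vstack([one_hot(s) for s in seqs])

      print(cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=2))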

  1. Computational prediction of type III and IV secreted effectors in Gram-negative bacteria

    SciTech Connect

    McDermott, Jason E.; Corrigan, Abigail L.; Peterson, Elena S.; Oehmen, Christopher S.; Niemann, George; Cambronne, Eric; Sharp, Danna; Adkins, Joshua N.; Samudrala, Ram; Heffron, Fred

    2011-01-01

    In this review, we provide an overview of the methods employed by four recent papers that described novel methods for computational prediction of secreted effectors from type III and IV secretion systems in Gram-negative bacteria. We summarize the results of the studies in terms of their performance at accurately predicting secreted effectors, and the similarities found between secretion signals that may reflect biologically relevant features for recognition. We discuss the web-based tools for secreted effector prediction described in these studies and announce the availability of our tool, the SIEVEserver (http://www.biopilot.org). Finally, we assess the accuracy of the three type III effector prediction methods on a small set of proteins that were not known prior to the development of these tools and that we have recently discovered and validated using both experimental and computational approaches. Our comparison shows that all methods use similar approaches and, in general, arrive at similar conclusions. We discuss the possibility of an order-dependent motif in the secretion signal, which was a point of disagreement among the studies. Our results show that there may be classes of effectors in which the signal has a loosely defined motif, and others in which secretion is dependent only on compositional biases. Computational prediction of secreted effectors from protein sequences represents an important step toward better understanding the interaction between pathogens and hosts.

  2. A universal and efficient method to compute maps from image-based prediction models.

    PubMed

    Sabuncu, Mert R

    2014-01-01

    Discriminative supervised learning algorithms, such as Support Vector Machines, are becoming increasingly popular in biomedical image computing. One of their main uses is to construct image-based prediction models, e.g., for computer aided diagnosis or "mind reading." A major challenge in these applications is the biological interpretation of the machine learning models, which can be arbitrarily complex functions of the input features (e.g., as induced by kernel-based methods). Recent work has proposed several strategies for deriving maps that highlight regions relevant for accurate prediction. Yet most of these methods rely on strong assumptions about the prediction model (e.g., linearity, sparsity) and/or data (e.g., Gaussianity), or fail to exploit the covariance structure in the data. In this work, we propose a computationally efficient and universal framework for quantifying associations captured by black box machine learning models. Furthermore, our theoretical perspective reveals that examining associations with predictions, in the absence of ground truth labels, can be very informative. We apply the proposed method to machine learning models trained to predict cognitive impairment from structural neuroimaging data. We demonstrate that our approach yields biologically meaningful maps of association. PMID:25320819
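
    One simple instance of "examining associations with predictions" is a per-feature covariance map between the inputs and a black-box model's outputs. The sketch below is a generic illustration, not the paper's full framework; predict stands for any fitted model's scoring function.

      import numpy as np

      def association_map(X: np.ndarray, predict) -> np.ndarray:
          """Covariance of each input feature with black-box model outputs."""
          yhat = predict(X)                   # model outputs, shape (n_samples,)
          Xc = X - X.mean(axis=0)
          yc = yhat - yhat.mean()
          return Xc.T @ yc / (len(yhat) - 1)  # shape (n_features,)

      # e.g. with a fitted sklearn SVM: m = association_map(X_test, svm.decision_function)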

  3. A procedure for computing accurate ab initio quartic force fields: Application to HO2+ and H2O

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Lee, Timothy J.

    2008-07-01

    A procedure for the calculation of molecular quartic force fields (QFFs) is proposed and investigated. The goal is to generate highly accurate ab initio QFFs that include many of the so-called "small" effects that are necessary to achieve high accuracy. The small effects investigated in the present study include correlation of the core electrons (core correlation), extrapolation to the one-particle basis set limit, correction for scalar relativistic contributions, correction for higher-order correlation effects, and inclusion of diffuse functions in the one-particle basis set. The procedure is flexible enough to allow for some effects to be computed directly, while others may be added as corrections. A single grid of points is used and is centered about an initial reference geometry that is designed to be as close as possible to the final ab initio equilibrium structure (with all effects included). It is shown that the least-squares fit of the QFF is not compromised by the added corrections, and the balance between elimination of contamination from higher-order force constants while retaining energy differences large enough to yield meaningful quartic force constants is essentially unchanged from the standard procedures we have used for many years. The initial QFF determined from the least-squares fit is transformed to the exact minimum in order to eliminate gradient terms and allow for the use of second-order perturbation theory for evaluation of spectroscopic constants. It is shown that this step has essentially no effect on the quality of the QFF largely because the initial reference structure is, by design, very close to the final ab initio equilibrium structure. The procedure is used to compute an accurate, purely ab initio QFF for the H2O molecule, which is used as a benchmark test case. The procedure is then applied to the ground and first excited electronic states of the HO2+ molecular cation. Fundamental vibrational frequencies and spectroscopic
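
    For reference, a quartic force field is the fourth-order Taylor expansion of the potential about the reference geometry; the fitted force constants below are what the least-squares step determines and what second-order perturbation theory consumes:

      V = \frac{1}{2}\sum_{ij} F_{ij}\,\Delta q_i\,\Delta q_j
        + \frac{1}{6}\sum_{ijk} F_{ijk}\,\Delta q_i\,\Delta q_j\,\Delta q_k
        + \frac{1}{24}\sum_{ijkl} F_{ijkl}\,\Delta q_i\,\Delta q_j\,\Delta q_k\,\Delta q_l,

    where the \Delta q_i are displacements from the reference geometry.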

  4. The origins of computer weather prediction and climate modeling

    SciTech Connect

    Lynch, Peter

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  5. The origins of computer weather prediction and climate modeling

    NASA Astrophysics Data System (ADS)

    Lynch, Peter

    2008-03-01

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  6. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

    The continuing efforts to scale transistors, aimed at smaller devices with similar (or larger) drive current per micrometer and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour Extraction (ECE) from transistors along the manufacturing flow, capturing image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. 4 out of the 6 transistors used "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors, in terms of nominal values and variability of drive current and leakage current, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is presented.

  7. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  8. RAXJET: A computer program for predicting transonic, axisymmetric flow over nozzle afterbodies with supersonic jet exhausts

    NASA Technical Reports Server (NTRS)

    Wilmoth, R. G.

    1982-01-01

    A viscous-inviscid interaction method to calculate the subsonic and transonic flow over nozzle afterbodies with supersonic jet exhausts was developed. The method iteratively combines a relaxation solution of the full potential equation for the inviscid external flow, a shock capturing-shock fitting inviscid jet solution, an integral boundary layer solution, a control volume method for treating separated flows, and an overlaid mixing layer solution. A computer program called RAXJET, which incorporates the method, is described; the predictive capabilities of the method are illustrated by comparison with experimental data, and a user's guide to the program is provided. The method accurately predicts afterbody pressures, drag, and flow field properties for attached and separated flows in which no shock-induced separation occurs.

  9. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motions: Analysis and results

    NASA Technical Reports Server (NTRS)

    Rowe, W. S.; Sebastian, J. D.; Petrarca, J. R.

    1979-01-01

    Results of theoretical and numerical investigations conducted to develop economical computing procedures were applied to an existing computer program that predicts unsteady aerodynamic loadings caused by leading and trailing edge control surface motions in subsonic compressible flow. Large reductions in computing costs were achieved by removing the spanwise singularity of the downwash integrand and evaluating its effect separately in closed form. Additional reductions were obtained by modifying the incremental pressure terms that account for downwash singularities at control surface edges. Accuracy of theoretical predictions of unsteady loading at high reduced frequencies was increased by applying new pressure expressions that exactly satisfied the high frequency boundary conditions of an oscillating control surface. Comparative computer results indicated that the revised procedures provide more accurate predictions of unsteady loadings as well as reductions of 50 to 80 percent in computer usage costs.

  10. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, including asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of operating the PES routine is enhanced by a factor of ~20 compared with that of the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement among the present RPMD rates and those from previous simulations as well as experimental results is found. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is predicted.
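
    The segmented-PES idea can be sketched as a dispatch among local surfaces by reaction coordinate, with smooth switching so the energy stays continuous across the seams. Segment boundaries, switching width, and function names below are placeholders, not the paper's fit.

      import numpy as np

      def blended_pes(s, geom, V_asym, V_inter, V_intn,
                      s1=-1.0, s2=1.0, width=0.2):
          """Energy from three local PESs blended along reaction coordinate s."""
          def step(x):                        # smooth 0 -> 1 switch of given width
              return 0.5 * (1.0 + np.tanh(x / width))
          w_asym = 1.0 - step(s - s1)         # dominant in the asymptotic region
          w_intn = step(s - s2)               # dominant in the interaction region
          w_inter = max(step(s - s1) - step(s - s2), 0.0)
          total = w_asym + w_inter + w_intn
          return (w_asym * V_asym(geom) + w_inter * V_inter(geom)
                  + w_intn * V_intn(geom)) / total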

  11. Computational Predictions of the Performance of Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

    Computational analyses of two 1911 Wright brothers 'Bent End' wooden propeller reproductions have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the propeller performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient and the propeller propulsive efficiency.
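
    The coefficients computed by the design code have standard definitions (with thrust T, power P, air density \rho, rotational speed n, diameter D, airspeed V, and advance ratio J):

      C_T = \frac{T}{\rho\,n^{2}D^{4}},
      \qquad
      C_P = \frac{P}{\rho\,n^{3}D^{5}},
      \qquad
      \eta = \frac{J\,C_T}{C_P},
      \quad
      J = \frac{V}{nD}.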

  12. Computational Prediction of RNA-Binding Proteins and Binding Sites

    PubMed Central

    Si, Jingna; Cui, Jing; Cheng, Jin; Wu, Rongling

    2015-01-01

    Proteins and RNA interaction have vital roles in many cellular processes such as protein synthesis, sequence encoding, RNA transfer, and gene regulation at the transcriptional and post-transcriptional levels. Approximately 6%–8% of all proteins are RNA-binding proteins (RBPs). Distinguishing these RBPs or their binding residues is a major aim of structural biology. Previously, a number of experimental methods were developed for the determination of protein–RNA interactions. However, these experimental methods are expensive, time-consuming, and labor-intensive. Alternatively, researchers have developed many computational approaches to predict RBPs and protein–RNA binding sites, by combining various machine learning methods and abundant sequence and/or structural features. There are three kinds of computational approaches, which are prediction from protein sequence, prediction from protein structure, and protein-RNA docking. In this paper, we review all existing studies of predictions of RNA-binding sites and RBPs and complexes, including data sets used in different approaches, sequence and structural features used in several predictors, prediction method classifications, performance comparisons, evaluation methods, and future directions. PMID:26540053

  13. Computational prediction of human salivary proteins from blood circulation and application to diagnostic biomarker identification.

    PubMed

    Wang, Jiaxin; Liang, Yanchun; Wang, Yan; Cui, Juan; Liu, Ming; Du, Wei; Xu, Ying

    2013-01-01

    Proteins can move from blood circulation into salivary glands through active transportation, passive diffusion or ultrafiltration, some of which are then released into saliva and hence can potentially serve as biomarkers for diseases if accurately identified. We present a novel computational method for predicting salivary proteins that come from circulation. The basis for the prediction is a set of physiochemical and sequence features we found to discriminate between human proteins known to be movable from circulation to saliva and proteins deemed to be not in saliva. A classifier was trained based on these features using a support-vector machine to predict protein secretion into saliva. The classifier achieved 88.56% average recall and 90.76% average precision in 10-fold cross-validation on the training data, indicating that the selected features are informative. Considering the possibility that our negative training data may not be highly reliable (i.e., proteins predicted to be not in saliva), we have also trained a ranking method, aiming to rank the known salivary proteins from circulation as the highest among the proteins in the general background, based on the same features. This prediction capability can be used to predict potential biomarker proteins for specific human diseases when coupled with information on differentially expressed proteins in diseased versus healthy control tissues and a prediction capability for blood-secretory proteins. Using such integrated information, we predicted 31 candidate biomarker proteins in saliva for breast cancer. PMID:24324552
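
    The reported 10-fold figures correspond to a routine evaluation loop; a sketch with scikit-learn is below. The synthetic feature matrix merely stands in for the physicochemical and sequence features described in the abstract.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_validate

      # Placeholder data standing in for the salivary-protein feature matrix.
      X, y = make_classification(n_samples=300, n_features=25, random_state=0)

      scores = cross_validate(SVC(kernel="rbf"), X, y, cv=10,
                              scoring=("recall", "precision"))
      print(f"avg recall    {scores['test_recall'].mean():.3f}")
      print(f"avg precision {scores['test_precision'].mean():.3f}")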

  14. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations

  15. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To

  16. Computational predictions of energy materials using density functional theory

    NASA Astrophysics Data System (ADS)

    Jain, Anubhav; Shin, Yongwoo; Persson, Kristin A.

    2016-01-01

    In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons have the possibility, at the other end of the scale, to predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.

  17. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    PubMed Central

    Canchi, Tejas; Kumar, S. D.; Ng, E. Y. K.; Narayanan, Sriram

    2015-01-01

    Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA) along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA. PMID:26509168

  18. Mathematical models for accurate prediction of atmospheric visibility with particular reference to the seasonal and environmental patterns in Hong Kong.

    PubMed

    Mui, K W; Wong, L T; Chung, L Y

    2009-11-01

    Atmospheric visibility impairment has gained increasing concern as it is associated with the existence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed the atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants including nitrogen dioxide (NO(2)), nitrogen monoxide (NO), respirable suspended particulates (PM(10)), sulfur dioxide (SO(2)), carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was reported between the two measurement locations (p ≥ 0.6, t test); and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring (p < 0.01, t test). It was also found that atmospheric visibility increased with temperature but decreased with the concentrations of SO(2), CO, PM(10), NO, and NO(2). The results showed that atmospheric visibility was season dependent and correlated significantly with temperature, the mass concentrations of PM(10) and NO(2), and the air pollution index API (correlation coefficients |R| ≥ 0.7, p ≤ 0.0001, t test). Mathematical expressions accounting for the seasonal variations of atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study would be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region. PMID:18951139
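
    A minimal sketch of the kind of season-specific regression the study proposes, assuming an ordinary least-squares fit of visibility on temperature, PM(10), and NO(2); the predictor set and synthetic data are illustrative, not the paper's actual expressions.

      import numpy as np

      # Sketch: fit visibility ~ b0 + b1*T + b2*PM10 + b3*NO2 for one season.
      def fit_seasonal_model(T, PM10, NO2, visibility):
          X = np.column_stack([np.ones_like(T), T, PM10, NO2])
          coef, *_ = np.linalg.lstsq(X, visibility, rcond=None)
          return coef  # [b0, b1, b2, b3]

      # Illustrative data; in practice one model is fitted per season and
      # predictions use the model matching the season of the observation.
      T = np.array([28.0, 15.0, 22.0, 31.0])
      PM10 = np.array([40.0, 90.0, 60.0, 35.0])
      NO2 = np.array([30.0, 70.0, 50.0, 25.0])
      vis = np.array([18.0, 6.0, 11.0, 20.0])
      print(fit_seasonal_model(T, PM10, NO2, vis))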

  19. Verification, Validation, and Predictive Capability in Computational Engineering and Physics

    SciTech Connect

    OBERKAMPF, WILLIAM L.; TRUCANO, TIMOTHY G.; HIRSCH, CHARLES

    2003-02-01

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue. This paper presents our viewpoint of the state of the art in V&V in computational physics. (In this paper we refer to all fields of computational engineering and physics, e.g., computational fluid dynamics, computational solid mechanics, structural dynamics, shock wave physics, computational chemistry, etc., as computational physics.) We do not provide a comprehensive review of the multitudinous contributions to V&V, although we do reference a large number of previous works from many fields. We have attempted to bring together many different perspectives on V&V, highlight those perspectives that are effective from a practical engineering viewpoint, suggest future research topics, and discuss key implementation issues that are necessary to improve the effectiveness of V&V. We describe our view of the framework in which predictive capability relies on V&V, as well as other factors that affect predictive capability. Our opinions about the research needs and management issues in V&V are very practical: What methods and techniques need to be developed and what changes in the views of management need to occur to increase the usefulness, reliability, and impact of computational physics for decision making about engineering

  20. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, B.T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  1. Predictive medicine: computational techniques in therapeutic decision-making.

    PubMed

    Taylor, C A; Draney, M T; Ku, J P; Parker, D; Steele, B N; Wang, K; Zarins, C K

    1999-01-01

    The current paradigm for surgery planning for the treatment of cardiovascular disease relies exclusively on diagnostic imaging data to define the present state of the patient, empirical data to evaluate the efficacy of prior treatments for similar patients, and the judgement of the surgeon to decide on a preferred treatment. The individual variability and inherent complexity of human biological systems is such that diagnostic imaging and empirical data alone are insufficient to predict the outcome of a given treatment for an individual patient. We propose a new paradigm of predictive medicine in which the physician utilizes computational tools to construct and evaluate a combined anatomic/physiologic model to predict the outcome of alternative treatment plans for an individual patient. The predictive medicine paradigm is implemented in a software system developed for Simulation-Based Medical Planning. This system provides an integrated set of tools to test hypotheses regarding the effect of alternate treatment plans on blood flow in the cardiovascular system of an individual patient. It combines an Internet-based user interface developed using Java and VRML, image segmentation, geometric solid modeling, automatic finite element mesh generation, computational fluid dynamics, and scientific visualization techniques. This system is applied to the evaluation of alternate, patient-specific treatments for a case of lower extremity occlusive cardiovascular disease. PMID:10581521

  2. Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules

    PubMed Central

    Desai, Aarti; Singh, Vivek K.; Jere, Abhay

    2016-01-01

    Introduction Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e. high false-positive rates and/or limited coverage. Results The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with ‘High’ reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), its coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening molecules, reducing time, cost and animal testing. PMID:27271321
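
    The reported figures are internally consistent with CCR defined as the mean of sensitivity and specificity; the sketch below recomputes them from confusion-matrix counts (the counts themselves are inferred so as to reproduce the quoted values on the 77-molecule challenge set, and are an assumption).

      # Sketch: accuracy, CCR, sensitivity and specificity from a confusion
      # matrix. With tp=21, fn=9, tn=37, fp=10 (77 molecules total) this
      # reproduces the quoted 75.32% / 74.36% / 70.00% / 78.72%.
      def metrics(tp, tn, fp, fn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + tn + fp + fn)
          ccr = (sensitivity + specificity) / 2  # correct classification rate
          return accuracy, ccr, sensitivity, specificity

      print(metrics(21, 37, 10, 9))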

  3. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    SciTech Connect

    Chang, Chih-Hao . E-mail: chchang@engineering.ucsb.edu; Liou, Meng-Sing . E-mail: meng-sing.liou@grc.nasa.gov

    2007-07-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flows, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems demonstrate the capability to capture fine details and complicated wave patterns in flows having large disparities in fluid density and velocity, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosions.

  4. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
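
    A minimal sketch of the classification setup, assuming scikit-learn; the two toy descriptors (length and hydrophobic fraction) stand in for the paper's 35 physicochemical properties, and the training labels are fabricated for illustration.

      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      HYDROPHOBIC = set("AVLIMFWC")

      def describe(peptide):
          # Toy stand-ins for the descriptor space of amino acid content,
          # charge, hydrophilicity and polarity used by the SVM model.
          return [len(peptide),
                  sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)]

      peptides = ["ACDEFGHIK", "LLLVVVAAA", "RKDESTNQG", "FWYLIMVAC"]
      labels = [0, 1, 0, 1]  # 1 = detectable by MS (proteotypic), toy labels
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      model.fit([describe(p) for p in peptides], labels)
      print(model.predict([describe("AVLWFMIKC")]))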

  5. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    NASA Astrophysics Data System (ADS)

    Delahaye, Thibault; Nikitin, Andrei; Rey, Michaël; Szalay, Péter G.; Tyuterev, Vladimir G.

    2014-09-01

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C2H4 obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of connected triple excitations, CCSD(T), and the correlation-consistent polarized valence basis set cc-pVQZ were employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using a sixth-order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. Good convergence for variationally computed vibrational levels of the C2H4 molecule was obtained, with an RMS (Obs.-Calc.) deviation of 2.7 cm-1 for fundamental band centers and 5.9 cm-1 for vibrational bands up to 7800 cm-1. Large-scale vibrational and rotational calculations for 12C2H4, 13C2H4, and 12C2D4 isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm-1 are in good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of 13C2H4 and 12C2D4 and rovibrational levels of 12C2H4.

  6. CAFE: A Computer Tool for Accurate Simulation of the Regulatory Pool Fire Environment for Type B Packages

    SciTech Connect

    Gritzo, L.A.; Koski, J.A.; Suo-Anttila, A.J.

    1999-03-16

    The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically, an engulfing fire boundary condition has been modeled as σT^4, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. Effects included are the local cooling of gases that form a protective boundary layer, which reduces the incoming radiant heat flux to values lower than expected from a simple σT^4 model. In addition, the effect of object shape on mixing, which may increase the local fire temperature, is included. Both high and low temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios.
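
    For scale, the simple boundary condition that CAFE refines is a one-line computation; the 1073 K fire temperature below (a nominal 800 °C pool fire) is an illustrative assumption.

      # Sketch: the classical engulfing-fire radiant flux sigma*T^4 that the
      # CAFE code improves upon with chemistry, radiation and fluid mechanics.
      SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def blackbody_flux(T_kelvin):
          return SIGMA * T_kelvin ** 4

      print(blackbody_flux(1073.0))  # ~7.5e4 W/m^2 for a nominal 800 °C fire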

  7. High IFIT1 expression predicts improved clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

    PubMed

    Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong

    2016-06-01

    Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ(2) test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050

  8. Computer software improves CT drag and buckling prediction

    SciTech Connect

    Wu, J.

    1998-12-31

    Coiled tubing drag and buckling prediction is very important in coiled tubing operations, including drilling, completion, and workover. Bit weight, packer load, and well depth penetration can be limited by severe drag and buckling problems in coiled tubing operations. Enormous drag can result from the buckling of coiled tubing, causing lockup of the coiled tubing in the wellbore. Many factors can affect coiled tubing drag and buckling, including wellbore condition, coiled tubing size, bit weight/packer load, well depth, residual bend, and wellbore pressure. This paper presents newly developed computer software to help predict coiled tubing drag and buckling. The software's user-friendly interface makes it easy for field engineers to predict coiled tubing drag and buckling. Three coiled tubing operation categories and several buckling criteria are used in the software to improve coiled tubing drag and buckling prediction. Advanced graphical animation helps visualize the development of coiled tubing drag and buckling during operations. Using this software improves drag and buckling prediction and helps ensure successful coiled tubing operations.

  9. Property prediction of new semiconductors by computer modeling and simulation

    NASA Astrophysics Data System (ADS)

    Wu, Ping; Lin, Guo Q.; Zeng, Yingzhi

    2002-11-01

    A new methodology for the systematic design of new materials for various applications is presented in this paper. In particular, a large number of candidate compounds formed by all possible combinations of the targeted elements in the periodic table are first screened and shortlisted by artificial neural network techniques. Quantum mechanics computation is then employed to evaluate the promising candidates selected in the first step. Finally, experiments are performed to further examine the computational results. In the present work, we apply this methodology to the study of semiconductors of binary (III-V and II-VI) and ternary (I-III-VI2 and II-IV-V2) compounds. Firstly, we systematically study all possible binary and ternary compounds by using pattern recognition and predict two important properties, namely band gap energy and lattice constant, with the artificial neural network model. Candidate semiconductors are then selected. On the basis of the above study, we perform first-principles quantum mechanics computation for some promising II-VI binary candidates. A first-principles study of the ternary candidates will be conducted in the near future, and the experimental study of the binary compounds is ongoing. The new compounds predicted by the model, as well as the design methodology developed here, may be of interest to materials scientists in general, including those working on smart materials.

  10. A Systematic Review of Predictions of Survival in Palliative Care: How Accurate Are Clinicians and Who Are the Experts?

    PubMed Central

    Harris, Adam; Harries, Priscilla

    2016-01-01

    overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. Conclusion Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians’ predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other. Implications of Key Findings Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved. PMID:27560380

  11. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess the full scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, based on existing theory, was developed to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is detailed.

  12. Discovery of a general method of solving the Schrödinger and Dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  13. DEEP: a general computational framework for predicting enhancers

    PubMed Central

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B.

    2015-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are key issues that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancers' properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/. PMID:25378307

  14. A Whole-Cell Computational Model Predicts Phenotype from Genotype

    PubMed Central

    Karr, Jonathan R.; Sanghvi, Jayodita C.; Macklin, Derek N.; Gutschow, Miriam V.; Jacobs, Jared M.; Bolival, Benjamin; Assad-Garcia, Nacyra; Glass, John I.; Covert, Markus W.

    2012-01-01

    SUMMARY Understanding how complex phenotypes arise from individual molecules and their interactions is a primary challenge in biology that computational approaches are poised to tackle. We report a whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium that includes all of its molecular components and their interactions. An integrative approach to modeling that combines diverse mathematics enabled the simultaneous inclusion of fundamentally different cellular processes and experimental measurements. Our whole-cell model accounts for all annotated gene functions and was validated against a broad range of data. The model provides insights into many previously unobserved cellular behaviors, including in vivo rates of protein-DNA association and an inverse relationship between the durations of DNA replication initiation and replication rates. In addition, experimental analysis directed by model predictions identified previously undetected kinetic parameters and biological functions. We conclude that comprehensive whole-cell models can be used to facilitate biological discovery. PMID:22817898

  15. Computational AeroAcoustics for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Ed; Hixon, Ray; Dyson, Rodger; Huff, Dennis (Technical Monitor)

    2002-01-01

    An overview of the current state-of-the-art in computational aeroacoustics as applied to fan noise prediction at NASA Glenn is presented. Results from recent modeling efforts using three-dimensional inviscid formulations in both frequency and time domains are summarized. In particular, the application of a frequency domain method, called LINFLUX, to the computation of rotor-stator interaction tone noise is reviewed, and the influence of the background inviscid flow on the acoustic results is analyzed. It has been shown that the noise levels are very sensitive to the gradients of the mean flow near the surface and that the correct computation of these gradients for highly loaded airfoils is especially problematic using an inviscid formulation. The ongoing development of a finite difference time marching code that is based on a sixth order compact scheme is also reviewed. Preliminary results from the nonlinear computation of a gust-airfoil interaction model problem demonstrate the fidelity and accuracy of this approach. Spatial and temporal features of the code as well as its multi-block nature are discussed. Finally, the latest results from an ongoing effort in the area of arbitrarily high order methods are reviewed; technical challenges associated with implementing correct high order boundary conditions are discussed, and possible strategies for addressing these challenges are outlined.

  16. Program Predicts Time Courses of Human/Computer Interactions

    NASA Technical Reports Server (NTRS)

    Vera, Alonso; Howes, Andrew

    2005-01-01

    CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
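
    A minimal sketch of the scheduling idea: once operators and their dependency constraints are encoded, the predicted task time is the longest path through the resulting PERT-style graph. Operator names and durations below are illustrative, not CPM X's actual operator set.

      # Sketch: total predicted time = critical (longest) path through a DAG
      # of cognitive, perceptual and motor operators.
      def critical_path(durations, deps):
          finish = {}

          def finish_time(op):
              if op not in finish:
                  start = max((finish_time(d) for d in deps.get(op, [])),
                              default=0.0)
                  finish[op] = start + durations[op]
              return finish[op]

          return max(finish_time(op) for op in durations)

      durations = {"perceive": 100.0, "decide": 50.0, "move-hand": 300.0}
      deps = {"decide": ["perceive"], "move-hand": ["decide"]}
      print(critical_path(durations, deps), "ms")  # 450.0 ms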

  17. Computational prediction of the chromosome-damaging potential of chemicals.

    PubMed

    Rothfuss, Andreas; Steger-Hartmann, Thomas; Heinrich, Nikolaus; Wichard, Jörg

    2006-10-01

    We report on the generation of computer-based models for the prediction of the chromosome-damaging potential of chemicals as assessed in the in vitro chromosome aberration (CA) test. On the basis of publicly available CA-test results of more than 650 chemical substances, half of which are drug-like compounds, we generated two different computational models. The first model was realized using the (Q)SAR tool MCASE. Results obtained with this model indicate a limited performance (53%) for the assessment of a chromosome-damaging potential (sensitivity), whereas CA-test negative compounds were correctly predicted with a specificity of 75%. The low sensitivity of this model might be explained by the fact that the underlying 2D-structural descriptors only describe part of the molecular mechanism leading to the induction of chromosome aberrations, that is, direct drug-DNA interactions. The second model was constructed with a more sophisticated machine learning approach and generated a classification model based on 14 molecular descriptors, which were obtained after feature selection. The performance of this model was superior to the MCASE model, primarily because of an improved sensitivity, suggesting that the more complex molecular descriptors in combination with statistical learning approaches are better suited to model the complex nature of mechanisms leading to a positive effect in the CA-test. An analysis of misclassified pharmaceuticals by this model showed that a large part of the false-negative predicted compounds were uniquely positive in the CA-test but lacked a genotoxic potential in other mutagenicity tests of the regulatory testing battery, suggesting that biologically nonsignificant mechanisms could be responsible for the observed positive CA-test result. Since such mechanisms are not amenable to modeling approaches it is suggested that a positive prediction made by the model reflects a biologically significant genotoxic potential. An integration of the

  18. Computational Prediction of Alanine Scanning and Ligand Binding Energetics in G-Protein Coupled Receptors

    PubMed Central

    Boukharta, Lars; Gutiérrez-de-Terán, Hugo; Åqvist, Johan

    2014-01-01

    Site-directed mutagenesis combined with binding affinity measurements is widely used to probe the nature of ligand interactions with GPCRs. Such experiments, as well as structure-activity relationships for series of ligands, are usually interpreted with computationally derived models of ligand binding modes. However, systematic approaches for accurate calculations of the corresponding binding free energies are still lacking. Here, we report a computational strategy to quantitatively predict the effects of alanine scanning and ligand modifications based on molecular dynamics free energy simulations. A smooth stepwise scheme for free energy perturbation calculations is derived and applied to a series of thirteen alanine mutations of the human neuropeptide Y1 receptor and a series of eight analogous antagonists. The robustness and accuracy of the method enables unambiguous interpretation of existing mutagenesis and binding data. We show how these calculations can be used to validate structural models and demonstrate their ability to discriminate against suboptimal ones. PMID:24743773
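
    A minimal sketch of the stepwise free energy perturbation idea, assuming the standard Zwanzig exponential-average estimator summed over small windows; the per-window energy differences are placeholders for simulation output, not data from the paper.

      import math

      # Sketch: delta-G as a sum of per-window Zwanzig averages,
      #   dG_i = -kT * ln < exp(-(U_{i+1} - U_i)/kT) >_i
      def fep_delta_g(windows, kT=0.596):  # kT in kcal/mol near 300 K
          total = 0.0
          for dU_samples in windows:  # one list of energy gaps per window
              avg = sum(math.exp(-dU / kT) for dU in dU_samples) / len(dU_samples)
              total += -kT * math.log(avg)
          return total

      print(fep_delta_g([[0.10, 0.20, 0.15], [0.30, 0.25, 0.35]]))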

  19. Assessment of the extended Koopmans' theorem for the chemical reactivity: Accurate computations of chemical potentials, chemical hardnesses, and electrophilicity indices.

    PubMed

    Yildiz, Dilan; Bozkaya, Uğur

    2016-01-30

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials and electron affinities from any level of theory. Although it is widely applied to ionization potentials, the EKT approach has not previously been applied to the evaluation of chemical reactivity. We present the first benchmarking study to investigate the performance of the EKT methods for predictions of chemical potentials (μ) (hence electronegativities), chemical hardnesses (η), and electrophilicity indices (ω). We assess the performance of the EKT approaches for post-Hartree-Fock methods, such as Møller-Plesset perturbation theory, the coupled-electron pair theory, and their orbital-optimized counterparts for the evaluation of chemical reactivity. In particular, results of the orbital-optimized coupled-electron pair theory method (with the aug-cc-pVQZ basis set) for predictions of chemical reactivity are very promising; the corresponding mean absolute errors are 0.16, 0.28, and 0.09 eV for μ, η, and ω, respectively. PMID:26458329
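
    The three indices follow from an ionization potential (IP) and electron affinity (EA), which the EKT supplies; a minimal sketch is below. Conventions for the hardness differ by a factor of two across the literature, so the choice here is an assumption, as are the example IP/EA values.

      # Sketch: conceptual-DFT reactivity indices from EKT IP and EA (in eV).
      def reactivity_indices(ip, ea):
          mu = -(ip + ea) / 2.0          # chemical potential = -electronegativity
          eta = ip - ea                  # chemical hardness (unscaled convention)
          omega = mu ** 2 / (2.0 * eta)  # electrophilicity index
          return mu, eta, omega

      print(reactivity_indices(9.0, 1.0))  # illustrative -> (-5.0, 8.0, 1.5625)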

  20. Computational Prediction of HIV-1 Resistance to Protease Inhibitors.

    PubMed

    Hosseini, Ali; Alibés, Andreu; Noguera-Julian, Marc; Gil, Victor; Paredes, Roger; Soliva, Robert; Orozco, Modesto; Guallar, Victor

    2016-05-23

    The development of mutations in HIV-1 protease (PR) hinders the activity of antiretroviral drugs, forcing changes in drug prescription. Most resistance assessments used to date rely on expert-based rules on predefined sets of stereotypical mutations; such an information-driven approach cannot capture new polymorphisms or be applied for new drugs. Computational modeling could provide a more general assessment of drug resistance and could be made available to clinicians through the Internet. We have created a protocol involving sequence comparison and all-atom protein-ligand induced fit simulations to predict resistance at the molecular level. We first compared our predictions with the experimentally determined IC50 values of darunavir, amprenavir, ritonavir, and indinavir from reference PR mutants displaying different resistance levels. We then performed analyses on a large set of variants harboring more than 10 mutations. Finally, several sequences from real patients were analyzed for amprenavir and darunavir. Our computational approach detected all of the genotype changes triggering high-level resistance, even those involving a large number of mutations. PMID:27082876

  1. Computational prediction of the human-microbial oral interactome

    PubMed Central

    2014-01-01

    Background The oral cavity is a complex ecosystem where human chemical compounds coexist with a particular microbiota. However, shifts in the normal composition of this microbiota may result in the onset of oral ailments, such as periodontitis and dental caries. In addition, it is known that the microbial colonization of the oral cavity is mediated by protein-protein interactions (PPIs) between the host and microorganisms. Nevertheless, these PPIs remain largely uncharacterized. To elucidate these interactions, we have created a computational prediction method that allows us to obtain a first model of the human-microbial oral interactome. Results We collected high-quality experimental PPIs from five major human databases. The obtained PPIs were used to create our positive dataset and, indirectly, our negative dataset. The positive and negative datasets were merged and used for training and validation of a naïve Bayes classifier. For the final prediction model, we used an ensemble methodology combining five distinct PPI prediction techniques, namely: literature mining, primary protein sequences, orthologous profiles, biological process similarity, and domain interactions. Performance evaluation of our method revealed an area under the ROC curve (AUC) value greater than 0.926, supporting our primary hypothesis, as no single set of features reached an AUC greater than 0.877. After subjecting our dataset to the prediction model, the classified result was filtered for very high confidence PPIs (probability ≥ 1-10^-7), leading to a set of 46,579 PPIs to be further explored. Conclusions We believe this dataset holds not only important pathways involved in the onset of infectious oral diseases, but also potential drug targets and biomarkers. The dataset used for training and validation, the predictions obtained, and the final network are available at http://bioinformatics.ua.pt/software/oralint. PMID:24576332
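
    A minimal sketch of how a naive Bayes classifier combines independent evidence channels into a posterior interaction probability; the prior and the per-channel likelihood ratios below are illustrative assumptions, not values from the study.

      # Sketch: naive-Bayes odds update over the five evidence channels
      # (literature, sequence, orthology, process similarity, domains).
      def posterior_ppi(prior, likelihood_ratios):
          odds = prior / (1.0 - prior)
          for lr in likelihood_ratios:
              odds *= lr  # channels treated as independent (naive assumption)
          return odds / (1.0 + odds)

      print(posterior_ppi(1e-4, [30.0, 12.0, 8.0, 5.0, 20.0]))  # ~0.97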

  2. DEPTH: a web server to compute depth and predict small-molecule binding cavities in proteins.

    PubMed

    Tan, Kuan Pern; Varadarajan, Raghavan; Madhusudhan, M S

    2011-07-01

    Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight <1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods including LIGSITE, SURFNET and Pocket-Finder (all with Matthews correlation coefficient of ∼0.4) over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. The users have the option of tuning the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analysis based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small molecule docking exercises, in the context of protein functional annotation and drug discovery. PMID:21576233

  3. Calibrating transition-metal energy levels and oxygen bands in first-principles calculations: Accurate prediction of redox potentials and charge transfer in lithium transition-metal oxides

    NASA Astrophysics Data System (ADS)

    Seo, Dong-Hwa; Urban, Alexander; Ceder, Gerbrand

    2015-09-01

    Transition-metal (TM) oxides play an increasingly important role in technology today, including applications such as catalysis, solar energy harvesting, and energy storage. In many of these applications, the details of their electronic structure near the Fermi level are critically important for their properties. We propose a first-principles-based computational methodology for the accurate prediction of oxygen charge transfer in TM oxides and lithium TM (Li-TM) oxides. To obtain accurate electronic structures, the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional is adopted, and the amount of exact Hartree-Fock exchange (mixing parameter) is adjusted to reproduce reference band gaps. We show that the HSE06 functional with optimal mixing parameter yields not only improved electronic densities of states, but also better energetics (Li-intercalation voltages) for LiCoO2 and LiNiO2 as compared to the generalized gradient approximation (GGA), Hubbard-U-corrected GGA (GGA+U), and standard HSE06. We find that the optimal mixing parameters for TM oxides are system specific and correlate with the covalency (ionicity) of the TM species. The strong covalent (ionic) nature of TM-O bonding leads to lower (higher) optimal mixing parameters. We find that optimized HSE06 functionals predict stronger hybridization of the Co 3d and O 2p orbitals as compared to GGA, resulting in a greater contribution from oxygen states to charge compensation upon delithiation in LiCoO2. We also find that the band gaps of Li-TM oxides increase linearly with the mixing parameter, enabling the straightforward determination of optimal mixing parameters based on GGA (α = 0.0) and HSE06 (α = 0.25) calculations. Our results also show that G0W0@GGA+U band gaps of TM oxides (MO, M = Mn, Co, Ni) and LiCoO2 agree well with experimental references, suggesting that G0W0 calculations can be used as a reference for the calibration of the mixing parameter in cases when no experimental band gap has been
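
    The reported linearity of the band gap in the mixing parameter makes the calibration a two-point interpolation, as sketched below with illustrative gap values.

      # Sketch: calibrate the HSE06 mixing parameter alpha from the GGA
      # (alpha = 0.0) and standard HSE06 (alpha = 0.25) band gaps, assuming
      # the linear gap-vs-alpha relation noted above.
      def optimal_alpha(gap_gga, gap_hse06, gap_ref, alpha_hse06=0.25):
          slope = (gap_hse06 - gap_gga) / alpha_hse06  # eV per unit alpha
          return (gap_ref - gap_gga) / slope

      print(optimal_alpha(gap_gga=1.1, gap_hse06=2.3, gap_ref=2.7))  # ~0.33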

  4. Prediction of nitroxide hyperfine coupling constants in solution from combined nanosecond scale simulations and quantum computations

    NASA Astrophysics Data System (ADS)

    Houriez, Céline; Ferré, Nicolas; Masella, Michel; Siri, Didier

    2008-06-01

    We present a combined theoretical approach based on analyzing molecular dynamics trajectories (at the nanosecond scale) generated with classical polarizable force fields and on quantum calculations to compute averaged hyperfine coupling constants. The method is used to estimate the constant of a prototypical nitroxide: dimethylnitroxide. During the simulations the molecule is embedded in a cubic box containing about 500 water molecules, and the molecular dynamics is generated using periodic boundary conditions. Once the trajectories are obtained, the nitroxide and its first hydration shell molecules are extracted, and the coupling constants are computed from the resulting aggregates by means of quantum computations. All the water molecules of the bulk are also accounted for during those computations by means of the electrostatic-potential-fitted method. Our results show that, in order to predict accurate and reliable coupling constants, one needs to describe carefully the out-of-plane motion of the nitroxide nitrogen and to sample trajectories with a time interval of at least 400 fs to generate a large, uncorrelated set of nitroxide structures. Compared to Car-Parrinello molecular dynamics techniques, our approach can be used readily to compute hyperfine coupling constants of large systems, such as nitroxides of great size interacting with macromolecules such as proteins or polymers.

  5. Prediction of nitroxide hyperfine coupling constants in solution from combined nanosecond scale simulations and quantum computations.

    PubMed

    Houriez, Céline; Ferré, Nicolas; Masella, Michel; Siri, Didier

    2008-06-28

    We present a combined theoretical approach based on analyzing molecular dynamics trajectories (at the nanosecond scale) generated with classical polarizable force fields and on quantum calculations to compute averaged hyperfine coupling constants. The method is used to estimate the constant of a prototypical nitroxide: dimethylnitroxide. During the simulations the molecule is embedded in a cubic box containing about 500 water molecules, and the molecular dynamics is generated using periodic boundary conditions. Once the trajectories are obtained, the nitroxide and its first hydration shell molecules are extracted, and the coupling constants are computed from the resulting aggregates by means of quantum computations. All the water molecules of the bulk are also accounted for during those computations by means of the electrostatic-potential-fitted method. Our results show that, in order to predict accurate and reliable coupling constants, one needs to describe carefully the out-of-plane motion of the nitroxide nitrogen and to sample trajectories with a time interval of at least 400 fs to generate a large, uncorrelated set of nitroxide structures. Compared to Car-Parrinello molecular dynamics techniques, our approach can be used readily to compute hyperfine coupling constants of large systems, such as nitroxides of great size interacting with macromolecules such as proteins or polymers. PMID:18601346
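
    A minimal sketch of the trajectory-averaging step, assuming snapshots are thinned to the 400 fs decorrelation interval reported above; the per-snapshot couplings are placeholders for the quantum-computed values.

      import numpy as np

      # Sketch: average an instantaneous hyperfine coupling over MD snapshots
      # spaced at least `min_interval_fs` apart.
      def averaged_coupling(times_fs, a_inst, min_interval_fs=400.0):
          picked, last = [], -np.inf
          for t, a in zip(times_fs, a_inst):
              if t - last >= min_interval_fs:
                  picked.append(a)
                  last = t
          return float(np.mean(picked))

      times = np.arange(0.0, 4000.0, 100.0)  # snapshot times in fs
      a_vals = 40.0 + np.sin(times / 700.0)  # placeholder couplings
      print(averaged_coupling(times, a_vals))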

  6. Aircraft T-tail flutter predictions using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Attorni, A.; Cavagna, L.; Quaranta, G.

    2011-02-01

    The paper presents the application of computational aeroelasticity (CA) methods to the analysis of T-tail stability in the transonic regime. For this flow condition, unsteady aerodynamics show a significant dependency on the aircraft's equilibrium flight configuration, which rules both the position of shock waves in the flow field and the load distribution on the horizontal tail plane. Both of these elements influence the aerodynamic forces, and hence the aeroelastic stability of the system. The proposed numerical procedure makes it possible to investigate flutter stability for a free-flying aircraft by iterating the following sequence of sub-problems until convergence: search for the trimmed condition of the deformable aircraft; linearize the system about the stated equilibrium point; predict the aeroelastic stability boundaries using the inferred linear model. An innovative approach based on sliding meshes makes it possible to represent changes of the computational fluid domain due to the motion of the control surfaces used to trim the aircraft. To highlight the importance of keeping the linear model aligned with the trim condition, and at the same time the capabilities of the computational fluid dynamics approach, the method is applied to a real aircraft with a T-tail configuration: the P180.

  7. A Computational Model for Predicting RNase H Domain of Retrovirus.

    PubMed

    Wu, Sijia; Zhang, Xinman; Han, Jiuqiang

    2016-01-01

    RNase H (RNH) is a pivotal domain in retroviruses that cleaves the DNA-RNA hybrid to allow retroviral replication to continue. This crucial role indicates that RNH is a promising drug target for therapeutic intervention. However, the RNHs annotated in the UniProtKB database have so far been insufficient for a good understanding of their statistical characteristics. In this work, a computational RNH model was proposed to annotate new putative RNHs (np-RNHs) in retroviruses. It predicts RNH domains by recognizing their start and end sites separately with an SVM method. The classification accuracy rates are 100%, 99.01% and 97.52%, corresponding to jack-knife, 10-fold cross-validation and 5-fold cross-validation tests, respectively. Subsequently, this model discovered 14,033 np-RNHs after scanning sequences without RNH annotations. All these predicted np-RNHs and annotated RNHs were employed to analyze the length, hydrophobicity and evolutionary relationships of RNH domains. They are all related to retroviral genera, which validates the classification of retroviruses to a certain degree. Finally, a software tool was designed for the application of our prediction model. The software together with the datasets involved in this paper is available for free download at https://sourceforge.net/projects/rhtool/files/?source=navbar. PMID:27574780

  8. Special session: computational predictability of natural convection flows in enclosures

    SciTech Connect

    Christon, M A; Gresho, P M; Sutton, S B

    2000-08-14

    Modern thermal design practices often rely on a "predictive" simulation capability--although predictability is rarely quantified and often difficult to confidently achieve in practice. The computational predictability of natural convection in enclosures is a significant issue for many industrial thermal design problems. One example of this is the design for mitigation of optical distortion due to buoyancy-driven flow in large-scale laser systems. In many instances the sensitivity of buoyancy-driven enclosure flows can be linked to the presence of multiple bifurcation points that yield laminar thermal convective processes that transition from steady to various modes of unsteady flow. This behavior is brought to light by a problem as "simple" as a differentially-heated tall rectangular cavity (8:1 height/width aspect ratio) filled with a Boussinesq fluid with Pr = 0.71--which defines, at least partially, the focus of this special session. For our purposes, the differentially-heated cavity provides a virtual fluid dynamics laboratory.
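
    For reference, the benchmark is fixed by two nondimensional groups, sketched below with illustrative air-like property values.

      # Sketch: the Rayleigh and Prandtl numbers that define the 8:1
      # differentially heated cavity problem. Property values are air-like
      # placeholders, chosen only to show how the groups are formed.
      def rayleigh(g, beta, dT, W, nu, kappa):
          return g * beta * dT * W ** 3 / (nu * kappa)

      def prandtl(nu, kappa):
          return nu / kappa

      nu, kappa = 1.5e-5, 2.1e-5  # m^2/s, air near room temperature
      print(prandtl(nu, kappa))   # ~0.71, the value quoted above
      print(rayleigh(9.81, 3.4e-3, 10.0, 0.1, nu, kappa))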

  9. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  10. A computational method to predict carbonylation sites in yeast proteins.

    PubMed

    Lv, H Q; Liu, J; Han, J Q; Zheng, J G; Liu, R L

    2016-01-01

    Several post-translational modifications (PTMs) have been discussed in the literature. Among the variety of oxidative stress-induced PTMs, protein carbonylation is considered a biomarker of oxidative stress. Only certain proteins can be carbonylated, because only four amino acid residues, namely lysine (K), arginine (R), threonine (T) and proline (P), are susceptible to carbonylation. The yeast proteome is an excellent model to explore oxidative stress, especially protein carbonylation. Current experimental approaches to identifying carbonylation sites are expensive, time-consuming and limited in their ability to process proteins. Furthermore, there is no bioinformatic method to predict carbonylation sites in yeast proteins. Therefore, we propose a computational method to predict yeast carbonylation sites. This method has total accuracies of 86.32, 85.89, 84.80, and 86.80% in predicting the carbonylation sites of K, R, T, and P, respectively. These results were confirmed by 10-fold cross-validation. The ability of different kinds of features to identify carbonylation sites was analyzed, and the position-specific composition of the modification site-flanking residues was discussed. Additionally, a software tool has been developed to help with the calculations in this method. Datasets and the software are available at https://sourceforge.net/projects/hqlstudio/files/CarSpred.Y/. PMID:27420944
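
    A minimal sketch of the site-extraction step implied above: every K, R, T or P residue is a candidate, characterized by its flanking window. The window half-width of 7 is an assumption for illustration.

      # Sketch: enumerate candidate carbonylation sites and their
      # position-specific flanking windows from a protein sequence.
      SUSCEPTIBLE = set("KRTP")

      def site_windows(seq, half=7):
          for i, aa in enumerate(seq):
              if aa in SUSCEPTIBLE:
                  lo, hi = max(0, i - half), min(len(seq), i + half + 1)
                  yield i, aa, seq[lo:hi]

      for pos, aa, window in site_windows("MSKLRPTAYKQ"):
          print(pos, aa, window)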

  11. Improvement and Analysis of Computational Methods for Prediction of Residual Dipolar Couplings

    PubMed Central

    Berlin, Konstantin; O’Leary, Dianne P.; Fushman, David

    2009-01-01

    We describe a new, computationally efficient method for computing the molecular alignment tensor based on the molecular shape. The increase in speed is achieved by re-expressing the problem as one of numerical integration, rather than simple uniform sampling (as in the PALES method), and by using a convex hull rather than a detailed representation of the surface of a molecule. This method is applicable to bicelles, PEG/hexanol, and other alignment media that can be modeled by steric restrictions introduced by a planar barrier. The method is used to further explore and compare various representations of protein shape by an equivalent ellipsoid. We also examine the accuracy of the alignment tensor and residual dipolar coupling (RDC) prediction using various ab initio methods. We separately quantify the inaccuracy in RDC prediction caused by inaccuracy in the orientation and in the magnitude of the alignment tensor, concluding that orientation accuracy is much more important for accurate prediction of RDCs. PMID:19700353
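
    A minimal sketch of the hull simplification that provides the speed-up, assuming SciPy; the random point cloud stands in for atomic coordinates, and the alignment-tensor integration itself is not reproduced.

      import numpy as np
      from scipy.spatial import ConvexHull

      # Sketch: reduce an atom cloud to its convex-hull vertices, the compact
      # shape representation used in place of the detailed molecular surface.
      coords = np.random.default_rng(0).normal(size=(500, 3))  # stand-in atoms
      hull = ConvexHull(coords)
      print(len(coords), "atoms ->", len(hull.vertices), "hull vertices")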

  12. Computational tools and resources for prediction and analysis of gene regulatory regions in the chick genome

    PubMed Central

    Khan, Mohsin A. F.; Soto-Jimenez, Luz Mayela; Howe, Timothy; Streit, Andrea; Sosinsky, Alona; Stern, Claudio D.

    2013-01-01

    The discovery of cis-regulatory elements is a challenging problem in bioinformatics, owing to the distal locations and context-specific roles of these elements in controlling gene regulation. Here we review the current bioinformatics methodologies and resources available for the systematic discovery of cis-acting regulatory elements and conserved transcription factor binding sites in the chick genome. In addition, we propose, and make available, a novel workflow using computational tools that integrates CTCF analysis to predict putative insulator elements, enhancer prediction, and TFBS analysis. To demonstrate the usefulness of this computational workflow, we use it to analyze the locus of the gene Sox2, whose developmental expression is known to be controlled by a complex array of cis-acting regulatory elements. The workflow accurately predicts most of the experimentally verified elements along with some that have not yet been discovered. A web version of the CTCF tool, together with instructions for using the workflow, can be accessed from http://toolshed.g2.bx.psu.edu/view/mkhan1980/ctcf_analysis. For local installation of the tool, relevant Perl scripts and instructions are provided in the directory named “code” in the supplementary materials. PMID:23355428

  13. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm of this procedure was written and is called DONBOL. A description of the computer program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. This method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.

  14. Staging of osteonecrosis of the jaw requires computed tomography for accurate definition of the extent of bony disease.

    PubMed

    Bedogni, Alberto; Fedele, Stefano; Bedogni, Giorgio; Scoletta, Matteo; Favia, Gianfranco; Colella, Giuseppe; Agrillo, Alessandro; Bettini, Giordana; Di Fede, Olga; Oteri, Giacomo; Fusco, Vittorio; Gabriele, Mario; Ottolenghi, Livia; Valsecchi, Stefano; Porter, Stephen; Petruzzi, Massimo; Arduino, Paolo; D'Amato, Salvatore; Ungari, Claudio; Fung Polly, Pok-Lam; Saia, Giorgia; Campisi, Giuseppina

    2014-09-01

    Management of osteonecrosis of the jaw associated with antiresorptive agents is challenging, and outcomes are unpredictable. The severity of disease is the main guide to management, and can help to predict prognosis. Most available staging systems for osteonecrosis, including the widely-used American Association of Oral and Maxillofacial Surgeons (AAOMS) system, classify severity on the basis of clinical and radiographic findings. However, clinical inspection and radiography are limited in their ability to identify the extent of necrotic bone disease compared with computed tomography (CT). We have organised a large multicentre retrospective study (known as MISSION) to investigate the agreement between the AAOMS staging system and the extent of osteonecrosis of the jaw (focal compared with diffuse involvement of bone) as detected on CT. We studied 799 patients with detailed clinical phenotyping who had CT images taken. Features of diffuse bone disease were identified on CT within all AAOMS stages (20%, 8%, 48%, and 24% of patients in stages 0, 1, 2, and 3, respectively). Of the patients classified as stage 0, 110/192 (57%) had diffuse disease on CT, and about 1 in 3 patients with CT evidence of diffuse bone disease were misclassified by the AAOMS system as having stage 0 or stage 1 osteonecrosis. In addition, more than a third of patients with AAOMS stage 2 (142/405, 35%) had focal bone disease on CT. We conclude that the AAOMS staging system does not correctly identify the extent of bony disease in patients with osteonecrosis of the jaw. PMID:24856927

  15. Computed tomography in the prediction of outcome in head injury

    SciTech Connect

    Lipper, M.H.; Kishore, P.R.S.; Enas, G.G.; Domingues da Silva, A.A.; Choi, S.C.; Becker, D.P.

    1985-03-01

    To determine the prognostic significance of computed tomographic (CT) findings in head injury, retrospective analysis was performed in 128 randomly selected severe head-injury patients managed with a standardized protocol. The minimal criterion for entry into this study was that the patients were unable to obey simple commands or utter formed words. Serial CT was performed on admission and 3-5 days, 2 weeks, 3 months, and 1 year after injury. A scale of severity of abnormalities was devised taking into account the size of the traumatic lesions on CT. The CT findings using the proposed scale were correlated with the clinical outcome and analyzed using linear logistic regression. The correct prediction rate of outcome using the proposed scale for CT findings alone was found to be 69.7%. When CT findings were combined with the Glasgow Coma Scale score, this rate increased to 75.8%.

  16. Temperature prediction of space flight experiments by computer thermal analysis

    NASA Technical Reports Server (NTRS)

    Birdsong, M. B.; Luttges, M. W.

    1994-01-01

    Life sciences experiments are especially sensitive to temperature. A small temperature difference between otherwise identical samples can cause various differences in biological reaction rates. Knowledge of experimental temperatures and temperature histories helps to distinguish the effects of microgravity and temperature on spaceflight experiments compared to ground-based studies, and allows appropriate controls and sensitivity tests. Up to the present time, the Orbiter (Space Shuttle) has not generally provided temperature measurement instrumentation inside ambient lockers located in the Mid-deck of the Orbiter, or inside similar facilities such as Spacehab and Spacelab, but many pieces of hardware do have temperature recording capability. Most of these temperatures, however, have only been roughly measured or estimated. Such reported experimental temperatures, while accurate within a range of several degrees Celsius, are of limited utility to biological researchers. The temperature-controlled lockers used in spaceflight, such as Commercial Refrigerator/Incubator Modules (C-R/IMs), severely reduce the mass and volume available for test samples and do not necessarily provide uniform thermal environments. While these test carriers avoid some of the experimental temperature variations of the ambient lockers, the number of samples which can be accommodated in these temperature-controlled units is limited. In the present work, improved models of thermal prediction and control were sought. Temperatures are predicted by thermal analysis software using empirical temperatures recorded during STS-57. These temperatures are compared to data recorded throughout the mission using Ambient Temperature Recorders (ATRs) located within several payload lockers. Additional test cases are undertaken using controlled ground experiments to more precisely determine the reliability of the thermal model. The approach presented should increase the utility of various spaceflight carriers in

  17. A computational model predicting disruption of blood vessel development.

    PubMed

    Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas

    2013-04-01

    Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic

  18. Fast and accurate predictions of heat of formation by G4MP2-SFM parameterization scheme: An application to imidazole derivatives

    NASA Astrophysics Data System (ADS)

    Shoaib, Mahbubul Alam; Cho, Soo Gyeong; Choi, Cheol Ho

    2014-04-01

    We proposed a new parameterization scheme, G4MP2-SFM, for the prediction of heat of formation, combining the Systematic Fragmentation Method (SFM) with the high-accuracy G4MP2 theory. In an application to imidazole derivatives, we found that the overall MAD and RMSD of the particular G4MP2-SFM(opt) are 1.9 and 2.2 kcal/mol, respectively, demonstrating its high prediction accuracy. In addition, our parameterization scheme replaces the ab initio computations with simple arithmetic, allowing fast predictions. Our new computational scheme can be of practical use in high-throughput searches for new high energy materials.
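
    The "simple arithmetic" replacement can be pictured as a fragment-additive sum; below is a minimal sketch with invented fragment values, not the published G4MP2-SFM parameters.

      # Minimal sketch of a fragment-additive heat-of-formation estimate:
      # sum pre-computed fragment contributions plus a fitted correction.
      # All numbers are hypothetical placeholders for illustration only.
      FRAGMENT_HOF = {          # kcal/mol, illustrative values
          "imidazole_core": 30.0,
          "CH3": -10.0,
          "NO2": 5.0,
      }
      CORRECTION = -1.5         # fitted constant (hypothetical)

      def estimate_hof(fragments):
          """Sum fragment contributions and apply the fitted correction."""
          return sum(FRAGMENT_HOF[f] for f in fragments) + CORRECTION

      print(estimate_hof(["imidazole_core", "CH3", "NO2"]))  # -> 23.5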

  19. Advanced electric field computation for RF sheaths prediction with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo

    2012-10-01

    The design of an Ion Cyclotron (IC) launcher is driven not only by its coupling properties, but also by its capability to maintain low parallel electric fields in front of it, in order to provide good power transfer to the plasma and to reduce impurity production. However, because antenna performance cannot be verified before operations begin, advanced numerical simulation tools are the only alternative for carrying out a proper antenna design. With this in mind, the adoption of a code such as TOPICA [1], able to precisely take into account a realistic antenna geometry and an accurate plasma description, is extremely important to achieve these goals. Thanks to recently introduced features that allow the electric field distribution to be computed everywhere inside the antenna enclosure and in the plasma column, the TOPICA code is well placed to identify which elements may have a non-negligible impact on the antenna design and to suggest further optimizations to mitigate RF potentials. The present work documents the evaluation of the electric field map for actual antennas, namely the Tore Supra Q5 and JET A2 launchers, and the foreseen ITER IC antenna. [1] D. Milanesio et al., Nucl. Fusion 49, 115019 (2009).

  20. Using computational modeling to predict arrhythmogenesis and antiarrhythmic therapy

    PubMed Central

    Moreno, Jonathan D.; Clancy, Colleen E.

    2010-01-01

    The use of computational modeling to predict arrhythmia and arrhythmogenesis is a relatively new field, but it has nonetheless dramatically enhanced our understanding of the physiological and pathophysiological mechanisms that lead to arrhythmia. This review summarizes recent advances in computational modeling approaches, with a brief review of the evolution of cellular action potential models and the incorporation of genetic mutations to understand fundamental arrhythmia mechanisms, including how simulations have revealed situation-specific mechanisms leading to multiple phenotypes for the same genotype. The review then focuses on modeling drug blockade to understand the less-than-intuitive ways in which some drugs either ameliorate or paradoxically exacerbate arrhythmia. Quantification of specific arrhythmia indices is discussed at each spatial scale, from channel to tissue. The utility of hERG modeling to assess altered repolarization in response to drug blockade is also briefly discussed. Finally, insights gained from modeling Ca2+ dynamics and EC coupling, neurohumoral regulation of cardiac dynamics, and cell signaling pathways are also reviewed. PMID:20652086

  1. An Operational Computational Terminal Area PBL Prediction System

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1998-01-01

    There are two fundamental goals of this research project, listed here in order of priority. The first and primary goal is to develop a prognostic system which could satisfy the operational weather prediction requirements of the meteorological subsystem within the Aircraft Vortex Spacing System (AVOSS), i.e., an operational computational Terminal Area PBL Prediction System (TAPPS). The second goal is to perform in-depth diagnostic analyses of the meteorological conditions during the special wake vortex deployments at Memphis and Dallas in August 1995 and September 1997, respectively. These two goals are interdependent because a thorough understanding of the atmospheric dynamical processes which produced the unique meteorology during the Memphis and Dallas deployments will help us design a prognostic system for the planetary boundary layer (PBL) which could be utilized to support the meteorological subsystem within AVOSS. Concerning the primary goal, TAPPS Stage 2 was tested on the Memphis data and is about to be tested on the Dallas case studies. Furthermore, benchmark tests have been undertaken to select the appropriate platform to run TAPPS in real time in support of the DFW AVOSS system. In addition, a technique to improve the initial data over the region surrounding Dallas was also tested and modified for potential operational use in TAPPS. The secondary goal involved several sensitivity simulations and comparisons to Memphis observational data sets in an effort to diagnose what specific atmospheric phenomena were occurring which may have impacted the dynamics of atmospheric wake vortices.

  2. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  3. Computational Prediction of Electron Ionization Mass Spectra to Assist in GC/MS Compound Identification.

    PubMed

    Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David

    2016-08-01

    We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI) that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS) (i.e., the type of mass spectrum commonly generated by gas chromatography mass spectrometry). The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set, containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and http://cfmid.wishartlab.com as a web service. PMID:27381172
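
    The dot-product scoring used to compare predicted and measured spectra is straightforward to sketch; the spectra below are invented m/z-to-intensity tables, not NIST data.

      # Minimal sketch of a cosine-normalized dot-product score between two
      # mass spectra represented as {m/z: intensity} dictionaries.
      import math

      def dot_product_score(spec_a, spec_b):
          """Normalized dot product over the union of m/z bins."""
          mzs = set(spec_a) | set(spec_b)
          num = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
          norm = math.sqrt(sum(v * v for v in spec_a.values())) * \
                 math.sqrt(sum(v * v for v in spec_b.values()))
          return num / norm if norm else 0.0

      predicted = {41: 0.2, 43: 1.0, 58: 0.5}              # invented spectrum
      measured = {41: 0.3, 43: 0.9, 57: 0.1, 58: 0.4}      # invented spectrum
      print(f"dot product score: {dot_product_score(predicted, measured):.3f}")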

  4. Computational classifiers for predicting the short-term course of Multiple sclerosis

    PubMed Central

    2011-01-01

    Background The aim of this study was to assess the diagnostic accuracy (sensitivity and specificity) of clinical, imaging and motor evoked potentials (MEP) for predicting the short-term prognosis of multiple sclerosis (MS). Methods We obtained clinical data, MRI and MEP from a prospective cohort of 51 patients and 20 matched controls followed for two years. Clinical end-points recorded were: 1) expanded disability status scale (EDSS), 2) disability progression, and 3) new relapses. We constructed computational classifiers (Bayesian, random decision trees, simple logistic-linear regression, and neural networks) and calculated their accuracy by means of a 10-fold cross-validation method. We also validated our findings with a second cohort of 96 MS patients from a second center. Results We found that disability at baseline, grey matter volume and MEP were the variables that correlated best with clinical end-points, although their diagnostic accuracy was low. However, classifiers combining the most informative variables, namely baseline disability (EDSS), MRI lesion load and central motor conduction time (CMCT), were much more accurate in predicting future disability. Using the most informative variables (especially EDSS and CMCT) we developed a neural network (NNet) that attained a good performance for predicting the EDSS change. The predictive ability of the neural network was validated in an independent cohort, obtaining similar accuracy (80%) for predicting the change in the EDSS two years later. Conclusions The usefulness of clinical variables for predicting the course of MS on an individual basis is limited, despite being associated with the disease course. By training a NNet with the most informative variables we achieved a good accuracy for predicting short-term disability. PMID:21649880
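
    As a rough illustration of the classifier-building step, the sketch below trains a small neural network on the three most informative baseline variables and scores it by 10-fold cross-validation; all data are synthetic placeholders, not the study cohort.

      # Minimal sketch: neural network on baseline EDSS, lesion load and CMCT,
      # scored by 10-fold cross-validation. Data are random placeholders.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      X = rng.normal(size=(51, 3))        # columns: EDSS, lesion load, CMCT
      y = rng.integers(0, 2, size=51)     # 1 = EDSS worsened at two years

      nnet = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                         random_state=0))
      scores = cross_val_score(nnet, X, y, cv=10, scoring="accuracy")
      print(f"10-fold accuracy: {scores.mean():.0%}")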

  5. The use of computational models to predict response to HIV therapy for clinical cases in Romania

    PubMed Central

    Revell, Andrew D; Ene, Luminiţa; Duiculescu, Dan; Wang, Dechao; Youle, Mike; Pozniak, Anton; Montaner, Julio; Larder, Brendan A

    2012-01-01

    Introduction A major challenge in Romania is the optimisation of antiretroviral therapy for the many HIV-infected adults with, on average, a decade of treatment experience. The RDI has developed computational models that predict virological response to therapy, but these require a genotype, which is not routinely available in Romania. Moreover, the models, which were trained without any Romanian data, have proved most accurate for patients from the healthcare settings that contributed the training data. Here we develop and test a novel model that does not require a genotype, with test data from Romania. Methods A random forest (RF) model was developed to predict the probability of the HIV viral load (VL) being reduced to <50 copies/ml following a therapy change. The input variables were baseline VL, CD4 count, treatment history and time to follow-up. The model was developed with 3188 treatment change episodes (TCEs) from North America, Western Europe and Australia. The model's predictions for 100 independent TCEs from the RDI database were compared to those of a model trained with the same data plus genotypes, and the model was then tested using 39 TCEs from Romania, in terms of the area under the ROC curve (AUC). Results When tested with the 100 independent RDI TCEs, the AUC values for the models with and without genotypes were 0.88 and 0.86 respectively. For the 39 Romanian TCEs the AUC was 0.60. However, when 14 cases with viral loads that may have been between 50 and 400 copies were removed, the AUC increased to 0.83. Discussion Despite having been trained without data from Romania, the model accurately predicted treatment responses in treatment-experienced Romanian patients with clade F virus, without the need for a genotype. The results suggest that this approach might be generalisable and useful in helping design optimal salvage regimens for treatment-experienced patients in countries with limited resources where genotyping is not always available. PMID:24432257
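
    A minimal sketch of the genotype-free random forest idea, assuming synthetic stand-ins for the four input variables and the suppression outcome:

      # Minimal sketch: random forest predicting probability of viral
      # suppression (<50 copies/ml) from four baseline variables.
      # All data below are invented placeholders, not RDI TCEs.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      X = rng.normal(size=(3188, 4))     # baseline VL, CD4, history, follow-up
      y = rng.integers(0, 2, size=3188)  # 1 = VL suppressed to <50 copies/ml

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      prob = rf.predict_proba(X_te)[:, 1]
      print(f"AUC: {roc_auc_score(y_te, prob):.2f}")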

  6. Highly Accurate Infrared Line Lists of SO2 Isotopologues Computed for Atmospheric Modeling on Venus and Exoplanets

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2014-12-01

    Last year we reported a semi-empirical 32S16O2 spectroscopic line list (denoted Ames-296K) for atmospheric characterization of Venus and other exoplanetary environments. To facilitate the determination of sulfur isotopic ratios and sulfur chemistry models, we now present Ames-296K line lists for the upgraded main isotopologue 626 and four other symmetric isotopologues: 636, 646, 666 and 828. The line lists are computed on an ab initio potential energy surface refined with the most reliable high-resolution experimental data, using a high-quality CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface. The most valuable part of our approach is that it provides "truly reliable" predictions (and alternatives) for unknown or hard-to-measure/analyze spectra. This strategy ensures that the lists are the best available alternative for the wide spectral regions missing from spectroscopic databases such as HITRAN and GEISA, where only very limited data exist for 626/646 and no infrared data at all for 636/666 or other minor isotopologues. Our general line position accuracy up to 5000 cm-1 is 0.01 - 0.02 cm-1 or better, and most transition intensity deviations are less than 5% compared to experimentally measured quantities. Note that we have solved a convergence issue and further improved the quality and completeness of the main isotopologue 626 list at 296 K. We will compare the lists to available models in CDMS/JPL/HITRAN and discuss future mutually beneficial interactions between theoretical and experimental efforts.

  7. Accurate prediction of protein secondary structure and solvent accessibility by consensus combiners of sequence and structure information

    PubMed Central

    Pollastri, Gianluca; Martin, Alberto JM; Mooney, Catherine; Vullo, Alessandro

    2007-01-01

    Background Structural properties of proteins such as secondary structure and solvent accessibility contribute to three-dimensional structure prediction, not only in the ab initio case but also when homology information to known structures is available. Structural properties are also routinely used in protein analysis even when homology is available, largely because homology modelling is lower throughput than, say, secondary structure prediction. Nonetheless, predictors of secondary structure and solvent accessibility are virtually always ab initio. Results Here we develop high-throughput machine learning systems for the prediction of protein secondary structure and solvent accessibility that exploit homology to proteins of known structure, where available, in the form of simple structural frequency profiles extracted from sets of PDB templates. We compare these systems to their state-of-the-art ab initio counterparts, and with a number of baselines in which secondary structures and solvent accessibilities are extracted directly from the templates. We show that structural information from templates greatly improves secondary structure and solvent accessibility prediction quality, and that, on average, the systems significantly enrich the information contained in the templates. For sequence similarity exceeding 30%, secondary structure prediction quality is approximately 90%, close to its theoretical maximum, and 2-class solvent accessibility prediction quality is roughly 85%. Gains are robust with respect to template selection noise, and significant for marginal sequence similarity and for short alignments, supporting the claim that these improved predictions may prove beneficial beyond the case in which clear homology is available. Conclusion The predictive systems are publicly available. PMID:17570843

  8. Computer Assisted Design, Prediction, and Execution of Economical Organic Syntheses

    NASA Astrophysics Data System (ADS)

    Gothard, Nosheen Akber

    The synthesis of useful organic molecules via simple and cost-effective routes is a core challenge in organic chemistry. In industry and academia, organic chemists use their chemical intuition, technical expertise and published procedures to determine an optimal pathway. This approach not only takes time and effort but is also cost prohibitive, and many potentially optimal routes sketched on paper are never experimentally tested. In addition, new methods discovered daily are often overlooked in favor of established techniques. This thesis reports a computational technique that assists the discovery of economical synthetic routes to useful organic targets. Organic chemistry exists as a network in which chemicals are connected by reactions, analogous to cities connected by roads on a geographic map. This network topology of the network of organic chemistry (NOC) allows the application of graph theory to devise algorithms for the synthetic optimization of organic targets. A computational approach comprising customizable algorithms, pre-screening filters, and existing chemoinformatic techniques is capable of answering complex questions and performing tasks desired by chemists, such as the optimization of organic syntheses. One-pot reactions are central to modern synthesis since they save resources and time by avoiding isolation, purification, characterization, and production of chemical waste after each synthetic step. Sometimes such reactions are identified by chance or, more often, by careful inspection of the individual steps that are to be wired together. Here, algorithms are used to discover one-pot reactions, which are then validated experimentally, demonstrating that the computationally predicted sequences can indeed be carried out in good overall yields. The experimental examples are chosen from small networks of reactions around useful chemicals such as quinoline scaffolds, quinoline-based inhibitors of phosphoinositide 3-kinase delta (PI3
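
    The cheapest-route idea can be sketched as a weighted shortest-path search over a reaction graph; the compounds, edges and costs below are invented for illustration (requires the networkx package):

      # Minimal sketch: organic chemistry as a directed graph where edge
      # weights are illustrative costs of each reaction step, optimized by
      # shortest-path search. All names and costs are hypothetical.
      import networkx as nx

      G = nx.DiGraph()
      G.add_edge("aniline", "intermediate_A", weight=2.0)
      G.add_edge("intermediate_A", "quinoline_scaffold", weight=1.5)
      G.add_edge("aniline", "intermediate_B", weight=1.0)
      G.add_edge("intermediate_B", "quinoline_scaffold", weight=3.5)

      route = nx.shortest_path(G, "aniline", "quinoline_scaffold", weight="weight")
      cost = nx.shortest_path_length(G, "aniline", "quinoline_scaffold",
                                     weight="weight")
      print(" -> ".join(route), f"(total cost {cost})")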

  9. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    PubMed

    Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high-similarity datasets, but still far from satisfactory for low-similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on a Support Vector Machine (SVM) in conjunction with integrated features from the position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors by recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets. PMID:24675610
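
    The SVM-RFE ranking step can be sketched with scikit-learn's RFE wrapper around a linear SVM; the feature matrix below is a random placeholder for the integrated PSSM/PROFEAT/GO vectors.

      # Minimal sketch of SVM-RFE: a linear SVM recursively eliminates the
      # lowest-weighted features. Data are random placeholders.
      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      X = rng.normal(size=(300, 120))    # stand-in integrated feature vectors
      y = rng.integers(0, 4, size=300)   # four structural classes

      svm = SVC(kernel="linear")
      rfe = RFE(svm, n_features_to_select=50, step=1).fit(X, y)
      top_features = np.where(rfe.support_)[0]
      print(f"{top_features.size} features retained, e.g. indices {top_features[:5]}")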

  10. Does computed tomography permeability predict hemorrhagic transformation after ischemic stroke?

    PubMed Central

    Yen, Peggy; Cobb, Allison; Shankar, Jai Jai Shiva

    2016-01-01

    AIM: To use perfusion-derived permeability-surface area product maps to predict hemorrhagic transformation following thrombolytic treatment for acute ischemic stroke. METHODS: We retrospectively analyzed our prospectively kept acute stroke database over five consecutive months for patients with symptoms of acute ischemic stroke (AIS) who had computed tomography (CT) perfusion (CTP) done at arrival. Patients included in the analyses also had to have a follow-up CT. The permeability-surface area product (PS) was calculated for the side of the ischemia and/or infarction and for the contralateral unaffected side at the same level. The cerebral blood flow map was used to delineate the ischemic territory. Next, a region of interest was drawn at the centre of this territory on the PS parametric map. Finally, a mirror region of interest was created on the contralateral side at the same level. The relative permeability-surface area product (rPS) provided an internal control and was calculated as the ratio of the PS on the side of the AIS to the PS on the contralateral side. A Student t-test was performed after log conversion of rPS between patients with and without hemorrhagic transformation; log conversion was used to bring the data closer to a normal distribution before applying the t-test. For the group of patients who experienced an intracranial bleed, a Student t-test was performed between those with only petechial hemorrhage and those with more severe parenchymal hematoma with subarachnoid haemorrhage. RESULTS: Of 84 patients with AIS and CTP at admission, only 42 patients had a follow-up CT. The rPS derived using the normal side as the internal control was significantly higher (P = 0.003) for the 15 cases of hemorrhagic transformation (1.71 ± 1.64) compared to the 27 cases that did not have any (1.07 ± 1.30). Patients with values above the overall mean rPS of 1.3 had an increased likelihood of subsequent hemorrhagic transformation. The sensitivity of using this score to predict
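
    The statistical comparison reduces to a two-sample t-test on log-converted rPS ratios; a minimal sketch with invented rPS values:

      # Minimal sketch: log-transform rPS ratios to approximate normality,
      # then compare groups with a two-sample t-test. Values are invented.
      import numpy as np
      from scipy import stats

      rps_with_ht = np.array([1.9, 2.4, 1.2, 3.1, 1.6, 2.0])     # hemorrhagic transformation
      rps_without_ht = np.array([0.9, 1.1, 1.3, 0.8, 1.0, 1.2])  # no transformation

      t, p = stats.ttest_ind(np.log(rps_with_ht), np.log(rps_without_ht))
      print(f"t = {t:.2f}, p = {p:.4f}")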

  11. New applications of computer-based section construction: strain analysis, local balancing, and subsurface fault prediction

    SciTech Connect

    Geiser, J.; Geiser, P.A.; Kligfield, R.; Ratliff, R.; Rowan, M.

    1988-04-01

    An increase in the use of computers in structural geology now encourages practical investigation of several topics of considerable importance to the explorationist. Computer-based cross section construction and analysis is one such application. Algorithms based on the geometry of flexural slip or flow deformation styles permit rapid construction, restoration, and balancing of geological cross sections, which in turn allow evaluation of multiple working hypotheses in a time frame previously unattainable. These same techniques also simplify the application of several analytical methods which have tended to be restricted to structural geologists: predicted finite and incremental strain patterns within folds can be utilized in studies of porosity and permeability variation; the detailed geometry of folds can be evaluated and modified using local balancing methods when constraints provided by well, seismic, and surface data leave room for differing interpretations; and subsurface fault trajectories can be quickly and accurately predicted from knowledge of near-surface fold geometry. These and other methods discussed in the text permit the non-specialist to apply complex structural concepts to exploration in a practical and timely manner.

  12. Binding Mode and Induced Fit Predictions for Prospective Computational Drug Design.

    PubMed

    Grebner, Christoph; Iegre, Jessica; Ulander, Johan; Edman, Karl; Hogner, Anders; Tyrchan, Christian

    2016-04-25

    Computer-aided drug design plays an important role in medicinal chemistry to obtain insights into molecular mechanisms and to prioritize design strategies. Although significant improvement has been made in structure-based design, it remains a key challenge to accurately model and predict induced fit mechanisms. Most currently available techniques either do not provide sufficient protein conformational sampling or are too computationally demanding to fit an industrial setting. The current study presents a systematic and exhaustive investigation of predicting binding modes for a range of systems using PELE (Protein Energy Landscape Exploration), an efficient and fast protein-ligand sampling algorithm. The systems analyzed (cytochrome P, kinase, protease, and nuclear hormone receptor) exhibit different complexities of ligand-induced fit mechanisms and protein dynamics. The results are compared with results from classical molecular dynamics simulations and (induced fit) docking. This study shows that ligand-induced side-chain rearrangements and small-to-medium backbone movements are captured well in PELE. Large secondary structure rearrangements, however, remain challenging for all employed techniques. Relevant binding modes (ligand heavy atom RMSD < 1.0 Å) can be obtained by the PELE method within a few hours of simulation, positioning PELE as a tool applicable for rapid drug design cycles. PMID:26974351

  13. A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration

    NASA Technical Reports Server (NTRS)

    Obando, Rodrigo A.; Stoughton, John W.

    1995-01-01

    The modeling and design of a fault-tolerant multiprocessor system is addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM), and the fault considered is the death of a processor. The developed model is useful in determining performance bounds of the system during recovery and restoration, including the time to recover from the fault, the time to restore the system, and any permanent delay in the input-to-output latency after the system has regained steady state. An ATAMM-based computer implementation was developed with a four-processor generic VHSIC spaceborne computer (GVSC) as the target system. A simulation of the GVSC was also written, based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.

  14. Conformations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane: are ab initio quantum chemistry predictions accurate?

    NASA Astrophysics Data System (ADS)

    Smith, Grant D.; Jaffe, Richard L.; Yoon, Do. Y.

    1998-06-01

    High-level ab initio quantum chemistry calculations are shown to predict conformer populations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane that are consistent with gas-phase NMR vicinal coupling constant measurements. The conformational energies of the cyclic ether 5-methoxy-1,3-dioxane are found to be consistent with those predicted by a rotational isomeric state (RIS) model based upon the acyclic analog 1,2-dimethoxypropane. The quantum chemistry and RIS calculations indicate the presence of strong attractive 1,5 C(H 3)⋯O electrostatic interactions in these molecules, similar to those found in 1,2-dimethoxyethane.

  15. Computational Prediction of Effector Proteins in Fungi: Opportunities and Challenges.

    PubMed

    Sonah, Humira; Deshmukh, Rupesh K; Bélanger, Richard R

    2016-01-01

    Effector proteins are mostly secretory proteins that stimulate plant infection by manipulating the host response. Identifying fungal effector proteins and understanding their function is of great importance in efforts to curb losses to plant diseases. Recent advances in high-throughput sequencing technologies have facilitated the availability of several fungal genomes and 1000s of transcriptomes. As a result, the growing amount of genomic information has provided great opportunities to identify putative effector proteins in different fungal species. There is little consensus over the annotation and functionality of effector proteins, and mostly small secretory proteins are considered as effector proteins, a concept that tends to overestimate the number of proteins involved in a plant-pathogen interaction. With the characterization of Avr genes, criteria for computational prediction of effector proteins are becoming more efficient. There are 100s of tools available for the identification of conserved motifs, signature sequences and structural features in the proteins. Many pipelines and online servers, which combine several tools, are made available to perform genome-wide identification of effector proteins. In this review, available tools and pipelines, their strength and limitations for effective identification of fungal effector proteins are discussed. We also present an exhaustive list of classically secreted proteins along with their key conserved motifs found in 12 common plant pathogens (11 fungi and one oomycete) through an analytical pipeline. PMID:26904083

  16. Hybrid soft computing systems for reservoir PVT properties prediction

    NASA Astrophysics Data System (ADS)

    Khoukhi, Amar

    2012-07-01

    In reservoir engineering, knowledge of Pressure-Volume-Temperature (PVT) properties is of great importance for many uses, such as well test analyses, reserve estimation, material balance calculations, inflow performance calculations, fluid flow in porous media and the evaluation of new formations for potential development and enhanced oil recovery projects. The determination of these properties is a complex problem because laboratory-measured properties of rock samples ("cores") are only available from limited and isolated well locations and/or intervals. Several correlation models have been developed to relate these properties to other measures which are relatively abundant. These models include empirical correlations, statistical regression and artificial neural networks (ANNs). In this paper, a comprehensive study is conducted on the prediction of the bubble point pressure and oil formation volume factor using two hybrid soft computing techniques: a genetically optimised neural network, and a genetically enhanced subtractive clustering for the parameter identification of an adaptive neuro-fuzzy inference system. Simulation experiments are provided, showing the performance of the proposed techniques as compared with commonly used regression correlations, including standard artificial neural networks.

  18. Survival outcomes scores (SOFT, BAR, and Pedi-SOFT) are accurate in predicting post-liver transplant survival in adolescents.

    PubMed

    Conjeevaram Selvakumar, Praveen Kumar; Maksimak, Brian; Hanouneh, Ibrahim; Youssef, Dalia H; Lopez, Rocio; Alkhouri, Naim

    2016-09-01

    SOFT and BAR scores utilize recipient, donor, and graft factors to predict the 3-month survival after LT in adults (≥18 years). Recently, the Pedi-SOFT score was developed to predict 3-month survival after LT in young children (≤12 years). These scoring systems have not been studied in adolescent patients (13-17 years). We evaluated the accuracy of these scoring systems in predicting the 3-month post-LT survival in adolescents through a retrospective analysis of data from UNOS of patients aged 13-17 years who received LT between 03/01/2002 and 12/31/2012. Recipients of combined organ transplants, grafts donated after cardiac death, or living donor grafts were excluded. A total of 711 adolescent LT recipients were included, with a mean age of 15.2±1.4 years. A total of 100 patients died post-LT, including 33 within 3 months. SOFT, BAR, and Pedi-SOFT scores were all found to be good predictors of 3-month post-transplant survival outcome, with areas under the ROC curve of 0.81, 0.80, and 0.81, respectively. All three scores provided good accuracy for predicting 3-month survival post-LT in adolescents and may help clinical decision making to optimize survival rate and organ utilization. PMID:27478012

  19. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease. PMID:25938675
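
    The validation scheme, fitting on some states and predicting prevalence in held-out states, then reporting the correlation, can be sketched as follows; the features, outcomes and state grouping are all synthetic placeholders:

      # Minimal sketch: fit a linear model on some states, predict NCD
      # prevalence in unseen states, and report the Pearson correlation.
      # All arrays are synthetic stand-ins for the survey data.
      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(5)
      X = rng.normal(size=(300, 10))                # state-year demographic features
      y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=300)  # prevalence
      states = np.repeat(np.arange(50), 6)          # 50 states x 6 years

      held_out = states >= 40                       # last 10 states kept out
      model = Ridge().fit(X[~held_out], y[~held_out])
      r, _ = pearsonr(model.predict(X[held_out]), y[held_out])
      print(f"held-out state correlation: {r:.2f}")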

  20. A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults

    ERIC Educational Resources Information Center

    George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.

    2007-01-01

    The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean ± standard deviation [SD]; maximal heart rate [HRmax…

  1. Preliminary Computational Analysis of the (HIRENASD) Configuration in Preparation for the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Florance, Jennifer P.; Heeg, Jennifer; Wieseman, Carol D.; Perry, Boyd P.

    2011-01-01

    This paper presents preliminary computational aeroelastic analysis results generated in preparation for the first Aeroelastic Prediction Workshop (AePW). These results were produced using FUN3D software developed at NASA Langley and are compared against the experimental data generated during the HIgh REynolds Number Aero-Structural Dynamics (HIRENASD) Project. The HIRENASD wind-tunnel model was tested in the European Transonic Windtunnel in 2006 by Aachen University's Department of Mechanics with funding from the German Research Foundation. The computational effort discussed here was performed (1) to obtain a preliminary assessment of the ability of the FUN3D code to accurately compute physical quantities experimentally measured on the HIRENASD model and (2) to translate the lessons learned from the FUN3D analysis of HIRENASD into a set of initial guidelines for the first AePW, which includes test cases for the HIRENASD model and its experimental data set. This paper compares the computational and experimental results obtained at Mach 0.8 for a Reynolds number of 7 million based on chord, corresponding to HIRENASD test conditions No. 132 and No. 159. Aerodynamic loads and static aeroelastic displacements are compared at two levels of grid resolution. Harmonic perturbation numerical results are compared with the experimental data using the magnitude and phase relationship between pressure coefficients and displacement. A dynamic aeroelastic numerical calculation is presented at one wind-tunnel condition in the form of the time history of the generalized displacements. Additional FUN3D validation results are also presented for the AGARD 445.6 wing data set. This wing was tested in the Transonic Dynamics Tunnel and is commonly used in the preliminary benchmarking of computational aeroelastic software.

  2. A theoretical prediction of hydrogen molecule dissociation-recombination rates including an accurate treatment of internal state nonequilibrium effects

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.

    1990-01-01

    The dissociation and recombination of H2 over the temperature range 1000-5000 K are calculated in a nonempirical manner. The computation procedure involves the calculation of the state-to-state energy transfer rate coefficients, the solution of the 349 coupled equations which form the master equation, and the determination of the phenomenological rate coefficients. The nonempirical results presented here are in good agreement with experimental data at 1000 and 3000 K.

  3. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    PubMed

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro test, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R^2 = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687
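
    The two reported agreement metrics are easy to reproduce for any pair of predicted and measured strain fields; a minimal sketch with synthetic stand-ins for the ~1600 digital image correlation points:

      # Minimal sketch: coefficient of determination and normalized RMSE
      # between measured and predicted strains. Arrays are synthetic.
      import numpy as np

      rng = np.random.default_rng(6)
      measured = rng.normal(loc=1000.0, scale=300.0, size=1600)  # microstrain
      predicted = measured + rng.normal(scale=90.0, size=1600)   # FE predictions

      ss_res = np.sum((measured - predicted) ** 2)
      ss_tot = np.sum((measured - measured.mean()) ** 2)
      r2 = 1.0 - ss_res / ss_tot
      rmse_pct = np.sqrt(np.mean((measured - predicted) ** 2)) / measured.mean() * 100
      print(f"R^2 = {r2:.2f}, RMSE = {rmse_pct:.0f}% of mean strain")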

  4. Can the Gibbs free energy of adsorption be predicted efficiently and accurately: an M05-2X DFT study.

    PubMed

    Michalkova, A; Gorb, L; Hill, F; Leszczynski, J

    2011-03-24

    This study presents new insight into the prediction of partitioning of organic compounds between a carbon surface (soot) and water, and it also sheds light on the sluggish desorption of interacting molecules from activated and nonactivated carbon surfaces. This paper provides details about the structure and interactions of benzene, polycyclic aromatic hydrocarbons, and aromatic nitrocompounds with a carbon surface modeled by coronene using a density functional theory approach along with the M05-2X functional. The adsorption was studied in vacuum and from water solution. The molecules studied are physisorbed on the carbon surface. While the intermolecular interactions of benzene and hydrocarbons are governed by dispersion forces, nitrocompounds are adsorbed also due to quite strong electrostatic interactions with all types of carbon surfaces. On the basis of these results, we conclude that the method of prediction presented in this study allows one to approach the experimental level of accuracy in predicting thermodynamic parameters of adsorption on a carbon surface from the gas phase. The empirical modification of the polarized continuum model leads also to a quantitative agreement with the experimental data for the Gibbs free energy values of the adsorption from water solution. PMID:21361266

  5. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets. PMID:26460680

  6. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing

    PubMed Central

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628
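
    The core filtering idea, discarding reads that map inside Y-chromosome regions prone to random mapping, can be sketched as a simple interval test; the region coordinates below are hypothetical, not the six regions used in the study:

      # Minimal sketch: drop chrY reads falling in filtered regions.
      # Region coordinates are invented placeholders.
      FILTERED_REGIONS = [(100000, 250000), (1200000, 1400000), (5600000, 5900000)]

      def keep_read(chrom, pos):
          """Return False for chrY reads inside any filtered region."""
          if chrom != "chrY":
              return True
          return not any(start <= pos < end for start, end in FILTERED_REGIONS)

      reads = [("chrY", 120000), ("chrY", 3000000), ("chr21", 999000)]
      kept = [r for r in reads if keep_read(*r)]
      print(kept)  # -> [('chrY', 3000000), ('chr21', 999000)]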

  7. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    PubMed

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations in chromosome Y measurement, and high sensitivity to randomly mapped reads, may result in higher false negative rates (FNR) and false positive rates (FPR) in fetal sex prediction as well as in SCA detection. Here, we developed an optimized method that improves the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and in a simulation of 92 samples. The optimized method was further confirmed by large-scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628

  8. Comparative study of exchange-correlation functionals for accurate predictions of structural and magnetic properties of multiferroic oxides

    NASA Astrophysics Data System (ADS)

    Chen, Hanghui; Millis, Andrew J.

    2016-05-01

    We systematically compare predictions of various exchange correlation functionals for the structural and magnetic properties of perovskite Sr1-xBaxMnO3 (0 ≤ x ≤ 1), a representative class of multiferroic oxides. The local spin density approximation (LSDA) and the spin-dependent generalized gradient approximation with Perdew-Burke-Ernzerhof parametrization (sPBE) make substantially different predictions for ferroelectric atomic distortions, tetragonality, and ground state magnetic ordering. Neither approximation quantitatively reproduces all the measured structural and magnetic properties of perovskite Sr0.5Ba0.5MnO3. The spin-dependent generalized gradient approximation with the Perdew-Burke-Ernzerhof parametrization revised for solids (sPBEsol) and the charge-only Perdew-Burke-Ernzerhof parametrized generalized gradient approximation with Hubbard U and Hund's J extensions both provide overall better agreement with the measured structural and magnetic properties of Sr0.5Ba0.5MnO3, compared to LSDA and sPBE. Using these two methods, we find that, in contrast to previous predictions, perovskite BaMnO3 has large Mn off-center displacements and is close to a ferromagnetic-to-antiferromagnetic phase boundary, making it a promising candidate for inducing effective giant magnetoelectric effects and achieving cross-field control of polarization and magnetism.

  9. Accurate Prediction of Advanced Liver Fibrosis Using the Decision Tree Learning Algorithm in Chronic Hepatitis C Egyptian Patients

    PubMed Central

    Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud

    2016-01-01

    Background/Aim. With the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods for staging chronic liver diseases, as an alternative that avoids the drawbacks of biopsy, is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via the METAVIR score; patients were categorized as mild to moderate (F0–F2) or advanced (F3-F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except that alpha-fetoprotein replaces alanine aminotransferase. Sensitivity and receiver operating characteristic (ROC) curve analyses were performed to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and a 0.78 area under the ROC curve with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C could be predicted with high accuracy using a decision tree learning algorithm, which could reduce the need for liver biopsy. PMID:26880886
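
    Since the models are benchmarked against FIB-4, it may help to recall that index and the classifier shape. Below is a hedged Python sketch: the FIB-4 formula is the standard published one, while the decision tree is scikit-learn's CART used as a stand-in for the alternating decision tree of the paper, trained here on synthetic toy data:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def fib4(age, ast, platelets, alt):
          """Standard FIB-4 index: (age * AST) / (platelets [10^9/L] * sqrt(ALT))."""
          return (age * ast) / (platelets * np.sqrt(alt))

      # Synthetic cohort; the four columns loosely mimic model 2's style of inputs
      # (age, AST, platelets, AFP) and carry no clinical meaning.
      rng = np.random.default_rng(0)
      X = rng.uniform([20, 10, 100, 1], [70, 200, 400, 50], size=(300, 4))
      y = rng.integers(0, 2, 300)            # toy labels: 0 = F0-F2, 1 = F3-F4

      tree = DecisionTreeClassifier(max_depth=4).fit(X, y)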

  10. Accurate Prediction of Advanced Liver Fibrosis Using the Decision Tree Learning Algorithm in Chronic Hepatitis C Egyptian Patients.

    PubMed

    Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud

    2016-01-01

    Background/Aim. With the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods for staging chronic liver diseases, as an alternative that avoids the drawbacks of biopsy, is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via the METAVIR score; patients were categorized as mild to moderate (F0-F2) or advanced (F3-F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except that alpha-fetoprotein replaces alanine aminotransferase. Sensitivity and receiver operating characteristic (ROC) curve analyses were performed to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and a 0.78 area under the ROC curve with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C could be predicted with high accuracy using a decision tree learning algorithm, which could reduce the need for liver biopsy. PMID:26880886

  11. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to the compressible Navier-Stokes equations is studied. In contrast to Euler equation solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigation and validation. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  12. Accurate prediction of protein structural classes by incorporating predicted secondary structure information into the general form of Chou's pseudo amino acid composition.

    PubMed

    Kong, Liang; Zhang, Lichao; Lv, Jinfeng

    2014-03-01

    Extracting a good representation from a protein sequence is fundamental for protein structural class prediction tasks. In this paper, we propose a novel and powerful method to predict protein structural classes based on predicted secondary structure information. At the feature extraction stage, a 13-dimensional feature vector is extracted to characterize general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Specifically, four segment-level features are designed to elevate discriminative ability for proteins from the α/β and α+β classes. After the features are extracted, a multi-class non-linear support vector machine classifier is used to implement protein structural class prediction. We report extensive experiments comparing the proposed method to the state of the art in protein structural class prediction on three widely used low-similarity benchmark datasets: FC699, 1189 and 640. Our method achieves competitive performance on prediction accuracies, especially for the overall prediction accuracies, which exceed the best reported results on all three datasets. PMID:24316044

  13. IsoFinder: computational prediction of isochores in genome sequences.

    PubMed

    Oliver, José L; Carpena, Pedro; Hackenberg, Michael; Bernaola-Galván, Pedro

    2004-07-01

    Isochores are long genome segments homogeneous in G+C. Here, we describe an algorithm (IsoFinder) running on the web (http://bioinfo2.ugr.es/IsoF/isofinder.html) able to predict isochores at the sequence level. We move a sliding pointer from left to right along the DNA sequence. At each position of the pointer, we compute the mean G+C values to the left and to the right of the pointer. We then determine the position of the pointer for which the difference between left and right mean values (as measured by the t-statistic) reaches its maximum. Next, we determine the statistical significance of this potential cutting point, after filtering out short-scale heterogeneities below 3 kb by applying a coarse-graining technique. Finally, the program checks whether this significance exceeds a probability threshold. If so, the sequence is cut at this point into two subsequences; otherwise, the sequence remains undivided. The procedure continues recursively for each of the two resulting subsequences created by each cut. This leads to the decomposition of a chromosome sequence into long homogeneous genome regions (LHGRs) with well-defined mean G+C contents, each significantly different from the G+C contents of the adjacent LHGRs. Most LHGRs can be identified with Bernardi's isochores, given their correlation with biological features such as gene density, SINE and LINE (short, long interspersed repetitive elements) densities, recombination rate or single nucleotide polymorphism variability. The resulting isochore maps are available at our web site (http://bioinfo2.ugr.es/isochores/), and also at the UCSC Genome Browser (http://genome.cse.ucsc.edu/). PMID:15215396
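
    A compact sketch of this recursive segmentation, with the 3 kb coarse-graining and the exact significance filter simplified to a plain Welch t-test against an illustrative threshold, is given below; gc is assumed to be a vector of per-window G+C fractions:

      from scipy.stats import ttest_ind

      def split_point(gc):
          """Pointer position maximizing |t| between left and right mean G+C."""
          best_i, best_t = None, 0.0
          for i in range(2, len(gc) - 2):
              t, _ = ttest_ind(gc[:i], gc[i:], equal_var=False)
              if abs(t) > best_t:
                  best_i, best_t = i, abs(t)
          return best_i, best_t

      def segment(gc, lo=0, t_min=6.0, out=None):
          """Recursively cut gc into homogeneous regions; returns (start, end) pairs."""
          out = [] if out is None else out
          if len(gc) < 10:                       # too short to split further
              out.append((lo, lo + len(gc)))
              return out
          i, t = split_point(gc)
          if i is None or t < t_min:             # no significant cut: homogeneous region
              out.append((lo, lo + len(gc)))
          else:
              segment(gc[:i], lo, t_min, out)    # recurse on both halves
              segment(gc[i:], lo + i, t_min, out)
          return out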

  14. Computation of interactional aerodynamics for noise prediction of heavy lift rotorcraft

    NASA Astrophysics Data System (ADS)

    Hennes, Christopher C.

    Many computational tools are used when developing a modern helicopter. As the design space is narrowed, more accurate and time-intensive tools are brought to bear. These tools are used to determine the effect of a design decision on the performance, handling, stability and efficiency of the aircraft. One notable parameter left out of this process is acoustics. This is due in part to the difficulty of making useful acoustics calculations that reveal the differences between various design configurations. This thesis presents a new approach designed to bridge the gap in prediction capability between fast but low-fidelity Lagrangian particle methods and slow but high-fidelity Eulerian computational fluid dynamics simulations. A multi-pronged approach is presented. First, a simple flow solver using well-understood and tested flow solution methodologies is developed specifically to handle bodies in arbitrary motion. Two new technologies are then added to this basic flow solver. The first is an Immersed Boundary technique designed to be tolerant of geometric degeneracies and low-resolution grids. This new technique allows easy inclusion of complex fuselage geometries at minimal computational cost, improving the ability of the solver to capture the complex interactional aerodynamic effects expected in modern rotorcraft design. The second new technique is an extension of a concept from flow visualization in which the motion of tip vortices is tracked through the solution using massless particles convecting with the local flow. In this extension of that concept, the particles maintain knowledge of the expected and actual vortex strength. As a post-processing step, when the acoustic calculations are made, these particles are used to augment the loading-noise calculation and reproduce the highly impulsive character of blade-vortex interaction noise. In combination, these new techniques yield a significant improvement to the state of the art in rotorcraft blade-vortex interaction noise

  15. Stratified neutrophil-to-lymphocyte ratio accurately predicts mortality risk in hepatocellular carcinoma patients following curative liver resection

    PubMed Central

    Huang, Gui-Qian; Zhu, Gui-Qi; Liu, Yan-Long; Wang, Li-Ren; Braddock, Martin; Zheng, Ming-Hua; Zhou, Meng-Tao

    2016-01-01

    Objectives Neutrophil-to-lymphocyte ratio (NLR) has been shown to predict the prognosis of cancers in several studies. This study was designed to evaluate the impact of stratified NLR in patients who received curative liver resection (CLR) for hepatocellular carcinoma (HCC). Methods A total of 1659 patients who underwent CLR for suspected HCC between 2007 and 2014 were reviewed. The preoperative NLR was categorized into quartiles based on the size of the study population and the distribution of NLR. Hazard ratios (HRs) and 95% confidence intervals (CIs) for the association with overall survival (OS) were derived by Cox proportional hazard regression analyses. Univariate and multivariate Cox proportional hazard regression analyses were used to evaluate the association of all independent parameters with disease prognosis. Results Multivariable Cox proportional hazards models showed that the level of NLR (HR = 1.031, 95%CI: 1.002-1.060, P = 0.033), number of nodules (HR = 1.679, 95%CI: 1.285-2.194, P<0.001), portal vein thrombosis (HR = 4.329, 95%CI: 1.968-9.521, P<0.001), microvascular invasion (HR = 2.527, 95%CI: 1.726-3.700, P<0.001) and CTP score (HR = 1.675, 95%CI: 1.153-2.433, P = 0.007) were significant predictors of mortality. In the Kaplan-Meier analysis of overall survival (OS), each successive NLR quartile showed a progressively worse OS with apparent separation (log-rank P=0.008). The highest 5-year OS rate following CLR (60%) in HCC patients was observed in quartile 1. In contrast, the lowest 5-year OS rate (27%) was obtained in quartile 4. Conclusions Stratified NLR may significantly improve outcome prediction and strengthen the predictive power for patient responses to therapeutic intervention. PMID:26716411
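
    For readers who want the shape of the analysis, a minimal sketch with pandas and lifelines follows; the file and column names are assumptions, not the study's actual data dictionary:

      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.read_csv("hcc_cohort.csv")                         # hypothetical cohort file
      df["nlr_quartile"] = pd.qcut(df["nlr"], 4, labels=[1, 2, 3, 4])  # NLR quartiles

      # Multivariable Cox model over the predictors named in the abstract.
      cph = CoxPHFitter()
      cph.fit(df[["nlr", "nodules", "pvt", "mvi", "ctp", "time", "event"]],
              duration_col="time", event_col="event")
      cph.print_summary()                                        # HRs with 95% CIs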

  16. Prostate cancer nodal oligometastasis accurately assessed using prostate-specific membrane antigen positron emission tomography-computed tomography and confirmed histologically following robotic-assisted lymph node dissection

    PubMed Central

    O’Kane, Dermot B.; Lawrentschuk, Nathan; Bolton, Damien M.

    2016-01-01

    We herein present the case of a 76-year-old man in whom prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa) pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed by histological examination of the involved pelvic LNs following pelvic LN dissection. PMID:27141207

  17. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    SciTech Connect

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-04-15

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  18. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    PubMed Central

    Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  19. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events.

    PubMed

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  20. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events

    PubMed Central

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B.; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805
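
    To make the "combinatorial binding code" idea concrete, here is a toy Python sketch: each motif's two specificity-determining residues vote for a nucleotide, and a candidate site is the best-scoring alignment of the motif string along the transcript. The preference table is hypothetical (a stand-in for the Barkan et al. code), and aPPRove itself scores sites with an HMM rather than this additive count:

      # Toy PPR-code scoring; PREFERENCE maps two motif residues to a preferred base.
      PREFERENCE = {("T", "N"): "A", ("T", "D"): "G",
                    ("N", "D"): "U", ("N", "S"): "C"}   # illustrative only

      def score_site(motif_residues, rna):
          """motif_residues: one residue pair per PPR motif; rna: candidate site."""
          return sum(1 for pair, base in zip(motif_residues, rna)
                     if PREFERENCE.get(pair) == base)

      def best_site(motif_residues, transcript):
          """Slide the motif string along the transcript; return (score, offset).
          Assumes the transcript is at least as long as the motif string."""
          n = len(motif_residues)
          return max((score_site(motif_residues, transcript[i:i + n]), i)
                     for i in range(len(transcript) - n + 1))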

  1. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    NASA Astrophysics Data System (ADS)

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.
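
    For reference, the two-layer ONIOM extrapolation that underlies this kind of hybrid treatment combines three calculations; this is the generic ONIOM energy expression, not a formula quoted from the paper:

      E_{\mathrm{ONIOM}} = E_{\mathrm{high}}(\text{model}) + E_{\mathrm{low}}(\text{real}) - E_{\mathrm{low}}(\text{model})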

  2. Predicting Accurate Electronic Excitation Transfer Rates via Marcus Theory with Boys or Edmiston-Ruedenberg Localized Diabatization

    SciTech Connect

    Subotnik, Joseph E.; Vura-Weis, Josh; Sodt, Alex J.; Ratner, Mark A.

    2010-05-06

    We model the triplet-triplet energy-transfer experiments from the Closs group [Closs, G. L.; et al. J. Am. Chem. Soc. 1988, 110, 2652.] using a combination of Marcus theory and either Boys or Edmiston-Ruedenberg localized diabatization, and we show that relative and absolute rates of electronic excitation transfer may be computed successfully. For the case where both the donor and acceptor occupy equatorial positions on a rigid cyclohexane bridge, we find βcalc = 2.8 per C-C bond, compared with the experimental value βexp = 2.6. This work highlights the power of using localized diabatization methods as a tool for modeling nonequilibrium processes.
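
    For context, the nonadiabatic Marcus rate and the exponential distance decay that defines β take the standard forms below (textbook expressions, not quoted from the paper); the diabatic coupling H_DA is the quantity that Boys or Edmiston-Ruedenberg localized diabatization supplies:

      k_{ET} = \frac{2\pi}{\hbar}\,\lvert H_{DA}\rvert^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right],\qquad \lvert H_{DA}\rvert^{2}\propto e^{-\beta r}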

  3. Prediction of helicopter rotor discrete frequency noise: A computer program incorporating realistic blade motions and advanced acoustic formulation

    NASA Technical Reports Server (NTRS)

    Brentner, K. S.

    1986-01-01

    A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.

  4. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    SciTech Connect

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
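
    A hedged sketch of the sequence-derived part of such a feature vector (amino acid composition of the N-terminal 30 residues plus a G+C-content value) feeding a generic classifier is shown below; the SVM and the two toy sequences are stand-ins, since the abstract does not name the authors' exact learner or data layout:

      from collections import Counter
      from sklearn.svm import SVC

      AA = "ACDEFGHIKLMNPQRSTVWY"

      def features(protein_seq, gene_gc):
          """AA composition of the N-terminal 30 residues, plus the gene's G+C content."""
          nterm = protein_seq[:30]
          comp = Counter(nterm)
          return [comp[a] / len(nterm) for a in AA] + [gene_gc]

      # Toy training pairs: (protein sequence, gene G+C content, label 1 = effector).
      train = [("MSKITLHELVRAGQNTIAAQLGISPETVSR", 0.52, 1),
               ("MNKWLVAGLGLALATSAQAADTTTAAPAAD", 0.61, 0)]
      X = [features(s, gc) for s, gc, _ in train]
      y = [label for _, _, label in train]
      clf = SVC(kernel="linear").fit(X, y)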

  5. Computational prediction of disease microRNAs in domestic animals

    PubMed Central

    2014-01-01

    Background The most important means of identifying diseases before symptoms appear is through the discovery of disease-associated biomarkers. Recently, microRNAs (miRNAs) have become highly useful biomarkers of infectious, genetic and metabolic diseases in human but they have not been well studied in domestic animals. It is probable that many of the animal homologs of human disease-associated miRNAs may be involved in domestic animal diseases. Here we describe a computational biology study in which human disease miRNAs were utilized to predict orthologous miRNAs in cow, chicken, pig, horse, and dog. Results We identified 287 human disease-associated miRNAs which had at least one 100% identical animal homolog. The 287 miRNAs were associated with 359 human diseases referenced in 2,863 Pubmed articles. Multiple sequence analysis indicated that over 60% of known horse mature miRNAs found perfect matches in human disease-associated miRNAs, followed by dog (50%). As expected, chicken had the least number of perfect matches (5%). Phylogenetic analysis of miRNA precursors indicated that 85% of human disease pre-miRNAs were highly conserved in animals, showing less than 5% nucleotide substitution rates over evolutionary time. As an example we demonstrated conservation of human hsa-miR-143-3p which is associated with type 2 diabetes and targets AKT1 gene which is highly conserved in pig, horse and dog. Functional analysis of AKT1 gene using Gene Ontology (GO) showed that it is involved in glucose homeostasis, positive regulation of glucose import, positive regulation of glycogen biosynthetic process, glucose transport and response to food. Conclusions This data provides the animal and veterinary research community with a resource to assist in generating hypothesis-driven research for discovering animal disease-related miRNA from their datasets and expedite development of prophylactic and disease-treatment strategies and also influence research efforts to identify novel

  6. Computer Aided Detection System for Prediction of the Malaise during Hemodialysis

    PubMed Central

    Fanizzi, Annarita; Corciulo, Roberto; Garuccio, Elena; Gesualdo, Loreto; Loizzo, Giuliana; Procaccini, Deni Aldo; Vernò, Lucia; Bellotti, Roberto

    2016-01-01

    Monitoring of dialysis sessions is crucial, as different stress factors can yield suffering or critical situations. Specialized personnel are usually required for the administration of this medical treatment; nevertheless, subjects whose clinical status can be considered stable require different monitoring strategies than subjects in critical clinical condition. In such cases domiciliary treatment or monitoring can substantially improve the quality of life of patients undergoing dialysis. In this work, we present a Computer Aided Detection (CAD) system for the telemonitoring of patients' clinical parameters. The CAD was mainly designed to predict the onset of critical events; it consists of two Random Forest (RF) classifiers: the first (RF1) predicts the onset of any malaise one hour after the treatment starts, and the second (RF2) makes the same prediction two hours later. The developed system shows accurate classification performance in terms of both sensitivity and specificity. The specificity in the identification of nonsymptomatic sessions and the sensitivity in the identification of symptomatic sessions for RF2 are 86.60% and 71.40%, respectively, suggesting the CAD is an effective tool to support expert nephrologists in telemonitoring patients. PMID:27042200
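
    The two-classifier design maps naturally onto two independently trained models, one per prediction horizon. A minimal sketch with scikit-learn follows; the telemetry features are synthetic placeholders, since the abstract does not list the monitored parameters:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      X1 = rng.normal(size=(500, 8))      # toy features available at the first horizon
      X2 = rng.normal(size=(500, 16))     # toy features available at the later horizon
      y = rng.integers(0, 2, 500)         # 1 = malaise occurred during the session

      rf1 = RandomForestClassifier(n_estimators=200).fit(X1, y)   # RF1: first horizon
      rf2 = RandomForestClassifier(n_estimators=200).fit(X2, y)   # RF2: second horizon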

  7. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    ERIC Educational Resources Information Center

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  8. Very accurate variational non-relativistic non-Born-Oppenheimer atomic & molecular spectra predictions employing explicitly correlated Gaussian basis functions

    NASA Astrophysics Data System (ADS)

    Sharkey, Keeper

    2015-03-01

    Due to the fast-increasing capabilities of modern computers, it is now feasible to calculate spectra of small atoms and molecules with an accuracy that matches the accuracy of high-resolution measurements. The algorithms for the calculations are derived directly from the first principles of quantum mechanics. The Hamiltonian operator used in the approach is called the internal Hamiltonian and is obtained by rigorously separating out the center-of-mass motion from the laboratory-frame Hamiltonian. Algorithms for determining the isotopic energy shifts of L=0 and M=0 states of atoms were implemented and tested in calculations of the ground 4S state of the nitrogen atom. An algorithm for bound states of diatomic molecules with total angular momentum quantum number equal to one (N=1) was derived, implemented, and tested in calculations of the N=1, v=0, ..., 22 states of the HD+ ion and of the ortho-para spin isomerization of the hydrogen molecule in all of its bound vibrational states. This has led to a new study of muonic molecules (dpμ, tpμ and tdμ). The algorithms for calculating rovibrational states of small molecules are currently being extended to H3+ using sin and cos ECGs. National Science Foundation.

  9. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 × 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ^2 with 2n+1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
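
    In standard notation, the spatial (Lowes-Mauersberger) power spectrum evaluated at radius r, and McLeod's Rule as stated above, read as follows (the spectrum definition is the standard one; the proportionality is the rule being calibrated):

      R_n(r) = (n+1)\left(\frac{a}{r}\right)^{2n+4}\sum_{m=0}^{n}\left[(g_n^m)^2 + (h_n^m)^2\right],\qquad \langle R_{nc}(c)\rangle \propto \frac{1}{2n+1}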

  10. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages.

    PubMed

    Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J

    2015-08-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from a microdose (53 μg) of [14C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs. PMID:25869840

  11. Verification, validation, and predictive capability in computational engineering and physics.

    SciTech Connect

    Oberkampf, William Louis; Hirsch, Charles; Trucano, Timothy Guy

    2003-02-01

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

  12. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    NASA Astrophysics Data System (ADS)

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows over a helicopter rotor in forward flight. In order to enforce periodic rotation and pitching of the rotor and relative motion between rotor blades, the moving overset grid technique is extended, and a special judgement standard is introduced near the odd surface of the blade grid during the search for donor cells using the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow-information exchange between chimera grids. Time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a non-variable-pitch rotor and a periodically variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and quickly reach periodic solutions.
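
    The reconstruction step the abstract refers to can be made concrete. Below is a minimal Python sketch of the fifth-order WENO-Z weights and interface value at x_{i+1/2}, using the standard Jiang-Shu smoothness indicators with the Borges et al. tau_5 global indicator; it shows only the scalar reconstruction, not the overset-grid machinery:

      import numpy as np

      def weno_z(f, eps=1e-40, q=1):
          """Fifth-order WENO-Z value at x_{i+1/2}; f = [f_{i-2}, ..., f_{i+2}]."""
          fm2, fm1, f0, fp1, fp2 = f
          # Candidate third-order reconstructions on the three substencils.
          p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
          p1 = (-fm1 + 5*f0 + 2*fp1) / 6
          p2 = (2*f0 + 5*fp1 - fp2) / 6
          # Jiang-Shu smoothness indicators.
          b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
          b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
          b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
          tau5 = abs(b0 - b2)                   # WENO-Z global smoothness indicator
          d = np.array([0.1, 0.6, 0.3])         # ideal (linear) weights
          alpha = d * (1 + (tau5 / (np.array([b0, b1, b2]) + eps))**q)
          w = alpha / alpha.sum()
          return w @ np.array([p0, p1, p2])

    In smooth regions tau5 drives the nonlinear weights toward the ideal ones, recovering the full fifth-order upwind reconstruction with less dissipation than classical WENO; near discontinuities the weights of oscillatory substencils collapse.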

  13. Computer prediction of three-dimensional potential flow fields in which aircraft propellers operate: Computer program description and users manual

    NASA Technical Reports Server (NTRS)

    Jumper, S. J.

    1979-01-01

    A method was developed for predicting the potential flow velocity field at the plane of a propeller operating under the influence of a wing-fuselage-cowl or nacelle combination. A computer program was written which predicts the three dimensional potential flow field. The contents of the program, its input data, and its output results are described.

  14. Demonstrating the improvement of predictive maturity of a computational model

    SciTech Connect

    Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis for defining a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics is implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  15. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor was analyzed to build a logistic regression model that predicted success and failure of students'…

  16. Prediction of State Mandated Assessment Mathematics Scores from Computer Based Mathematics and Reading Preview Assessments

    ERIC Educational Resources Information Center

    Costa-Guerra, Boris

    2012-01-01

    The study sought to understand whether MAPs computer-based assessments of math and language skills, using MAPs reading scores, can predict student scores on the NMSBA. A key question was whether the prediction can be improved by including student language skill scores. The study explored the effectiveness of computer-based preview assessments…

  17. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory


    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  18. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used, or on the prediction of their context-dependent behaviour when a part's functioning depends on its specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694

  19. Computational Embryology and Predictive Toxicology of Cleft Palate

    EPA Science Inventory

    The capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge brings hazard identification across the vast landscape of untested environmental chemicals a step closer. In this context, we chose cleft palate as a model ...

  20. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and of noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from least to most computationally expensive, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively
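
    In one common quasi-linear form, the LEE are obtained by writing q = q̄ + q′ and discarding products of perturbations; this is a generic statement of the linearization, not the specific discretized system used in the code described above:

      \frac{\partial \mathbf{q}'}{\partial t} + A_j(\bar{\mathbf{q}})\,\frac{\partial \mathbf{q}'}{\partial x_j} + C(\bar{\mathbf{q}})\,\mathbf{q}' = \mathbf{s},\qquad A_j = \left.\frac{\partial \mathbf{F}_j}{\partial \mathbf{q}}\right|_{\bar{\mathbf{q}}}

    where the A_j are the flux Jacobians frozen at the mean flow, C collects mean-flow-gradient terms, and s is an optional source.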