Sample records for accurate prediction results

  1. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
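The comparison at the heart of studies 1-3 (rated transition likelihoods versus actual transition rates) can be illustrated in a few lines. The sketch below is hypothetical Python, not the authors' analysis code: the emotion sequence and the "rated" matrix are invented, and accuracy is scored as a simple correlation between the two matrices.

```python
import numpy as np

def transition_rates(sequence, n_states):
    """Estimate empirical transition probabilities from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence, sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1  # avoid dividing by zero; unobserved rows stay all-zero
    return counts / rows

def model_accuracy(rated, actual):
    """Correlate rated transition likelihoods with empirical rates."""
    return np.corrcoef(rated.ravel(), actual.ravel())[0, 1]

# Toy experience-sampling sequence over 3 emotion states (hypothetical data).
seq = [0, 0, 1, 2, 2, 1, 0, 1, 2, 0, 0, 1]
actual = transition_rates(seq, 3)
rated = actual + 0.01  # a perceiver whose mental model tracks reality closely
print(round(model_accuracy(rated, actual), 2))
```

A perceiver's accuracy score in this framing is high exactly when their rated matrix preserves the relative ordering of the empirical transition rates.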

  2. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  3. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

    Background: Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives: To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and whether age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods: Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results: Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found that FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos; a corresponding model correctly predicted 64% of sputum Neu; and a model to predict both sputum Eos and Neu accurately assigned only 41% of samples. Conclusion: Despite statistically significant associations, FeNO, IgE, blood Eos and Neu, FEV1% predicted, and age are poor surrogates, separately and combined, for accurately predicting sputum eosinophils and neutrophils. PMID:23706399

  4. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on the distribution and productivity of forest trees, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to predict tree budbreak and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of budbreak and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling.
Two-phase phenological models predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to delayed, or even failed, flowering or leaf-out.

  5. Robust and accurate decoding of motoneuron behavior and prediction of the resulting force output.

    PubMed

    Thompson, Christopher K; Negro, Francesco; Johnson, Michael D; Holmes, Matthew R; McPherson, Laura Miller; Powers, Randall K; Farina, Dario; Heckman, Charles J

    2018-05-03

    The spinal alpha motoneuron is the only cell in the human CNS whose discharge can be routinely recorded in humans. We have reengineered motor unit collection and decomposition approaches, originally developed in humans, to measure the neural drive to muscle and estimate muscle force generation in the decerebrate cat model. Experimental, computational, and predictive approaches are used to demonstrate the validity of this approach across a wide range of modes to activate the motor pool. The utility of this approach is shown through the ability to track individual motor units across trials, allowing for better predictions of muscle force than the electromyography signal, and providing insights into the stereotypical discharge characteristics in response to synaptic activation of the motor pool. This approach now allows for a direct link between the intracellular data of single motoneurons, the discharge properties of motoneuron populations, and muscle force generation in the same preparation. The discharge of a spinal alpha motoneuron and the resulting contraction of its muscle fibers represents the functional quantum of the motor system. Recent advances in the recording and decomposition of the electromyographic signal allow for the identification of several tens of concurrently active motor units. These detailed population data provide the potential to achieve deep insights into the synaptic organization of motor commands. Yet most of our understanding of the synaptic input to motoneurons is derived from intracellular recordings in animal preparations. Thus, it is necessary to extend the new electrode and decomposition methods to recording of motor unit populations in these same preparations. To achieve this goal, we use high-density electrode arrays and decomposition techniques, analogous to those developed for humans, to record and decompose the activity of tens of concurrently active motor units in a hindlimb muscle in the decerebrate cat.
Our results showed

  6. SCPRED: Accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences

    PubMed Central

    Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke

    2008-01-01

    Background: Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds, and thus determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences belongs to the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. Results: SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical properties-based features, along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machines, logistic regression, and ensembles of classifiers. Conclusion: SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. 
The high predictive accuracy achieved by SCPRED is attributed to the design of its features.

  7. PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations

    PubMed Central

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement are hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance; at the same time, the consensus classifier returned results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
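The consensus idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not PredictSNP's actual algorithm (whose combination scheme is not detailed in this abstract): it takes the labels emitted by several tools for one variant and returns a simple majority vote, breaking ties conservatively toward "deleterious".

```python
from collections import Counter

def consensus_predict(predictions):
    """Majority vote over individual tool predictions for one mutation.

    predictions: list of labels such as 'deleterious' / 'neutral'.
    Ties are broken conservatively in favour of 'deleterious'
    (an illustrative policy, not necessarily PredictSNP's).
    """
    top = Counter(predictions).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "deleterious"
    return top[0][0]

# Hypothetical outputs of six tools for a single variant.
votes = ["deleterious", "neutral", "deleterious",
         "deleterious", "neutral", "deleterious"]
print(consensus_predict(votes))  # deleterious
```

A weighted vote (weighting each tool by its benchmark accuracy) would be a natural refinement of this sketch.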

  8. Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior

    PubMed Central

    Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini

    2009-01-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants’ ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale. PMID:17516803

  9. Accurate identification of fear facial expressions predicts prosocial behavior.

    PubMed

    Marsh, Abigail A; Kozak, Megan N; Ambady, Nalini

    2007-05-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants' ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale.

  10. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD): improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R² values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.
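The R² and RMS error metrics quoted in this record are standard and easy to reproduce for any set of predicted versus experimental relative binding free energies. The values below are invented for illustration; R² here is the squared Pearson correlation, as commonly reported for FEP benchmarks.

```python
import math

def rmse(pred, exp):
    """Root mean squared error between predictions and experiment."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, exp)) / len(pred))

def r_squared(pred, exp):
    """Squared Pearson correlation coefficient."""
    n = len(pred)
    mp, me = sum(pred) / n, sum(exp) / n
    cov = sum((p - mp) * (e - me) for p, e in zip(pred, exp))
    vp = sum((p - mp) ** 2 for p in pred)
    ve = sum((e - me) ** 2 for e in exp)
    return cov * cov / (vp * ve)

# Hypothetical relative binding free energies (kcal/mol).
exp  = [-1.2, 0.3, -0.8, 1.5, -2.0]
pred = [-0.9, 0.8, -1.5, 1.1, -1.4]
print(round(r_squared(pred, exp), 2), round(rmse(pred, exp), 2))  # 0.82 0.52
```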

  11. SCPRED: accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences.

    PubMed

    Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke

    2008-05-01

    Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds, and thus determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences belongs to the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical properties-based features, along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machines, logistic regression, and ensembles of classifiers. SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features.

  12. Accurate prediction of energy expenditure using a shoe-based activity monitor.

    PubMed

    Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward

    2011-07-01

    The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index = 19-39 kg·m⁻²) performed various low- to moderate-intensity activities and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from accelerometer and pressure sensors (BACC-PS) achieved the lowest error (e.g., root mean squared error (RMSE) = 0.69 METs) compared with the accelerometer-only-based branched model BACC (RMSE = 0.77 METs) and the nonbranched model (RMSE = 0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
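A "branched" model of the kind described above first classifies the posture or activity from the sensor signals and then applies an activity-specific regression for METs. The sketch below is purely illustrative: the thresholds, feature names, and linear coefficients are invented, not the study's fitted values.

```python
def classify_activity(pressure_var, accel_var):
    """Crude posture/activity classifier (hypothetical thresholds)."""
    if accel_var < 0.05:          # little movement: a static posture
        return "sit" if pressure_var < 0.2 else "stand"
    return "walk"                 # otherwise treat as ambulation

# Activity-specific linear models: METs = a * accel_feature + b.
# Coefficients are illustrative placeholders.
MODELS = {"sit": (0.5, 1.0), "stand": (0.8, 1.3), "walk": (3.0, 2.0)}

def predict_mets(pressure_var, accel_var, accel_feature):
    """Branch on the classified activity, then apply that branch's model."""
    activity = classify_activity(pressure_var, accel_var)
    a, b = MODELS[activity]
    return activity, a * accel_feature + b

print(predict_mets(0.1, 0.01, 0.0))   # ('sit', 1.0)
print(predict_mets(0.5, 0.30, 0.6))   # ('walk', 3.8)
```

The branching is what lets pressure data improve the static postures (sit vs. stand) without affecting the ambulation branch.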

  13. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    PubMed

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

    ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately compared with physicians interested in stroke was investigated. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to scores' predictions in the same scenarios. An estimate was considered accurate if it was within 95% confidence intervals of actual outcome. In all, 244 participants from 32 different countries responded assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate compared with 240 (37.3%) DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate compared with 518 (80.4%) DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict outcome of acute ischaemic stroke patients with higher accuracy compared to physicians interested in stroke. © 2016 EAN.

  14. Rapid and accurate prediction and scoring of water molecules in protein binding sites.

    PubMed

    Ross, Gregory A; Morris, Garrett M; Biggin, Philip C

    2012-01-01

    Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.

  15. Are EMS call volume predictions based on demand pattern analysis accurate?

    PubMed

    Brown, Lawrence H; Lerner, E Brooke; Larmon, Baxter; LeGassick, Todd; Taigman, Michael

    2007-01-01

    Most EMS systems determine the number of crews they will deploy in their communities and when those crews will be scheduled based on anticipated call volumes. Many systems use historical data to calculate their anticipated call volumes, a method of prediction known as demand pattern analysis. To evaluate the accuracy of call volume predictions calculated using demand pattern analysis. Seven EMS systems provided 73 consecutive weeks of hourly call volume data. The first 20 weeks of data were used to calculate three common demand pattern analysis constructs for call volume prediction: average peak demand (AP), smoothed average peak demand (SAP), and 90th percentile rank (90%R). The 21st week served as a buffer. Actual call volumes in the last 52 weeks were then compared to the predicted call volumes by using descriptive statistics. There were 61,152 hourly observations in the test period. All three constructs accurately predicted peaks and troughs in call volume but not exact call volume. Predictions were accurate (+/-1 call) 13% of the time using AP, 10% using SAP, and 19% using 90%R. Call volumes were overestimated 83% of the time using AP, 86% using SAP, and 74% using 90%R. When call volumes were overestimated, predictions exceeded actual call volume by a median (Interquartile range) of 4 (2-6) calls for AP, 4 (2-6) for SAP, and 3 (2-5) for 90%R. Call volumes were underestimated 4% of time using AP, 4% using SAP, and 7% using 90%R predictions. When call volumes were underestimated, call volumes exceeded predictions by a median (Interquartile range; maximum under estimation) of 1 (1-2; 18) call for AP, 1 (1-2; 18) for SAP, and 2 (1-3; 20) for 90%R. Results did not vary between systems. Generally, demand pattern analysis estimated or overestimated call volume, making it a reasonable predictor for ambulance staffing patterns. However, it did underestimate call volume between 4% and 7% of the time. Communities need to determine if these rates of over
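Two of the demand-pattern constructs named above (average peak demand and 90th percentile rank) can be sketched directly; the exact definitions used in the study, including the smoothing applied for SAP, may differ from these plausible readings, and the 20 weeks of hourly data below are invented.

```python
import math

def average_peak(history):
    """Average call volume for a given hour-of-week, rounded up to whole calls."""
    return math.ceil(sum(history) / len(history))

def percentile_rank(history, pct=90):
    """Call volume at or below which pct% of past observations fall."""
    ordered = sorted(history)
    idx = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[idx]

# 20 weeks of calls for one hour-of-week slot (hypothetical data).
history = [3, 5, 4, 6, 2, 5, 7, 4, 3, 5, 6, 4, 5, 3, 4, 6, 5, 4, 7, 2]
print(average_peak(history))          # 5
print(percentile_rank(history, 90))   # 6
```

Because the 90th percentile sits above most observed values, a 90%R staffing target overestimates demand most hours, which matches the overestimation rates reported in the study.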

  16. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance or stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, such as bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis, and vibration monitoring confirmed the finding. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed it. In both cases, after the motors were dismantled, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, illustrated with two case studies.

  17. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on caster set points such as casting speed and cooling rates. Changes in the caster set points are typically carried out based on temperature measurements at the end of the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages, and can therefore be used to make accurate decisions about the caster set points in real time. However, this requires thermal prediction models that are both fast and accurate. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used to train the predictor. The model predicts the stratified temperature profile instantaneously for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 percent of the predicted values are within an error range of ±5 K (±5 °C) when compared against the corresponding CFD results. Given its accuracy and computational efficiency, the model can be extended to thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and caster.
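The surrogate-modelling workflow described here (run a designed set of CFD simulations, then fit a fast regression on the results) can be sketched with ordinary least squares. Everything below is hypothetical: the inputs, the temperature-drop responses, and the linear form are invented stand-ins for the study's actual design and regression.

```python
import numpy as np

# Hypothetical CFD training samples: [initial steel temp (K), holding time (min)]
# -> temperature drop at a fixed ladle height (K). Illustrative numbers only.
X = np.array([[1850, 10], [1850, 30], [1880, 10], [1880, 30], [1860, 20]])
y = np.array([4.0, 11.8, 3.6, 11.1, 7.5])

# Linear surrogate: drop ≈ w0 + w1*T0 + w2*t, fitted by least squares.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_drop(t0, hold):
    """Instantaneous surrogate evaluation, replacing a full CFD run."""
    return w[0] + w[1] * t0 + w[2] * hold

print(round(predict_drop(1870, 20), 1))
```

Once fitted, evaluating the surrogate costs microseconds, which is what makes real-time caster set-point decisions feasible.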

  18. Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana

    2017-03-01

    Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.

  19. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade over the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in recent decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at a species' equatorward range limits, delaying or even preventing flowering or leaf-out. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because such information is very scarce. Here, we evaluated the ability of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization yields much more accurate predictions of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates under climate scenarios compared with models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to an urgent need for large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.
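
    Process-based phenology models of the kind evaluated here typically accumulate chilling until endodormancy break and then forcing (growing degree days) until budbreak. The toy sequential sketch below illustrates that two-phase logic; all parameter values are invented for illustration and are not the calibrated models from the study:

```python
def budbreak_day(daily_temps, chill_thresh=7.2, chill_req=60,
                 forcing_base=5.0, forcing_req=150.0):
    """Return the first day index at which the forcing requirement is met.

    Forcing is only accumulated after the chilling requirement (a stand-in
    for endodormancy break) is satisfied; returns None if never reached.
    """
    chill_days = 0
    forcing = 0.0
    for day, t in enumerate(daily_temps):
        if chill_days < chill_req:
            if t < chill_thresh:       # accumulate chilling toward endodormancy break
                chill_days += 1
        else:
            forcing += max(t - forcing_base, 0.0)  # growing degree days
            if forcing >= forcing_req:
                return day
    return None

# 100 cold days followed by a warm spring: budbreak on day 114
temps = [2.0] * 100 + [15.0] * 60
print(budbreak_day(temps))  # 114
```

    The study's central point maps onto this sketch directly: if only the budbreak day is observed, many different (chill_req, forcing_req) pairs fit equally well, yet they diverge sharply under warmer climate scenarios.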

  20. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  1. Accurate high-throughput structure mapping and prediction with transition metal ion FRET

    PubMed Central

    Yu, Xiaozhen; Wu, Xiongwu; Bermejo, Guillermo A.; Brooks, Bernard R.; Taraska, Justin W.

    2013-01-01

    Mapping the landscape of a protein’s conformational space is essential to understanding its functions and regulation. The limitations of many structural methods have made this process challenging for most proteins. Here, we report that transition metal ion FRET (tmFRET) can be used in a rapid, highly parallel screen, to determine distances from multiple locations within a protein at extremely low concentrations. The distances generated through this screen for the protein Maltose Binding Protein (MBP) match distances from the crystal structure to within a few angstroms. Furthermore, energy transfer accurately detects structural changes during ligand binding. Finally, fluorescence-derived distances can be used to guide molecular simulations to find low energy states. Our results open the door to rapid, accurate mapping and prediction of protein structures at low concentrations, in large complex systems, and in living cells. PMID:23273426

  2. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results

  3. Highly accurate prediction of emotions surrounding the attacks of September 11, 2001 over 1-, 2-, and 7-year prediction intervals.

    PubMed

    Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N

    2016-06-01

    In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Highly accurate prediction of emotions surrounding the attacks of September 11, 2001 over 1-, 2-, and 7-year prediction intervals

    PubMed Central

    Doré, B.P.; Meksin, R.; Mather, M.; Hirst, W.; Ochsner, K.N

    2016-01-01

    In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting 1) the overall intensity of their future negative emotion, and 2) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. PMID:27100309

  5. Heart rate during basketball game play and volleyball drills accurately predicts oxygen uptake and energy expenditure.

    PubMed

    Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J

    2015-09-01

    There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD), and high-intensity interval exercise (HI), as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3 yrs) and volleyball drills (12 female; 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
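
    The two reported equations are simple linear maps from relative heart rate to relative oxygen uptake, so they can be sketched directly. The function names below are illustrative, not from the paper:

```python
def pct_vo2peak_end(pct_hrmax: float) -> float:
    """END equation from the abstract: %VO2peak = %HRmax * 1.008 - 17.17"""
    return pct_hrmax * 1.008 - 17.17

def pct_vo2peak_mod(pct_hrmax: float) -> float:
    """MOD equation from the abstract: %VO2peak = %HRmax * 1.2 - 32"""
    return pct_hrmax * 1.2 - 32

# Example: an athlete averaging 80% of HRmax during play
print(round(pct_vo2peak_end(80.0), 2))  # 63.47
print(round(pct_vo2peak_mod(80.0), 2))  # 64.0
```

    Converting the resulting %VO2peak into absolute VO2 and energy expenditure then only requires each athlete's measured VO2peak.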

  6. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is an important parameter in this setting, as it must accommodate the pharmacokinetics of the medication or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments were performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shvab, I.; Sadus, Richard J., E-mail: rsadus@swin.edu.au

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  8. Accurate prediction of protein–protein interactions from sequence alignments using a Bayesian method

    PubMed Central

    Burger, Lukas; van Nimwegen, Erik

    2008-01-01

    Accurate and large-scale prediction of protein–protein interactions directly from amino-acid sequences is one of the great challenges in computational biology. Here we present a new Bayesian network method that predicts interaction partners using only multiple alignments of amino-acid sequences of interacting protein domains, without tunable parameters, and without the need for any training examples. We first apply the method to bacterial two-component systems and comprehensively reconstruct two-component signaling networks across all sequenced bacteria. Comparisons of our predictions with known interactions show that our method infers interaction partners genome-wide with high accuracy. To demonstrate the general applicability of our method we show that it also accurately predicts interaction partners in a recent dataset of polyketide synthases. Analysis of the predicted genome-wide two-component signaling networks shows that cognates (interacting kinase/regulator pairs, which lie adjacent on the genome) and orphans (which lie isolated) form two relatively independent components of the signaling network in each genome. In addition, while most genes are predicted to have only a small number of interaction partners, we find that 10% of orphans form a separate class of ‘hub' nodes that distribute and integrate signals to and from up to tens of different interaction partners. PMID:18277381

  9. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.

  10. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Visel, Axel; Blow, Matthew J.; Li, Zirong

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  11. Limb-Enhancer Genie: An accessible resource of accurate enhancer predictions in the developing limb

    DOE PAGES

    Monti, Remo; Barozzi, Iros; Osterwalder, Marco; ...

    2017-08-21

    Epigenomic mapping of enhancer-associated chromatin modifications facilitates the genome-wide discovery of tissue-specific enhancers in vivo. However, reliance on single chromatin marks leads to high rates of false-positive predictions. More sophisticated, integrative methods have been described, but commonly suffer from limited accessibility to the resulting predictions and reduced biological interpretability. Here we present the Limb-Enhancer Genie (LEG), a collection of highly accurate, genome-wide predictions of enhancers in the developing limb, available through a user-friendly online interface. We predict limb enhancers using a combination of >50 published limb-specific datasets and clusters of evolutionarily conserved transcription factor binding sites, taking advantage of the patterns observed at previously in vivo validated elements. By combining different statistical models, our approach outperforms current state-of-the-art methods and provides interpretable measures of feature importance. Our results indicate that including a previously unappreciated score that quantifies tissue-specific nuclease accessibility significantly improves prediction performance. We demonstrate the utility of our approach through in vivo validation of newly predicted elements. Moreover, we describe general features that can guide the choice of datasets to include when predicting tissue-specific enhancers genome-wide, while providing an accessible resource to the general biological community and facilitating the functional interpretation of genetic studies of limb malformations.

  12. Accurate predictions of iron redox state in silicate glasses: A multivariate approach using X-ray absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyar, M. Darby; McCanta, Molly; Breves, Elly

    2016-03-01

    Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
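
    As a hedged illustration of the lasso technique named above (a generic sketch, not the authors' spectral pipeline), a minimal cyclic coordinate-descent implementation on a synthetic feature matrix:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent on (1/2n)||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-feature curvature
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]   # partial residual excluding feature j
            rho = X[:, j] @ r_j / n
            # soft-thresholding update drives uninformative weights to zero
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # stand-in for spectral channels
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ w_true
w_hat = lasso_cd(X, y, lam=0.1)
# uninformative coefficients are shrunk to (near) zero; informative ones survive
```

    With real data, X would hold per-channel absorbances across the full spectral region and y the reference %Fe3+ values; the L1 penalty selects the informative channels automatically.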

  13. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.

  14. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  15. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
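
    The co-kriging framework builds on standard Gaussian process regression, which supplies exactly the posterior mean and uncertainty described above. The single-fidelity numpy sketch below shows that building block with an illustrative squared-exponential kernel; it is not the authors' multi-fidelity implementation:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6, length=1.0):
    """Gaussian process regression: posterior mean and variance at x_test."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf(x_test, x_test, length) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                       # stand-in for computed property values
mean, var = gp_predict(x, y, np.array([1.5]))
# the posterior mean interpolates the training data; var quantifies confidence
```

    Co-kriging extends this by jointly modeling the low- and high-fidelity outputs so that abundant cheap calculations correct the scarce expensive ones.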

  16. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.

  17. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Function often involves communication across domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains occurring in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to domain-domain interfaces. This method is specific to multidomain proteins that contain domains in more than one architectural context. Using the predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide accurate full-length protein structures with the correct orientation of domains. We believe that these results can be of considerable interest for rational protein and interaction design, apart from providing valuable information on the nature of interactions. © 2013 Wiley Periodicals, Inc.
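
    A naïve Bayes classifier of the general kind used above can be sketched in a few lines. The Gaussian variant below, with invented two-feature toy data, is purely illustrative of the technique, not the authors' feature set:

```python
import math

def fit_gnb(X, y):
    """Fit per-class feature means/variances and log-priors (Gaussian naive Bayes)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(y)), means, varis)
    return model

def predict_gnb(model, x):
    """Return the class with the highest posterior log-probability."""
    def loglik(c):
        prior, means, varis = model[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * s) - (v - m) ** 2 / (2 * s)
            for v, m, s in zip(x, means, varis))
    return max(model, key=loglik)

# Toy residue features: feature 0 separates the two classes
X = [[0.1, 1.0], [0.2, 0.9], [0.9, 1.1], [1.0, 1.0]]
y = ["non-interface", "non-interface", "interface", "interface"]
m = fit_gnb(X, y)
print(predict_gnb(m, [0.95, 1.0]))  # interface
```

    The "naïve" assumption is that features are conditionally independent given the class, which keeps training to simple per-class statistics.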

  18. Accurate prediction of secondary metabolite gene clusters in filamentous fungi.

    PubMed

    Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H

    2013-01-02

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.

  19. Simple prediction scores predict good and devastating outcomes after stroke more accurately than physicians.

    PubMed

    Reid, John Michael; Dai, Dingwei; Delmonte, Susanna; Counsell, Carl; Phillips, Stephen J; MacLeod, Mary Joan

    2017-05-01

    Physicians are often asked to prognosticate soon after a patient presents with stroke. This study aimed to compare two outcome prediction scores (the Five Simple Variables [FSV] score and the PLAN [Preadmission comorbidities, Level of consciousness, Age, and focal Neurologic deficit] score) with informal prediction by physicians. Demographic and clinical variables were prospectively collected from consecutive patients hospitalised with acute ischaemic or haemorrhagic stroke (2012-13). In-person or telephone follow-up at 6 months established vital and functional status (modified Rankin score [mRS]). The area under the receiver operating characteristic curve (AUC) was used to establish prediction score performance. Five hundred and seventy-five patients were included; 46% female, median age 76 years, 88% ischaemic stroke. Six months after stroke, 47% of patients had a good outcome (alive and independent, mRS 0-2) and 26% a devastating outcome (dead or severely dependent, mRS 5-6). The FSV and PLAN scores were superior to physician prediction (AUCs of 0.823-0.863 versus 0.773-0.805, P < 0.0001) for good and devastating outcomes. The FSV score was superior to the PLAN score for predicting good outcomes and vice versa for devastating outcomes (P < 0.001). Outcome prediction was more accurate for those with later presentations (>24 hours from onset). The FSV and PLAN scores are validated in this population for outcome prediction after both ischaemic and haemorrhagic stroke. The FSV score is the least complex of all developed scores and can assist outcome prediction by physicians. © The Author 2016. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For permissions, please email: journals.permissions@oup.com

  20. Accurate Prediction of Contact Numbers for Multi-Spanning Helical Membrane Proteins

    PubMed Central

    Li, Bian; Mendenhall, Jeffrey; Nguyen, Elizabeth Dong; Weiner, Brian E.; Fischer, Axel W.; Meiler, Jens

    2017-01-01

    Prediction of the three-dimensional (3D) structures of proteins by computational methods is acknowledged as an unsolved problem. Accurate prediction of important structural characteristics such as contact number is expected to accelerate the otherwise slow progress being made in the prediction of 3D structure of proteins. Here, we present a dropout neural network-based method, TMH-Expo, for predicting the contact number of transmembrane helix (TMH) residues from sequence. Neuronal dropout is a strategy where certain neurons of the network are excluded from back-propagation to prevent co-adaptation of hidden-layer neurons. By using neuronal dropout, overfitting was significantly reduced and performance was noticeably improved. For multi-spanning helical membrane proteins, TMH-Expo achieved a remarkable Pearson correlation coefficient of 0.69 between predicted and experimental values and a mean absolute error of only 1.68. In addition, among those membrane protein–membrane protein interface residues, 76.8% were correctly predicted. Mapping of predicted contact numbers onto structures indicates that contact numbers predicted by TMH-Expo reflect the exposure patterns of TMHs and reveal membrane protein–membrane protein interfaces, reinforcing the potential of predicted contact numbers to be used as restraints for 3D structure prediction and protein–protein docking. TMH-Expo can be accessed via a Web server at www.meilerlab.org. PMID:26804342
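The neuronal dropout strategy described above (excluding random neurons during training to prevent co-adaptation) can be illustrated with a minimal NumPy sketch of "inverted" dropout on a toy hidden layer; the shapes and drop rate are illustrative assumptions, not TMH-Expo's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, p_drop, train=True):
    """Inverted dropout: zero a fraction p_drop of activations during
    training and rescale the survivors, so test time needs no change."""
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

h = np.ones((4, 10))  # toy hidden-layer activations
h_train = dropout(h, p_drop=0.3)
h_test = dropout(h, p_drop=0.3, train=False)
print(h_train.mean(), h_test.mean())
```

The rescaling keeps the expected activation the same in training and testing, which is what lets dropout act purely as a regularizer.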

  1. CPO Prediction: Accuracy Assessment and Impact on UT1 Intensive Results

    NASA Technical Reports Server (NTRS)

    Malkin, Zinovy

    2010-01-01

The UT1 Intensive results depend heavily on the celestial pole offset (CPO) model used during data processing. Since accurate CPO values are available with a delay of two to four weeks, CPO predictions are necessarily applied in the UT1 Intensive data analysis, and errors in the predictions can influence the operational UT1 accuracy. In this paper we assess the real accuracy of CPO prediction using the actual IERS and PUL predictions made in 2007-2009. Results of operational processing were also analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results. It was found that the impact of CPO prediction errors is at a level of several microseconds, whereas the impact of inaccuracy in the polar motion prediction may be about one order of magnitude larger for ultra-rapid UT1 results. The situation can be improved if the IERS Rapid solution is updated more frequently.

  2. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...

    2013-03-07

In shotgun proteomics, database search algorithms rely on fragmentation models to predict the fragment ions that should be observed for a given peptide sequence. The most widely used strategy (the Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining charge retention during collision-induced dissociation (CID)/higher-energy collisional dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased identification rates by 26% (on average) over the Naive model when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
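Ordinal regression, the technique Basophile is built on, treats the charge states as ordered categories. A common formulation (a cumulative-logit sketch; the features, weights, and thresholds below are invented for illustration, not Basophile's fitted parameters) looks like this:

```python
import numpy as np

def ordinal_predict(x, w, thresholds):
    """Cumulative-logit ordinal regression: P(y <= k) = sigmoid(theta_k - w.x);
    class probabilities are differences of adjacent cumulative probabilities."""
    eta = x @ w
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds)[None, :] - eta[:, None])))
    cum = np.hstack([cum, np.ones((len(eta), 1))])  # P(y <= max class) = 1
    probs = np.diff(np.hstack([np.zeros((len(eta), 1)), cum]), axis=1)
    return probs  # one row per fragment, one column per charge state

# Hypothetical features: fragment length and number of basic residues.
X = np.array([[3.0, 0.0], [10.0, 1.0], [20.0, 2.0]])
w = np.array([0.1, 1.0])           # illustrative weights
thresholds = np.array([1.0, 3.0])  # ordered cutpoints between charges 1 < 2 < 3
probs = ordinal_predict(X, w, thresholds)
print(probs.round(3))
```

Because the cutpoints are ordered, larger fragments with more basic residues are smoothly pushed toward higher charge states, which is the behavior the abstract describes.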

  3. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    NASA Astrophysics Data System (ADS)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

This paper presents a solution that assesses the printability of defects at the wafer level and automates the process of defect dispositioning from images captured using a high-resolution inspection machine. It first eliminates false defects due to registration, focus errors, image capture errors and random noise introduced during inspection. For the remaining real defects, actual mask-like contours are generated using the Calibre® ILT solution [1][2], which is enhanced to predict the actual mask contours from high-resolution defect images. This enables accurate prediction of defect contours, which is not possible from inspection images alone because some information is already lost to optical effects. Calibre's simulation engine is used to generate images at the wafer level using scanner optical conditions and the mask-like contours as input. The tool then analyses the simulated images and predicts defect printability. It automatically calculates the maximum CD variation and decides which defects are severe enough to affect patterns on the wafer. In this paper, we assess the printability of defects for masks of advanced technology nodes. In particular, we compare the recovered mask contours with contours extracted from a SEM image of the mask, and compare simulation results with AIMS™ for a variety of defects and patterns. The results of the printability assessment and the accuracy of the comparison are presented. We also suggest how this method can be extended to predict the printability of defects identified on EUV photomasks.

  4. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  5. Competitive Abilities in Experimental Microcosms Are Accurately Predicted by a Demographic Index for R*

    PubMed Central

    Murrell, Ebony G.; Juliano, Steven A.

    2012-01-01

    Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
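The Rindex construction can be sketched with SciPy: fit the finite rate of increase λ′ as a saturating function of resource amount, then solve for the resource level where λ′ = 1 (zero population growth). The cohort data and the Monod-style functional form below are invented for illustration, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

# Hypothetical cohort data: finite rate of increase (lambda') measured at
# several initial detritus amounts, for one species.
detritus = np.array([5, 10, 20, 40, 80], dtype=float)
lam = np.array([0.6, 0.95, 1.3, 1.55, 1.7])

def monod(R, lam_max, K):
    """Saturating growth response to resource amount R."""
    return lam_max * R / (K + R)

(lam_max, K), _ = curve_fit(monod, detritus, lam, p0=(2.0, 20.0))

# R_index: the resource amount at which lambda' = 1 (zero growth),
# found by root-finding on the fitted curve.
r_index = brentq(lambda R: monod(R, lam_max, K) - 1.0, 1e-6, 1e4)
print(f"lambda_max={lam_max:.2f}, K={K:.1f}, R_index={r_index:.1f}")
```

A species with a lower Rindex can maintain positive growth at lower resource levels, which is why the index predicts the competitive hierarchy.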

  6. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    PubMed Central

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continued search for better alternatives is encouraged to enhance marker-based predictions for individual quantitative traits in molecular plant breeding. PMID:28729875

  7. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  8. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  9. XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.

    PubMed

    Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua

    2013-12-23

Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by re-evaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.

  10. Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.

    PubMed

    Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang

    2018-02-01

The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict an odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model, together with the top discriminative features, has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
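The random forest plus feature importance analysis described above can be sketched with scikit-learn. The "chemoinformatic features" and perceptual target below are synthetic, with the response deliberately driven by a small subset of features to mimic the paper's finding that a small feature set suffices:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 200, 50  # molecules x chemoinformatic features (synthetic)
X = rng.normal(size=(n, p))
# Hypothetical perceptual rating driven by a few features plus noise.
y = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]
print("top-ranked features:", sorted(top.tolist()))
```

Inspecting `feature_importances_` is how one identifies the small discriminative feature set; in practice the importance ranking should be validated on held-out data.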

  11. Accurate predictions of iron redox state in silicate glasses: A multivariate approach using X-ray absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyar, M. Darby; McCanta, Molly; Breves, Elly

    2016-03-01

Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
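Lasso regression suits this problem because it selects a sparse subset of informative spectral channels even when channels outnumber samples. A minimal sketch with synthetic spectra (the channel count, signal channels, and regularization strength are all invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_glasses, n_channels = 120, 300  # more channels than samples, as in spectra
X = rng.normal(size=(n_glasses, n_channels))

# Hypothetical: %Fe3+ depends on a handful of spectral channels.
true_w = np.zeros(n_channels)
true_w[[10, 50, 200]] = [5.0, -3.0, 2.0]
y = X @ true_w + rng.normal(0, 0.5, n_glasses)

lasso = Lasso(alpha=0.1, max_iter=10000).fit(X, y)
n_selected = int(np.sum(lasso.coef_ != 0))
print(f"channels selected: {n_selected} of {n_channels}")
```

The L1 penalty zeroes out uninformative channels, which is what makes full-spectrum calibration tractable and automatable.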

  12. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    PubMed

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

Information and communication technologies (ICTs) have introduced new integrated operations and methods across all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to patients. Predictive models in health care likewise draw on new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model, this paper uses traumatic brain injury (TBI) datasets. TBI is a serious condition worldwide and needs attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI dataset's features were developed and approved by neurologists. The experimental results show that the proposed model achieved significant results for accuracy, sensitivity, and specificity.

  13. WegoLoc: accurate prediction of protein subcellular localization using weighted Gene Ontology terms.

    PubMed

    Chi, Sang-Mun; Nam, Dougu

    2012-04-01

We present an accurate and fast web server, WegoLoc, for predicting the subcellular localization of proteins based on sequence similarity and weighted Gene Ontology (GO) information. A term weighting method from text categorization is applied to GO terms for a support vector machine classifier. As a result, WegoLoc surpasses the state-of-the-art methods on previously used test datasets. WegoLoc supports three eukaryotic kingdoms (animals, fungi and plants), provides human-specific analysis, and covers several sets of cellular locations. In addition, WegoLoc provides (i) multiple possible localizations of the input protein(s) with their corresponding probability scores, (ii) weights of GO terms representing the contribution of each GO term to the prediction, and (iii) a BLAST E-value for the best hit with GO terms. If the similarity score does not meet a given threshold, an amino acid composition-based prediction is applied as a backup method. WegoLoc and its user's guide are freely available at http://www.btool.org/WegoLoc (contact: smchiks@ks.ac.kr; dougnam@unist.ac.kr). Supplementary data are available at http://www.btool.org/WegoLoc.
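The core idea, weighting GO terms as if they were words in a text categorization task and feeding them to an SVM, can be sketched with scikit-learn. The toy GO annotations, labels, and TF-IDF weighting below are assumptions for illustration; WegoLoc's actual term weighting scheme may differ:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical toy corpus: each protein is a "document" of GO term IDs.
proteins = [
    "GO:0005634 GO:0003677 GO:0006355",  # nucleus-like annotations
    "GO:0005634 GO:0003700",
    "GO:0005739 GO:0006119",             # mitochondrion-like annotations
    "GO:0005739 GO:0016491 GO:0006119",
]
labels = ["nucleus", "nucleus", "mitochondrion", "mitochondrion"]

# Term weighting (TF-IDF here) downweights ubiquitous GO terms and
# upweights location-discriminative ones before SVM classification.
vec = TfidfVectorizer(token_pattern=r"GO:\d+", lowercase=False)
weighted = vec.fit_transform(proteins)
clf = LinearSVC().fit(weighted, labels)
print(clf.predict(weighted))
```

A real system would train on thousands of annotated proteins and evaluate on held-out data; this merely shows the weighting-then-classify pipeline shape.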

  14. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. Within machine learning, classification and prediction are major fields of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  15. Sex-specific lean body mass predictive equations are accurate in the obese paediatric population

    PubMed Central

    Jackson, Lanier B.; Henshaw, Melissa H.; Carter, Janet; Chowdhury, Shahryar M.

    2015-01-01

    Background The clinical assessment of lean body mass (LBM) is challenging in obese children. A sex-specific predictive equation for LBM derived from anthropometric data was recently validated in children. Aim The purpose of this study was to independently validate these predictive equations in the obese paediatric population. Subjects and methods Obese subjects aged 4–21 were analysed retrospectively. Predicted LBM (LBMp) was calculated using equations previously developed in children. Measured LBM (LBMm) was derived from dual-energy x-ray absorptiometry. Agreement was expressed as [(LBMm-LBMp)/LBMm] with 95% limits of agreement. Results Of 310 enrolled patients, 195 (63%) were females. The mean age was 11.8 ± 3.4 years and mean BMI Z-score was 2.3 ± 0.4. The average difference between LBMm and LBMp was −0.6% (−17.0%, 15.8%). Pearson’s correlation revealed a strong linear relationship between LBMm and LBMp (r=0.97, p<0.01). Conclusion This study validates the use of these clinically-derived sex-specific LBM predictive equations in the obese paediatric population. Future studies should use these equations to improve the ability to accurately classify LBM in obese children. PMID:26287383
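The agreement statistic used above, mean percentage difference with 95% limits of agreement (a Bland-Altman-style analysis), is easy to compute. The measured and predicted LBM values below are synthetic stand-ins for the study's DXA and equation-based values:

```python
import numpy as np

rng = np.random.default_rng(3)
lbm_measured = rng.uniform(20, 60, 310)  # DXA-derived LBM in kg (synthetic)
lbm_predicted = lbm_measured * (1 + rng.normal(0, 0.08, 310))  # equation-based

# Percentage agreement as in the paper: (measured - predicted) / measured,
# summarized by the mean bias and 95% limits of agreement.
diff_pct = 100 * (lbm_measured - lbm_predicted) / lbm_measured
bias = diff_pct.mean()
half_width = 1.96 * diff_pct.std(ddof=1)
loa = (bias - half_width, bias + half_width)
print(f"bias={bias:.1f}%  95% LoA=({loa[0]:.1f}%, {loa[1]:.1f}%)")
```

A bias near zero with LoA like the paper's (-17.0%, 15.8%) indicates no systematic over- or under-prediction, though individual predictions can still deviate noticeably.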

  16. Accurate prediction of vaccine stability under real storage conditions and during temperature excursions.

    PubMed

    Clénet, Didier

    2018-04-01

Due to their thermosensitivity, most vaccines must be kept refrigerated from production to use. To successfully carry out global immunization programs, ensuring the stability of vaccines is crucial. In this context, two issues are critical, namely: (i) predicting vaccine stability and (ii) preventing product damage due to excessive temperature excursions outside of the recommended storage conditions (cold chain breaks). We applied a combination of advanced kinetic and statistical analyses to vaccine forced-degradation data to accurately describe the loss of antigenicity for a multivalent freeze-dried inactivated virus vaccine containing three variants. The screening of a large number of kinetic models combined with a statistical model selection approach resulted in the identification of two-step kinetic models. Predictions based on kinetic analysis and experimental stability data were in agreement, with differences of approximately five percentage points from real values under long-term storage conditions, after temperature excursions, and during experimental shipments of freeze-dried products. The results showed that modeling a few months of forced degradation can be used to predict, with high accuracy, the various time and temperature profiles endured by vaccines, i.e. long-term stability, short excursions outside the labeled storage conditions, or shipments at ambient temperature. Pharmaceutical applications of the presented kinetics-based approach are discussed. Copyright © 2018 The Author. Published by Elsevier B.V.
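The principle behind extrapolating forced degradation to real storage profiles can be illustrated with a much simpler model than the paper's two-step kinetics: a single first-order loss with an Arrhenius temperature dependence. Every parameter below (prefactor, activation energy) is invented purely to show the mechanics:

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K)

def antigenicity(t_days, temp_kelvin, prefactor=1e12, ea=80e3):
    """Remaining antigenicity (%) after t_days at a constant temperature,
    under hypothetical first-order loss with an Arrhenius rate constant."""
    k = prefactor * np.exp(-ea / (R_GAS * temp_kelvin))  # per day
    return 100.0 * np.exp(-k * t_days)

a_fridge = antigenicity(365, 278.15)  # one year at 5 degrees C
a_excursion = antigenicity(7, 310.15)  # one-week excursion at 37 degrees C
print(f"1 yr at 5C: {a_fridge:.1f}%   1 wk at 37C: {a_excursion:.1f}%")
```

Even this toy model shows why excursions matter: the exponential temperature dependence makes a short warm excursion cost a loss comparable to months of refrigerated storage. The paper fits the rate law to forced-degradation data instead of assuming it.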

  17. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC) over the last few decades. Devising a novel, more powerful, and more accurate predictive approach has therefore become a challenging task. One way forward, following ensemble learning theory, is to combine several individual predictions into a single final one. Because this approach performs best when it combines inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two kinds: 1) reflectance values (or transforms thereof) at each wavelength and 2) absorption feature parameters. Accordingly, we applied four calibration techniques, two per type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible ones was selected. The approach was tested on soil samples taken from the surface horizons of four sites differing in their prevailing soil units. Employing the ensemble predictive model improved the prediction accuracy of SOC at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. In general, the ensemble model reduced the maximal deviations of predicted vs. observed values relative to the individual predictions, so the correlation cloud became thinner, as desired.
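The weighted-average ensemble with an automated weight search can be sketched in a few lines. The "observed" SOC values and the two member predictions below are synthetic; a real implementation would use cross-validated predictions from the four calibration techniques, not in-sample ones:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(3.0, 1.0, 100)  # "observed" SOC values (synthetic)

# Two hypothetical member-model predictions with different error levels.
pred_a = y + rng.normal(0, 0.40, 100)
pred_b = y + rng.normal(0, 0.55, 100)

def r2(obs, pred):
    """Coefficient of determination."""
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Grid-search the weight of a two-member weighted-average ensemble.
weights = np.linspace(0, 1, 101)
scores = [r2(y, w * pred_a + (1 - w) * pred_b) for w in weights]
best_w = weights[int(np.argmax(scores))]
print(f"best weight for model A: {best_w:.2f}, ensemble R2: {max(scores):.3f}")
```

Because the grid includes the endpoints 0 and 1, the selected ensemble can never score worse than the best individual member on the selection data, mirroring the paper's "best solution among all possible" procedure.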

  18. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    PubMed Central

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637
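The shape of the evaluation, a parsimonious few-gene classifier scored by cross-validation accuracy, can be sketched with scikit-learn. The expression values and ER labels below are synthetic, and the learner is plain logistic regression rather than the paper's machine learning tool:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 176  # tumors, as in the training set described above

# Hypothetical expression of three informative genes, shifted by ER status.
er_status = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3)) + 2.0 * er_status[:, None]

clf = LogisticRegression()
acc = cross_val_score(clf, X, er_status, cv=10).mean()
print(f"10-fold CV accuracy: {acc:.3f}")
```

Cross-validation gives the honest accuracy estimate (the paper reports 93.17±2.44%); scoring on the training data alone would be optimistically biased.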

  19. Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian

    2015-08-13

Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction, a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including

  20. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    PubMed Central

    Li, Zhen; Zhang, Renyu

    2017-01-01

Motivation Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationships and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively.
Our contact
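
    The ultra-deep architecture described above is built from residual blocks. The toy sketch below (pure Python, invented weights and features) illustrates only the "output = F(x) + x" skip-connection idea behind such blocks; the actual predictor stacks many 1-D and 2-D convolutional residual blocks over evolutionary-coupling feature maps.

```python
# Minimal sketch of a residual block. Real contact predictors stack dozens
# of such blocks; here a toy 1-D block with hand-set weights illustrates
# only the skip connection that makes very deep networks trainable.

def conv1d(seq, kernel):
    """'Same'-padded 1-D convolution over a list of floats."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + seq + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(seq))]

def relu(seq):
    return [max(0.0, v) for v in seq]

def residual_block(x, kernel_a, kernel_b):
    """Two convolutions with a ReLU in between, plus the skip connection."""
    h = relu(conv1d(x, kernel_a))
    h = conv1d(h, kernel_b)
    return [xi + hi for xi, hi in zip(x, h)]  # skip connection: F(x) + x

features = [0.1, 0.9, 0.2, 0.8, 0.3]          # toy per-residue features
out = residual_block(features, [0.2, 0.5, 0.2], [0.1, 0.3, 0.1])
print(out)
```

    Note that with all-zero kernels the block reduces to the identity, which is exactly why residual stacks can grow very deep without degrading the signal.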

  1. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high order schemes have been extended to generalized coordinates, including spherical grids, and also to the non-uniform AMR grids including dynamic regridding. We present numerical tests that verify the preservation of free-stream solution and high-order accuracy as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.
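
    The payoff of higher-order stencils can be seen in a toy example. CWENO5 and MP5 add nonlinear limiting on top of such stencils to stay oscillation-free near discontinuities; the sketch below only compares the accuracy of a 2nd-order and a 4th-order central difference on smooth data.

```python
import math

# Approximate d/dx sin(x) at x = 1.0 with two stencils and compare errors.
# On smooth solutions the higher-order stencil is far more accurate at the
# same grid spacing -- the reason high-order schemes need less resolution.

f = math.sin
x, h = 1.0, 0.1
exact = math.cos(x)

d2 = (f(x + h) - f(x - h)) / (2 * h)                                   # O(h^2)
d4 = (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)   # O(h^4)

err2, err4 = abs(d2 - exact), abs(d4 - exact)
print(err2, err4)  # the 4th-order error is orders of magnitude smaller
```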

  2. Accurate indel prediction using paired-end short reads

    PubMed Central

    2013-01-01

    Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that omitting this classification step may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
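
    The core move of the paper is replacing a hand-set score threshold with a discriminative classifier trained on validated candidates. The sketch below is a hedged stand-in: two invented split-read features and a tiny logistic-regression classifier trained by gradient descent, whereas the published SV-M tool uses richer alignment-profile features and Sanger-validated labels.

```python
import math

# Toy discriminative indel classifier. Features (reads spanning the
# breakpoint vs. reads mapping inside the putative deletion) and labels
# are invented; the trained decision boundary replaces a manual threshold.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# (split_reads_spanning, reads_mapping_inside) -> 1 = true indel, 0 = false
data = [((9.0, 0.0), 1), ((7.0, 1.0), 1), ((1.0, 6.0), 0), ((2.0, 8.0), 0)]

w = [0.0, 0.0]
b = 0.0
for _ in range(2000):                       # plain stochastic gradient descent
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        g = p - y                           # gradient of the log-loss
        w[0] -= 0.05 * g * x1
        w[1] -= 0.05 * g * x2
        b -= 0.05 * g

def classify(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5

print(classify(8.0, 1.0), classify(1.0, 7.0))
```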

  3. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds
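
    The survival analysis behind the reported outcome probabilities is the Kaplan-Meier estimator. A minimal implementation is sketched below with invented follow-up times; real analyses would add the log-rank test and the Cox regression mentioned above.

```python
# Minimal Kaplan-Meier estimator of the survival function S(t).
# times/events are toy values: events (1) drop S(t), censored subjects (0)
# simply leave the risk set. At tied times, events are processed first.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = outcome occurred, 0 = censored.
    Returns a list of (t, S(t)) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: (times[i], -events[i]))
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i]:                       # an event multiplies S by (n-1)/n
            s *= (at_risk - 1) / at_risk
            curve.append((times[i], s))
        at_risk -= 1
    return curve

curve = kaplan_meier([2, 3, 3, 5, 8, 10], [1, 1, 0, 1, 0, 1])
print(curve)
```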

  4. Predicting Middle Level State Standardized Test Results Using Family and Community Demographic Data

    ERIC Educational Resources Information Center

    Tienken, Christopher H.; Colella, Anthony; Angelillo, Christian; Fox, Meredith; McCahill, Kevin R.; Wolfe, Adam

    2017-01-01

    The use of standardized test results to drive school administrator evaluations pervades education policymaking in more than 40 states. However, the results of state standardized tests are strongly influenced by non-school factors. The models of best fit (n = 18) from this correlational, explanatory, longitudinal study predicted accurately the…

  5. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers.

    PubMed

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-06-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate binding affinity prediction of peptides of length 8, 10 and 11. The method makes it possible to predict binding for peptides of lengths other than nine for MHC alleles where no such peptides have been measured. As validation, the performance of this approach is compared to predictors trained on peptides of the peptide length in question. In this validation, the approximation method has an accuracy that is comparable to or better than methods trained on a peptide length identical to the predicted peptides. The algorithm has been implemented in the web-accessible servers NetMHC-3.0: http://www.cbs.dtu.dk/services/NetMHC-3.0, and NetMHCpan-1.1: http://www.cbs.dtu.dk/services/NetMHCpan-1.1
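
    One natural reading of the approach, for peptides longer than nine residues, is to contract the peptide to every possible 9mer and average the 9mer predictions. The sketch below shows only the 10mer case; `score_9mer` is a hypothetical stand-in (a toy hydrophobicity count), whereas the real servers use neural networks trained on 9mer affinity data.

```python
# Length-approximation sketch: score a 10-mer with a 9-mer predictor by
# deleting one residue at each position and averaging the ten 9-mer scores.

def score_9mer(pep):
    """Hypothetical 9-mer affinity predictor (illustrative stub only)."""
    assert len(pep) == 9
    return sum(1.0 for aa in pep if aa in "LVIM") / 9.0   # toy hydrophobicity

def approximate_score_10mer(pep):
    assert len(pep) == 10
    nine_mers = [pep[:i] + pep[i + 1:] for i in range(10)]  # all deletions
    return sum(score_9mer(p) for p in nine_mers) / len(nine_mers)

print(approximate_score_10mer("ALVKLMGWQF"))
```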

  6. SnowyOwl: accurate prediction of fungal genes by using RNA-Seq and homology information to select among ab initio models

    PubMed Central

    2014-01-01

    Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894
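
    The selection step at the heart of the pipeline can be sketched as a weighted score over candidate models. The weights, scores, and run names below are invented for illustration; the actual pipeline evaluates homology to known proteins and agreement with RNA-Seq coverage in much more detail.

```python
# Hedged sketch of model selection: many candidates come from repeated
# Augustus runs with varied parameters; keep the one scoring best on a
# combination of RNA-Seq agreement and protein homology. Values invented.

def select_model(candidates, w_rnaseq=0.6, w_homology=0.4):
    """candidates: list of (name, rnaseq_agreement, homology_score) in [0, 1]."""
    def score(c):
        return w_rnaseq * c[1] + w_homology * c[2]
    return max(candidates, key=score)

candidates = [
    ("augustus_run1", 0.55, 0.90),   # strong homology, weaker RNA-Seq fit
    ("augustus_run2", 0.92, 0.70),   # matches transcript evidence well
    ("augustus_run3", 0.40, 0.40),
]
best = select_model(candidates)
print(best[0])
```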

  7. How accurate is our clinical prediction of "minimal prostate cancer"?

    PubMed

    Leibovici, Dan; Shikanov, Sergey; Gofrit, Ofer N; Zagaja, Gregory P; Shilo, Yaniv; Shalhav, Arieh L

    2013-07-01

    Recommendations for active surveillance versus immediate treatment for low risk prostate cancer are based on biopsy and clinical data, assuming that a low volume of well-differentiated carcinoma will be associated with a low progression risk. However, the accuracy of clinical prediction of minimal prostate cancer (MPC) is unclear. To define preoperative predictors for MPC in prostatectomy specimens and to examine the accuracy of such prediction. Data collected on 1526 consecutive radical prostatectomy patients operated in a single center between 2003 and 2008 included: age, body mass index, preoperative prostate-specific antigen level, biopsy Gleason score, clinical stage, percentage of positive biopsy cores, and maximal core length (MCL) involvement. MPC was defined as < 5% of prostate volume involvement with organ-confined Gleason score < or = 6. Univariate and multivariate logistic regression analyses were used to define independent predictors of minimal disease. Classification and Regression Tree (CART) analysis was used to define cutoff values for the predictors and measure the accuracy of prediction. MPC was found in 241 patients (15.8%). Clinical stage, biopsy Gleason score, percent of positive biopsy cores, and maximal involved core length were associated with minimal disease (OR 0.42, 0.1, 0.92, and 0.9, respectively). Independent predictors of MPC included: biopsy Gleason score, percent of positive cores and MCL (OR 0.21, 0.95 and 0.95, respectively). CART showed that when the MCL exceeded 11.5%, the likelihood of MPC was 3.8%. Conversely, when applying the most favorable preoperative conditions (Gleason < or = 6, < 20% positive cores, MCL < or = 11.5%) the chance of minimal disease was 41%. Biopsy Gleason score, the percent of positive cores and MCL are independently associated with MPC. While preoperative prediction of significant prostate cancer was accurate, clinical prediction of MPC was incorrect 59% of the time. Caution is necessary when

  8. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the most actively studied topics in soil science, and selecting more appropriate predictors is a crucial factor in PTFs' development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH in comparison with multivariate linear regression (MLR) and artificial neural network (ANN) to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents were measured at 134 different points in Lighvan watershed, northwest of Iran. Then, applying GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, results showed that PTFs developed by GMDH and MLR procedures using all soil RACs including Ks resulted in more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR. The results also revealed

  9. Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureña-López, L. Arturo; Gonzalez-Morales, Alma X., E-mail: lurena@ugto.mx, E-mail: alma.gonzalez@fisica.ugto.mx

    2016-07-01

    As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. The new method makes use of appropriate angular variables that simplify the writing of the equations of motion, and which also show that the usual field variables play a secondary role in the cosmological dynamics. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is confirmed that there exists a Jeans wavenumber k_J, directly related to the suppression of linear perturbations at wavenumbers k > k_J, and which is verified to be k_J = a√(mH). We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in the prediction of cosmological observables from scalar field dark matter models.

  10. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  11. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956
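
    The Bag of Bonds vectorization mentioned in both records above can be sketched compactly: pairwise Coulomb terms Z_i·Z_j/r_ij are grouped into one "bag" per element pair, each bag is sorted in decreasing order, and bags are zero-padded so every molecule maps to a fixed-length vector. The water geometry below is approximate and purely illustrative.

```python
import itertools

# Sketch of a Bag-of-Bonds featurization. bag_sizes fixes the padded length
# of each element-pair bag so that molecules of different sizes become
# vectors of identical dimension, suitable for kernel-based learning.

Z = {"H": 1, "O": 8}   # nuclear charges for the elements used below

def bag_of_bonds(atoms, coords, bag_sizes):
    bags = {pair: [] for pair in bag_sizes}
    for i, j in itertools.combinations(range(len(atoms)), 2):
        pair = tuple(sorted((atoms[i], atoms[j])))
        r = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
        bags[pair].append(Z[atoms[i]] * Z[atoms[j]] / r)   # Coulomb term
    vector = []
    for pair, size in sorted(bag_sizes.items()):
        entries = sorted(bags[pair], reverse=True)          # sort each bag
        vector.extend(entries + [0.0] * (size - len(entries)))  # zero-pad
    return vector

atoms = ["O", "H", "H"]                       # water, rough geometry (Å)
coords = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
v = bag_of_bonds(atoms, coords, {("H", "H"): 3, ("H", "O"): 4})
print(v)
```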

  12. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.

  13. How accurate are resting energy expenditure prediction equations in obese trauma and burn patients?

    PubMed

    Stucky, Chee-Chee H; Moncure, Michael; Hise, Mary; Gossage, Clint M; Northrop, David

    2008-01-01

    While the prevalence of obesity continues to increase in our society, outdated resting energy expenditure (REE) prediction equations may overpredict energy requirements in obese patients. Accurate feeding is essential since overfeeding has been demonstrated to adversely affect outcomes. The first objective was to compare REE calculated by prediction equations to the measured REE in obese trauma and burn patients. Our hypothesis was that an equation using fat-free mass would give a more accurate prediction. The second objective was to consider the effect of a commonly used injury factor on the predicted REE. A retrospective chart review was performed on 28 patients. REE was measured using indirect calorimetry and compared with the Harris-Benedict and Cunningham equations, and an equation using type II diabetes as a factor. Statistical analyses used were paired t test, +/-95% confidence interval, and the Bland-Altman method. Measured average REE in trauma and burn patients was 21.37 +/- 5.26 and 21.81 +/- 3.35 kcal/kg/d, respectively. Harris-Benedict underpredicted REE in trauma and burn patients to the least extent, while the Cunningham equation underpredicted REE in both populations to the greatest extent. Using an injury factor of 1.2, Cunningham continued to underestimate REE in both populations, while the Harris-Benedict and Diabetic equations overpredicted REE in both populations. The measured average REE is significantly less than current guidelines. This finding suggests that a hypocaloric regimen is worth considering for ICU patients. Also, if an injury factor of 1.2 is incorporated in certain equations, patients may be given too many calories.
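
    The two classical equations compared in the study are easy to state explicitly. The coefficients below are the standard published ones (Harris-Benedict, 1919; Cunningham-style REE = 500 + 22 × FFM); the patient values are invented for illustration, and the study additionally applies an injury factor (e.g. 1.2) to some equations.

```python
# Resting energy expenditure (kcal/day) by two classical equations.
# Harris-Benedict uses sex, weight, height, age; Cunningham uses fat-free
# mass (FFM). Patient values below are hypothetical.

def harris_benedict(sex, weight_kg, height_cm, age_yr):
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def cunningham(fat_free_mass_kg):
    return 500.0 + 22.0 * fat_free_mass_kg

weight, height, age, ffm = 120.0, 175.0, 45.0, 70.0   # an obese male patient
hb = harris_benedict("male", weight, height, age)
cu = cunningham(ffm)
print(hb, cu, hb * 1.2)   # HB, Cunningham, and HB with a 1.2 injury factor
```

    Because Cunningham depends only on fat-free mass, it predicts less for an obese patient than Harris-Benedict does, consistent with the underprediction pattern reported above.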

  14. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    PubMed

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) were calculated on hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the proceeding T hours (where T = 12, 8, 6, or 4). Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the proceeding T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved area under the receiver operating characteristic in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the
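
    A hedged sketch of one preprocessing step, turning a high-resolution vital-sign stream into hourly features, is shown below. The real algorithm derives 65 features per hour, including EMR variables and dynamical measures; here only the mean, spread, and a simple first-to-last trend of heart rate are computed, on invented data.

```python
import statistics

# Bucket timestamped heart-rate samples into hours since admission and
# compute a few per-hour summary features. Illustrative only.

def hourly_features(samples):
    """samples: list of (minutes_since_admission, heart_rate)."""
    hours = {}
    for t, hr in samples:
        hours.setdefault(int(t // 60), []).append(hr)
    feats = {}
    for hour, values in sorted(hours.items()):
        feats[hour] = {
            "mean": statistics.fmean(values),
            "std": statistics.pstdev(values),
            "trend": values[-1] - values[0],   # crude within-hour trend
        }
    return feats

stream = [(0, 80), (20, 84), (40, 88), (60, 95), (80, 103), (100, 112)]
print(hourly_features(stream))
```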

  15. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    PubMed

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions--a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, it may be more efficient, accurate, and authoritative than is commonly assumed.

  16. Towards more accurate vegetation mortality predictions

    DOE PAGES

    Sevanto, Sanna Annika; Xu, Chonggang

    2016-09-26

    Predicting the fate of vegetation under changing climate is one of the major challenges of the climate modeling community. Terrestrial vegetation dominates the carbon and water cycles over land areas, and dramatic changes in vegetation cover resulting from stressful environmental conditions such as drought feed directly back to local and regional climate, potentially leading to a vicious cycle in which vegetation recovery after a disturbance is delayed or impossible.

  17. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-07

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
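
    The first-order estimate assessed above can be written compactly. The block below uses standard alchemical-perturbation notation (λ the interpolation parameter, v the external potential, ρ the electron density); it is a generic statement of the method, not a transcription of the paper's equations.

```latex
% Linear alchemical interpolation between molecules A and B at fixed
% geometry, truncated after the first-order (Hellmann-Feynman) term:
v_\lambda = (1-\lambda)\, v_A + \lambda\, v_B , \qquad \lambda \in [0,1],
\\[4pt]
E_B \;\approx\; E_A + \left.\frac{dE}{d\lambda}\right|_{\lambda=0}
      = E_A + \int \rho_A(\mathbf{r})\,\bigl[v_B(\mathbf{r}) - v_A(\mathbf{r})\bigr]\,\mathrm{d}\mathbf{r}.
```

    The "vertical" condition (i) above corresponds to holding the geometry fixed along this λ-path; when the endpoints also differ in geometry, the linear term no longer dominates and the estimates degrade, as the abstract reports.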

  18. Automated antibody structure prediction using Accelrys tools: Results and best practices

    PubMed Central

    Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa

    2014-01-01

    We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions either using a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted show that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models show that the models are of quite high quality, with local geometry assessment scores similar to that of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271

  19. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
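
    The two-parameter Weibull form of the hydrolysis curve makes the role of λ concrete: it is the time at which the yield reaches 1 − 1/e (about 63.2%) of its final value, regardless of the shape parameter n, which is why it can summarize overall saccharification performance. Parameter values below are invented for illustration.

```python
import math

# Weibull-form saccharification curve: y(t) = y_max * (1 - exp(-(t/lam)^n)).
# lam is the characteristic time; n controls the curve's shape.

def weibull_yield(t, y_max, lam, n):
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

y_max, lam, n = 0.85, 24.0, 0.9    # final yield, characteristic time (h), shape
for t in (6, 12, 24, 48, 72):
    print(t, round(weibull_yield(t, y_max, lam, n), 3))
```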

  20. Kinetic approach to degradation mechanisms in polymer solar cells and their accurate lifetime predictions

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad Azeem; Maaroufi, AbdelKrim

    2018-07-01

    This study makes a beginning toward accurate lifetime prediction of polymer solar cells. We bring to light several reservations about the conventionally employed temperature-accelerated lifetime measurement test, which render it unsuitable for predicting reliable lifetimes of polymer solar cells. Critical issues with accelerated lifetime testing include assuming a reaction mechanism instead of determining it, and relying solely on the temperature acceleration of a single material property. An advanced approach comprising a set of theoretical models to estimate accurate lifetimes of polymer solar cells is therefore suggested as a suitable alternative to accelerated lifetime testing. This approach takes into account systematic kinetic modeling of various possible polymer degradation mechanisms under natural weathering conditions. The proposed kinetic approach is substantiated by its application to experimental aging datasets of polymer solar materials and solar cells, including P3HT polymer film, bulk heterojunction (MDMO-PPV:PCBM) and dye-sensitized solar cells. Based on the suggested approach, an efficacious lifetime determination formula for polymer solar cells is derived and tested on dye-sensitized solar cells. Some important merits of the proposed method are also pointed out and its prospective applications are discussed.
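
    As a toy illustration of the kinetic reasoning (not the authors' models, whose mechanisms are more elaborate), a first-order degradation law P(t) = P0·exp(−kt) relates an aging measurement to a T80 lifetime, the time for a property to fall to 80% of its initial value:

```python
import math

def rate_constant(p0, p_t, t):
    """Recover k assuming first-order decay P(t) = P0*exp(-k*t)."""
    return math.log(p0 / p_t) / t

def t80_lifetime(k):
    """Time for the property to fall to 80% of its initial value (T80)."""
    return math.log(1.0 / 0.8) / k

# Hypothetical aging measurement: efficiency halved after 1000 h.
k = rate_constant(1.0, 0.5, 1000.0)
lifetime = t80_lifetime(k)   # ~322 h to reach the 80% mark
```

    The paper's point is precisely that the mechanism (and hence the functional form) should be determined rather than assumed; the first-order form here is an assumption of the sketch.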

  1. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    PubMed Central

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
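
    The interplay of folding, unfolding and translation rates described above can be sketched with a two-state master equation, dP/dt = kf·(1−P) − ku·P, integrated over the dwell time at each codon. This is a simplified illustration, not the authors' model; the fold_start index (the codon at which the domain has fully emerged) and all rates are assumptions of the sketch.

```python
import math

def cotranslational_folding_curve(codon_rates, kf, ku, fold_start):
    """Folded-state probability after each codon is translated. The domain
    relaxes toward its folding equilibrium during the dwell time 1/rate at
    each codon, but only once codon index >= fold_start."""
    p_eq, k_tot = kf / (kf + ku), kf + ku
    p, curve = 0.0, []
    for i, rate in enumerate(codon_rates):
        if i >= fold_start:
            tau = 1.0 / rate                       # dwell time (s)
            p = p_eq + (p - p_eq) * math.exp(-k_tot * tau)
        curve.append(p)
    return curve

# 100 codons translated at 10 codons/s; domain emerges at codon 60.
curve = cotranslational_folding_curve([10.0] * 100, kf=2.0, ku=0.1, fold_start=60)
```

    Slowing translation (smaller codon_rates entries, e.g. via synonymous substitutions) lengthens each dwell time and pushes the curve toward co-translational folding, which is the effect the abstract describes.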

  2. Do dual-route models accurately predict reading and spelling performance in individuals with acquired alexia and agraphia?

    PubMed

    Rapcsak, Steven Z; Henry, Maya L; Teague, Sommer L; Carnahan, Susan D; Beeson, Pélagie M

    2007-06-18

    Coltheart and co-workers [Castles, A., Bates, T. C., & Coltheart, M. (2006). John Marshall and the developmental dyslexias. Aphasiology, 20, 871-892; Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256] have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper, we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult neurological patients with acquired alexia and agraphia. These findings provide empirical support for dual-route theories of written language processing.

  3. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.

  4. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    PubMed

    Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo

    2017-01-01

    Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively.
Our contact-assisted models also have
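
    The headline metric in this entry, top-L long-range precision, can be computed with a short sketch. The contact maps below are synthetic, and the sequence-separation cutoff is a common convention assumed here, not taken from the paper:

```python
import numpy as np

def top_l_precision(scores, contacts, seq_sep=24, frac=1.0):
    """Precision of the top int(L*frac) long-range predictions.
    scores: (L, L) predicted contact probabilities (upper triangle used);
    contacts: (L, L) boolean true contact map; pairs closer than seq_sep
    in sequence are excluded as short-range."""
    L = scores.shape[0]
    i, j = np.triu_indices(L, k=seq_sep)
    order = np.argsort(scores[i, j])[::-1]        # highest scores first
    top = order[: max(1, int(L * frac))]
    return contacts[i[top], j[top]].mean()

# Tiny synthetic example: 10 residues, 10 true long-range contacts,
# scores that separate contacts from non-contacts perfectly.
contacts = np.zeros((10, 10), dtype=bool)
for a, b in [(0, 5), (0, 6), (0, 7), (0, 8), (0, 9),
             (1, 6), (1, 7), (1, 8), (1, 9), (2, 7)]:
    contacts[a, b] = True
scores = contacts.astype(float) * 0.8 + 0.1
prec = top_l_precision(scores, contacts, seq_sep=5)
```

    Setting frac=0.1 gives the top-L/10 variant also reported in the abstract.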

  5. Accurate multimodal probabilistic prediction of conversion to Alzheimer's disease in patients with mild cognitive impairment.

    PubMed

    Young, Jonathan; Modat, Marc; Cardoso, Manuel J; Mendelson, Alex; Cash, Dave; Ourselin, Sebastien

    2013-01-01

    Accurately identifying the patients with mild cognitive impairment (MCI) who will go on to develop Alzheimer's disease (AD) will become essential as new treatments will require identification of AD patients at earlier stages in the disease process. Most previous work in this area has centred around the same automated techniques used to diagnose AD patients from healthy controls, by coupling high-dimensional brain image data or other relevant biomarker data to modern machine learning techniques. Such studies can now distinguish between AD patients and controls as accurately as an experienced clinician. Models trained on patients with AD and control subjects can also distinguish between MCI patients that will convert to AD within a given timeframe (MCI-c) and those that remain stable (MCI-s), although differences between these groups are smaller and thus the corresponding accuracy is lower. The most common type of classifier used in these studies is the support vector machine, which gives categorical class decisions. In this paper, we introduce Gaussian process (GP) classification to the problem. This fully Bayesian method produces naturally probabilistic predictions, which we show correlate well with the actual chances of converting to AD within 3 years in a population of 96 MCI-s and 47 MCI-c subjects. Furthermore, we show that GPs can integrate multimodal data (in this study volumetric MRI, FDG-PET, cerebrospinal fluid, and APOE genotype) into the classification process through the use of a mixed kernel. The GP approach aids combination of different data sources by learning parameters automatically from training data via type-II maximum likelihood, which we compare to a more conventional method based on cross validation and an SVM classifier. When the resulting probabilities from the GP are dichotomised to produce a binary classification, the results for predicting MCI conversion based on the combination of all three types of data show a balanced accuracy

  6. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
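
    One widely used post-processing method of the kind described here is Platt scaling, which maps raw classifier scores to probabilities by fitting a sigmoid. A minimal sketch (not taken from the thesis; the scores and labels are invented):

```python
import math

def platt_scale(scores, labels, lr=0.1, steps=5000):
    """Fit P(y=1|s) = sigmoid(a*s + b) by gradient descent on the
    log loss -- a minimal Platt-scaling sketch."""
    a, b, n = 0.0, 0.0, len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n
            gb += (p - y) / n
        a, b = a - lr * ga, b - lr * gb
    return a, b

# Uncalibrated classifier margins and true labels (made up).
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
a, b = platt_scale(scores, labels)
prob = lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))
```

    Other calibration methods in this family (isotonic regression, binning) differ only in the shape of the mapping fitted to the scores.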

  7. Accurate prediction of bacterial type IV secreted effectors using amino acid composition and PSSM profiles.

    PubMed

    Zou, Lingyun; Nan, Chonghan; Hu, Fuquan

    2013-12-15

    Various human pathogens secrete effector proteins into host cells via the type IV secretion system (T4SS). These proteins play important roles in the interaction between bacteria and hosts. Computational methods for T4SS effector prediction have been developed for screening experimental targets in several isolated bacterial species; however, widely applicable prediction approaches are still unavailable. In this work, four types of distinctive features, namely, amino acid composition, dipeptide composition, position-specific scoring matrix composition and auto-covariance transformation of the position-specific scoring matrix, were calculated from primary sequences. A classifier, T4EffPred, was developed using the support vector machine with these features and their different combinations for effector prediction. Various theoretical tests were performed in a newly established dataset, and the results were measured with four indexes. We demonstrated that T4EffPred can discriminate IVA and IVB effectors in benchmark datasets with positive rates of 76.7% and 89.7%, respectively. The overall accuracy of 95.9% shows that the present method is accurate for distinguishing T4SS effectors in unidentified sequences. A classifier ensemble was designed to synthesize all single classifiers. Notable performance improvement was observed using this ensemble system in benchmark tests. To demonstrate the model's application, a genome-scale prediction of effectors was performed in Bartonella henselae, an important zoonotic pathogen. A number of putative candidates were distinguished. A web server implementing the prediction method and the source code are both available at http://bioinfo.tmmu.edu.cn/T4EffPred.
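
    The first two feature types, amino acid composition and dipeptide composition, are straightforward to compute from a primary sequence. A sketch (the helper names are ours, not T4EffPred's; the example sequence is arbitrary):

```python
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def aac(seq):
    """Amino acid composition: 20 relative residue frequencies."""
    return [seq.count(a) / len(seq) for a in AA]

def dipeptide_composition(seq):
    """Dipeptide composition: 400 relative frequencies of ordered pairs."""
    total = len(seq) - 1
    counts = {a + b: 0 for a, b in product(AA, repeat=2)}
    for i in range(total):
        pair = seq[i:i + 2]
        if pair in counts:             # skip pairs with non-standard residues
            counts[pair] += 1
    return [counts[k] / total for k in sorted(counts)]

features = aac("MKKLLPT") + dipeptide_composition("MKKLLPT")   # 420-dim vector
```

    The PSSM-based features additionally require a PSI-BLAST profile, so they are omitted from this sketch.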

  8. Accurate secondary structure prediction and fold recognition for circular dichroism spectroscopy

    PubMed Central

    Micsonai, András; Wien, Frank; Kernya, Linda; Lee, Young-Ho; Goto, Yuji; Réfrégiers, Matthieu; Kardos, József

    2015-01-01

    Circular dichroism (CD) spectroscopy is a widely used technique for the study of protein structure. Numerous algorithms have been developed for the estimation of the secondary structure composition from the CD spectra. These methods often fail to provide acceptable results on α/β-mixed or β-structure–rich proteins. The problem arises from the spectral diversity of β-structures, which has hitherto been considered as an intrinsic limitation of the technique. The predictions are less reliable for proteins of unusual β-structures such as membrane proteins, protein aggregates, and amyloid fibrils. Here, we show that the parallel/antiparallel orientation and the twisting of the β-sheets account for the observed spectral diversity. We have developed a method called β-structure selection (BeStSel) for the secondary structure estimation that takes into account the twist of β-structures. This method can reliably distinguish parallel and antiparallel β-sheets and accurately estimates the secondary structure for a broad range of proteins. Moreover, the secondary structure components applied by the method are characteristic to the protein fold, and thus the fold can be predicted to the level of topology in the CATH classification from a single CD spectrum. By constructing a web server, we offer a general tool for a quick and reliable structure analysis using conventional CD or synchrotron radiation CD (SRCD) spectroscopy for the protein science research community. The method is especially useful when X-ray or NMR techniques fail. Using BeStSel on data collected by SRCD spectroscopy, we investigated the structure of amyloid fibrils of various disease-related proteins and peptides. PMID:26038575

  9. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race

    PubMed Central

    Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping

    2012-01-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788

  10. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    PubMed

    Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high-similarity datasets, but still far from satisfactory for low-similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets.
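
    The recursive-elimination loop described above repeatedly trains a linear model and discards the feature with the smallest weight magnitude. A minimal sketch using logistic regression as a stand-in for the linear SVM of SVM-RFE (data and dimensions are invented):

```python
import numpy as np

def train_linear(X, y, lr=0.1, steps=500):
    """Logistic-regression weights via gradient descent; a stand-in for
    the linear SVM whose weights SVM-RFE actually ranks."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

def rfe_ranking(X, y):
    """Recursively drop the feature with the smallest |weight|."""
    remaining = list(range(X.shape[1]))
    eliminated = []
    while remaining:
        w = train_linear(X[:, remaining], y)
        eliminated.append(remaining.pop(int(np.argmin(np.abs(w)))))
    return eliminated[::-1]          # most informative feature first

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 2] > 0).astype(float)      # only feature 2 carries signal
ranking = rfe_ranking(X, y)
```

    Production implementations typically eliminate features in chunks rather than one at a time, since retraining after every single removal is expensive at PSSM/PROFEAT/GO feature dimensions.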

  11. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403

  12. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    PubMed

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.

  13. PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides.

    PubMed

    Islam, S M Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J

    2015-07-05

    Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination of toxicity and stability makes these peptides remarkably valuable for their potential as bio-insecticides, antimicrobial peptides and peptide drug candidates. However, the wide sequence variation, sources and modalities of group members impose serious limitations on our ability to rapidly identify potential members. As a result, there is a need for automated high-throughput member classification approaches that leverage their demonstrated tertiary and functional homology. We developed an SVM-based model to predict sequential tri-disulfide peptide (STP) toxins from peptide sequences. One optimized model, called PredSTP, predicted STPs from the training set with sensitivity, specificity, precision, accuracy and a Matthews correlation coefficient of 94.86%, 94.11%, 84.31%, 94.30% and 0.86, respectively, using 200-fold cross-validation. The same model outperforms existing prediction approaches on three independent out-of-sample test sets derived from the PDB. PredSTP can accurately identify a wide range of cystine-stabilized peptide toxins directly from sequences in a species-agnostic fashion. The ability to rapidly filter sequences for potential bioactive peptides can greatly compress the time between peptide identification and testing structural and functional properties for possible antimicrobial and insecticidal candidates. A web interface is freely available to predict STP toxins from http://crick.ecs.baylor.edu/.
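
    The five indexes reported above all follow directly from the confusion-matrix counts. A sketch with made-up counts, purely to show the formulas:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, accuracy and Matthews
    correlation coefficient from confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, prec, acc, mcc

# Hypothetical counts for illustration only.
sens, spec, prec, acc, mcc = binary_metrics(tp=90, fp=10, tn=85, fn=15)
```

    MCC is the most informative of the five on imbalanced data, which is why peptide-classification papers report it alongside accuracy.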

  14. MetaPSICOV: combining coevolution methods for accurate prediction of contacts and long range hydrogen bonding in proteins.

    PubMed

    Jones, David T; Singh, Tanya; Kosciolek, Tomasz; Tetchner, Stuart

    2015-04-01

    Recent developments of statistical techniques to infer direct evolutionary couplings between residue pairs have rendered covariation-based contact prediction a viable means for accurate 3D modelling of proteins, with no information other than the sequence required. To extend the usefulness of contact prediction, we have designed a new meta-predictor (MetaPSICOV) which combines three distinct approaches for inferring covariation signals from multiple sequence alignments, considers a broad range of other sequence-derived features and, uniquely, a range of metrics which describe both the local and global quality of the input multiple sequence alignment. Finally, we use a two-stage predictor, where the second stage filters the output of the first stage. This two-stage predictor is additionally evaluated on its ability to accurately predict the long range network of hydrogen bonds, including correctly assigning the donor and acceptor residues. Using the original PSICOV benchmark set of 150 protein families, MetaPSICOV achieves a mean precision of 0.54 for top-L predicted long range contacts-around 60% higher than PSICOV, and around 40% better than CCMpred. In de novo protein structure prediction using FRAGFOLD, MetaPSICOV is able to improve the TM-scores of models by a median of 0.05 compared with PSICOV. Lastly, for predicting long range hydrogen bonding, MetaPSICOV-HB achieves a precision of 0.69 for the top-L/10 hydrogen bonds compared with just 0.26 for the baseline MetaPSICOV. MetaPSICOV is available as a freely available web server at http://bioinf.cs.ucl.ac.uk/MetaPSICOV. Raw data (predicted contact lists and 3D models) and source code can be downloaded from http://bioinf.cs.ucl.ac.uk/downloads/MetaPSICOV. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  15. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
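
    The forward-backward idea, predicting each sample jointly from its past and its future with shared coefficients, can be sketched on a synthetic tone whose frequency the coefficients recover. This is a toy stand-in for formant estimation; it omits the temporal weighting that distinguishes the QCP-FB method:

```python
import numpy as np

def fb_lpc(x, p):
    """Order-p linear prediction coefficients estimated jointly from
    forward (past -> present) and backward (future -> present) errors."""
    n = len(x)
    rows, targets = [], []
    for t in range(p, n):                   # forward equations
        rows.append(x[t - p:t][::-1])       # [x[t-1], ..., x[t-p]]
        targets.append(x[t])
    for t in range(n - p):                  # backward equations
        rows.append(x[t + 1:t + p + 1])     # [x[t+1], ..., x[t+p]]
        targets.append(x[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a

# A pure tone satisfies x[t] = 2cos(w)x[t-1] - x[t-2] in both directions,
# so order-2 forward-backward prediction recovers its frequency.
x = np.cos(0.3 * np.arange(200))
a = fb_lpc(x, 2)
omega = np.arccos(a[0] / 2.0)   # recovered angular frequency
```

    Doubling the set of prediction equations is the point of the FB variant: it uses the available samples in a frame more effectively, which matters when a weighting function discards many of them.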

  16. Accurate prediction of cation-π interaction energy using substituent effects.

    PubMed

    Sayyed, Fareed Bhasha; Suresh, Cherumuttathu H

    2012-06-14

    (M(+))' and ΔV(min). All the Φ-X···M(+) systems showed good agreement between the calculated and predicted E(M(+))() values, suggesting that the ΔV(min) approach to substituent effect is accurate and useful for predicting the interactive behavior of substituted π-systems with cations.

  17. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108

  18. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
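
    The predict-gate-fuse pipeline described in both records can be caricatured in a few lines. Everything here is invented for illustration: the AR coefficients, the gate size (playing the role of the grid constraint), and the per-sensor variances; the real method uses full ARMA models and 2-D occupancy grids.

```python
def ar_predict(history, coeffs):
    """One-step AR prediction; history is most-recent-first."""
    return sum(c * h for c, h in zip(coeffs, history))

def gated_fusion(measurements, variances, prediction, gate):
    """Drop measurements farther than `gate` from the model prediction
    (the consensus/grid check), then fuse the survivors with
    inverse-variance weights; fall back to the prediction if none pass."""
    kept = [(m, v) for m, v in zip(measurements, variances)
            if abs(m - prediction) <= gate]
    if not kept:
        return prediction
    wsum = sum(1.0 / v for _, v in kept)
    return sum(m / v for m, v in kept) / wsum

# 1-D example: the GPS reading (25.0) is a multipath outlier with a
# deceptively small variance; the gate rejects it before fusion.
pred = ar_predict([10.2, 10.0], [1.8, -0.8])
est = gated_fusion([10.5, 10.4, 25.0], [0.25, 0.25, 0.04], pred, gate=1.0)
```

    The point of gating before weighting is visible in the example: without the consensus check, the low-variance multipath reading would dominate the inverse-variance fusion.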

  19. Accurate disulfide-bonding network predictions improve ab initio structure prediction of cysteine-rich proteins

    PubMed Central

    Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin

    2015-01-01

    Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for the proteins of higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435
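
    One reason higher-order connectivity is hard, and why fixing the most confident bonds first helps: the number of possible patterns for n disulfide bonds grows as (2n−1)!!. A small enumeration sketch makes the combinatorics concrete:

```python
def connectivity_patterns(cysteines):
    """Enumerate every perfect pairing of an even-length list of
    cysteine positions, i.e. all candidate connectivity patterns."""
    if not cysteines:
        return [[]]
    first, rest = cysteines[0], cysteines[1:]
    patterns = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in connectivity_patterns(remaining):
            patterns.append([(first, partner)] + sub)
    return patterns

three_bonds = connectivity_patterns([1, 2, 3, 4, 5, 6])    # 15 patterns
five_bonds = connectivity_patterns(list(range(10)))        # 945 patterns
```

    Each confidently fixed bond removes two cysteines and shrinks the remaining search space by a factor of 2n−1, which is the intuition behind the hierarchical order reduction.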

  20. A comparative study between experimental results and numerical predictions of multi-wall structural response to hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Peck, Jeffrey A.

    1992-01-01

    Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is increasingly used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.

  1. Accurate and robust genomic prediction of celiac disease using statistical learning.

    PubMed

    Abraham, Gad; Tye-Din, Jason A; Bhalala, Oneil G; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2014-02-01

    Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87-0.89) and in independent replication across cohorts (AUC of 0.86-0.9), despite differences in ethnicity. The models explained 30-35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
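A toy illustration of the modeling approach (penalized regression over all SNPs simultaneously) using scikit-learn. The simulated genotype matrix and effect sizes are entirely invented and far smaller than a real genome-wide panel; this only shows the shape of the computation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Simulated cohort: 300 individuals x 50 SNPs coded 0/1/2;
# only the first 5 SNPs carry the (simulated) disease signal.
X = rng.binomial(2, 0.3, size=(300, 50)).astype(float)
logit = X[:, :5].sum(axis=1) - 3.0
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L1-penalized logistic regression models all SNPs at once,
# shrinking uninformative ones toward zero (a sparse GRS).
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X, y)
auc = roc_auc_score(y, model.decision_function(X))
```

In a real analysis the AUC would of course be estimated by cross-validation and independent replication cohorts, as the abstract describes, not on the training data.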

  2. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    PubMed

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the
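The sequence features described (mono-, di- and tri-nucleotide content of a window) can be computed with a short helper. The discriminative weighting step is omitted; the 301-bp window size is the paper's, but the function itself is a generic sketch.

```python
from itertools import product

def kmer_features(seq, k_values=(1, 2, 3)):
    """Normalized mono-, di- and tri-nucleotide frequencies for a
    sequence window (the paper uses 301-bp windows centered on the
    candidate dyad position to be scored)."""
    feats = {}
    for k in k_values:
        total = max(len(seq) - k + 1, 1)
        counts = {''.join(p): 0 for p in product('ACGT', repeat=k)}
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:          # skip windows with ambiguous bases
                counts[kmer] += 1
        feats.update({km: c / total for km, c in counts.items()})
    return feats
```

Each window yields 4 + 16 + 64 = 84 features, which a discriminative classifier can then weight to score dyad versus linker positions.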

  3. Learning a Weighted Sequence Model of the Nucleosome Core and Linker Yields More Accurate Predictions in Saccharomyces cerevisiae and Homo sapiens

    PubMed Central

    Reynolds, Sheila M.; Bilmes, Jeff A.; Noble, William Stafford

    2010-01-01

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence—301 base pairs, centered at the position to be scored—with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  4. Modeling leaderless transcription and atypical genes results in more accurate gene prediction in prokaryotes.

    PubMed

    Lomsadze, Alexandre; Gemayel, Karl; Tang, Shiyuyun; Borodovsky, Mark

    2018-05-17

    In a conventional view of the prokaryotic genome organization, promoters precede operons and ribosome binding sites (RBSs) with Shine-Dalgarno consensus precede genes. However, recent experimental research suggesting a more diverse view motivated us to develop an algorithm with improved gene-finding accuracy. We describe GeneMarkS-2, an ab initio algorithm that uses a model derived by self-training for finding species-specific (native) genes, along with an array of precomputed "heuristic" models designed to identify harder-to-detect genes (likely horizontally transferred). Importantly, we designed GeneMarkS-2 to identify several types of distinct sequence patterns (signals) involved in gene expression control, among them the patterns characteristic for leaderless transcription as well as noncanonical RBS patterns. To assess the accuracy of GeneMarkS-2, we used genes validated by COG (Clusters of Orthologous Groups) annotation, proteomics experiments, and N-terminal protein sequencing. We observed that GeneMarkS-2 performed better on average in all accuracy measures when compared with the current state-of-the-art gene prediction tools. Furthermore, the screening of ∼5000 representative prokaryotic genomes made by GeneMarkS-2 predicted frequent leaderless transcription in both archaea and bacteria. We also observed that the RBS sites in some species with leadered transcription did not necessarily exhibit the Shine-Dalgarno consensus. The modeling of different types of sequence motifs regulating gene expression prompted a division of prokaryotic genomes into five categories with distinct sequence patterns around the gene starts. © 2018 Lomsadze et al.; Published by Cold Spring Harbor Laboratory Press.
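One of the signals GeneMarkS-2 models is the Shine-Dalgarno RBS. Below is a deliberately crude sketch of scoring an upstream region against the canonical SD hexamer; the real algorithm learns position-specific motif models by self-training rather than counting exact matches.

```python
def sd_score(upstream, motif="AGGAGG"):
    """Crude Shine-Dalgarno signal: the best per-position match count of
    the canonical SD hexamer against any window of the upstream region.
    A score of 0 across a genome's gene starts would be one hint of
    leaderless transcription or a noncanonical RBS."""
    best = 0
    for i in range(len(upstream) - len(motif) + 1):
        window = upstream[i:i + len(motif)]
        best = max(best, sum(a == b for a, b in zip(window, motif)))
    return best
```

A strongly SD-led gene shows a near-perfect window upstream of its start codon, while an SD-free upstream region scores low.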

  5. Accurate prediction of collapse temperature using optical coherence tomography-based freeze-drying microscopy.

    PubMed

    Greco, Kristyn; Mujat, Mircea; Galbally-Kinney, Kristin L; Hammer, Daniel X; Ferguson, R Daniel; Iftimia, Nicusor; Mulhall, Phillip; Sharma, Puneet; Kessler, William J; Pikal, Michael J

    2013-06-01

    The objective of this study was to assess the feasibility of developing and applying a laboratory tool that can provide three-dimensional product structural information during freeze-drying and which can accurately characterize the collapse temperature (Tc) of pharmaceutical formulations designed for freeze-drying. A single-vial freeze dryer coupled with optical coherence tomography freeze-drying microscopy (OCT-FDM) was developed to investigate the structure and Tc of formulations in pharmaceutically relevant product containers (i.e., freeze-drying in vials). OCT-FDM was used to measure the Tc and eutectic melt of three formulations in freeze-drying vials. The Tc as measured by OCT-FDM was found to be predictive of freeze-drying with a batch of vials in a conventional laboratory freeze dryer. The freeze-drying cycles developed using OCT-FDM data, as compared with traditional light transmission freeze-drying microscopy (LT-FDM), resulted in a significant reduction in primary drying time, which could result in a substantial reduction of manufacturing costs while maintaining product quality. OCT-FDM provides quantitative data to justify freeze-drying at temperatures higher than the Tc measured by LT-FDM and provides a reliable upper limit for setting a product temperature in primary drying. Copyright © 2013 Wiley Periodicals, Inc.

  6. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of a protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used data from solid-state NMR spectroscopy, an orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
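The key trick (restraining only the ensemble-average coordinates, so each copy keeps near-native dynamics) can be sketched in a few lines of NumPy. The harmonic form, force constant, and array shapes below are illustrative, not the paper's actual restraint potential.

```python
import numpy as np

def ensemble_restraint_forces(coords, ref, k=100.0):
    """coords: (n_copies, n_atoms, 3) instantaneous ensemble coordinates;
    ref: (n_atoms, 3) crystallographic reference. A harmonic restraint
    acts on the ensemble-average structure only, so each copy feels just
    1/n_copies of the restoring force and its individual dynamics stay
    governed almost entirely by the underlying force field."""
    mean = coords.mean(axis=0)
    force_on_mean = -k * (mean - ref)
    return np.broadcast_to(force_on_mean / coords.shape[0], coords.shape)
```

If every copy drifts by the same offset, each receives only a small fraction of the total restoring force; if the copies fluctuate symmetrically about the reference, the restraint exerts no force at all.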

  7. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  8. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method for rotating flows. The shown capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  9. CodingQuarry: highly accurate hidden Markov model gene prediction in fungal genomes using RNA-seq transcripts.

    PubMed

    Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P

    2015-03-11

    The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or significantly improve ab initio predictions. Generalised hidden Markov models (GHMM) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources is of benefit to fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data informs annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq driven gene predictor tested. Comparisons against

  10. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  11. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction

    PubMed Central

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K.; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G.; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H.

    2017-01-01

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. PMID:27899623
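A minimal version of a quantitative PWM affinity score of the kind TEPIC computes over open-chromatin regions. This sketch scores one strand only with a toy two-column PWM; the real tool adds reverse-strand handling, window scaling, and weighting by open-chromatin or histone-mark signal.

```python
import numpy as np

def tf_affinity(seq, pwm, order="ACGT"):
    """TRAP-style quantitative TF affinity: sum over all sequence windows
    of the product of per-position PWM probabilities, so low-affinity
    sites contribute fractionally instead of being thresholded away.
    pwm has shape (4, motif_width) with rows ordered as `order`."""
    idx = {b: i for i, b in enumerate(order)}
    w = pwm.shape[1]
    total = 0.0
    for i in range(len(seq) - w + 1):
        p = 1.0
        for j, base in enumerate(seq[i:i + w]):
            p *= pwm[idx[base], j]
        total += p
    return total
```

Because every window contributes, weak sites raise the score slightly, which is exactly the property the abstract credits with improving gene expression prediction over presence/absence calls.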

  12. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications 1,2,3. For example, to analyze exoplanets, atmospheric models have been developed, thus making the need to provide accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on the necessity of having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and the line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data could be interpreted only by reliable theoretical predictions. There exists essentially two theoretical approaches for the computation and prediction of spectra. The first one is based on empirically-fitted effective spectroscopic models. Another way for computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach the spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions including resonance couplings in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  13. Combining Mean and Standard Deviation of Hounsfield Unit Measurements from Preoperative CT Allows More Accurate Prediction of Urinary Stone Composition Than Mean Hounsfield Units Alone.

    PubMed

    Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan

    2016-04-01

    The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield Units (HUm) have had varied success. We aimed to evaluate the additional use of the standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid and brushite stones was significantly lower and higher, respectively, than the HUm of any other stone type. HUm and HUsd were most accurate in predicting uric acid, with an AUC of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in increased positive predictive value and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios influencing probability.
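The threshold selection step mentioned in the abstract (the Youden index) can be reproduced with a small helper. The scores below are invented and stand in for any single score or combination of HUm and HUsd.

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    the criterion named in the abstract for choosing an operating point.
    labels: 1 for the target stone type, 0 otherwise."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

On perfectly separated toy data the helper recovers the cutoff between the two groups with J = 1; real HU distributions overlap, which is why combining HUm with HUsd raises the achievable likelihood ratios.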

  14. Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning

    DOE PAGES

    Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.; ...

    2017-06-05

    Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high-repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy, we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high-repetition rate, by training a model on a small set of fully diagnosed pulses. Lastly, this opens the door to fully realizing the promise of next-generation high-repetition rate X-ray lasers.
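The training setup described (a small set of fully diagnosed shots, cheaply recorded machine parameters as inputs) reduces, in its simplest form, to fitting a regularized regression. Everything below is simulated, and the ridge model is our illustrative choice, not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 100 fully diagnosed shots, 5 cheaply
# recorded machine parameters, one pulse property to predict.
X = rng.normal(size=(100, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])   # invented ground truth
y = X @ true_w + rng.normal(scale=0.05, size=100)

# Closed-form ridge regression: (X'X + lam*I) w = X'y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

x_new = rng.normal(size=5)      # a new, undiagnosed high-rate shot
pred = float(x_new @ w)         # predicted pulse property, no diagnostic needed
```

Once trained, the model needs only the cheap parameters per shot, which is what makes the approach compatible with high repetition rates.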

  15. Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.

    Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high-repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy, we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high-repetition rate, by training a model on a small set of fully diagnosed pulses. Lastly, this opens the door to fully realizing the promise of next-generation high-repetition rate X-ray lasers.

  16. A Deep Learning Framework for Robust and Accurate Prediction of ncRNA-Protein Interactions Using Evolutionary Information.

    PubMed

    Yi, Hai-Cheng; You, Zhu-Hong; Huang, De-Shuang; Li, Xiao; Jiang, Tong-Hai; Li, Li-Ping

    2018-06-01

    The interactions between non-coding RNAs (ncRNAs) and proteins play an important role in many biological processes, and their biological functions are primarily achieved by binding with a variety of proteins. High-throughput biological techniques are used to identify protein molecules bound with specific ncRNA, but they are usually expensive and time consuming. Deep learning provides a powerful solution to computationally predict RNA-protein interactions. In this work, we propose the RPI-SAN model by using the deep-learning stacked auto-encoder network to mine the hidden high-level features from RNA and protein sequences and feed them into a random forest (RF) model to predict ncRNA binding proteins. Stacked assembling is further used to improve the accuracy of the proposed method. Four benchmark datasets, including RPI2241, RPI488, RPI1807, and NPInter v2.0, were employed for the unbiased evaluation of five prediction tools: RPI-Pred, IPMiner, RPISeq-RF, lncPro, and RPI-SAN. The experimental results show that our RPI-SAN model achieves much better performance than other methods, with accuracies of 90.77%, 89.7%, 96.1%, and 99.33%, respectively. It is anticipated that RPI-SAN can be used as an effective computational tool for future biomedical research and can accurately predict potential interacting ncRNA-protein pairs, which provides reliable guidance for biological research. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  17. Accurate prediction of protein-protein interactions by integrating potential evolutionary information embedded in PSSM profile and discriminative vector machine classifier.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Li, Li-Ping; Huang, De-Shuang; Yan, Gui-Ying; Nie, Ru; Huang, Yu-An

    2017-04-04

    Identification of protein-protein interactions (PPIs) is of critical importance for deciphering the underlying mechanisms of almost all biological processes of the cell and providing great insight into the study of human disease. Although much effort has been devoted to identifying PPIs from various organisms, existing high-throughput biological techniques are time-consuming, expensive, and have high false positive and negative rates. Thus it is highly urgent to develop in silico methods to predict PPIs efficiently and accurately in this post-genomic era. In this article, we report a novel computational model combining our newly developed discriminative vector machine classifier (DVM) and an improved Weber local descriptor (IWLD) for the prediction of PPIs. Two components, differential excitation and orientation, are exploited to build evolutionary features for each protein sequence. The main characteristics of the proposed method lie in introducing an effective feature descriptor, IWLD, which can capture highly discriminative evolutionary information from position-specific scoring matrices (PSSM) of protein data, and employing the powerful and robust DVM classifier. When applying the proposed method to Yeast and H. pylori data sets, we obtained excellent prediction accuracies as high as 96.52% and 91.80%, respectively, which are significantly better than previous methods. Extensive experiments were then performed for predicting cross-species PPIs and the predictive results were also promising. To further validate the performance of the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on a Human data set. The experimental results obtained indicate that our method is highly effective for PPI prediction and can be taken as a supplementary tool for future proteomics research.
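PSSM-derived features of the kind this abstract builds on can be illustrated with a toy profile computed from a mini-alignment. Production pipelines obtain PSSMs from PSI-BLAST searches against large databases; the add-one pseudocounts and uniform background here are simplifications.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def pssm_profile(alignment):
    """Position-specific scoring matrix (log-odds versus a uniform
    background) from a toy alignment of equal-length sequences."""
    L = len(alignment[0])
    counts = np.ones((L, len(AA)))            # add-one pseudocounts
    for seq in alignment:
        for i, aa in enumerate(seq):
            counts[i, AA.index(aa)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs * len(AA))           # log-odds against 1/20

def profile_feature(pssm):
    """Fixed-length descriptor: column-wise mean over all positions,
    a simple stand-in for richer descriptors such as IWLD."""
    return pssm.mean(axis=0)
```

Conserved residues get positive log-odds scores and unobserved ones negative scores, so the fixed-length feature summarizes evolutionary conservation regardless of sequence length.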

  18. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    PubMed

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on society. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded, applicable methodology and by the wide spatiotemporal distribution of allergic reactions. Valid prediction of severe allergies (especially those needing hospital treatment) in a region could alert health authorities and affected individuals to take appropriate preemptive measures. In the present report we collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement that indicates live green vegetation in a given geographic area, together with a set of meteorological data, to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute to the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas, and could probably also be applied to the prediction of other environment-related diseases and conditions.
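    The abstract does not give the model's exact form. As a minimal sketch of how NDVI and temperature could drive a regional alert, one can threshold a combined anomaly score; the weekly histories, the equal weighting, and the threshold below are all invented for illustration.

```python
from statistics import mean, stdev

def zscore(history, value):
    # standardize a value against a regional weekly history (sample stdev)
    return (value - mean(history)) / stdev(history)

# hypothetical weekly regional histories
ndvi_history = [0.30, 0.35, 0.40, 0.45, 0.50]
temp_history = [12.0, 14.0, 16.0, 18.0, 20.0]  # degrees C

def allergy_alert(ndvi, temp, threshold=1.0):
    # Flag a week when the combined NDVI/temperature anomaly is unusually high.
    combined = 0.5 * zscore(ndvi_history, ndvi) + 0.5 * zscore(temp_history, temp)
    return combined > threshold

high_risk = allergy_alert(0.55, 22.0)  # both predictors well above their history
low_risk = allergy_alert(0.40, 16.0)   # both at their historical mean
```

    A real system would fit the weights and threshold against hospital-visit counts on a validation period, as the study does with its two time periods.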

  19. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on society. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded, applicable methodology and by the wide spatiotemporal distribution of allergic reactions. Valid prediction of severe allergies (especially those needing hospital treatment) in a region could alert health authorities and affected individuals to take appropriate preemptive measures. In the present report we collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement that indicates live green vegetation in a given geographic area, together with a set of meteorological data, to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute to the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas, and could probably also be applied to the prediction of other environment-related diseases and conditions. PMID:25794106

  20. Quokka: a comprehensive tool for rapid and accurate prediction of kinase family-specific phosphorylation sites in the human proteome.

    PubMed

    Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen

    2018-06-27

    Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.

  1. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction.

    PubMed

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H

    2017-01-09

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  3. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    PubMed

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
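    The radical model y = B1·x^0.5 + B0 is linear in B1 and B0 once u = sqrt(x) is substituted, so it can be fit by ordinary least squares and then extrapolated. A minimal sketch; the recovery numbers are synthetic, not from the study:

```python
import math

def fit_radical(x, y):
    # Fit y = B1*sqrt(x) + B0 by ordinary least squares on u = sqrt(x).
    u = [math.sqrt(xi) for xi in x]
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    b1 = sum((ui - mu) * (yi - my) for ui, yi in zip(u, y)) \
         / sum((ui - mu) ** 2 for ui in u)
    b0 = my - b1 * mu
    return b1, b0

# synthetic % recovery over early real-time frozen storage;
# the rate of loss slows over time, as the abstract describes
months = [1, 2, 3, 4, 6]
recovery = [100 - 2 * math.sqrt(m) for m in months]

b1, b0 = fit_radical(months, recovery)
predicted_36mo = b1 * math.sqrt(36) + b0  # extrapolated recovery at 36 months
```

    Because the model is linear after the substitution, no iterative curve-fitting is needed, which keeps the extrapolation transparent and auditable.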

  4. Does the emergency surgery score accurately predict outcomes in emergent laparotomies?

    PubMed

    Peponis, Thomas; Bohnen, Jordan D; Sangji, Naveen F; Nandan, Anirudh R; Han, Kelsey; Lee, Jarone; Yeh, D Dante; de Moya, Marc A; Velmahos, George C; Chang, David C; Kaafarani, Haytham M A

    2017-08-01

    The emergency surgery score is a mortality-risk calculator for emergency general surgery patients. We sought to examine whether the emergency surgery score predicts 30-day morbidity and mortality in a high-risk group of patients undergoing emergent laparotomy. Using the 2011-2012 American College of Surgeons National Surgical Quality Improvement Program database, we identified all patients who underwent emergent laparotomy using (1) the American College of Surgeons National Surgical Quality Improvement Program definition of "emergent," and (2) all Current Procedural Terminology codes denoting a laparotomy, excluding aortic aneurysm rupture. Multivariable logistic regression analyses were performed to measure the correlation (c-statistic) between the emergency surgery score and (1) 30-day mortality, and (2) 30-day morbidity after emergent laparotomy. As sensitivity analyses, the correlation between the emergency surgery score and 30-day mortality was also evaluated in prespecified subgroups based on Current Procedural Terminology codes. A total of 26,410 emergent laparotomy patients were included. Thirty-day mortality and morbidity were 10.2% and 43.8%, respectively. The emergency surgery score correlated well with mortality (c-statistic = 0.84); scores of 1, 11, and 22 correlated with mortalities of 0.4%, 39%, and 100%, respectively. Similarly, the emergency surgery score correlated well with morbidity (c-statistic = 0.74); scores of 0, 7, and 11 correlated with complication rates of 13%, 58%, and 79%, respectively. The morbidity rates plateaued for scores higher than 11. Sensitivity analyses demonstrated that the emergency surgery score effectively predicts mortality in patients undergoing emergent (1) splenic, (2) gastroduodenal, (3) intestinal, (4) hepatobiliary, or (5) incarcerated ventral hernia operations. The emergency surgery score accurately predicts outcomes in all types of emergent laparotomy patients and may prove valuable as a bedside decision
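    A c-statistic like those reported here is the probability that a randomly chosen patient who had the event received a higher score than one who did not; it can be computed directly from score/outcome pairs. The scores and mortality flags below are invented for illustration:

```python
def c_statistic(scores, outcomes):
    # Concordance: fraction of (event, non-event) pairs in which the event
    # scored higher; ties count 0.5. Equivalent to the area under the ROC curve.
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    conc = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return conc / (len(pos) * len(neg))

# illustrative ESS-like scores (0-22) with 30-day mortality flags
scores = [1, 3, 5, 7, 9, 11, 13, 15]
mortality = [0, 0, 0, 1, 0, 1, 1, 1]
c = c_statistic(scores, mortality)
```

    A c-statistic of 0.5 means the score is no better than chance; 0.84, as reported for mortality above, means an 84% chance that a patient who died outscored a survivor.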

  5. Hindered rotor models with variable kinetic functions for accurate thermodynamic and kinetic predictions

    NASA Astrophysics Data System (ADS)

    Reinisch, Guillaume; Leyssale, Jean-Marc; Vignoles, Gérard L.

    2010-10-01

    We present an extension of some popular hindered rotor (HR) models, namely, the one-dimensional HR (1DHR) and the degenerated two-dimensional HR (d2DHR) models, allowing for a simple and accurate treatment of internal rotations. This extension, based on the use of a variable kinetic function in the Hamiltonian instead of a constant reduced moment of inertia, is particularly well suited to the rocking/wagging motions involved in dissociation or atom transfer reactions. The variable kinetic function is first introduced in the framework of a classical 1DHR model. Then, an effective temperature- and potential-dependent constant is proposed in the cases of quantum 1DHR and classical d2DHR models. These methods are finally applied to the atom transfer reaction SiCl3+BCl3→SiCl4+BCl2. We show, for this particular case, that a proper accounting of internal rotations greatly improves the accuracy of thermodynamic and kinetic predictions. Moreover, our results confirm (i) that using a suitably defined kinetic function appears well adapted to such problems; (ii) that the separability assumption of independent rotations seems justified; and (iii) that a quantum mechanical treatment is not a substantial improvement with respect to a classical one.
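    For background, the classical 1DHR partition function with an angle-dependent (variable) kinetic function I(φ) replacing the constant reduced moment of inertia takes the standard Pitzer-style form (shown here as context, not quoted from the paper):

```latex
q_{\mathrm{1DHR}}^{\mathrm{cl}}
  = \frac{1}{\sigma}\,\frac{\sqrt{2\pi k_B T}}{h}
    \int_0^{2\pi} \sqrt{I(\varphi)}\; e^{-V(\varphi)/k_B T}\,\mathrm{d}\varphi
```

    Here σ is the symmetry number and V(φ) the torsional potential; setting I(φ) to a constant reduced moment of inertia recovers the usual 1DHR expression, so the variable kinetic function generalizes only the kinetic term while keeping the same one-dimensional quadrature.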

  6. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing

  7. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy making purposes. There is therefore a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We have experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period in our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. This comprehensive analysis raised certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets for the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r2=89.80%), dissolved oxygen (r2=88.53%), conductivity (r2=88.18%), ammonium (r2=87.2%) and pH (r2=86.35%), while total phosphorus (r2=70.55%) and nitrates (r2=55.50%) resulted in lower correlation rates.
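    The per-indicator r2 rates quoted above come from single-predictor linear regressions, for which r2 reduces to the squared Pearson correlation between the satellite-derived predictor and the in-situ measurement. A minimal sketch; the band-ratio and chl-a values are invented for illustration:

```python
def r_squared(x, y):
    # Squared Pearson correlation = coefficient of determination
    # for a single-predictor ordinary least squares fit.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

band_ratio = [0.31, 0.42, 0.55, 0.61, 0.72, 0.80]  # hypothetical reflectance ratio
chl_a = [2.1, 3.0, 4.4, 4.9, 6.0, 6.8]             # hypothetical chl-a, ug/L
fit_quality = r_squared(band_ratio, chl_a)
```

    Fitting such a model per sensor and per depth class, as the study concludes, simply means maintaining one (x, y) pairing and one r2 per stratum instead of pooling all observations.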

  8. NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.

    PubMed

    Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua

    2013-01-01

    A shape string is a structural sequence and an extremely important representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts correlate strongly with local protein structure and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as the classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight-state accuracy) and 87.8% for S3 (the three-state accuracy). This is higher than using only chemical shifts or sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and that their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp.

  9. Does mesenteric venous imaging assessment accurately predict pathologic invasion in localized pancreatic ductal adenocarcinoma?

    PubMed

    Clanton, Jesse; Oh, Stephen; Kaplan, Stephen J; Johnson, Emily; Ross, Andrew; Kozarek, Richard; Alseidi, Adnan; Biehl, Thomas; Picozzi, Vincent J; Helton, William S; Coy, David; Dorer, Russell; Rocha, Flavio G

    2018-05-09

    Accurate prediction of mesenteric venous involvement in pancreatic ductal adenocarcinoma (PDAC) is necessary for adequate staging and treatment. A retrospective cohort study was conducted in PDAC patients at a single institution. All patients with resected PDAC and staging CT and EUS between 2003 and 2014 were included and sub-divided into "upfront resected" and "neoadjuvant chemotherapy (NAC)" groups. Independent imaging re-review was correlated to venous resection and venous invasion. Sensitivity, specificity, positive and negative predictive values were then calculated. A total of 109 patients underwent analysis, 60 received upfront resection, and 49 NAC. Venous resection (30%) and vein invasion (13%) were less common in patients resected upfront than in those who received NAC (53% and 16%, respectively). Both CT and EUS had poor sensitivity (14-44%) but high specificity (75-95%) for detecting venous resection and vein invasion in patients resected upfront, whereas sensitivity was high (84-100%) and specificity was low (27-44%) after NAC. Preoperative CT and EUS in PDAC have similar efficacy but different predictive capacity in assessing mesenteric venous involvement depending on whether patients are resected upfront or received NAC. Both modalities appear to significantly overestimate true vascular involvement and should be interpreted in the appropriate clinical context. Copyright © 2018 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

  10. A link prediction approach to cancer drug sensitivity prediction.

    PubMed

    Turki, Turki; Wei, Zhi

    2017-10-03

    Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. This problem arises in part because many available drug sensitivity prediction algorithms do not consider better quality cancer cell lines or the adoption of new feature representations, both of which lead to more accurate prediction of drug responses. By predicting accurate drug responses to cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) and are statistically significant. We propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improves performance.
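    Classical link-prediction scores operate on graph neighborhoods. A minimal common-neighbours scorer on an undirected graph is sketched below; the toy graph, and the choice of common-neighbours rather than the paper's specific algorithms, are illustrative assumptions:

```python
def common_neighbors(adj, u, v):
    # Score a candidate edge (u, v) by the number of shared neighbours;
    # a higher score suggests a more likely link (e.g. a drug-sensitivity
    # relation in a patient/drug similarity graph).
    return len(adj[u] & adj[v])

# toy undirected graph as a dict of neighbour sets
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}
score_bd = common_neighbors(adj, "B", "D")  # B and D share A and C
score_ab = common_neighbors(adj, "A", "B")  # A and B share only C
```

    Ranking all non-edges by such a score and sweeping a threshold yields the ROC curve whose area (AUC) the abstract uses for evaluation.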

  11. Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation

    NASA Astrophysics Data System (ADS)

    Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.

    2006-12-01

    Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in the water resources of such regions. A one-dimensional vegetation model including the process of cloud water deposition on vegetation has been developed to better predict this deposition. New schemes to calculate leaf capture efficiency, cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with data acquired at the Norway spruce forest at the Waldstein site, Germany. High performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, a commonly used cloud water deposition model. Detailed calculation of evapotranspiration and of the turbulent exchange of heat and water vapor within the canopy proved necessary for accurate prediction of cloud water deposition. Numerical experiments examining the dependence of cloud water deposition on vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structure (Leaf Area Index (LAI) and canopy height) were performed using the presented model. The results indicate that differences in leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and seasonal change of LAI. We found that the coniferous trees whose height and LAI are 24 m and 2.0 m2m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the

  12. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    NASA Astrophysics Data System (ADS)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general-purpose high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach makes it possible to accurately predict the evolution of the free surface even in the presence of violent breaking wave phenomena, keeping the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  13. A gene expression biomarker accurately predicts estrogen ...

    EPA Pesticide Factsheets

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c

  14. Moving Toward Integrating Gene Expression Profiling Into High-Throughput Testing: A Gene Expression Biomarker Accurately Predicts Estrogen Receptor α Modulation in a Microarray Compendium

    PubMed Central

    Ryan, Natalia; Chorley, Brian; Tice, Raymond R.; Judson, Richard; Corton, J. Christopher

    2016-01-01

    Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including “very weak” agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. PMID:26865669
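    Balanced accuracy, the metric quoted above, averages sensitivity and specificity so that unequal class sizes do not inflate the score. A short sketch; the confusion-matrix counts are illustrative, chosen only to mirror the reported 94%/93% figures:

```python
def balanced_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return (sensitivity + specificity) / 2

# illustrative counts for an "ER-alpha activation" classifier
ba = balanced_accuracy(tp=94, fn=6, tn=93, fp=7)
```

    With heavily imbalanced classes, plain accuracy can be high even for a classifier that ignores the minority class, which is why balanced accuracy is the right summary for a biomarker evaluated against many more inactive than active chemicals.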

  15. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

The geomagnetic spatial power spectrum R(sub n)(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum (R(sub nc)(c)) is inversely proportional to (2n + 1) for 1 less than n less than or equal to N(sub E). McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R(sub n) for 3 less than or = n less than or = 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10(exp 22) Am(exp 2) rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 (mu)T rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution chi(exp 2) with 2n+1 degrees of freedom is assigned to (2n + 1)R(sub nc)/(R(sub nc)). Extending this to the dipole implies that an exceptionally weak absolute dipole moment (less than or = 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
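The core-location argument can be illustrated numerically: the spectrum continues downward from the reference radius A to radius r as R_n(r) = R_n(A)(A/r)^(2n+4), and McLeod's Rule says (2n + 1)R_n(c) should be roughly constant at the core surface. A toy sketch on a synthetic spectrum built to obey the rule exactly (this is an illustration of the principle, not the paper's Magsat analysis):

```python
import numpy as np

A = 6371.2  # Earth's reference radius in km

def downward_continue(Rn_surface, n, r):
    """Continue the spatial power spectrum from radius A down to radius r (km)."""
    return Rn_surface * (A / r) ** (2 * n + 4)

def estimate_core_radius(Rn_surface, n, radii):
    """Pick the radius at which (2n + 1) * R_n(r) is flattest, per McLeod's Rule."""
    best_r, best_spread = None, np.inf
    for r in radii:
        flat = (2 * n + 1) * downward_continue(Rn_surface, n, r)
        spread = np.std(np.log(flat))  # scatter about a constant, in log space
        if spread < best_spread:
            best_r, best_spread = r, spread
    return best_r

# Synthetic surface spectrum constructed to satisfy McLeod's Rule at c = 3480 km
n = np.arange(3, 13)
c_true = 3480.0
Rn_surf = (1.0 / (2 * n + 1)) * (c_true / A) ** (2 * n + 4)
r_est = estimate_core_radius(Rn_surf, n, np.arange(3000.0, 4000.0, 5.0))
```

On this synthetic input the flatness search recovers the radius used to build the spectrum, which is the essence of locating the core from field models.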

  16. Aqueous solubility, effects of salts on aqueous solubility, and partitioning behavior of hexafluorobenzene: experimental results and COSMO-RS predictions.

    PubMed

    Schröder, Bernd; Freire, Mara G; Varanda, Fatima R; Marrucho, Isabel M; Santos, Luís M N B F; Coutinho, João A P

    2011-07-01

    The aqueous solubility of hexafluorobenzene has been determined, at 298.15K, using a shake-flask method with a spectrophotometric quantification technique. Furthermore, the solubility of hexafluorobenzene in saline aqueous solutions, at distinct salt concentrations, has been measured. Both salting-in and salting-out effects were observed and found to be dependent on the nature of the cationic/anionic composition of the salt. COSMO-RS, the Conductor-like Screening Model for Real Solvents, has been used to predict the corresponding aqueous solubilities at conditions similar to those used experimentally. The prediction results showed that the COSMO-RS approach is suitable for the prediction of salting-in/-out effects. The salting-in/-out phenomena have been rationalized with the support of COSMO-RS σ-profiles. The prediction potential of COSMO-RS regarding aqueous solubilities and octanol-water partition coefficients has been compared with typically used QSPR-based methods. Up to now, the absence of accurate solubility data for hexafluorobenzene hampered the calculation of the respective partition coefficients. Combining available accurate vapor pressure data with the experimentally determined water solubility, a novel air-water partition coefficient has been derived. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    DTIC Science & Technology

    2005-09-01

    develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our...consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectile...Recently, the time-accurate technique was used to obtain improved results for Magnus moment and roll damping moment of a spinning projectile at transonic

  18. Accurate predictions of population-level changes in sequence and structural properties of HIV-1 Env using a volatility-controlled diffusion model

    PubMed Central

    DeLeon, Orlando; Hodis, Hagit; O’Malley, Yunxia; Johnson, Jacklyn; Salimi, Hamid; Zhai, Yinjie; Winter, Elizabeth; Remec, Claire; Eichelberger, Noah; Van Cleave, Brandon; Puliadi, Ramya; Harrington, Robert D.; Stapleton, Jack T.; Haim, Hillel

    2017-01-01

    The envelope glycoproteins (Envs) of HIV-1 continuously evolve in the host by random mutations and recombination events. The resulting diversity of Env variants circulating in the population and their continuing diversification process limit the efficacy of AIDS vaccines. We examined the historic changes in Env sequence and structural features (measured by integrity of epitopes on the Env trimer) in a geographically defined population in the United States. As expected, many Env features were relatively conserved during the 1980s. From this state, some features diversified whereas others remained conserved across the years. We sought to identify “clues” to predict the observed historic diversification patterns. Comparison of viruses that cocirculate in patients at any given time revealed that each feature of Env (sequence or structural) exists at a defined level of variance. The in-host variance of each feature is highly conserved among individuals but can vary between different HIV-1 clades. We designate this property “volatility” and apply it to model evolution of features as a linear diffusion process that progresses with increasing genetic distance. Volatilities of different features are highly correlated with their divergence in longitudinally monitored patients. Volatilities of features also correlate highly with their population-level diversification. Using volatility indices measured from a small number of patient samples, we accurately predict the population diversity that developed for each feature over the course of 30 years. Amino acid variants that evolved at key antigenic sites are also predicted well. Therefore, small “fluctuations” in feature values measured in isolated patient samples accurately describe their potential for population-level diversification. These tools will likely contribute to the design of population-targeted AIDS vaccines by effectively capturing the diversity of currently circulating strains and addressing properties

  19. A novel fibrosis index comprising a non-cholesterol sterol accurately predicts HCV-related liver cirrhosis.

    PubMed

    Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan

    2014-01-01

Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferential. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis, absorption, and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as the Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m(2)) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (Platelet count (×10(9)/L) × (-0.018)) + (Prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
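Taking the coefficients quoted above at face value, the index can be evaluated and converted to a probability with the logistic function. A sketch (variable names and the example patient values are ours, for illustration only):

```python
import math

def noli_log_odds(age_yr, bmi_kg_m2, lathosterol_ug_per_100mg, platelets_e9_l, inr):
    """Nordic Liver Index log-odds of cirrhosis, coefficients as quoted above."""
    return (-12.17
            + 0.11 * age_yr
            + 0.23 * bmi_kg_m2
            - 0.013 * lathosterol_ug_per_100mg
            - 0.018 * platelets_e9_l
            + 3.69 * inr)

def noli_probability(*args):
    """Logistic transform of the log-odds into a predicted probability."""
    return 1.0 / (1.0 + math.exp(-noli_log_odds(*args)))

# Hypothetical patient: 55 yr, BMI 27, D7-lathosterol 40, platelets 120, INR 1.3
p = noli_probability(55, 27.0, 40.0, 120.0, 1.3)
```

A clinical decision would then compare p (or the log-odds) against a chosen cutoff, which the abstract does not specify.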

  20. Predictive aging results in radiation environments

    NASA Astrophysics Data System (ADS)

    Gillen, Kenneth T.; Clough, Roger L.

    1993-06-01

We have previously derived a time-temperature-dose rate superposition methodology, which, when applicable, can be used to predict polymer degradation versus dose rate, temperature and exposure time. This methodology results in predictive capabilities at the low dose rates and long time periods appropriate, for instance, to ambient nuclear power plant environments. The methodology was successfully applied to several polymeric cable materials and then verified for two of the materials by comparisons of the model predictions with 12 year, low-dose-rate aging data on these materials from a nuclear environment. In this paper, we provide a more detailed discussion of the methodology and apply it to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylene-tetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7-9 year) low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylene-tetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated.

  1. Improvement of experimental testing and network training conditions with genome-wide microarrays for more accurate predictions of drug gene targets

    PubMed Central

    2014-01-01

    Background Genome-wide microarrays have been useful for predicting chemical-genetic interactions at the gene level. However, interpreting genome-wide microarray results can be overwhelming due to the vast output of gene expression data combined with off-target transcriptional responses many times induced by a drug treatment. This study demonstrates how experimental and computational methods can interact with each other, to arrive at more accurate predictions of drug-induced perturbations. We present a two-stage strategy that links microarray experimental testing and network training conditions to predict gene perturbations for a drug with a known mechanism of action in a well-studied organism. Results S. cerevisiae cells were treated with the antifungal, fluconazole, and expression profiling was conducted under different biological conditions using Affymetrix genome-wide microarrays. Transcripts were filtered with a formal network-based method, sparse simultaneous equation models and Lasso regression (SSEM-Lasso), under different network training conditions. Gene expression results were evaluated using both gene set and single gene target analyses, and the drug’s transcriptional effects were narrowed first by pathway and then by individual genes. Variables included: (i) Testing conditions – exposure time and concentration and (ii) Network training conditions – training compendium modifications. Two analyses of SSEM-Lasso output – gene set and single gene – were conducted to gain a better understanding of how SSEM-Lasso predicts perturbation targets. Conclusions This study demonstrates that genome-wide microarrays can be optimized using a two-stage strategy for a more in-depth understanding of how a cell manifests biological reactions to a drug treatment at the transcription level. Additionally, a more detailed understanding of how the statistical model, SSEM-Lasso, propagates perturbations through a network of gene regulatory interactions is achieved

  2. A NEW CLINICAL PREDICTION CRITERION ACCURATELY DETERMINES A SUBSET OF PATIENTS WITH BILATERAL PRIMARY ALDOSTERONISM BEFORE ADRENAL VENOUS SAMPLING.

    PubMed

    Kocjan, Tomaz; Janez, Andrej; Stankovic, Milenko; Vidmar, Gaj; Jensterle, Mojca

    2016-05-01

Adrenal venous sampling (AVS) is the only available method to distinguish bilateral from unilateral primary aldosteronism (PA). AVS has several drawbacks, so it is reasonable to avoid this procedure when the results would not affect clinical management. Our objective was to identify a clinical criterion that can reliably predict nonlateralized AVS as a surrogate for bilateral PA that is not treated surgically. A retrospective diagnostic cross-sectional study conducted at the Slovenian national endocrine referral center included 69 consecutive patients (mean age 56 ± 8 years, 21 females) with PA who underwent AVS. PA was confirmed with the saline infusion test (SIT). AVS was performed sequentially during continuous adrenocorticotrophic hormone (ACTH) infusion. The main outcome measures were variables associated with nonlateralized AVS to derive a clinical prediction rule. Sixty-seven (97%) patients had a successful AVS and were included in the statistical analysis. A total of 39 (58%) patients had nonlateralized AVS. The combined criterion of serum potassium ≥3.5 mmol/L, post-SIT aldosterone <18 ng/dL, and either no or bilateral tumor found on computed tomography (CT) imaging had perfect estimated specificity (and thus 100% positive predictive value) for bilateral PA, saving an estimated 16% of the patients (11/67) from unnecessary AVS. The best overall classification accuracy (50/67 = 75%) was achieved using the post-SIT aldosterone level <18 ng/dL alone, which yielded 74% sensitivity and 75% specificity for predicting nonlateralized AVS. Our clinical prediction criterion appears to accurately determine a subset of patients with bilateral PA who could avoid unnecessary AVS and immediately commence with medical treatment.
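The combined criterion is a simple conjunction of three conditions and can be expressed directly in code. A sketch (the CT-finding labels are our own; thresholds are as quoted in the abstract):

```python
def likely_bilateral_pa(potassium_mmol_l, post_sit_aldosterone_ng_dl, ct_finding):
    """Combined criterion from the study: serum potassium >= 3.5 mmol/L,
    post-SIT aldosterone < 18 ng/dL, and either no tumor or a bilateral
    tumor on CT. ct_finding is one of 'none', 'bilateral', 'unilateral'
    (labels are ours, for illustration)."""
    return (potassium_mmol_l >= 3.5
            and post_sit_aldosterone_ng_dl < 18.0
            and ct_finding in ("none", "bilateral"))
```

Patients for whom the function returns True are those the study suggests could skip AVS and start medical treatment; all others would still proceed to sampling.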

  3. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
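Richardson extrapolation, used above to isolate truncation error, combines solutions at two mesh sizes to cancel the leading error term. A generic sketch of the idea (not the paper's finite element code), demonstrated on a second-order central difference:

```python
import math

def richardson(coarse, fine, order, ratio=2.0):
    """Richardson extrapolation: combine A(ratio*h) ('coarse') and A(h)
    ('fine') for a method of the given order to cancel the leading
    O(h^order) truncation-error term."""
    return fine + (fine - coarse) / (ratio ** order - 1.0)

def dcentral(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Estimate d/dx sin(x) at x = 1 on two step sizes, then extrapolate
d_coarse = dcentral(math.sin, 1.0, 0.2)
d_fine = dcentral(math.sin, 1.0, 0.1)
d_rich = richardson(d_coarse, d_fine, order=2)
```

The extrapolated value is fourth-order accurate here, so its error is far smaller than either raw estimate; comparing the three against the exact derivative exposes the truncation error, which is the role extrapolation plays in the study.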

  4. Perceived Physician-informed Weight Status Predicts Accurate Weight Self-Perception and Weight Self-Regulation in Low-income, African American Women.

    PubMed

    Harris, Charlie L; Strayhorn, Gregory; Moore, Sandra; Goldman, Brian; Martin, Michelle Y

    2016-01-01

    Obese African American women under-appraise their body mass index (BMI) classification and report fewer weight loss attempts than women who accurately appraise their weight status. This cross-sectional study examined whether physician-informed weight status could predict weight self-perception and weight self-regulation strategies in obese women. A convenience sample of 118 low-income women completed a survey assessing demographic characteristics, comorbidities, weight self-perception, and weight self-regulation strategies. BMI was calculated during nurse triage. Binary logistic regression models were performed to test hypotheses. The odds of obese accurate appraisers having been informed about their weight status were six times greater than those of under-appraisers. The odds of those using an "approach" self-regulation strategy having been physician-informed were four times greater compared with those using an "avoidance" strategy. Physicians are uniquely positioned to influence accurate weight self-perception and adaptive weight self-regulation strategies in underserved women, reducing their risk for obesity-related morbidity.

  5. Raoult's law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments.

    PubMed

    Bowler, Michael G; Bowler, David R; Bowler, Matthew W

    2017-04-01

    The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F 68 , 111-114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult's law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult's law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult's law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample.
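Raoult's law, as confirmed above, says the equilibrium vapour pressure of the solvent is proportional to its mole fraction, so for an ideal aqueous solution the equilibrium RH equals the mole fraction of water. A minimal sketch under that ideal-solution assumption (the example composition is ours):

```python
def equilibrium_rh(moles_water, moles_solute):
    """Raoult's law for an ideal solution: p/p0 equals the mole fraction
    of water, so the equilibrium RH (%) is 100 * x_water."""
    x_water = moles_water / (moles_water + moles_solute)
    return 100.0 * x_water

# 1 mol of a fully dissolved, non-volatile solute in ~1 L of water (55.5 mol)
rh = equilibrium_rh(55.5, 1.0)  # ≈ 98.2% RH
```

For polymer precipitants, where solvent and solute molecules differ greatly in size, the mole fraction would need the modified treatment the authors describe, so this simple form applies only to small-molecule solutes.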

  6. Predicted osteotomy planes are accurate when using patient-specific instrumentation for total knee arthroplasty in cadavers: a descriptive analysis.

    PubMed

    Kievit, A J; Dobbe, J G G; Streekstra, G J; Blankevoort, L; Schafroth, M U

    2018-06-01

    Malalignment of implants is a major source of failure during total knee arthroplasty. To achieve more accurate 3D planning and execution of the osteotomy cuts during surgery, the Signature (Biomet, Warsaw) patient-specific instrumentation (PSI) was used to produce pin guides for the positioning of the osteotomy blocks by means of computer-aided manufacture based on CT scan images. The research question of this study is: what is the transfer accuracy of osteotomy planes predicted by the Signature PSI system for preoperative 3D planning and intraoperative block-guided pin placement to perform total knee arthroplasty procedures? The transfer accuracy achieved by using the Signature PSI system was evaluated by comparing the osteotomy planes predicted preoperatively with the osteotomy planes seen intraoperatively in human cadaveric legs. Outcomes were measured in terms of translational and rotational errors (varus, valgus, flexion, extension and axial rotation) for both tibia and femur osteotomies. Average translational errors between the osteotomy planes predicted using the Signature system and the actual osteotomy planes achieved was 0.8 mm (± 0.5 mm) for the tibia and 0.7 mm (± 4.0 mm) for the femur. Average rotational errors in relation to predicted and achieved osteotomy planes were 0.1° (± 1.2°) of varus and 0.4° (± 1.7°) of anterior slope (extension) for the tibia, and 2.8° (± 2.0°) of varus and 0.9° (± 2.7°) of flexion and 1.4° (± 2.2°) of external rotation for the femur. The similarity between osteotomy planes predicted using the Signature system and osteotomy planes actually achieved was excellent for the tibia although some discrepancies were seen for the femur. The use of 3D system techniques in TKA surgery can provide accurate intraoperative guidance, especially for patients with deformed bone, tailored to individual patients and ensure better placement of the implant.

  7. Towards Accurate Ab Initio Predictions of the Spectrum of Methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2001-01-01

We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  8. Variability in the Propagation Phase of CFD-Based Noise Prediction: Summary of Results From Category 8 of the BANC-III Workshop

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme

    2015-01-01

The usage of Computational Fluid Dynamics (CFD) in noise prediction typically has been a two-part process: accurately predicting the flow conditions in the near-field and then propagating the noise from the near-field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has been devoted to assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop which explores variability in the propagation phase of CFD-based noise prediction. This includes two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement between three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.

  9. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  10. Accurate prediction of cardiorespiratory fitness using cycle ergometry in minimally disabled persons with relapsing-remitting multiple sclerosis.

    PubMed

    Motl, Robert W; Fernhall, Bo

    2012-03-01

    To examine the accuracy of predicting peak oxygen consumption (VO(2peak)) primarily from peak work rate (WR(peak)) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO(2peak) and WR(peak). There were strong, statistically significant associations between measured and predicted VO(2peak) in the overall sample (R(2)=.89, standard error of the estimate=127.4 mL/min) and subsamples with (R(2)=.89, standard error of the estimate=131.3 mL/min) and without (R(2)=.85, standard error of the estimate=126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO(2peak) within 10% of its true value in 95 of every 100 subjects with MS. Peak VO(2) can be accurately predicted in persons with RRMS who have minimal disability as it is in controls by using established equations and WR(peak) recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
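The abstract does not spell out the prediction equation it validates. One widely used "established equation" of this type is the ACSM leg-cycling formula, sketched here as an assumption for illustration rather than as the study's exact model:

```python
def predict_vo2peak_ml_min(peak_watts, body_mass_kg):
    """ACSM leg-cycle ergometry estimate (an assumed form, not necessarily
    the study's equation): VO2 (mL/min) = 10.8 * power (W) + 7 * mass (kg),
    where the 7 * mass term covers resting plus unloaded-cycling oxygen cost."""
    return 10.8 * peak_watts + 7.0 * body_mass_kg

# e.g. a hypothetical 60 kg participant reaching a peak work rate of 150 W
vo2 = predict_vo2peak_ml_min(150.0, 60.0)  # ≈ 2040 mL/min
```

The study's point is that a regression of this kind, driven mainly by WR(peak), predicts measured VO(2peak) about as accurately in minimally disabled RRMS as in controls.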

  11. Shedding light on the variability of optical skin properties: finding a path towards more accurate prediction of light propagation in human cutaneous compartments

    PubMed Central

    Mignon, C.; Tobin, D. J.; Zeitouny, M.; Uzunbajakava, N. E.

    2018-01-01

Finding a path towards a more accurate prediction of light propagation in human skin remains an aspiration of biomedical scientists working on cutaneous applications both for diagnostic and therapeutic reasons. The objective of this study was to investigate variability of the optical properties of human skin compartments reported in literature, to explore the underlying rationale of this variability, to propose a dataset of values that better represents the in vivo case, and to recommend a solution towards a more accurate prediction of light propagation through cutaneous compartments. To achieve this, we undertook a novel, logical yet simple approach. We first reviewed scientific articles published between 1981 and 2013 that reported on skin optical properties, to reveal the spread in the reported quantitative values. We found variations of up to 100-fold. Then we extracted the most trustworthy datasets guided by a rule that the spectral properties should reflect the specific biochemical composition of each of the skin layers. This resulted in the narrowing of the spread in the calculated photon densities to 6-fold. We conclude with a recommendation to use the identified most robust datasets when estimating light propagation in human skin using Monte Carlo simulations. Alternatively, otherwise follow our proposed strategy to screen any new datasets to determine their biological relevance. PMID:29552418

  12. New analytical model for the ozone electronic ground state potential surface and accurate ab initio vibrational predictions at high energy range.

    PubMed

    Tyuterev, Vladimir G; Kochanov, Roman V; Tashkun, Sergey A; Holka, Filip; Szalay, Péter G

    2013-10-07

An accurate description of the complicated shape of the potential energy surface (PES) and that of the highly excited vibration states is of crucial importance for various unsolved issues in the spectroscopy and dynamics of ozone and remains a challenge for the theory. In this work a new analytical representation is proposed for the PES of the ground electronic state of the ozone molecule in the range covering the main potential well and the transition state towards the dissociation. This model accounts for particular features specific to the ozone PES for large variations of nuclear displacements along the minimum energy path. The impact of the shape of the PES near the transition state (existence of the "reef structure") on vibration energy levels was studied for the first time. The major purpose of this work was to provide accurate theoretical predictions for ozone vibrational band centres at the energy range near the dissociation threshold, which would be helpful for understanding the very complicated high-resolution spectra and their analyses currently in progress. Extended ab initio electronic structure calculations were carried out enabling the determination of the parameters of a minimum energy path PES model resulting in a new set of theoretical vibrational levels of ozone. A comparison with recent high-resolution spectroscopic data on the vibrational levels gives the root-mean-square deviations below 1 cm(-1) for ozone band centres up to 90% of the dissociation energy. New ab initio vibrational predictions represent a significant improvement with respect to all previously available calculations.

  13. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  14. Moving Toward Integrating Gene Expression Profiling Into High-Throughput Testing: A Gene Expression Biomarker Accurately Predicts Estrogen Receptor α Modulation in a Microarray Compendium.

    PubMed

    Ryan, Natalia; Chorley, Brian; Tice, Raymond R; Judson, Richard; Corton, J Christopher

    2016-05-01

    Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including "very weak" agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. Published by Oxford University Press on behalf of the Society of Toxicology 2016. This work is written by US Government employees and is in the public domain in the US.

  15. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

    NASA Astrophysics Data System (ADS)

    Xie, Tian; Grossman, Jeffrey C.

    2018-04-01

    The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.

  16. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered by several factors. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences in numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  17. Lower NIH stroke scale scores are required to accurately predict a good prognosis in posterior circulation stroke.

    PubMed

    Inoa, Violiza; Aron, Abraham W; Staff, Ilene; Fortunato, Gilbert; Sansing, Lauren H

    2014-01-01

    The NIH stroke scale (NIHSS) is an indispensable tool that aids in the determination of acute stroke prognosis and decision making. Patients with posterior circulation (PC) strokes often present with lower NIHSS scores, which may result in the withholding of thrombolytic treatment from these patients. However, whether these lower initial NIHSS scores predict better long-term prognoses is uncertain. We aimed to assess the utility of the NIHSS at presentation for predicting the functional outcome at 3 months in anterior circulation (AC) versus PC strokes. This was a retrospective analysis of a large prospectively collected database of adults with acute ischemic stroke. Univariate and multivariate analyses were conducted to identify factors associated with outcome. Additional analyses were performed to determine the receiver operating characteristic (ROC) curves for NIHSS scores and outcomes in AC and PC infarctions. Both the optimal cutoffs for maximal diagnostic accuracy and the cutoffs to obtain >80% sensitivity for poor outcomes were determined in AC and PC strokes. The analysis included 1,197 patients with AC stroke and 372 with PC stroke. The median initial NIHSS score for patients with AC strokes was 7 and for PC strokes it was 2. The majority (71%) of PC stroke patients had baseline NIHSS scores ≤4, and 15% of these 'minor' stroke patients had a poor outcome at 3 months. ROC analysis identified that the optimal NIHSS cutoff for outcome prediction after infarction in the AC was 8 and for infarction in the PC it was 4. To achieve >80% sensitivity for detecting patients with a subsequent poor outcome, the NIHSS cutoff for infarctions in the AC was 4 and for infarctions in the PC it was 2. The NIHSS cutoff that most accurately predicts outcomes is 4 points higher in AC compared to PC infarctions. 
There is potential for poor outcomes in patients with PC strokes and low NIHSS scores, suggesting that thrombolytic treatment should not be withheld from these patients.
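
    The circulation-specific cutoffs reported above can be expressed as a small decision helper. This is an illustrative sketch only; the function name and the "score at or above cutoff flags high risk" convention are assumptions, not taken from the paper:

```python
# NIHSS cutoffs from the abstract; whether the cutoff itself counts as
# high-risk is not stated there, so ">= cutoff" is assumed here.
OPTIMAL_CUTOFF = {"AC": 8, "PC": 4}    # maximal diagnostic accuracy
SENSITIVE_CUTOFF = {"AC": 4, "PC": 2}  # >80% sensitivity for poor outcome

def high_risk(nihss, circulation, cutoffs=SENSITIVE_CUTOFF):
    """Flag a stroke patient as high-risk for a poor 3-month outcome.

    circulation: "AC" (anterior) or "PC" (posterior).
    """
    return nihss >= cutoffs[circulation]
```

    Note that under these cutoffs a PC patient with NIHSS 2 is already flagged as high-risk, while an AC patient is not flagged until NIHSS 4.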

  18. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  19. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    DOE PAGES

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; ...

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  20. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    PubMed

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  1. Accurate prediction of acute fish toxicity of fragrance chemicals with the RTgill-W1 cell assay.

    PubMed

    Natsch, Andreas; Laue, Heike; Haupt, Tina; von Niederhäusern, Valentin; Sanders, Gordon

    2018-03-01

    Testing for acute fish toxicity is an integral part of the environmental safety assessment of chemicals. A true replacement of primary fish tissue was recently proposed using cell viability in a fish gill cell line (RTgill-W1) as a means of predicting acute toxicity, showing good predictivity on 35 chemicals. To promote regulatory acceptance, the predictivity and applicability domain of novel tests need to be carefully evaluated on chemicals with existing high-quality in vivo data. We applied the RTgill-W1 cell assay to 38 fragrance chemicals with a wide range of both physicochemical properties and median lethal concentration (LC50) values and representing a diverse range of chemistries. A strong correlation (R2 = 0.90-0.94) between the logarithmic in vivo LC50 values, based on fish mortality, and the logarithmic in vitro median effect concentration (EC50) values based on cell viability was observed. A leave-one-out analysis illustrates a median under-/overprediction from in vitro EC50 values to in vivo LC50 values by a factor of 1.5. This assay offers a simple, accurate, and reliable alternative to in vivo acute fish toxicity testing for chemicals, presumably acting mainly by a narcotic mode of action. Furthermore, the present study provides validation of the predictivity of the RTgill-W1 assay on a completely independent set of chemicals that had not been previously tested and indicates that fragrance chemicals are clearly within the applicability domain. Environ Toxicol Chem 2018;37:931-941. © 2017 SETAC.

  2. Rapid and Accurate Evaluation of the Quality of Commercial Organic Fertilizers Using Near Infrared Spectroscopy

    PubMed Central

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results of the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313

  3. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results of the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
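
    As a rough illustration of the NIR-PLS idea, the sketch below implements a minimal NIPALS-style PLS1 regression in NumPy. The study's actual spectral preprocessing and model selection are not described in the abstract, so everything here (component count, data shapes, variable names) is assumed:

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Minimal NIPALS-style PLS1: regress one response y (e.g. organic
    matter content) on predictors X (e.g. NIR absorbances)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xd, yd = X - x_mean, y - y_mean          # centered working copies
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xd.T @ yd                        # weight direction
        if np.linalg.norm(w) < 1e-12:        # y residual fully explained
            break
        w /= np.linalg.norm(w)
        t = Xd @ w                           # scores
        tt = t @ t
        p = Xd.T @ t / tt                    # X loadings
        b = (yd @ t) / tt                    # regression on scores
        W.append(w); P.append(p); q.append(b)
        Xd = Xd - np.outer(t, p)             # deflate X
        yd = yd - b * t                      # deflate y
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)   # final regression vector
    return coef, x_mean, y_mean

def pls1_predict(X, coef, x_mean, y_mean):
    return (X - x_mean) @ coef + y_mean
```

    With as many components as predictors this reduces to ordinary least squares; in practice the component count would be chosen by cross-validation against the wet-chemistry reference values.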

  4. Accurate and scalable social recommendation using mixed-membership stochastic block models.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta

    2016-12-13

    With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
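
    The prediction step of the model described above has a simple closed form once the mixed group memberships and per-group-pair rating distributions have been inferred (the EM inference itself is omitted here; the arrays below are illustrative placeholders, not the paper's notation):

```python
import numpy as np

def rating_distribution(theta_u, eta_i, p):
    """Predicted rating distribution for one user-item pair:

        Pr(r | u, i) = sum over k, l of theta_u[k] * eta_i[l] * p[k, l, r]

    theta_u : user's mixture over user groups (sums to 1)
    eta_i   : item's mixture over item groups (sums to 1)
    p       : p[k, l, :] is the rating distribution for group pair (k, l)
    """
    return np.einsum('k,l,klr->r', theta_u, eta_i, p)
```

    Because the rating distribution depends freely on the group pair rather than linearly on a similarity score, a mixed-membership pair can, for example, place most of its probability mass on the extreme ratings.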

  5. Accurate and scalable social recommendation using mixed-membership stochastic block models

    PubMed Central

    Godoy-Lorite, Antonia; Moore, Cristopher

    2016-01-01

    With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773

  6. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  7. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used. The model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to decide an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), which is one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  8. Accurate genomic predictions for BCWD resistance in rainbow trout are achieved using low-density SNP panels: Evidence that long-range LD is a major contributing factor.

    PubMed

    Vallejo, Roger L; Silva, Rafael M O; Evenhuis, Jason P; Gao, Guangtu; Liu, Sixin; Parsons, James E; Martin, Kyle E; Wiens, Gregory D; Lourenco, Daniela A L; Leeds, Timothy D; Palti, Yniv

    2018-06-05

    Previously, accurate genomic predictions for Bacterial cold water disease (BCWD) resistance in rainbow trout were obtained using a medium-density single nucleotide polymorphism (SNP) array. Here, the impact of lower-density SNP panels on the accuracy of genomic predictions was investigated in a commercial rainbow trout breeding population. Using progeny performance data, the accuracy of genomic breeding values (GEBV) using 35K, 10K, 3K, 1K, 500, 300 and 200 SNP panels as well as a panel with 70 quantitative trait loci (QTL)-flanking SNP was compared. The GEBVs were estimated using the Bayesian method BayesB, single-step GBLUP (ssGBLUP) and weighted ssGBLUP (wssGBLUP). The accuracy of GEBVs remained high despite the sharp reductions in SNP density, and even with 500 SNP, accuracy was higher than the pedigree-based prediction (0.50-0.56 versus 0.36). Furthermore, the prediction accuracy with the 70 QTL-flanking SNP (0.65-0.72) was similar to the panel with 35K SNP (0.65-0.71). Genomewide linkage disequilibrium (LD) analysis revealed strong LD (r2 ≥ 0.25) spanning on average over 1 Mb across the rainbow trout genome. This long-range LD likely contributed to the accurate genomic predictions with the low-density SNP panels. Population structure analysis supported the hypothesis that long-range LD in this population may be caused by admixture. Results suggest that lower-cost, low-density SNP panels can be used for implementing genomic selection for BCWD resistance in rainbow trout breeding programs. © 2018 The Authors. This article is a U.S. Government work and is in the public domain in the USA. Journal of Animal Breeding and Genetics published by Blackwell Verlag GmbH.

  9. Accurate RNA 5-methylcytosine site prediction based on heuristic physical-chemical properties reduction and classifier ensemble.

    PubMed

    Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun

    2018-06-01

    RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Due to the difficulty and expensive costs of identifying m5C sites with wet-lab techniques, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and classifier ensemble. HPCR extracts multiple reducts of physical-chemical properties for encoding discriminative features, while the classifier ensemble is applied to integrate multiple base predictors, each of which is trained based on a separate reduct of the physical-chemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. The NAFLD Index: A Simple and Accurate Screening Tool for the Prediction of Non-Alcoholic Fatty Liver Disease.

    PubMed

    Ichino, Naohiro; Osakabe, Keisuke; Sugimoto, Keiko; Suzuki, Koji; Yamada, Hiroya; Takai, Hiroji; Sugiyama, Hiroko; Yukitake, Jun; Inoue, Takashi; Ohashi, Koji; Hata, Tadayoshi; Hamajima, Nobuyuki; Nishikawa, Toru; Hashimoto, Senju; Kawabe, Naoto; Yoshioka, Kentaro

    2015-01-01

    Non-alcoholic fatty liver disease (NAFLD) is a common debilitating condition in many industrialized countries that increases the risk of cardiovascular disease. The aim of this study was to derive a simple and accurate screening tool for the prediction of NAFLD in the Japanese population. A total of 945 participants, 279 men and 666 women living in Hokkaido, Japan, were enrolled among residents who attended a health check-up program from 2010 to 2014. Participants with an alcohol consumption > 20 g/day and/or a chronic liver disease, such as chronic hepatitis B, chronic hepatitis C or autoimmune hepatitis, were excluded from this study. Clinical and laboratory data were examined to identify predictive markers of NAFLD. A new predictive index for NAFLD, the NAFLD index, was constructed for men and for women. The NAFLD index for men = -15.5693 + 0.3264 [BMI] + 0.0134 [triglycerides (mg/dl)], and for women = -31.4686 + 0.3683 [BMI] + 2.5699 [albumin (g/dl)] + 4.6740 [ALT/AST] - 0.0379 [HDL cholesterol (mg/dl)]. The AUROC of the NAFLD index for men and for women was 0.87 (95% CI 0.88-1.60) and 0.90 (95% CI 0.66-1.02), respectively. The cut-off point of -5.28 for men predicted NAFLD with an accuracy of 82.8%. For women, the cut-off point of -7.65 predicted NAFLD with an accuracy of 87.7%. A new index for the non-invasive prediction of NAFLD, the NAFLD index, was constructed using available clinical and laboratory data. This index is a simple screening tool to predict the presence of NAFLD.
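
    The two index formulas and cut-off points quoted above translate directly into code. This is a transcription of the published equations; the function and variable names are illustrative:

```python
def nafld_index_men(bmi, triglycerides_mg_dl):
    """NAFLD index for men, as given in the abstract."""
    return -15.5693 + 0.3264 * bmi + 0.0134 * triglycerides_mg_dl

def nafld_index_women(bmi, albumin_g_dl, alt_ast_ratio, hdl_mg_dl):
    """NAFLD index for women, as given in the abstract."""
    return (-31.4686 + 0.3683 * bmi + 2.5699 * albumin_g_dl
            + 4.6740 * alt_ast_ratio - 0.0379 * hdl_mg_dl)

# Cut-off points from the study: an index above -5.28 (men) or
# above -7.65 (women) predicts NAFLD.
def predicts_nafld(index, cutoff):
    return index > cutoff
```

    For example, a man with BMI 30 and triglycerides 200 mg/dl scores about -3.10, above the -5.28 cut-off, so the index predicts NAFLD.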

  11. How Accurately Can We Predict Eclipses for Algol? (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2016-06-01

    (Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?

  12. Knotty: Efficient and Accurate Prediction of Complex RNA Pseudoknot Structures.

    PubMed

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo; Will, Sebastian

    2018-06-01

    The computational prediction of RNA secondary structure by free energy minimization has become an important tool in RNA research. However in practice, energy minimization is mostly limited to pseudoknot-free structures or rather simple pseudoknots, not covering many biologically important structures such as kissing hairpins. Algorithms capable of predicting sufficiently complex pseudoknots (for sequences of length n) used to have extreme complexities, e.g. Pknots (Rivas and Eddy, 1999) has O(n^6) time and O(n^4) space complexity. The algorithm CCJ (Chen et al., 2009) dramatically improves the asymptotic run time for predicting complex pseudoknots (handling almost all relevant pseudoknots, while being slightly less general than Pknots), but this came at the cost of large constant factors in space and time, which strongly limited its practical application (∼200 bases already require 256GB space). We present a CCJ-type algorithm, Knotty, that handles the same comprehensive pseudoknot class of structures as CCJ with improved space complexity of Θ(n^3 + Z); due to the applied technique of sparsification, the number of "candidates", Z, appears to grow significantly slower than n^4 on our benchmark set (which includes pseudoknotted RNAs up to 400 nucleotides). In terms of run time over this benchmark, Knotty clearly outperforms Pknots and the original CCJ implementation, CCJ 1.0; Knotty's space consumption fundamentally improves over CCJ 1.0, being on a par with the space-economic Pknots. By comparing to CCJ 2.0, our unsparsified Knotty variant, we demonstrate the isolated effect of sparsification. Moreover, Knotty employs the state-of-the-art energy model of "HotKnots DP09", which results in superior prediction accuracy over Pknots. Our software is available at https://github.com/HosnaJabbari/Knotty. will@tbi.unvie.ac.at. Supplementary data are available at Bioinformatics online.

  13. Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.

    PubMed

    Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle

    2017-12-12

    Macrocycles have been emerging as a very important drug class in the past few decades largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small molecule inhibitors with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small molecule drugs and the complexity of the structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors poses a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between the macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is to our knowledge the first time where the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.

  14. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
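
    One simple form of the vector-difference idea mentioned above (estimating the rotation that carries one star measurement into the next) can be sketched as follows. This is an assumed formulation, not necessarily the paper's exact one, and the sign convention relative to body rates depends on the chosen frames:

```python
import numpy as np

def angular_velocity_from_star_vectors(v1, v2, dt):
    """Estimate the angular velocity of the apparent star motion from
    two unit star vectors measured dt seconds apart.

    Rotation axis: v1 x v2 (normalized); rotation angle: the angle
    between v1 and v2. Valid for a single star and small rotations.
    """
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    axis = np.cross(v1, v2)
    s = np.linalg.norm(axis)
    if s < 1e-12:                           # no measurable rotation
        return np.zeros(3)
    angle = np.arctan2(s, np.dot(v1, v2))   # robust angle between vectors
    return axis / s * angle / dt
```

    The estimated rate can then be used to propagate the predicted star position to the next frame, with the gyros taking over under high dynamic conditions as described above.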

  15. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

In the current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple-reference-frames approach for a complete turbine stage with two different interface models: a steady circumferential averaging approach called a mixing plane model, and a time-accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds-Averaged Navier-Stokes (RANS) finite volume solver with a standard k-ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed at approximately the mid axial chord of the NGV suction surface. The first is a casing passage vortex, which occurs at the corner formed by the NGV suction surface and the casing and is created by the interaction of the passage flow and the radially inward flow; the second, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure-side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure-side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement.

  16. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time point models achieved R² > 0.90. Among the various 3-concentration-time point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC0-t prediction of saroglitazar. The same models, when applied to the AUC 0-t
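    As a sketch of how such a limited sampling model works, the snippet below fits an ordinary least-squares equation AUC0-t ≈ b0 + b1·C(0.5 h) + b2·C(2 h) + b3·C(8 h) and computes the validation metrics named in the abstract (mean prediction error, mean absolute prediction error, root mean square error). All numbers are invented for illustration; this is not the published saroglitazar model.

```python
import math

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yk for r, yk in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, n):
            f = A[k][col] / A[col][col]
            for c in range(col, n):
                A[k][c] -= f * A[col][c]
            b[k] -= f * b[col]
    coef = [0.0] * n                          # back substitution
    for k in range(n - 1, -1, -1):
        coef[k] = (b[k] - sum(A[k][c] * coef[c]
                              for c in range(k + 1, n))) / A[k][k]
    return coef

# hypothetical training data: concentrations at 0.5, 2 and 8 h, observed AUC0-t
conc = [(1.2, 3.4, 0.9), (0.8, 2.9, 0.7), (1.5, 4.1, 1.1),
        (1.0, 3.0, 0.8), (1.3, 3.8, 1.0)]
auc = [45.1, 38.4, 52.6, 40.7, 48.8]
X = [[1.0, *c] for c in conc]
beta = fit_ols(X, auc)

pred = [sum(b * x for b, x in zip(beta, row)) for row in X]
mpe = 100 * sum((p - o) / o for p, o in zip(pred, auc)) / len(auc)
mape = 100 * sum(abs(p - o) / o for p, o in zip(pred, auc)) / len(auc)
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, auc)) / len(auc))
```

    In the study itself the fit is made on training subjects and the <30% criterion is applied to held-out internal and external datasets rather than, as here, to the training data.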

  17. Towards more accurate and reliable predictions for nuclear applications

    NASA Astrophysics Data System (ADS)

    Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François

    2017-09-01

The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays, mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements in determining nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.

  18. Towards accurate ab initio predictions of the vibrational spectrum of methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.

    2002-01-01

    We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  19. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    PubMed

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more proactive interventions. The very nature of EMRs, however, makes the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
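    The patient-similarity approach in (2) can be sketched in a few lines: score a new patient by the outcome rate among the k most similar training patients, then evaluate with the ROC AUC (computed here via the Mann-Whitney statistic). The features and toy cohort below are invented; real MIMIC-II features would be far richer.

```python
import math

def similarity_score(train, query, k=3):
    """Patient-similarity prediction: mortality rate among the k
    Euclidean-nearest training patients."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return sum(label for _, label in nearest) / k

def roc_auc(scores, labels):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy cohort: ((age, lactate), died); math.dist needs Python 3.8+
train = [((70, 4.0), 1), ((75, 3.5), 1), ((40, 1.0), 0), ((35, 0.8), 0)]
queries, outcomes = [(72, 3.8), (38, 0.9)], [1, 0]
scores = [similarity_score(train, q) for q in queries]
```

    Note the scaling problem the paper reports falls out of the structure: every query rescans the whole cohort, whereas a fitted model pays its cost once at training time.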

  20. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
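    The core UID idea behind MAF can be illustrated independently of the published pipeline: reads sharing a unique molecular identifier are collapsed to a per-position majority-vote consensus, so sequencing errors are corrected and each original transcript is counted once regardless of how strongly PCR amplified it. This is a toy sketch, not the MAF bioinformatic pipeline itself.

```python
from collections import Counter, defaultdict

def uid_consensus(reads):
    """Collapse (uid, sequence) reads into one consensus sequence per UID
    by per-position majority vote over all reads carrying that UID."""
    by_uid = defaultdict(list)
    for uid, seq in reads:
        by_uid[uid].append(seq)
    consensus = {}
    for uid, seqs in by_uid.items():
        consensus[uid] = "".join(
            Counter(bases).most_common(1)[0][0] for bases in zip(*seqs)
        )
    return consensus

reads = [
    ("UID1", "ACGT"), ("UID1", "ACGT"), ("UID1", "ACGA"),  # one read error
    ("UID2", "TTGA"),
    ("UID1", "ACGT"),  # UID1 over-amplified: 4 reads, still one molecule
]
cons = uid_consensus(reads)
# corrected frequency: 2 molecules (2 UIDs), not 5 reads
```

    Counting unique UIDs per clone is also the basis of the intraclonal diversity index described in the abstract.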

  2. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

The continued scaling of transistors toward smaller devices with similar (or larger) drive current per μm and faster switching increases the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators such as SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour Extraction (ECE) from manufactured transistors, capturing image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 μm² 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. 4 of the 6 transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors, in terms of nominal values and variability of drive current and leakage current, is presented. We also used the methodology to analyze an entire SRAM block array, including a study of isolation leakage and variability.

  3. Estimating energy expenditure in vascular surgery patients: Are predictive equations accurate enough?

    PubMed

    Suen, J; Thomas, J M; Delaney, C L; Spark, J I; Miller, M D

    2016-12-01

Malnutrition is prevalent in vascular surgical patients who commonly seek tertiary care at advanced stages of disease. Adjunct nutrition support is therefore pertinent to optimise patient outcomes. To negate consequences related to excessive or suboptimal dietary energy intake, it is essential to accurately determine energy expenditure and subsequent requirements. This study aims to compare resting energy expenditure (REE) measured by indirect calorimetry, a commonly used comparator, to REE estimated by predictive equations (Schofield, Harris-Benedict equations and Miller equation) to determine the most suitable equation for vascular surgery patients. Data were collected from four studies that measured REE in 77 vascular surgery patients. Bland-Altman analyses were conducted to explore agreement. Presence of fixed or proportional bias was assessed by linear regression analyses. In comparison to measured REE, on average REE was overestimated when Schofield (+857 kJ/day), Harris-Benedict (+801 kJ/day) and Miller (+71 kJ/day) equations were used. Wide limits of agreement led to over- or underestimation ranging from 1552 to 1755 kJ. Proportional bias was absent in the Schofield (R² = 0.005, p = 0.54) and Harris-Benedict equations (R² = 0.045, p = 0.06) but was present in the Miller equation (R² = 0.210, p < 0.01) even after logarithmic transformation (R² = 0.213, p < 0.01). Whilst the Miller equation tended to overestimate resting energy expenditure and was affected by proportional bias, the limits of agreement and mean bias were smaller compared to the Schofield and Harris-Benedict equations. This suggests that it is the preferred predictive equation for vascular surgery patients. Future research to refine the Miller equation to improve its overall accuracy will better inform the provision of nutritional support for vascular surgery patients and subsequently improve outcomes. Alternatively, an equation might be developed specifically for use with
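    The Bland-Altman agreement analysis used above is straightforward to reproduce: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD of those differences. The REE numbers below are invented for illustration.

```python
import statistics as st

def bland_altman(measured, estimated):
    """Mean bias and 95% limits of agreement between two methods,
    computed on the paired differences (estimated − measured)."""
    diffs = [e - m for m, e in zip(measured, estimated)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical REE values in kJ/day: indirect calorimetry vs an equation
measured = [7000, 7500, 8000, 6500]
estimated = [7800, 8300, 8900, 7200]
bias, lo, hi = bland_altman(measured, estimated)
```

    A systematic overestimate shows up as a positive bias; wide limits of agreement, as in the study, mean individual patients can be far off even when the mean bias is small.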

  4. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of past dynamics of input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT can alleviate range anxiety issues.
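    The final step of such a framework reduces to a simple calculation once the state estimates are available. A minimal sketch, assuming a fixed cutoff SOC and a constant predicted discharge current (in the paper the SOC comes from the unscented Kalman filter and the future current from wavelet statistics of past currents; both are plain inputs here):

```python
def remaining_dischargeable_time(soc, capacity_ah, cutoff_soc, predicted_current_a):
    """RDT in hours: usable charge above the cutoff SOC divided by the
    predicted mean discharge current."""
    usable_ah = max(soc - cutoff_soc, 0.0) * capacity_ah
    return usable_ah / predicted_current_a

# (0.8 − 0.1) × 2.0 Ah = 1.4 Ah; 1.4 Ah / 1.4 A ≈ 1.0 h
rdt_h = remaining_dischargeable_time(soc=0.8, capacity_ah=2.0,
                                     cutoff_soc=0.1, predicted_current_a=1.4)
```

    The accuracy of the result is dominated by the quality of the SOC estimate and the current forecast, which is why the paper invests in the UKF and the wavelet-based current prediction.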

  5. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Computed tomography screening for lung cancer: results of ten years of annual screening and validation of cosmos prediction model.

    PubMed

    Veronesi, G; Maisonneuve, P; Rampinelli, C; Bertolotti, R; Petrella, F; Spaggiari, L; Bellomi, M

    2013-12-01

    It is unclear how long low-dose computed tomographic (LDCT) screening should continue in populations at high risk of lung cancer. We assessed outcomes and the predictive ability of the COSMOS prediction model in volunteers screened for 10 years. Smokers and former smokers (>20 pack-years), >50 years, were enrolled over one year (2000-2001), receiving annual LDCT for 10 years. The frequency of screening-detected lung cancers was compared with COSMOS and Bach risk model estimates. Among 1035 recruited volunteers (71% men, mean age 58 years) compliance was 65% at study end. Seventy-one (6.95%) lung cancers were diagnosed, 12 at baseline. Disease stage was: IA in 48 (66.6%); IB in 6; IIA in 5; IIB in 2; IIIA in 5; IIIB in 1; IV in 5; and limited small cell cancer in 3. Five- and ten-year survival were 64% and 57%, respectively, 84% and 65% for stage I. Ten (12.1%) received surgery for a benign lesion. The number of lung cancers detected during the first two screening rounds was close to that predicted by the COSMOS model, while the Bach model accurately predicted frequency from the third year on. Neither cancer frequency nor proportion at stage I decreased over 10 years, indicating that screening should not be discontinued. Most cancers were early stage, and overall survival was high. Only a limited number of invasive procedures for benign disease were performed. The Bach model - designed to predict symptomatic cancers - accurately predicted cancer frequency from the third year, suggesting that overdiagnosis is a minor problem in lung cancer screening. The COSMOS model - designed to estimate screening-detected lung cancers - accurately predicted cancer frequency at baseline and second screening round. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    PubMed

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  8. Accurate prediction of pregnancy viability by means of a simple scoring system.

    PubMed

    Bottomley, Cecilia; Van Belle, Vanya; Kirk, Emma; Van Huffel, Sabine; Timmerman, Dirk; Bourne, Tom

    2013-01-01

What is the performance of a simple scoring system to predict whether women will have an ongoing viable intrauterine pregnancy beyond the first trimester? A simple scoring system using demographic and initial ultrasound variables accurately predicts pregnancy viability beyond the first trimester with an area under the curve (AUC) in a receiver operating characteristic curve of 0.924 [95% confidence interval (CI) 0.900-0.947] on an independent test set. Individual demographic and ultrasound factors, such as maternal age, vaginal bleeding and gestational sac size, are strong predictors of miscarriage. Previous mathematical models have combined individual risk factors with reasonable performance. A simple scoring system derived from a mathematical model that can be easily implemented in clinical practice has not previously been described for the prediction of ongoing viability. This was a prospective observational study in a single early pregnancy assessment centre during a 9-month period. A cohort of 1881 consecutive women undergoing transvaginal ultrasound scan at a gestational age <84 days were included. Women were excluded if the first trimester outcome was not known. Demographic features, symptoms and ultrasound variables were tested for their influence on ongoing viability. Logistic regression was used to determine the influence on first trimester viability from demographics and symptoms alone, ultrasound findings alone and then from all the variables combined. Each model was developed on a training data set, and a simple scoring system was derived from this. This scoring system was tested on an independent test data set. The final outcome based on a total of 1435 participants was an ongoing viable pregnancy in 885 (61.7%) and early pregnancy loss in 550 (38.3%) women. The scoring system using significant demographic variables alone (maternal age and amount of bleeding) to predict ongoing viability gave an AUC of 0.724 (95% CI = 0.692-0.756) in the training set.

  9. Experimental evaluation of radiosity for room sound-field prediction.

    PubMed

    Hodgson, Murray; Nosal, Eva-Marie

    2006-08-01

An acoustical radiosity model was evaluated for how it performs in predicting real room sound fields. This was done by comparing radiosity predictions with experimental results for three existing rooms: a squash court, a classroom, and an office. Radiosity predictions were also compared with those by ray tracing, a "reference" prediction model, for both specular and diffuse surface reflection. Comparisons were made for detailed and discretized echograms, sound-decay curves, sound-propagation curves, and the variations with frequency of four room-acoustical parameters: EDT, RT, D50, and C80. In general, radiosity and diffuse ray tracing gave very similar predictions. Predictions by specular ray tracing were often very different. Radiosity agreed well with experiment in some cases, less well in others. Definitive conclusions regarding the accuracy with which the rooms were modeled, or the accuracy of the radiosity approach, were difficult to draw. The results suggest that radiosity predicts room sound fields with some accuracy, at least as well as diffuse ray tracing and, in general, better than specular ray tracing. The predictions of detailed echograms are less accurate, those of derived room-acoustical parameters more accurate. The results underline the need to develop experimental methods for accurately characterizing the absorptive and reflective characteristics of room surfaces, possibly including phase.
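    The RT values compared here are extracted from predicted decay curves; the basic step is a linear fit of level versus time, with RT60 = 60 dB / |slope|. The sketch below fits the whole (idealized) curve; in practice one fits a restricted range of the Schroeder-integrated decay, e.g. -5 to -35 dB.

```python
def rt60_from_decay(times_s, levels_db):
    """Reverberation time from a linear sound-decay curve: least-squares
    slope of level (dB) vs time (s), then RT60 = 60 / |slope|."""
    n = len(times_s)
    mt = sum(times_s) / n
    ml = sum(levels_db) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times_s, levels_db))
             / sum((t - mt) ** 2 for t in times_s))
    return 60.0 / abs(slope)

# ideal decay dropping 30 dB per second -> RT60 of 2 s
times = [0.0, 0.5, 1.0, 1.5]
levels = [0.0, -15.0, -30.0, -45.0]
rt = rt60_from_decay(times, levels)
```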

  10. Accurate van der Waals force field for gas adsorption in porous materials.

    PubMed

    Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao

    2017-09-05

An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore van der Waals interactions between gas molecules and porous materials. The parameters of the accurate van der Waals force field were determined by QM calculations. To validate the force field, the prediction results from the VDW FF were compared with standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with the experimental measurements. This force field can be applied to the prediction of the gas density (H₂, CO₂, C₂H₄, CH₄, N₂, O₂) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.

  11. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions were not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Control surface hinge moment prediction using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Simpson, Christopher David

    The following research determines the feasibility of predicting control surface hinge moments using various computational methods. A detailed analysis is conducted using a 2D GA(W)-1 airfoil with a 20% plain flap. Simple hinge moment prediction methods are tested, including empirical Datcom relations and XFOIL. Steady-state and time-accurate turbulent, viscous, Navier-Stokes solutions are computed using Fun3D. Hinge moment coefficients are computed. Mesh construction techniques are discussed. An adjoint-based mesh adaptation case is also evaluated. An NACA 0012 45-degree swept horizontal stabilizer with a 25% elevator is also evaluated using Fun3D. Results are compared with experimental wind-tunnel data obtained from references. Finally, the costs of various solution methods are estimated. Results indicate that while a steady-state Navier-Stokes solution can accurately predict control surface hinge moments for small angles of attack and deflection angles, a time-accurate solution is necessary to accurately predict hinge moments in the presence of flow separation. The ability to capture the unsteady vortex shedding behavior present in moderate to large control surface deflections is found to be critical to hinge moment prediction accuracy. Adjoint-based mesh adaptation is shown to give hinge moment predictions similar to a globally-refined mesh for a steady-state 2D simulation.

  13. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound–Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed Central

    2016-01-01

Drug discovery programs frequently target members of the human kinome and try to identify small molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, results reproducibility, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand-target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R₀² of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on the prediction quality of parameters such as number of measurements, Murcko scaffold frequency or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the models' quality allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722

  14. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
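    For a single component, the linear-nonlinear model described here reduces to a dot product with the electrical receptive field followed by a sigmoid. A minimal sketch with made-up ERF weights (the paper estimates the low-dimensional subspace with principal components analysis and fits the nonlinearity to recorded responses):

```python
import math

def spike_probability(stimulus, erf, gain=1.0, threshold=0.0):
    """Linear stage: project per-electrode stimulus amplitudes onto the
    cell's electrical receptive field (ERF). Nonlinear stage: map the
    scalar drive through a sigmoid to a spiking probability."""
    drive = sum(s * w for s, w in zip(stimulus, erf))
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))

erf = [0.5, 0.3, 0.2]                               # made-up sensitivities
p_rest = spike_probability([0.0, 0.0, 0.0], erf)    # zero drive -> 0.5
p_erf = spike_probability([1.0, 0.6, 0.4], erf)     # stimulate along the ERF
```

    Stimulating in proportion to the ERF, as in p_erf, maximizes the drive for a fixed power budget, which is the efficacy argument made in the abstract.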

  15. Predicting β-turns and their types using predicted backbone dihedral angles and secondary structures

    PubMed Central

    2010-01-01

    Background: β-turns are secondary structure elements usually classified as coil. Their prediction is important, because of their role in protein folding and their frequent occurrence in protein chains. Results: We have developed a novel method that predicts β-turns and their types using information from multiple sequence alignments, predicted secondary structures and, for the first time, predicted dihedral angles. Our method uses support vector machines, a supervised classification technique, and is trained and tested on three established datasets of 426, 547 and 823 protein chains. We achieve a Matthews correlation coefficient of up to 0.49, when predicting the location of β-turns, the highest reported value to date. Moreover, the additional dihedral information improves the prediction of β-turn types I, II, IV, VIII and "non-specific", achieving correlation coefficients up to 0.39, 0.33, 0.27, 0.14 and 0.38, respectively. Our results are more accurate than other methods. Conclusions: We have created an accurate predictor of β-turns and their types. Our method, called DEBT, is available online at http://comp.chem.nottingham.ac.uk/debt/. PMID:20673368
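    The Matthews correlation coefficient used above to score β-turn location predictions is computed from a binary confusion matrix. A minimal sketch with made-up per-residue counts, not DEBT's evaluation code:

```python
# Matthews correlation coefficient (MCC) from true/false positives and
# negatives; returns 0.0 when the denominator vanishes, a common convention.
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for turn vs. non-turn residue classification.
print(round(mcc(tp=120, tn=700, fp=90, fn=90), 2))
```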

  16. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission.
Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
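    The random-sampling step above (a uniform 1% draw from a very large sequence collection) can be sketched as a one-pass Bernoulli sample. Record parsing is simplified and the identifiers are made up:

```python
# Sketch: draw a uniform ~1% sample of sequence records from a large stream.
# A fixed seed makes the reference database reproducible across runs.
import random

def sample_records(records, fraction=0.01, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [r for r in records if rng.random() < fraction]

all_ids = [f"UniRef100_{i:07d}" for i in range(100_000)]  # hypothetical IDs
subset = sample_records(all_ids, fraction=0.01)
print(0.008 < len(subset) / len(all_ids) < 0.012)
```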

  17. ROCK I Has More Accurate Prognostic Value than MET in Predicting Patient Survival in Colorectal Cancer.

    PubMed

    Li, Jian; Bharadwaj, Shruthi S; Guzman, Grace; Vishnubhotla, Ramana; Glover, Sarah C

    2015-06-01

    Colorectal cancer remains the second leading cause of death in the United States despite improvements in incidence rates and advancements in screening. The present study evaluated the prognostic value of two tumor markers, MET and ROCK I, which have been noted in other cancers to provide more accurate prognoses of patient outcomes than tumor staging alone. We constructed a tissue microarray from surgical specimens of adenocarcinomas from 108 colorectal cancer patients. Using immunohistochemistry, we examined the expression levels of tumor markers MET and ROCK I, with a pathologist blinded to patient identities and clinical outcomes providing the scoring of MET and ROCK I expression. We then used retrospective analysis of patients' survival data to provide correlations with expression levels of MET and ROCK I. Both MET and ROCK I were significantly over-expressed in colorectal cancer tissues, relative to the unaffected adjacent mucosa. Kaplan-Meier survival analysis revealed that patients' 5-year survival was inversely correlated with levels of expression of ROCK I. In contrast, MET was less strongly correlated with 5-year survival. ROCK I provides better efficacy in predicting patient outcomes, compared to either tumor staging or MET expression. As a result, ROCK I may provide a less invasive method of assessing patient prognoses and directing therapeutic interventions. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
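    The Kaplan-Meier analysis cited above estimates a survival curve from follow-up times with censoring. A minimal estimator with invented follow-up data, not the study's patients:

```python
# Minimal Kaplan-Meier estimator. times: follow-up times (e.g. months);
# events: 1 = death observed, 0 = censored. Returns (time, survival) steps.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / at_risk  # product-limit update
            curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

times  = [6, 12, 12, 20, 34, 34, 50, 60]  # hypothetical follow-up, months
events = [1,  1,  0,  1,  0,  1,  0,  0]
print(kaplan_meier(times, events))
```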

  18. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  19. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a

  20. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a
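    The "learned weighted sum" linking hypothesis in the two records above decodes a task as a weighted sum of mean firing rates, with weights fit from data. A toy sketch with two "neurons" and four trials, fit by least-squares gradient descent; the rates and labels are invented, not recorded IT responses:

```python
# Sketch: fit a weighted sum of firing rates to binary task labels by
# least-squares gradient descent, then decode by thresholding the sum.
def fit_weights(rates, labels, lr=0.1, steps=500):
    """One weight per neuron plus a bias, trained with per-trial updates."""
    n = len(rates[0])
    w, b = [0.0] * n, 0.0
    for _ in range(steps):
        for x, y in zip(rates, labels):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def decode(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0.5 else 0

# Toy population rates for two object classes (two neurons, four trials).
rates  = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
labels = [1, 1, 0, 0]
w, b = fit_weights(rates, labels)
print([decode(x, w, b) for x in rates])
```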

  1. Motor system contribution to action prediction: Temporal accuracy depends on motor experience.

    PubMed

    Stapel, Janny C; Hunnius, Sabine; Meyer, Marlene; Bekkering, Harold

    2016-03-01

    Predicting others' actions is essential for well-coordinated social interactions. In two experiments including an infant population, this study addresses to what extent motor experience of an observer determines prediction accuracy for others' actions. Results show that infants who were proficient crawlers but inexperienced walkers predicted crawling more accurately than walking, whereas age groups mastering both skills (i.e. toddlers and adults) were equally accurate in predicting walking and crawling. Regardless of experience, human movements were predicted more accurately by all age groups than non-human movement control stimuli. This suggests that for predictions to be accurate, the observed act needs to be established in the motor repertoire of the observer. Through the acquisition of new motor skills, we also become better at predicting others' actions. The findings thus stress the relevance of motor experience for social-cognitive development. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Accurate pan-specific prediction of peptide-MHC class II binding affinity with improved binding core identification.

    PubMed

    Andreatta, Massimo; Karosiene, Edita; Rasmussen, Michael; Stryhn, Anette; Buus, Søren; Nielsen, Morten

    2015-11-01

    A key event in the generation of a cellular response against malicious organisms through the endocytic pathway is binding of peptidic antigens by major histocompatibility complex class II (MHC class II) molecules. The bound peptide is then presented on the cell surface where it can be recognized by T helper lymphocytes. NetMHCIIpan is a state-of-the-art method for the quantitative prediction of peptide binding to any human or mouse MHC class II molecule of known sequence. In this paper, we describe an updated version of the method with improved peptide binding register identification. Binding register prediction is concerned with determining the minimal core region of nine residues directly in contact with the MHC binding cleft, a crucial piece of information both for the identification and design of CD4(+) T cell antigens. When applied to a set of 51 crystal structures of peptide-MHC complexes with known binding registers, the new method NetMHCIIpan-3.1 significantly outperformed the earlier 3.0 version. We illustrate the impact of accurate binding core identification for the interpretation of T cell cross-reactivity using tetramer double staining with a CMV epitope and its variants mapped to the epitope binding core. NetMHCIIpan is publicly available at http://www.cbs.dtu.dk/services/NetMHCIIpan-3.1 .
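    Binding-register identification, as described above, amounts to sliding a 9-residue window over the peptide and keeping the highest-scoring offset. The scoring function below is a toy anchor-preference table, not NetMHCIIpan's trained model, and the peptide is invented:

```python
# Sketch: find the 9-mer binding core of a peptide by scoring every offset
# and returning the best one. score_pos(i, aa) -> float for core position i.
def find_core(peptide, score_pos):
    best = max(range(len(peptide) - 8),
               key=lambda o: sum(score_pos(i, peptide[o + i]) for i in range(9)))
    return best, peptide[best:best + 9]

# Toy matrix: reward hydrophobic residues at anchor positions 1, 4, 6, 9
# (0-based 0, 3, 5, 8); all other positions are scored neutrally.
anchors = {0, 3, 5, 8}
def toy_score(i, aa):
    return (1.0 if aa in "FWYLIVM" else 0.0) if i in anchors else 0.0

offset, core = find_core("GKAFVQWIGELKDAA", toy_score)
print(offset, core)
```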

  3. Accurate electrostatic and van der Waals pull-in prediction for fully clamped nano/micro-beams using linear universal graphs of pull-in instability

    NASA Astrophysics Data System (ADS)

    Tahani, Masoud; Askari, Amir R.

    2014-09-01

    In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented that can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental findings and previous geometrically linear and closed-form results published in the literature.
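    For orientation, the textbook lumped parallel-plate model (a much simpler system than the clamped beam above) already yields a closed-form pull-in voltage, V_PI = sqrt(8 k g³ / (27 ε₀ A)), with instability at one third of the gap. A sketch with illustrative, hypothetical parameter values:

```python
# Classic lumped-parameter (single-DOF parallel-plate) pull-in voltage.
# This is the standard textbook baseline, not the paper's beam formulas.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """k: spring constant (N/m); gap: initial gap (m); area: plate area (m^2)."""
    return math.sqrt(8 * k * gap ** 3 / (27 * EPS0 * area))

# Hypothetical micro-beam equivalent: k = 5 N/m, 2 um gap, 100 x 20 um plate.
print(round(pull_in_voltage(5.0, 2e-6, 100e-6 * 20e-6), 2))
```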

  4. The prediction of drug metabolism, tissue distribution, and bioavailability of 50 structurally diverse compounds in rat using mechanism-based absorption, distribution, and metabolism prediction tools.

    PubMed

    De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Gilissen, Ron A; Mackie, Claire E; Nijsen, Marjoleen J

    2007-04-01

    The aim of this study was to assess a physiologically based modeling approach for predicting drug metabolism, tissue distribution, and bioavailability in rat for a structurally diverse set of neutral and moderate-to-strong basic compounds (n = 50). Hepatic blood clearance (CL(h)) was projected using microsomal data and shown to be well predicted, irrespective of the type of hepatic extraction model (80% within 2-fold). Best predictions of CL(h) were obtained disregarding both plasma and microsomal protein binding, whereas strong bias was seen using either blood binding only or both plasma and microsomal protein binding. Two mechanistic tissue composition-based equations were evaluated for predicting volume of distribution (V(dss)) and tissue-to-plasma partitioning (P(tp)). A first approach, which accounted for ionic interactions with acidic phospholipids, resulted in accurate predictions of V(dss) (80% within 2-fold). In contrast, a second approach, which disregarded ionic interactions, was a poor predictor of V(dss) (60% within 2-fold). The first approach also yielded accurate predictions of P(tp) in muscle, heart, and kidney (80% within 3-fold), whereas in lung, liver, and brain, predictions ranged from 47% to 62% within 3-fold. Using the second approach, P(tp) prediction accuracy in muscle, heart, and kidney was on average 70% within 3-fold, and ranged from 24% to 54% in all other tissues. Combining all methods for predicting V(dss) and CL(h) resulted in accurate predictions of the in vivo half-life (70% within 2-fold). Oral bioavailability was well predicted using CL(h) data and Gastroplus Software (80% within 2-fold). These results illustrate that physiologically based prediction tools can provide accurate predictions of rat pharmacokinetics.
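    Hepatic blood clearance projections of the kind described above are commonly made with the well-stirred liver model, CLh = Q·fu·CLint / (Q + fu·CLint). A sketch with an illustrative rat liver blood flow and intrinsic clearance, not the paper's dataset:

```python
# Well-stirred liver model for projecting hepatic blood clearance from
# scaled microsomal intrinsic clearance. Units: mL/min/kg throughout.
def well_stirred_clh(clint, fu=1.0, q_h=55.0):
    """fu = fraction unbound; fu=1.0 disregards binding, as the
    best-performing predictions in the abstract did."""
    return q_h * fu * clint / (q_h + fu * clint)

def two_fold(predicted, observed):
    """The 2-fold acceptance criterion used in the abstract."""
    return 0.5 <= predicted / observed <= 2.0

clh = well_stirred_clh(clint=120.0)  # hypothetical scaled CLint
print(round(clh, 1), two_fold(clh, observed=45.0))
```

    Note the model's built-in ceiling: as CLint grows, CLh approaches hepatic blood flow Q but never exceeds it.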

  5. Metabolite signal identification in accurate mass metabolomics data with MZedDB, an interactive m/z annotation tool utilising predicted ionisation behaviour 'rules'

    PubMed Central

    Draper, John; Enot, David P; Parker, David; Beckmann, Manfred; Snowdon, Stuart; Lin, Wanchang; Zubair, Hassan

    2009-01-01

    Background: Metabolomics experiments using Mass Spectrometry (MS) technology measure the mass to charge ratio (m/z) and intensity of ionised molecules in crude extracts of complex biological samples to generate high dimensional metabolite 'fingerprint' or metabolite 'profile' data. High resolution MS instruments perform routinely with a mass accuracy of < 5 ppm (parts per million) thus providing potentially a direct method for signal putative annotation using databases containing metabolite mass information. Most database interfaces support only simple queries with the default assumption that molecules either gain or lose a single proton when ionised. In reality the annotation process is confounded by the fact that many ionisation products will be not only molecular isotopes but also salt/solvent adducts and neutral loss fragments of original metabolites. This report describes an annotation strategy that will allow searching based on all potential ionisation products predicted to form during electrospray ionisation (ESI). Results: Metabolite 'structures' harvested from publicly accessible databases were converted into a common format to generate a comprehensive archive in MZedDB. 'Rules' were derived from chemical information that allowed MZedDB to generate a list of adducts and neutral loss fragments putatively able to form for each structure and calculate, on the fly, the exact molecular weight of every potential ionisation product to provide targets for annotation searches based on accurate mass. We demonstrate that data matrices representing populations of ionisation products generated from different biological matrices contain a large proportion (sometimes > 50%) of molecular isotopes, salt adducts and neutral loss fragments. Correlation analysis of ESI-MS data features confirmed the predicted relationships of m/z signals. An integrated isotope enumerator in MZedDB allowed verification of exact isotopic pattern distributions to corroborate experimental data
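    The core of the annotation strategy above is generating candidate m/z values for predicted ionisation products and matching a measured signal within a ppm tolerance. A sketch using standard adduct mass shifts and a small hand-picked rule set, not MZedDB's full rule engine:

```python
# Sketch: annotate a measured m/z by matching it against common ESI adducts
# of candidate neutral monoisotopic masses within a ppm tolerance.
ADDUCTS = {                 # name: m/z shift from neutral monoisotopic mass (Da)
    "[M+H]+":  1.007276,
    "[M+Na]+": 22.989218,
    "[M+K]+":  38.963158,
    "[M-H]-": -1.007276,
}

def annotate(measured_mz, neutral_masses, tol_ppm=5.0):
    hits = []
    for name, mass in neutral_masses.items():
        for adduct, shift in ADDUCTS.items():
            mz = mass + shift
            if abs(mz - measured_mz) / mz * 1e6 <= tol_ppm:
                hits.append((name, adduct))
    return hits

# Monoisotopic masses of two example metabolites.
masses = {"glucose": 180.063388, "citrate": 192.027003}
print(annotate(203.052606, masses))  # matches the sodium adduct of glucose
```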

  6. Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks

    PubMed Central

    Khan, Komal Saifullah; Tariq, Muhammad

    2014-01-01

    Many wind energy projects report poor performance as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through a WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in practical implementation and study its accuracy by comparing the simulation results with experimental results to analyze the variation in the results obtained from both simulation model and implemented model. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and low false alarm rate when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed and performance level of implemented projects is increased above 86% of the simulated models. PMID:25421739
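    The iterative fault-detection idea above (sequentially flagging the sensor that deviates most from the others, then re-checking) can be sketched as a median-based outlier loop. The tolerance and readings are invented:

```python
# Sketch: iteratively flag the anemometer whose reading deviates most from
# the median of the remaining sensors until all agree within a tolerance.
import statistics

def detect_faulty(readings, tol=2.0):
    """readings: {sensor_id: wind speed, m/s}; returns the set of flagged ids."""
    live = dict(readings)
    faulty = set()
    while len(live) > 2:
        med = statistics.median(live.values())
        worst = max(live, key=lambda s: abs(live[s] - med))
        if abs(live[worst] - med) <= tol:
            break              # remaining sensors agree within tolerance
        faulty.add(worst)
        del live[worst]        # re-check with the outlier removed
    return faulty

readings = {"A1": 7.9, "A2": 8.3, "A3": 8.1, "A4": 0.4, "A5": 15.6}
print(sorted(detect_faulty(readings)))
```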

  7. Prediction of penicillin resistance in Staphylococcus aureus isolates from dairy cows with mastitis, based on prior test results.

    PubMed

    Grinberg, A; Lopez-Villalobos, N; Lawrence, K; Nulsen, M

    2005-10-01

    To gauge how well prior laboratory test results predict in vitro penicillin resistance of Staphylococcus aureus isolates from dairy cows with mastitis. Population-based data on the farm of origin (n=79), genotype based on pulsed-field gel electrophoresis (PFGE) results, and the penicillin-resistance status of Staph. aureus isolates (n=115) from milk samples collected from dairy cows with mastitis submitted to two diagnostic laboratories over a 6-month period were used. Data were mined stochastically using the all-possible-pairs method, binomial modelling and bootstrap simulation, to test whether prior test results enhance the accuracy of prediction of penicillin resistance on farms. Of all Staph. aureus isolates tested, 38% were penicillin resistant. A significant aggregation of penicillin-resistance status was evident within farms. The probability of random pairs of isolates from the same farm having the same penicillin-resistance status was 76%, compared with 53% for random pairings of samples across all farms. Thus, the resistance status of randomly selected isolates was 1.43 times more likely to correctly predict the status of other isolates from the same farm than the random population pairwise concordance probability (p=0.011). This effect was likely due to the clonal relationship of isolates within farms, as the predictive fraction attributable to prior test results was close to nil when the effect of within-farm clonal infections was withdrawn from the model. Knowledge of the penicillin-resistance status of a prior Staph. aureus isolate significantly enhanced the predictive capability of other isolates from the same farm. In the time and space frame of this study, clinicians using previous information from a farm would have more accurately predicted the penicillin-resistance status of an isolate than they would by chance alone on farms infected with clonal Staph. aureus isolates, but not on farms infected with highly genetically heterogeneous bacterial

  8. Predicting Gene Structure Changes Resulting from Genetic Variants via Exon Definition Features.

    PubMed

    Majoros, William H; Holt, Carson; Campbell, Michael S; Ware, Doreen; Yandell, Mark; Reddy, Timothy E

    2018-04-25

    Genetic variation that disrupts gene function by altering gene splicing between individuals can substantially influence traits and disease. In those cases, accurately predicting the effects of genetic variation on splicing can be highly valuable for investigating the mechanisms underlying those traits and diseases. While methods have been developed to generate high quality computational predictions of gene structures in reference genomes, the same methods perform poorly when used to predict the potentially deleterious effects of genetic changes that alter gene splicing between individuals. Underlying that discrepancy in predictive ability are the common assumptions by reference gene finding algorithms that genes are conserved, well-formed, and produce functional proteins. We describe a probabilistic approach for predicting recent changes to gene structure that may or may not conserve function. The model is applicable to both coding and noncoding genes, and can be trained on existing gene annotations without requiring curated examples of aberrant splicing. We apply this model to the problem of predicting altered splicing patterns in the genomes of individual humans, and we demonstrate that performing gene-structure prediction without relying on conserved coding features is feasible. The model predicts an unexpected abundance of variants that create de novo splice sites, an observation supported by both simulations and empirical data from RNA-seq experiments. While these de novo splice variants are commonly misinterpreted by other tools as coding or noncoding variants of little or no effect, we find that in some cases they can have large effects on splicing activity and protein products, and we propose that they may commonly act as cryptic factors in disease. The software is available from geneprediction.org/SGRF. bmajoros@duke.edu. Supplementary information is available at Bioinformatics online.
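    The de novo splice-site variants highlighted above can be illustrated with a deliberately tiny check: compare reference and alternate alleles for donor-site dinucleotides that exist only in the alternate sequence. Real predictors score full probabilistic motifs; this toy uses only the canonical "GT" donor core, and the sequences are made up:

```python
# Toy sketch: report positions where a variant creates a canonical "GT"
# donor dinucleotide absent from the reference sequence.
def donor_sites(seq):
    return {i for i in range(len(seq) - 1) if seq[i:i + 2] == "GT"}

def de_novo_donors(ref, alt):
    return sorted(donor_sites(alt) - donor_sites(ref))

ref = "CAGAAGCTTACC"
alt = "CAGTAGCTTACC"  # A->T substitution creates a "GT" at index 2
print(de_novo_donors(ref, alt))
```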

  9. Molecular Dynamics in Mixed Solvents Reveals Protein-Ligand Interactions, Improves Docking, and Allows Accurate Binding Free Energy Predictions.

    PubMed

    Arcon, Juan Pablo; Defelipe, Lucas A; Modenutti, Carlos P; López, Elias D; Alvarez-Garcia, Daniel; Barril, Xavier; Turjanski, Adrián G; Martí, Marcelo A

    2017-04-24

    One of the most important biological processes at the molecular level is the formation of protein-ligand complexes. Therefore, determining their structure and underlying key interactions is of paramount relevance and has direct applications in drug development. Because of its low cost relative to its experimental sibling, molecular dynamics (MD) simulations in the presence of different solvent probes mimicking specific types of interactions have been increasingly used to analyze protein binding sites and reveal protein-ligand interaction hot spots. However, a systematic comparison of different probes and their real predictive power from a quantitative and thermodynamic point of view is still missing. In the present work, we have performed MD simulations of 18 different proteins in pure water as well as water mixtures of ethanol, acetamide, acetonitrile and methylammonium acetate, leading to a total of 5.4 μs simulation time. For each system, we determined the corresponding solvent sites, defined as space regions adjacent to the protein surface where the probability of finding a probe atom is higher than that in the bulk solvent. Finally, we compared the identified solvent sites with 121 different protein-ligand complexes and used them to perform molecular docking and ligand binding free energy estimates. Our results show that combining solely water and ethanol sites allows sampling over 70% of all possible protein-ligand interactions, especially those that coincide with ligand-based pharmacophoric points. Most important, we also show how the solvent sites can be used to significantly improve ligand docking in terms of both accuracy and precision, and that accurate predictions of ligand binding free energies, along with relative ranking of ligand affinity, can be performed.

  10. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  11. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  12. Predicting beta-turns and their types using predicted backbone dihedral angles and secondary structures.

    PubMed

    Kountouris, Petros; Hirst, Jonathan D

    2010-07-31

    Beta-turns are secondary structure elements usually classified as coil. Their prediction is important, because of their role in protein folding and their frequent occurrence in protein chains. We have developed a novel method that predicts beta-turns and their types using information from multiple sequence alignments, predicted secondary structures and, for the first time, predicted dihedral angles. Our method uses support vector machines, a supervised classification technique, and is trained and tested on three established datasets of 426, 547 and 823 protein chains. We achieve a Matthews correlation coefficient of up to 0.49, when predicting the location of beta-turns, the highest reported value to date. Moreover, the additional dihedral information improves the prediction of beta-turn types I, II, IV, VIII and "non-specific", achieving correlation coefficients up to 0.39, 0.33, 0.27, 0.14 and 0.38, respectively. Our results are more accurate than other methods. We have created an accurate predictor of beta-turns and their types. Our method, called DEBT, is available online at http://comp.chem.nottingham.ac.uk/debt/.

  13. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of limited use for prediction and cannot reliably trigger early drug intake so that medication takes effect before the pain begins. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  14. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we then went a step further with a blind test using a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  15. Do measures of surgical effectiveness at 1 year after lumbar spine surgery accurately predict 2-year outcomes?

    PubMed

    Adogwa, Owoicho; Elsamadicy, Aladine A; Han, Jing L; Cheng, Joseph; Karikari, Isaac; Bagley, Carlos A

    2016-12-01

    OBJECTIVE With the recent passage of the Patient Protection and Affordable Care Act, there has been a dramatic shift toward critical analyses of quality and longitudinal assessment of subjective and objective outcomes after lumbar spine surgery. Accordingly, the emergence and routine use of real-world institutional registries have been vital to the longitudinal assessment of quality. However, prospectively obtaining longitudinal outcomes for patients at 24 months after spine surgery remains a challenge. The aim of this study was to assess if 12-month measures of treatment effectiveness accurately predict long-term outcomes (24 months). METHODS A nationwide, multiinstitutional, prospective spine outcomes registry was used for this study. Enrollment criteria included available demographic, surgical, and clinical outcomes data. All patients had prospectively collected outcomes measures and a minimum 2-year follow-up. Patient-reported outcomes instruments (Oswestry Disability Index [ODI], SF-36, and visual analog scale [VAS]-back pain/leg pain) were completed before surgery and then at 3, 6, 12, and 24 months after surgery. The Health Transition Index of the SF-36 was used to determine the 1- and 2-year minimum clinically important difference (MCID), and logistic regression modeling was performed to determine if achieving MCID at 1 year adequately predicted improvement and achievement of MCID at 24 months. RESULTS The study group included 969 patients: 300 patients underwent anterior lumbar interbody fusion (ALIF), 606 patients underwent transforaminal lumbar interbody fusion (TLIF), and 63 patients underwent lateral interbody fusion (LLIF). There was a significant correlation between the 12- and 24-month ODI (r = 0.82; p < 0.0001), SF-36 Physical Component Summary score (r = 0.89; p < 0.0001), VAS-back pain (r = 0.90; p < 0.0001), and VAS-leg pain (r = 0.85; p < 0.0001). 
For the ALIF cohort, patients achieving MCID thresholds for ODI at 12 months were 13-fold (p < 0

  16. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Sadus, Richard J.

    2017-06-01

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.
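The Axilrod-Teller-Muto triple-dipole term mentioned above has a simple closed form that is evaluated once per atom triplet. A minimal sketch, assuming reduced units and an arbitrary strength coefficient `nu` (the species-specific dispersion coefficient is not given in the abstract):

```python
import math

def atm_energy(r12, r13, r23, nu):
    """Axilrod-Teller-Muto triple-dipole energy for one atom triplet:

        E = nu * (1 + 3*cos(g1)*cos(g2)*cos(g3)) / (r12*r13*r23)**3

    where g1, g2, g3 are the interior angles of the triangle formed by
    the three atoms, obtained here via the law of cosines. `nu` is a
    placeholder strength coefficient in reduced units."""
    cos1 = (r12**2 + r13**2 - r23**2) / (2.0 * r12 * r13)
    cos2 = (r12**2 + r23**2 - r13**2) / (2.0 * r12 * r23)
    cos3 = (r13**2 + r23**2 - r12**2) / (2.0 * r13 * r23)
    return nu * (1.0 + 3.0 * cos1 * cos2 * cos3) / (r12 * r13 * r23) ** 3
```

For an equilateral triplet the angular factor is 1 + 3(1/2)³ = 1.375 (repulsive for positive `nu`), while collinear arrangements give a negative factor, which is the characteristic geometry dependence of this term.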

  17. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials.

    PubMed

    Vlasiuk, Maryna; Sadus, Richard J

    2017-06-28

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.

  18. Accurate prediction of retention in hydrophilic interaction chromatography by back calculation of high pressure liquid chromatography gradient profiles.

    PubMed

    Wang, Nu; Boswell, Paul G

    2017-10-20

    Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, error in projected gradient retention times was as low as could be expected based on repeatability in the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable in the short term, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase
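Projecting a gradient retention time from an isocratic k vs. φ relationship amounts to integrating the fundamental equation of gradient elution. A minimal sketch under the linear-solvent-strength model (log₁₀ k = log k_w − Sφ) with an idealized linear gradient, deliberately ignoring the pump imperfections (delay, dispersion, mis-proportioning) that the back-calculation methodology is designed to capture:

```python
def gradient_retention_time(logkw, S, phi0, phif, t_grad, t0, dt=1e-3):
    """Numerically solve the gradient-elution equation

        integral_0^{tR'} dt / (t0 * k(phi(t))) = 1,   tR = tR' + t0

    for a linear gradient phi0 -> phif over t_grad, with the
    linear-solvent-strength model log10 k = logkw - S*phi.
    Idealized sketch: no dwell volume, dispersion, or mis-proportioning."""
    travelled = 0.0  # fraction of the column length traversed
    t = 0.0
    while travelled < 1.0:
        phi = phi0 + (phif - phi0) * min(t / t_grad, 1.0)
        k = 10.0 ** (logkw - S * phi)
        travelled += dt / (t0 * k)
        t += dt
    return t + t0
```

In the isocratic limit (phi0 = phif) the integration reduces to the familiar tR = t0(1 + k), which makes a convenient sanity check.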

  19. Correlation of clinical predictions and surgical results in maxillary superior repositioning.

    PubMed

    Tabrizi, Reza; Zamiri, Barbad; Kazemi, Hamidreza

    2014-05-01

    This is a prospective study to evaluate the accuracy of clinical predictions related to surgical results in subjects who underwent maxillary superior repositioning without anterior-posterior movement. Surgeons' predictions according to clinical (tooth show at rest and at the maximum smile) and cephalometric evaluations were documented for the amount of maxillary superior repositioning. Overcorrection or undercorrection was documented for every subject 1 year after the operation. A receiver operating characteristic (ROC) curve analysis was used to find a cutoff point in prediction errors and to determine the positive predictive value (PPV) and negative predictive value (NPV). Forty subjects (14 males and 26 females) were studied. Results showed a significant difference between changes in the tooth show at rest and at the maximum smile line before and after surgery. Analysis of the data demonstrated no correlation between the predictive data and the surgical results. The incidence of undercorrection (25%) was more common than that of overcorrection (7.5%). The cutoff point for errors in predictions was 5 mm for tooth show at rest and 15 mm at the maximum smile. When the amount of presurgical tooth show at rest was more than 5 mm, 50.5% of clinical predictions did not match the clinical results (PPV), and 75% of clinical predictions matched the results when the tooth show was less than 5 mm (NPV). When the amount of presurgical tooth show at the maximum smile was more than 15 mm, 75% of clinical predictions did not match the clinical results (PPV), and 25% of the predictions matched the results when the tooth show at the maximum smile was less than 15 mm (NPV). Clinical predictions according to the tooth show at rest and at the maximum smile have a poor correlation with clinical results in maxillary superior repositioning for vertical maxillary excess.
The risk of errors in predictions increased when the amount of superior repositioning of the maxilla increased.

  20. Accurately predicting the structure, density, and hydrostatic compression of crystalline β-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane based on its wave-function-based potential

    NASA Astrophysics Data System (ADS)

    Song, H.-J.; Huang, F.

    2011-09-01

    A wave-function-based intermolecular potential of the β phase 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX) molecule has been constructed from first principles using the Williams-Stone-Misquitta method and the symmetry-adapted perturbation theory. Using the potential and its derivatives, we have accurately predicted not only the structure and lattice energy of the crystalline β-HMX at 0 K, but also its densities at temperatures of 0-403 K within an accuracy of 1% of density. The calculated densities at pressures within 0-6 GPa excellently agree with the results from the experiments on hydrostatic compression.

  1. Using radiance predicted by the P3 approximation in a spherical geometry to predict tissue optical properties

    NASA Astrophysics Data System (ADS)

    Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John

    2001-01-01

    For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time-consuming for clinical application. However, radiance predicted by the P3 approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3 approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3 approximation and Grosjean theory, correlate well with experimental data. The P3 approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3 approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.

  2. Accurate density functional prediction of molecular electron affinity with the scaling corrected Kohn–Sham frontier orbital energies

    NASA Astrophysics Data System (ADS)

    Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao

    2018-04-01

    Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.

  3. Combining first-principles and data modeling for the accurate prediction of the refractive index of organic polymers

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    2018-06-01

    Organic materials with a high index of refraction (RI) are attracting considerable interest due to their potential application in optic and optoelectronic devices. However, most of these applications require an RI value of 1.7 or larger, while typical carbon-based polymers only exhibit values in the range of 1.3-1.5. This paper introduces an efficient computational protocol for the accurate prediction of RI values in polymers to facilitate in silico studies that can guide the discovery and design of next-generation high-RI materials. Our protocol is based on the Lorentz-Lorenz equation and is parametrized by the polarizability and number density values of a given candidate compound. In the proposed scheme, we compute the former using first-principles electronic structure theory and the latter using an approximation based on van der Waals volumes. The critical parameter in the number density approximation is the packing fraction of the bulk polymer, for which we have devised a machine learning model. We demonstrate the performance of the proposed RI protocol by testing its predictions against the experimentally known RI values of 112 optical polymers. Our approach to combine first-principles and data modeling emerges as both a successful and a highly economical path to determining the RI values for a wide range of organic polymers.
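The core of the protocol above is the Lorentz-Lorenz relation, which links the refractive index n to the polarizability α and number density N via (n² − 1)/(n² + 2) = (4π/3)Nα. Inverting it for n is straightforward; a minimal sketch (units and the physical-range guard are illustrative, and the packing-fraction machine learning step that supplies N is not shown):

```python
import math

def refractive_index(polarizability, number_density):
    """Invert the Lorentz-Lorenz relation

        (n^2 - 1) / (n^2 + 2) = (4*pi/3) * N * alpha

    for n, given polarizability alpha (e.g. cm^3) and number density N
    (e.g. cm^-3) in consistent units. Sketch of the protocol's final step."""
    L = (4.0 * math.pi / 3.0) * number_density * polarizability
    if not 0.0 <= L < 1.0:
        raise ValueError("Lorentz-Lorenz factor outside physical range")
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))
```

In the paper's scheme, α would come from a first-principles calculation and N from van der Waals volumes scaled by the machine-learned packing fraction; here both are simply taken as inputs.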

  4. Coating Life Prediction

    NASA Technical Reports Server (NTRS)

    Nesbitt, J. A.; Gedwill, M. A.

    1984-01-01

    Hot-section gas-turbine components typically require some form of coating for oxidation and corrosion protection. Efficient use of coatings requires reliable and accurate predictions of the protective life of the coating. Currently, engine inspections and component replacements are often made on a conservative basis. As a result, there is a constant need to improve and develop the life-prediction capability of metallic coatings for use in various service environments. The present work aims to develop an improved methodology for predicting metallic coating lives in an oxidizing environment and in a corrosive environment.

  5. A transcriptomics data-driven gene space accurately predicts liver cytopathology and drug-induced liver injury

    PubMed Central

    Kohonen, Pekka; Parkkinen, Juuso A.; Willighagen, Egon L.; Ceder, Rebecca; Wennerberg, Krister; Kaski, Samuel; Grafström, Roland C.

    2017-01-01

    Predicting unanticipated harmful effects of chemicals and drug molecules is a difficult and costly task. Here we utilize a 'big data compacting and data fusion' concept to capture diverse adverse outcomes on cellular and organismal levels. The approach generates, from a transcriptomics data set, a 'predictive toxicogenomics space' (PTGS) tool composed of 1,331 genes distributed over 14 overlapping cytotoxicity-related gene space components. Involving ∼2.5 × 108 data points and 1,300 compounds to construct and validate the PTGS, the tool serves to: explain dose-dependent cytotoxicity effects, provide a virtual cytotoxicity probability estimate intrinsic to omics data, predict chemically-induced pathological states in liver resulting from repeated dosing of rats, and furthermore, predict human drug-induced liver injury (DILI) from hepatocyte experiments. Analysing 68 DILI-annotated drugs, the PTGS tool outperforms and complements existing tests, leading to a hereto-unseen level of DILI prediction accuracy. PMID:28671182

  6. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural networks model are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
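One of the baseline methods named above, k-nearest neighbors, can be sketched in a few lines of pure Python: a departure is described by a numeric feature vector and its taxi-out time is predicted as the mean over the most similar past flights. The feature encoding (concourse, runway, etc. mapped to numbers) is a hypothetical illustration, not the paper's actual pipeline:

```python
def knn_taxi_out(history, query, k=3):
    """Predict a taxi-out time (minutes) for a departure described by a
    numeric feature vector (e.g. encoded concourse, spot, runway, queue
    length) as the mean over the k most similar past flights.

    history: list of (feature_vector, taxi_out_minutes) pairs.
    Pure-Python sketch of a k-NN baseline; feature encoding is hypothetical."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], query))[:k]
    return sum(t for _, t in nearest) / k
```

The linear-regression and random-forest models the paper found most accurate would replace this lookup with fitted models over the same features.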

  7. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) as a valid computing method has been proposed to predict the nonlinear optical property successfully (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, first, we follow this line of work to predict the electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecule descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), which is freely accessible to the public at the web site http://202.198.129.218, has been built for prediction. This web server can return predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research. Copyright © 2012 Wiley Periodicals, Inc.

  8. Predicting DNA hybridization kinetics from sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu

    2018-01-01

    Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on similar reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
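The weighted-neighbour-voting idea can be sketched as a similarity-weighted geometric mean over reactions with known rate constants. The feature vectors, distance, and exponential weighting below are illustrative choices, not the paper's fitted six-feature model:

```python
import math

def wnv_predict(neighbors, query_feats, sharpness=1.0):
    """Weighted-neighbour-voting sketch: predict a hybridization rate
    constant for an unseen sequence as a similarity-weighted geometric
    mean of known rate constants.

    neighbors: list of (feature_vector, rate_constant) for reactions with
    measured kinetics. Features, distance, and weighting are hypothetical."""
    num = den = 0.0
    for feats, k in neighbors:
        d = sum((a - b) ** 2 for a, b in zip(feats, query_feats)) ** 0.5
        w = math.exp(-sharpness * d)  # closer reactions vote more strongly
        num += w * math.log10(k)
        den += w
    return 10.0 ** (num / den)
```

Averaging in log space reflects that rate constants span orders of magnitude, which is consistent with the paper reporting accuracy "to within a factor of 3".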

  9. Predictability of the 2012 Great Arctic Cyclone on medium-range timescales

    NASA Astrophysics Data System (ADS)

    Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.

    2018-03-01

    Arctic Cyclones (ACs) can have a significant impact on the Arctic region. Therefore, the accurate prediction of ACs is important in anticipating their associated environmental and societal costs. This study investigates the predictability of the 2012 Great Arctic Cyclone (AC12) that exhibited a minimum central pressure of 964 hPa on 6 August 2012, using five medium-range ensemble forecasts. We show that the development and position of AC12 were better predicted in forecasts initialized on and after 4 August 2012. In addition, the position of AC12 was more predictable than its development. A comparison of ensemble members, classified by the error in predictability of the development and position of AC12, revealed that an accurate prediction of upper-level fields, particularly temperature, was important for the prediction of this event. The predicted position of AC12 was influenced mainly by the prediction of the polar vortex, whereas the predicted development of AC12 was dependent primarily on the prediction of the merging of upper-level warm cores. Consequently, an accurate prediction of the polar vortex position and the development of the warm core through merging resulted in better prediction of AC12.

  10. A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).

    PubMed

    Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J

    2017-10-23

    An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling in amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes. PrInCE is the first fast and easy

  11. Hounsfield unit density accurately predicts ESWL success.

    PubMed

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

    Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide much better and more detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated at Alaska Urological Associates with ESWL between January 2002 and April 2004. Of these, 101 met the study inclusion criteria, with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The overall mean Hounsfield density values for the stone-free and residual stone groups were significantly different (93.61 vs. 122.80; p < 0.0001). Using a receiver operating characteristic (ROC) curve, we determined that a Hounsfield density value (HDV) of 93 or less carries a 90% or better chance of stone freedom following ESWL for upper tract calculi between 5-15 mm.
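ROC-based cutoff selection on a scalar predictor like Hounsfield density can be sketched in a few lines. Youden's J statistic is used here as the selection criterion purely for illustration; the abstract does not state which criterion the authors applied:

```python
def best_cutoff(values, outcomes):
    """Pick the threshold on a scalar predictor (here: Hounsfield density,
    where lower values favour the stone-free outcome) that maximizes
    Youden's J = sensitivity + specificity - 1.

    values: predictor per patient; outcomes: 1 = stone-free, 0 = residual.
    Illustrative sketch; the study's actual ROC criterion is not stated."""
    best = (None, -1.0)
    for t in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v <= t and o == 1)
        fn = sum(1 for v, o in zip(values, outcomes) if v > t and o == 1)
        tn = sum(1 for v, o in zip(values, outcomes) if v > t and o == 0)
        fp = sum(1 for v, o in zip(values, outcomes) if v <= t and o == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j)
    return best
```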

  12. Prediction of Combustion Gas Deposit Compositions

    NASA Technical Reports Server (NTRS)

    Kohl, F. J.; Mcbride, B. J.; Zeleznik, F. J.; Gordon, S.

    1985-01-01

    A demonstrated procedure is used to accurately predict the chemical compositions of complicated deposit mixtures. NASA Lewis Research Center's Computer Program for Calculation of Complex Chemical Equilibrium Compositions (CEC) is used in conjunction with the Computer Program for Calculation of Ideal Gas Thermodynamic Data (PAC) and the resulting Thermodynamic Data Base (THDATA) to predict deposit compositions from metal- or mineral-seeded combustion processes.

  13. Accurate Prediction of Inducible Transcription Factor Binding Intensities In Vivo

    PubMed Central

    Siepel, Adam; Lis, John T.

    2012-01-01

    DNA sequence and local chromatin landscape act jointly to determine transcription factor (TF) binding intensity profiles. To disentangle these influences, we developed an experimental approach, called protein/DNA binding followed by high-throughput sequencing (PB–seq), that allows the binding energy landscape to be characterized genome-wide in the absence of chromatin. We applied our methods to the Drosophila Heat Shock Factor (HSF), which inducibly binds a target DNA sequence element (HSE) following heat shock stress. PB–seq involves incubating sheared naked genomic DNA with recombinant HSF, partitioning the HSF–bound and HSF–free DNA, and then detecting HSF–bound DNA by high-throughput sequencing. We compared PB–seq binding profiles with ones observed in vivo by ChIP–seq and developed statistical models to predict the observed departures from idealized binding patterns based on covariates describing the local chromatin environment. We found that DNase I hypersensitivity and tetra-acetylation of H4 were the most influential covariates in predicting changes in HSF binding affinity. We also investigated the extent to which DNA accessibility, as measured by digital DNase I footprinting data, could be predicted from MNase–seq data and the ChIP–chip profiles for many histone modifications and TFs, and found GAGA element associated factor (GAF), tetra-acetylation of H4, and H4K16 acetylation to be the most predictive covariates. Lastly, we generated an unbiased model of HSF binding sequences, which revealed distinct biophysical properties of the HSF/HSE interaction and a previously unrecognized substructure within the HSE. These findings provide new insights into the interplay between the genomic sequence and the chromatin landscape in determining transcription factor binding intensity. PMID:22479205

  14. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag: An ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag-of-fragments"-a vector that counts the number of occurrences of each fragment-and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine of the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than the three other filter methods: The SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
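The bag-of-fragments representation described above reduces to counting, per structure, how often each library fragment best fits an overlapping backbone segment, then comparing count vectors. A minimal sketch (the fragment-matching step against backbone coordinates is not shown, and cosine similarity is used here as one plausible vector similarity, not necessarily the paper's choice):

```python
import math
from collections import Counter

def fragbag_vector(fragment_ids, library_size):
    """Bag-of-fragments: given, for each overlapping backbone segment, the
    id of the best-matching library fragment (matching step not shown),
    return the count vector over the whole fragment library."""
    counts = Counter(fragment_ids)
    return [counts.get(i, 0) for i in range(library_size)]

def cosine(u, v):
    """Cosine similarity between two count vectors; one plausible choice
    of vector similarity, used here for illustration."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Because the vectors are sparse integer counts, they also support the inverted-index search engine mentioned in point (i).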

  15. Proton dissociation properties of arylphosphonates: Determination of accurate Hammett equation parameters.

    PubMed

    Dargó, Gergő; Bölcskei, Adrienn; Grün, Alajos; Béni, Szabolcs; Szántó, Zoltán; Lopata, Antal; Keglevich, György; Balogh, György T

    2017-09-05

    Determination of the proton dissociation constants of several arylphosphonic acid derivatives was carried out to investigate the accuracy of the Hammett equations available for this family of compounds. For the measurement of the pKa values, modern, accurate methods, such as differential potentiometric titration and NMR-pH titration, were used. We found our results significantly different from the pKa values reported before (pKa1: MAE = 0.16; pKa2: MAE = 0.59). Based on our recently measured pKa values, refined Hammett equations were determined that might be used for predicting highly accurate ionization constants of newly synthesized compounds (pKa1 = 1.70 - 0.894σ, pKa2 = 6.92 - 0.934σ). Copyright © 2017 Elsevier B.V. All rights reserved.
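The refined Hammett equations quoted above can be applied directly to predict both ionization constants from a substituent's σ constant; a minimal sketch (the nonzero σ value used below is illustrative, not taken from the paper):

```python
# Predict the two pKa values of an arylphosphonic acid from the Hammett sigma
# constant of its substituent, using the refined equations in the abstract.
def predict_pka(sigma):
    pka1 = 1.70 - 0.894 * sigma   # first proton dissociation
    pka2 = 6.92 - 0.934 * sigma   # second proton dissociation
    return pka1, pka2

# Unsubstituted parent compound: sigma = 0 recovers the intercepts.
print(predict_pka(0.0))   # (1.7, 6.92)
# A hypothetical electron-withdrawing substituent (sigma = 0.5) lowers both pKa values.
print(predict_pka(0.5))
```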

  16. Accurate interatomic force fields via machine learning with covariant kernels

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
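The finite-group summation mentioned above can be sketched in a few lines of NumPy for d = 2: a matrix-valued kernel built by summing a rotation-invariant scalar base kernel over the cyclic group C_8 satisfies the covariance property K(Rx, Ry) = R K(x, y) Rᵀ for rotations R in the group. The squared-exponential base kernel, group size, and test points are illustrative choices, not the paper's actual kernels:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def base_kernel(x, y, length=1.0):
    """Rotation-invariant scalar squared-exponential base kernel."""
    d = x - y
    return np.exp(-d @ d / (2 * length**2))

def covariant_kernel(x, y, n=8):
    """Matrix-valued kernel: sum of R_g * k(x, R_g y) over the cyclic group C_n."""
    K = np.zeros((2, 2))
    for g in range(n):
        R = rot(2 * np.pi * g / n)
        K += R * base_kernel(x, R @ y)
    return K

x = np.array([0.3, -1.1])
y = np.array([0.7, 0.4])
R = rot(2 * np.pi / 8)            # an element of the same finite group
lhs = covariant_kernel(R @ x, R @ y)
rhs = R @ covariant_kernel(x, y) @ R.T
print(np.allclose(lhs, rhs))     # True: the prediction rotates with the configuration
```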

  17. Glucose Meters: A Review of Technical Challenges to Obtaining Accurate Results

    PubMed Central

    Tonyushkina, Ksenia; Nichols, James H.

    2009-01-01

    , anemia, hypotension, and other disease states. This article reviews the challenges involved in obtaining accurate glucose meter results. PMID:20144348

  18. Folding molecular dynamics simulations accurately predict the effect of mutations on the stability and structure of a vammin-derived peptide.

    PubMed

    Koukos, Panagiotis I; Glykos, Nicholas M

    2014-08-28

    Folding molecular dynamics simulations amounting to a grand total of 4 μs of simulation time were performed on two peptides (with native and mutated sequences) derived from loop 3 of the vammin protein and the results compared with the experimentally known peptide stabilities and structures. The simulations faithfully and accurately reproduce the major experimental findings and show that (a) the native peptide is mostly disordered in solution, (b) the mutant peptide has a well-defined and stable structure, and (c) the structure of the mutant is an irregular β-hairpin with a non-glycine β-bulge, in excellent agreement with the peptide's known NMR structure. Additionally, the simulations also predict the presence of a very small β-hairpin-like population for the native peptide but surprisingly indicate that this population is structurally more similar to the structure of the native peptide as observed in the vammin protein than to the NMR structure of the isolated mutant peptide. We conclude that, at least for the given system, force field, and simulation protocol, folding molecular dynamics simulations appear to be successful in reproducing the experimentally accessible physical reality to a satisfactory level of detail and accuracy.

  19. Accurate Prediction of Protein Contact Maps by Coupling Residual Two-Dimensional Bidirectional Long Short-Term Memory with Convolutional Neural Networks.

    PubMed

    Hanson, Jack; Paliwal, Kuldip; Litfin, Thomas; Yang, Yuedong; Zhou, Yaoqi

    2018-06-19

    Accurate prediction of a protein contact map depends greatly on capturing as much contextual information as possible from surrounding residues for a target residue pair. Recently, ultra-deep residual convolutional networks were found to be state-of-the-art in the latest Critical Assessment of Structure Prediction techniques (CASP12, (Schaarschmidt et al., 2018)) for protein contact map prediction by attempting to provide a protein-wide context at each residue pair. Recurrent neural networks, especially Long Short-Term Memory (LSTM) cells, have seen great success in recent protein residue classification problems due to their ability to propagate information through long protein sequences. Here we propose a novel protein contact map prediction method by stacking residual convolutional networks with two-dimensional residual bidirectional recurrent LSTM networks, and using both one-dimensional sequence-based and two-dimensional evolutionary coupling-based information. We show that the proposed method achieves a robust performance over validation and independent test sets with the Area Under the receiver operating characteristic Curve (AUC) > 0.95 in all tests. When compared to several state-of-the-art methods for independent testing of 228 proteins, the method yields an AUC value of 0.958, whereas the next-best method obtains an AUC of 0.909. More importantly, the improvement is over contacts at all sequence-position separations. Specifically, 8.95%, 5.65% and 2.84% increases in precision were observed for the top L∕10 predictions over the next best for short-, medium- and long-range contacts, respectively. This confirms the usefulness of ResNets to congregate the short-range relations and 2D-BRLSTM to propagate the long-range dependencies throughout the entire protein contact map 'image'. SPOT-Contact server url: http://sparks-lab.org/jack/server/SPOT-Contact/. Supplementary data are available at Bioinformatics online.
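The top-L/10 precision metric cited above is simple to compute: rank all predicted residue pairs by score, keep the top L/10 (L = sequence length), and measure the fraction that are true contacts. A toy scorer, with invented predictions and contacts:

```python
# Toy illustration of top-L/10 contact-precision scoring. All pairs, scores,
# and "true" contacts below are hypothetical.
def top_l_over_10_precision(scored_pairs, true_contacts, seq_len):
    """scored_pairs: list of ((i, j), score); true_contacts: set of (i, j)."""
    k = max(1, seq_len // 10)
    top = sorted(scored_pairs, key=lambda p: p[1], reverse=True)[:k]
    hits = sum(1 for pair, _ in top if pair in true_contacts)
    return hits / k

preds = [((1, 8), 0.9), ((2, 9), 0.8), ((3, 7), 0.4), ((1, 5), 0.2)]
truth = {(1, 8), (3, 7)}
print(top_l_over_10_precision(preds, truth, seq_len=20))  # 0.5: one of the top 2 is real
```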

  20. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  1. Urinary Squamous Epithelial Cells Do Not Accurately Predict Urine Culture Contamination, but May Predict Urinalysis Performance in Predicting Bacteriuria.

    PubMed

    Mohr, Nicholas M; Harland, Karisa K; Crabb, Victoria; Mutnick, Rachel; Baumgartner, David; Spinosi, Stephanie; Haarstad, Michael; Ahmed, Azeemuddin; Schweizer, Marin; Faine, Brett

    2016-03-01

    The presence of squamous epithelial cells (SECs) has been advocated to identify urinary contamination despite a paucity of evidence supporting this practice. We sought to determine the value of using quantitative SECs as a predictor of urinalysis contamination. Retrospective cross-sectional study of adults (≥18 years old) presenting to a tertiary academic medical center who had urinalysis with microscopy and urine culture performed. Patients with missing or implausible demographic data were excluded (2.5% of total sample). The primary analysis aimed to determine an SEC threshold that predicted urine culture contamination using receiver operating characteristics (ROC) curve analysis. The a priori secondary analysis explored how demographic variables (age, sex, body mass index) may modify the SEC test performance and whether SECs impacted traditional urinalysis indicators of bacteriuria. A total of 19,328 records were included. ROC curve analysis demonstrated that SEC count was a poor predictor of urine culture contamination (area under the ROC curve = 0.680, 95% confidence interval [CI] = 0.671 to 0.689). In secondary analysis, the positive likelihood ratio (LR+) of predicting bacteriuria via urinalysis among noncontaminated specimens was 4.98 (95% CI = 4.59 to 5.40) in the absence of SECs, but the LR+ fell to 2.35 (95% CI = 2.17 to 2.54) for samples with more than 8 SECs/low-powered field (lpf). In an independent validation cohort, urinalysis samples with fewer than 8 SECs/lpf predicted bacteriuria better (sensitivity = 75%, specificity = 84%) than samples with more than 8 SECs/lpf (sensitivity = 86%, specificity = 70%; diagnostic odds ratio = 17.5 [14.9 to 20.7] vs. 8.7 [7.3 to 10.5]). Squamous epithelial cells are a poor predictor of urine culture contamination, but may predict poor predictive performance of traditional urinalysis measures. © 2016 by the Society for Academic Emergency Medicine.
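The likelihood-ratio figures in this abstract follow from the standard definitions relating a test's sensitivity and specificity; a minimal sketch, using illustrative values rather than the study's raw counts:

```python
# LR+ = sensitivity / (1 - specificity): how much a positive result raises the
# odds of disease. The diagnostic odds ratio compares odds of a positive test
# in the diseased vs. the non-diseased. Input values here are illustrative.
def positive_likelihood_ratio(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

def diagnostic_odds_ratio(sensitivity, specificity):
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# A hypothetical test with 75% sensitivity and 84% specificity:
print(round(positive_likelihood_ratio(0.75, 0.84), 2))   # 4.69
print(round(diagnostic_odds_ratio(0.75, 0.84), 2))       # 15.75
```

Note that published odds ratios computed from raw 2x2 counts can differ slightly from values recomputed from rounded sensitivity and specificity, as here.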

  2. Prediction, experimental results and analysis of the ITER TF insert coil quench propagation tests, using the 4C code

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.

    2018-07-01

    The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.

  3. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
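The two-step search described above can be sketched with stand-in models: a cheap surrogate screens every candidate, and only the most promising fraction is re-scored by an expensive, more accurate model. Both objective functions below are hypothetical placeholders, not the paper's circuit models:

```python
# Toy sketch of hierarchical coarse-then-fine design search.
def cheap_model(x):
    return -(x - 3.1) ** 2               # fast, approximate score (higher is better)

def expensive_model(x):
    return -(x - 3.0) ** 2 - 0.01 * x    # slow, accurate score

def hierarchical_search(candidates, keep_fraction=0.2):
    # Step 1: rank all candidates with the cheap surrogate and prune.
    ranked = sorted(candidates, key=cheap_model, reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
    # Step 2: fine-grained search of the reduced space with the accurate model.
    return max(survivors, key=expensive_model)

candidates = [x * 0.5 for x in range(21)]      # 0.0, 0.5, ..., 10.0
best = hierarchical_search(candidates)
print(best)   # 3.0: the accurate model's optimum, found after cheap pruning
```

The speed-up comes from evaluating the expensive model on only a small fraction of the space; the risk, as the abstract notes, is that the coarse model must be faithful enough not to prune the true optimum.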

  4. Large-scale structure prediction by improved contact predictions and model quality assessment.

    PubMed

    Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne

    2017-07-15

    Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were limited to very large protein families, decreasing their utility. However, recent progress in combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline that uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimations to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that can reliably be identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all the 558 Pfam families are available at http://c3.pcons.net/ . All programs used here are freely available. Contact: arne@bioinfo.se. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  5. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular prediction of cancer, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  6. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations

  7. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. 

  8. Accurate in silico prediction of species-specific methylation sites based on information gain feature optimization.

    PubMed

    Wen, Ping-Ping; Shi, Shao-Ping; Xu, Hao-Dong; Wang, Li-Na; Qiu, Jian-Ding

    2016-10-15

    As one of the most important reversible types of post-translational modification, protein methylation catalyzed by methyltransferases carries many pivotal biological functions and is involved in many essential biological processes. Identification of methylation sites is a prerequisite for decoding methylation regulatory networks in living cells and understanding their physiological roles. Experimental methods are labor-intensive and time-consuming, while in silico approaches offer a cost-effective, high-throughput means of predicting potential methylation sites; however, previous predictors provide only a single mixed model, and their prediction performance is not yet fully satisfactory. Recently, with the increasing availability of quantitative methylation datasets in diverse species (especially in eukaryotes), there is a growing need to develop species-specific predictors. Here, we designed a tool named PSSMe based on an information gain (IG) feature optimization method for species-specific methylation site prediction. The IG method was adopted to analyze the importance and contribution of each feature and then select the valuable feature dimensions to reconstitute a new, ordered feature vector, which was applied to build the final prediction model. Our method improves prediction accuracy by about 15% compared with single features. Furthermore, our species-specific models significantly improve predictive performance compared with other general methylation prediction tools. Hence, our prediction results serve as a useful resource for elucidating the mechanism of arginine or lysine methylation and facilitate hypothesis-driven experimental design and validation. The tool's online service is implemented in C# and freely available at http://bioinfo.ncu.edu.cn/PSSMe.aspx. Contact: jdqiu@ncu.edu.cn. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights
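Information-gain feature scoring of the kind the PSSMe abstract describes can be sketched as follows: the gain of a feature is the entropy of the class labels minus the conditional entropy of the labels given the feature's values. The data below are made up; this is not the PSSMe implementation:

```python
# Sketch of information-gain feature ranking: IG(f) = H(labels) - H(labels | f).
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += (len(subset) / n) * entropy(subset)
    return total - cond

labels    = [1, 1, 0, 0]
good_feat = [1, 1, 0, 0]   # perfectly predictive: IG = H(labels) = 1 bit
bad_feat  = [1, 0, 1, 0]   # uninformative: IG = 0
print(information_gain(good_feat, labels), information_gain(bad_feat, labels))  # 1.0 0.0
```

Ranking features by this score and keeping only the top dimensions yields the reduced, ordered feature vector the abstract describes.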

  9. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  10. Accurate prediction of RNA-binding protein residues with two discriminative structural descriptors.

    PubMed

    Sun, Meijian; Wang, Xia; Zou, Chuanxin; He, Zenghui; Liu, Wei; Li, Honglin

    2016-06-07

    RNA-binding proteins participate in many important biological processes concerning RNA-mediated gene regulation, and several computational methods have been recently developed to predict the protein-RNA interactions of RNA-binding proteins. Newly developed discriminative descriptors will help to improve the prediction accuracy of these prediction methods and provide further meaningful information for researchers. In this work, we designed two structural features (residue electrostatic surface potential and triplet interface propensity), and statistical and structural analysis of protein-RNA complexes showed that the two features are powerful for identifying RNA-binding protein residues. Using these two features and other excellent structure- and sequence-based features, a random forest classifier was constructed to predict RNA-binding residues. The area under the receiver operating characteristic curve (AUC) of five-fold cross-validation for our method on training set RBP195 was 0.900, and when applied to the test set RBP68, the prediction accuracy (ACC) was 0.868, and the F-score was 0.631. The good prediction performance of our method revealed that the two newly designed descriptors could be discriminative for inferring protein residues interacting with RNAs. To facilitate the use of our method, a web-server called RNAProSite, which implements the proposed method, was constructed and is freely available at http://lilab.ecust.edu.cn/NABind .

  11. Accurate analytical modeling of junctionless DG-MOSFET by green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double-gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations, which are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of the Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict the channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparison with TCAD Sentaurus simulation results, and the model matches the TCAD device simulations quite well.

  12. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  13. Predicting Next Year's Resources--Short-Term Enrollment Forecasting for Accurate Budget Planning. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    Salley, Charles D.

    Accurate enrollment forecasts are a prerequisite for reliable budget projections. This is because tuition payments make up a significant portion of a university's revenue, and anticipated revenue is the immediate constraint on current operating expenditures. Accurate forecasts are even more critical to revenue projections when a university's…

  14. An adaptive data-driven method for accurate prediction of remaining useful life of rolling bearings

    NASA Astrophysics Data System (ADS)

    Peng, Yanfeng; Cheng, Junsheng; Liu, Yanfei; Li, Xuejun; Peng, Zhihua

    2018-06-01

    A novel data-driven method based on a Gaussian mixture model (GMM) and the distance evaluation technique (DET) is proposed to predict the remaining useful life (RUL) of rolling bearings. The data sets are clustered by the GMM to divide all data sets into several health states adaptively and reasonably. The number of clusters is determined by the minimum description length principle; thus both the health state of the data sets and the number of states are obtained automatically. Meanwhile, abnormal data sets can be recognized during the clustering process and removed from the training data sets. After obtaining the health states, appropriate features are selected by DET to increase the classification and prediction accuracy. In the prediction process, each vibration signal is decomposed into several components by empirical mode decomposition. Common statistical parameters of the components are calculated first, and the features are then clustered using the GMM to divide the data sets into several health states and remove the abnormal data sets. Thereafter, appropriate statistical parameters of the generated components are selected using DET. Finally, a least squares support vector machine is utilized to predict the RUL of rolling bearings. Experimental results indicate that the proposed method reliably predicts the RUL of rolling bearings.
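One common form of distance-evaluation-style feature scoring can be sketched under the assumption that a discriminative feature separates class (health-state) means widely relative to its within-class spread. The score below and the synthetic data are illustrative, not the paper's exact DET formula:

```python
import numpy as np

# Toy distance-evaluation-style feature score: average between-class mean
# separation divided by average within-class spread. Higher = more useful
# for classification. Data and class structure are invented.
def det_score(feature, labels):
    classes = np.unique(labels)
    means = np.array([feature[labels == c].mean() for c in classes])
    within = np.mean([feature[labels == c].std() for c in classes])
    between = np.abs(means[:, None] - means[None, :]).sum() / (len(classes) * (len(classes) - 1))
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)
informative = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
noisy = rng.normal(0, 1, 100)
print(det_score(informative, labels) > det_score(noisy, labels))  # True
```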

  15. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    DTIC Science & Technology

    2011-09-30

    Curtis D. Mobley, Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005. Cited documentation: EcoLight-S 1.0 Users' Guide and Technical Documentation, Sequoia Scientific, Inc., Bellevue, WA, 38 pages; Mobley, C. D., 2011. Fast light calculations

  16. Diagnostic accuracy of uriSed automated urine microscopic sediment analyzer and dipstick parameters in predicting urine culture test results.

    PubMed

    Huysal, Kağan; Budak, Yasemin U; Karaca, Ayse Ulusoy; Aydos, Murat; Kahvecioğlu, Serdar; Bulut, Mehtap; Polat, Murat

    2013-01-01

    Urinary tract infection (UTI) is one of the most common types of infection. Currently, diagnosis is primarily based on microbiologic culture, which is time-consuming and labor-intensive. The aim of this study was to assess the diagnostic accuracy of urinalysis results from UriSed (77 Electronica, Budapest, Hungary), an automated microscopic image-based sediment analyzer, in predicting positive urine cultures. We examined a total of 384 urine specimens from hospitalized patients and outpatients attending our hospital on the same day for urinalysis, dipstick tests and semi-quantitative urine culture. The urinalysis results were compared with those of conventional semi-quantitative urine culture. Of 384 urinary specimens, 68 were positive for bacteriuria by culture, and were thus considered true positives. Comparison of these results with those obtained from the UriSed analyzer indicated that the analyzer had a specificity of 91.1%, a sensitivity of 47.0%, a positive predictive value (PPV) of 53.3% (95% confidence interval (CI) = 40.8-65.3), and a negative predictive value (NPV) of 88.8% (95% CI = 85.0-91.8%). The accuracy was 83.3% when the urine leukocyte parameter was used, 76.8% when bacteriuria analysis of urinary sediment was used, and 85.1% when the bacteriuria and leukocyturia parameters were combined. The presence of nitrite was the best indicator of culture positivity (99.3% specificity) but had a negative likelihood ratio of 0.7, indicating that it was not a reliable clinical test. Although the specificity of the UriSed analyzer was within acceptable limits, its sensitivity was low. Thus, UriSed urinalysis results do not accurately predict the outcome of culture.
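All of the metrics quoted above derive from a single 2x2 confusion table. A minimal sketch, with hypothetical counts chosen only to roughly reproduce the reported percentages (they are not the study's raw data):

```python
# Diagnostic metrics from a 2x2 confusion table. The counts below are
# hypothetical, picked so the outputs land near the abstract's figures.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = diagnostic_metrics(tp=32, fp=28, fn=36, tn=288)
print(round(m["specificity"], 3), round(m["npv"], 3))  # 0.911 0.889
```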

  17. Protein docking prediction using predicted protein-protein interface.

    PubMed

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies from case to case, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy compared with docking without binding site prediction or docking that uses the binding site prediction only as a post-filter. We have developed PI-LZerD, a pairwise docking algorithm which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual interface site predictions as well as unbound docking cases.

  18. Towards seasonal Arctic shipping route predictions

    NASA Astrophysics Data System (ADS)

    Melia, N.; Haines, K.; Hawkins, E.; Day, J. J.

    2017-08-01

    The continuing decline in Arctic sea-ice will likely lead to increased human activity and opportunities for shipping in the region, suggesting that seasonal predictions of route openings will become ever more important. Here we present results from a set of ‘perfect model’ experiments to assess the predictability characteristics of the opening of Arctic sea routes. We find skilful predictions of the upcoming summer shipping season can be made from as early as January, although typically forecasts show lower skill before a May ‘predictability barrier’. We demonstrate that in forecasts started from January, predictions of route opening date are twice as uncertain as predicting the closing date and that the Arctic shipping season is becoming longer due to climate change, with later closing dates mostly responsible. We find that predictive skill is state dependent with predictions for high or low ice years exhibiting greater skill than medium ice years. Forecasting the fastest open water route through the Arctic is accurate to within 200 km when predicted from July, a six-fold increase in accuracy compared to forecasts initialised from the previous November, which are typically no better than climatology. Finally we find that initialisation of accurate summer sea-ice thickness information is crucial to obtain skilful forecasts, further motivating investment into sea-ice thickness observations, climate models, and assimilation systems.

  19. Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

    NASA Astrophysics Data System (ADS)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    Modeling association football prediction has become increasingly popular in the last few years, and many different prediction models have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many football prediction models have been built using Bayesian approaches. This paper proposes a Bayesian Network (BN) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). Three seasons of the English Premier League (EPL), 2010-2011, 2011-2012 and 2012-2013, were selected and reviewed. K-fold cross validation was used to test the accuracy of the prediction model. The required football data were sourced from http://www.football-data.co.uk. The BN achieved an average predictive accuracy of 75.09% across the three seasons. It is hoped that these results can serve as a benchmark for future research in predicting football match results.
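
As a rough sketch of the evaluation protocol described above, the following pure-Python k-fold cross-validation stands in a trivial majority-class predictor for the paper's Bayesian network; the data and classifier are illustrative assumptions, not the actual EPL model:

```python
import random

def kfold_accuracy(data, k=10, seed=0):
    """Estimate prediction accuracy by k-fold cross-validation.

    `data` is a list of (features, outcome) pairs. The "classifier" here
    is a placeholder: it predicts the most frequent outcome (H, D, or A)
    seen in the training folds.
    """
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [row for j, f in enumerate(folds) if j != i for row in f]
        outcomes = [y for _, y in train]
        majority = max(set(outcomes), key=outcomes.count)
        correct = sum(1 for _, y in test if y == majority)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

# Synthetic season: 380 matches with made-up outcome counts (not real EPL data).
matches = [((), "H")] * 180 + [((), "D")] * 100 + [((), "A")] * 100
print(f"mean CV accuracy: {kfold_accuracy(matches):.2%}")
```

Because the folds partition the data evenly, the mean accuracy of the majority-class baseline equals the overall home-win fraction (180/380 here), which is the floor any real model must beat.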

  20. PSORTb 3.0: improved protein subcellular localization prediction with refined localization subcategories and predictive capabilities for all prokaryotes.

    PubMed

    Yu, Nancy Y; Wagner, James R; Laird, Matthew R; Melli, Gabor; Rey, Sébastien; Lo, Raymond; Dao, Phuong; Sahinalp, S Cenk; Ester, Martin; Foster, Leonard J; Brinkman, Fiona S L

    2010-07-01

    PSORTb has remained the most precise bacterial protein subcellular localization (SCL) predictor since it was first made available in 2003. However, the recall needs to be improved and no accurate SCL predictors yet make predictions for archaea, nor differentiate important localization subcategories, such as proteins targeted to a host cell or bacterial hyperstructures/organelles. Such improvements should preferably be encompassed in a freely available web-based predictor that can also be used as a standalone program. We developed PSORTb version 3.0 with improved recall, higher proteome-scale prediction coverage, and new refined localization subcategories. It is the first SCL predictor specifically geared for all prokaryotes, including archaea and bacteria with atypical membrane/cell wall topologies. It features an improved standalone program, with a new batch results delivery system complementing its web interface. We evaluated the most accurate SCL predictors using 5-fold cross validation and an independent proteomics analysis, showing that PSORTb 3.0 is the most accurate but can benefit from being complemented by Proteome Analyst predictions. Availability: http://www.psort.org/psortb (download the open source software or use the web interface). Contact: psort-mail@sfu.ca. Supplementary data are available at Bioinformatics online.

  1. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared with results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Given its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
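
A minimal sketch of the rate-driven idea, under the simplifying assumption of piston (sharp-front) flow rather than RDIMOD's actual ODE: cumulative infiltration integrates the input rate, and a sharp wetting front advances in proportion to it. Function and parameter names are hypothetical.

```python
def wetting_front_depth(rate_fn, t_end, dtheta, dt=1e-3):
    """Piston-flow approximation: with infiltration rate q(t) as input,
    cumulative infiltration I satisfies dI/dt = q(t), and a sharp wetting
    front advances as z = I / dtheta, where dtheta is the moisture deficit
    (saturated minus initial water content). Forward-Euler integration;
    a generic water-balance sketch, not RDIMOD itself.
    """
    cum_infiltration, t = 0.0, 0.0
    while t < t_end:
        cum_infiltration += rate_fn(t) * dt  # dI = q(t) dt
        t += dt
    return cum_infiltration / dtheta

# Constant 2 cm/h infiltration for 3 h into soil with moisture deficit 0.3:
print(f"{wetting_front_depth(lambda t: 2.0, 3.0, 0.3):.1f} cm")
```

For a constant rate this reduces to z = q*t/dtheta (20 cm here), which makes the Euler loop easy to sanity-check before supplying a time-varying rate.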

  2. Predictive aging results for cable materials in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-11-01

    In this report, we provide a detailed discussion of the methodology for predicting cable degradation versus dose rate, temperature, and exposure time, and its application to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylenetetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted low-dose-rate results for our materials are in excellent agreement with long-term (7 to 9 years), low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylenetetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated. Finally, to aid utilities in their cable life extension decisions, we utilize our modelling results to generate lifetime prediction curves for the materials modelled to date. These curves plot expected material lifetime versus dose rate and temperature down to the levels of interest for nuclear power plant aging. 18 refs., 30 figs., 3 tabs.

  3. Comparison of integrated clustering methods for accurate and stable prediction of building energy consumption data

    DOE PAGES

    Hsu, David

    2015-09-27

    Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and misinterpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
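
Cluster stability via the Jaccard coefficient, as used above, can be sketched in a few lines (a minimal illustration, not the paper's implementation):

```python
def jaccard(a, b):
    """Jaccard coefficient between two clusters, given as sets of member ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_stability(original, perturbed):
    """For each original cluster, the best Jaccard match among the clusters
    found on perturbed (e.g. bootstrapped) data. Values near 1 indicate a
    stable, reproducible cluster."""
    return [max(jaccard(c, p) for p in perturbed) for c in original]

# Toy example: cluster 1 is recovered exactly, cluster 2 only partially.
orig = [{1, 2, 3, 4}, {5, 6, 7, 8}]
pert = [{1, 2, 3, 4}, {5, 6, 9}]
print(cluster_stability(orig, pert))  # → [1.0, 0.4]
```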

  4. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on the patient's size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
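
One common geometric definition of effective diameter, the area-equivalent circle as used in AAPM Report 204, can be sketched as follows; these helper functions are illustrative, not the attenuation-based method of this study:

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Geometric effective diameter (AAPM Report 204 convention): the
    diameter of the circle whose area equals that of an ellipse with the
    patient's anteroposterior (AP) and lateral (LAT) dimensions as axes."""
    return math.sqrt(ap_cm * lat_cm)

def diameter_from_area(area_cm2):
    """Effective diameter of a region with known cross-sectional area,
    e.g. segmented from a reconstructed axial CT slice."""
    return 2.0 * math.sqrt(area_cm2 / math.pi)

print(f"{effective_diameter(20, 30):.1f} cm")            # 20 x 30 cm ellipse
print(f"{diameter_from_area(math.pi * 15 ** 2):.1f} cm")  # circle of radius 15 cm
```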

  5. Pneumococcal pneumonia - Are the new severity scores more accurate in predicting adverse outcomes?

    PubMed

    Ribeiro, C; Ladeira, I; Gaio, A R; Brito, M C

    2013-01-01

    The site-of-care decision is one of the most important factors in the management of patients with community-acquired pneumonia. The severity scores are validated prognostic tools for community-acquired pneumonia mortality and treatment site decision. The aim of this paper was to compare the discriminatory power of four scores - the classic PSI and CURB65 and the most recent SCAP and SMART-COP - in predicting major adverse events: death, ICU admission, need for invasive mechanical ventilation or vasopressor support in patients admitted with pneumococcal pneumonia. We conducted a five-year retrospective study of patients admitted for pneumococcal pneumonia. Patients were stratified based on admission data and assigned to low-, intermediate-, and high-risk classes for each score. Results were obtained comparing low versus non-low risk classes. We studied 142 episodes of hospitalization with 2 deaths and 10 patients needing mechanical ventilation and vasopressor support. The majority of patients were classified as low risk by all scores - we found high negative predictive values for all adverse events studied, the highest corresponding to the SCAP score. The more recent scores showed better accuracy for predicting ICU admission and need for ventilation or vasopressor support (mostly the SCAP score, with higher AUC values for all adverse events). The rate of all adverse outcomes increased directly with increasing risk class in all scores. The new severity scores appear to have a higher discriminatory power for all adverse events in our study, particularly the SCAP score. Copyright © 2012 Sociedade Portuguesa de Pneumologia. Published by Elsevier España. All rights reserved.

  6. New and Accurate Predictive Model for the Efficacy of Extracorporeal Shock Wave Therapy in Managing Patients With Chronic Plantar Fasciitis.

    PubMed

    Yin, Mengchen; Chen, Ni; Huang, Quan; Marla, Anastasia Sulindro; Ma, Junming; Ye, Jie; Mo, Wen

    2017-12-01

    Youden index was .4243, .3003, and .7189, respectively. The Hosmer-Lemeshow test showed a good fitting of the predictive model, with an overall accuracy of 89.6%. This study establishes a new and accurate predictive model for the efficacy of ESWT in managing patients with chronic plantar fasciitis. The use of these parameters, in the form of a predictive model for ESWT efficacy, has the potential to improve decision-making in the application of ESWT. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment, to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these test were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). 

  8. Metamemory monitoring in mild cognitive impairment: Evidence of a less accurate episodic feeling-of-knowing.

    PubMed

    Perrotin, Audrey; Belleville, Sylvie; Isingrini, Michel

    2007-09-20

    This study aimed at exploring metamemory and specifically the accuracy of memory monitoring in mild cognitive impairment (MCI) using an episodic memory feeling-of-knowing (FOK) procedure. To this end, 20 people with MCI and 20 matched control participants were compared on the episodic FOK task. Results showed that the MCI group made less accurate FOK predictions than the control group by overestimating their memory performance on a recognition task. The MCI overestimation behavior was found to be critically related to the severity of their cognitive decline. In the light of recent neuroanatomical models showing the involvement of a temporal-frontal network underlying accurate FOK predictions, the role of memory and executive processes was evaluated. Thus, participants were also administered memory and executive neuropsychological tests. Correlation analysis revealed a between-group differential pattern indicating that FOK accuracy was primarily related to memory abilities in people with MCI, whereas it was specifically related to executive functioning in control participants. The lesser ability of people with MCI to assess their memory status accurately on an episodic FOK task is discussed in relation to both their subjective memory complaints and to their actual memory deficits which might be mediated by the brain vulnerability of their hippocampus and medial temporal system. It is suggested that their memory weakness may lead people with MCI to use other less reliable forms of memory monitoring.

  9. Probability-based collaborative filtering model for predicting gene-disease associations.

    PubMed

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with those of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM model can be leveraged for prediction of disease genes, especially for new human genes or diseases with no known relationships.
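
The two similarity metrics named above, vector-space (cosine) similarity and the Pearson correlation coefficient, can be written out directly (a generic illustration, not the PCFM code):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors,
    e.g. the latent-factor profiles of two genes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cosine(x, y):
    """Vector-space (cosine) similarity, the other metric named above."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

print(pearson([1, 2, 3], [2, 4, 6]))  # ≈ 1.0, perfectly correlated
print(cosine([1, 0], [0, 1]))         # orthogonal → 0.0
```

Unlike cosine similarity, Pearson correlation centers each vector first, so it is invariant to additive shifts; which behavior is wanted depends on how the latent factors are scaled.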

  10. Residual Strength Prediction of Fuselage Structures with Multiple Site Damage

    NASA Technical Reports Server (NTRS)

    Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1999-01-01

    This paper summarizes recent results on simulating full-scale pressure tests of wide body, lap-jointed fuselage panels with multiple site damage (MSD). The crack tip opening angle (CTOA) fracture criterion and the FRANC3D/STAGS software program were used to analyze stable crack growth under conditions of general yielding. The link-up of multiple cracks and residual strength of damaged structures were predicted. Elastic-plastic finite element analysis based on the von Mises yield criterion and incremental flow theory with small strain assumption was used. A global-local modeling procedure was employed in the numerical analyses. Stress distributions from the numerical simulations are compared with strain gage measurements. Analysis results show that accurate representation of the load transfer through the rivets is crucial for the model to predict the stress distribution accurately. Predicted crack growth and residual strength are compared with test data. Observed and predicted results both indicate that the occurrence of small MSD cracks substantially reduces the residual strength. Modeling fatigue closure is essential to capture the fracture behavior during the early stable crack growth. Breakage of a tear strap can have a major influence on residual strength prediction.

  11. Reliability Prediction Analysis: Airborne System Results and Best Practices

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Lopes, Rui

    2013-09-01

    This article presents the results of several reliability prediction analyses for aerospace components, performed with both methodologies, 217F and 217Plus. Supporting and complementary activities are described, as well as differences in the results and applications of the two methodologies, which are summarized in a set of lessons learned useful for RAMS and safety prediction practitioners. The effort required for these activities is also discussed, as are the end results and their interpretation and impact on the system design. The article concludes by positioning these activities and methodologies within an overall certification process for space and aeronautics equipment and components, and by highlighting their advantages. Some good practices are summarized and some reuse rules are laid down.

  12. Predicting plant biomass accumulation from image-derived parameters

    PubMed Central

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

    Background: Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide a great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results: In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, the methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions: We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559

  13. An instrument for rapid, accurate, determination of fuel moisture content

    Treesearch

    Stephen S. Sackett

    1980-01-01

    Moisture contents of dead and living fuels are key variables in fire behavior. Accurate, real-time fuel moisture data are required for prescribed burning and wildfire behavior predictions. The convection oven method has become the standard for direct fuel moisture content determination. Efforts to quantify fuel moisture through indirect methods have not been...

  14. CUFID-query: accurate network querying through random walk based network flow estimation.

    PubMed

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2017-12-28

    Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive

  15. When high working memory capacity is and is not beneficial for predicting nonlinear processes.

    PubMed

    Fischer, Helen; Holt, Daniel V

    2017-04-01

    Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.
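
The contrast between a simple linear strategy and a well-calibrated exponential rule can be illustrated on a toy nonlinear process; the functions and data below are illustrative assumptions, not the study's actual stimuli:

```python
import math

def linear_extrapolate(ts, ys, t_new):
    """Simple exemplar-like strategy: continue the slope of the last two
    observations. Adequate when the prediction region is roughly linear."""
    slope = (ys[-1] - ys[-2]) / (ts[-1] - ts[-2])
    return ys[-1] + slope * (t_new - ts[-1])

def exp_extrapolate(ts, ys, t_new):
    """Well-calibrated strategy: assume exponential growth and fit the
    growth rate from the last two observations in log space."""
    rate = (math.log(ys[-1]) - math.log(ys[-2])) / (ts[-1] - ts[-2])
    return ys[-1] * math.exp(rate * (t_new - ts[-1]))

ts = [0, 1, 2, 3, 4]
ys = [math.exp(0.5 * t) for t in ts]  # a strongly nonlinear process
truth = math.exp(0.5 * 6)
print(f"truth={truth:.2f} "
      f"linear={linear_extrapolate(ts, ys, 6):.2f} "
      f"exponential={exp_extrapolate(ts, ys, 6):.2f}")
```

On this trajectory the linear rule badly undershoots while the calibrated exponential rule recovers the true value, mirroring the point that complex strategies pay off only when the prediction region is highly nonlinear.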

  16. An Accurate ab initio Quartic Force Field and Vibrational Frequencies for CH4 and Isotopomers

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Martin, Jan M. L.; Taylor, Peter R.

    1995-01-01

    A very accurate ab initio quartic force field for CH4 and its isotopomers is presented. The quartic force field was determined with the singles and doubles coupled-cluster procedure that includes a quasiperturbative estimate of the effects of connected triple excitations, CCSD(T), using the correlation consistent polarized valence triple zeta, cc-pVTZ, basis set. Improved quadratic force constants were evaluated with the correlation consistent polarized valence quadruple zeta, cc-pVQZ, basis set. Fundamental vibrational frequencies are determined using second-order perturbation theory anharmonic analyses. All fundamentals of CH4 and isotopomers for which accurate experimental values exist and for which there is not a large Fermi resonance are predicted to within +/- 6 cm(exp -1). It is thus concluded that our predictions for the harmonic frequencies and the anharmonic constants are the most accurate estimates available. It is also shown that using cubic and quartic force constants determined with the correlation consistent polarized double zeta, cc-pVDZ, basis set in conjunction with the cc-pVQZ quadratic force constants and equilibrium geometry leads to accurate predictions for the fundamental vibrational frequencies of methane, suggesting that this approach may be a viable alternative for larger molecules. Using CCSD(T), core correlation is found to reduce the CH4 r(e) by 0.0015 A. Our best estimate for r(e) is 1.0862 +/- 0.0005 A.

  17. Effects of Prediction and Contextual Support on Lexical Processing: Prediction takes Precedence

    PubMed Central

    Brothers, Trevor; Swaab, Tamara Y.; Traxler, Matthew J.

    2014-01-01

    Readers may use contextual information to anticipate and pre-activate specific lexical items during reading. However, prior studies have not clearly dissociated the effects of accurate lexical prediction from other forms of contextual facilitation such as plausibility or semantic priming. In this study, we measured electrophysiological responses to predicted and unpredicted target words in passages providing varying levels of contextual support. This method was used to isolate the neural effects of prediction from other potential contextual influences on lexical processing. While both prediction and discourse context influenced ERP amplitudes within the time range of the N400, the effects of prediction occurred much more rapidly, preceding contextual facilitation by approximately 100ms. In addition, a frontal, post-N400 positivity (PNP) was modulated by both prediction accuracy and the overall plausibility of the preceding passage. These results suggest a unique temporal primacy for prediction in facilitating lexical access. They also suggest that the frontal PNP may index the costs of revising discourse representations following an incorrect lexical prediction. PMID:25497522

  18. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher order (LP11) mode of graded index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with their approximate analytic formulations. We show that while approximations using two and three Chebyshev points give fairly accurate results, values based on our calculations involving four Chebyshev points match excellently with available exact numerical results.
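
The trend reported above can be illustrated with a generic Chebyshev least-squares fit. The sketch below is only a toy: the radial shape, grid, and term counts are assumptions, not the paper's actual LP11 mode computation, but it shows how the residual shrinks as Chebyshev terms are added.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical smooth radial shape standing in for an LP11-like field profile
# (zero on axis); the paper's actual graded-index mode is not reproduced here.
r = np.linspace(0.0, 1.0, 200)
field = r * np.exp(-r**2)

rms_error = {}
for n_terms in (2, 3, 4):
    coeffs = C.chebfit(r, field, deg=n_terms - 1)  # least-squares Chebyshev fit
    approx = C.chebval(r, coeffs)
    rms_error[n_terms] = float(np.sqrt(np.mean((approx - field) ** 2)))

# Adding Chebyshev terms reduces the residual, mirroring the two/three/four
# point trend described in the abstract.
```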

  19. A Course Specific Perspective in the Prediction of Academic Success.

    ERIC Educational Resources Information Center

    Beaulieu, R. P.

    1990-01-01

    Students (N=94) enrolled in a senior-level management course over six semesters were used to investigate the ability of four measures from two industrial tests to predict course performance. The resulting multiple regression equation with four predictors could accurately predict achievement of males, but not of females. (Author/TE)

  20. Neural network and SVM classifiers accurately predict lipid binding proteins, irrespective of sequence homology.

    PubMed

    Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil

    2014-09-07

    Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function which results in low accuracy of sequence homology based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that SVM outperforms neural network. SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) (in average) for classification of different LBPs classes. Increasing the number and the range of extracted protein features as well as optimization of the SVM parameters significantly increased the efficiency of LBPs class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool in detection of LBPs classes. The proposed approach has the potential to integrate and improve the common sequence alignment based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
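
The five-fold cross-validation protocol used to compare the classifiers can be sketched as follows. This is a minimal stand-in, not the authors' pipeline: the data are synthetic Gaussians and a simple nearest-centroid rule replaces the SVM, but the fold construction and accuracy averaging follow the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for protein feature vectors (hypothetical): two classes
# (LBP vs. non-LBP) drawn from shifted Gaussians.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

def cross_validated_accuracy(X, y, k=5):
    """k-fold cross-validation with a nearest-centroid classifier."""
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = X[train][y[train] == 0].mean(axis=0)  # class centroids from the
        c1 = X[train][y[train] == 1].mean(axis=0)  # training folds only
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

cv_accuracy = cross_validated_accuracy(X, y)
```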

  1. Absolute Measurements of Macrophage Migration Inhibitory Factor and Interleukin-1-β mRNA Levels Accurately Predict Treatment Response in Depressed Patients.

    PubMed

    Cattaneo, Annamaria; Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M

    2016-10-01

    Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA value cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network by the Search Tool for the Retrieval of Interacting Genes/Proteins. We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more assertive antidepressant strategies.
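
With a single transcript measure, linear discriminant analysis reduces to choosing a threshold on the mRNA value. The sketch below uses hypothetical numbers (the means, spreads, and group sizes are invented) to show how such a cutoff yields sensitivity and specificity for flagging nonresponders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical absolute mRNA levels (molecules per sample); nonresponders are
# assumed to show higher inflammatory transcript levels, as in the abstract.
responders = rng.normal(50.0, 10.0, 60)
nonresponders = rng.normal(90.0, 10.0, 40)

# For one variable with equal class variances, the linear discriminant
# boundary is the midpoint of the two class means.
cutoff = (responders.mean() + nonresponders.mean()) / 2.0

sensitivity = float((nonresponders > cutoff).mean())  # nonresponders flagged
specificity = float((responders <= cutoff).mean())    # responders cleared
```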

  2. A composite score combining waist circumference and body mass index more accurately predicts body fat percentage in 6- to 13-year-old children.

    PubMed

    Aeberli, I; Gut-Knabenhans, M; Kusche-Ammann, R S; Molinari, L; Zimmermann, M B

    2013-02-01

    Body mass index (BMI) and waist circumference (WC) are widely used to predict % body fat (BF) and classify degrees of pediatric adiposity. However, both measures have limitations. The aim of this study was to evaluate whether a combination of WC and BMI would more accurately predict %BF than either alone. In a nationally representative sample of 2,303 6- to 13-year-old Swiss children, weight, height, and WC were measured, and %BF was determined from multiple skinfold thicknesses. Regression and receiver operating characteristic (ROC) curves were used to evaluate the combination of WC and BMI in predicting %BF against WC or BMI alone. An optimized composite score (CS) was generated. A quadratic polynomial combination of WC and BMI led to a better prediction of %BF (r(2) = 0.68) compared with the two measures alone (r(2) = 0.58-0.62). The areas under the ROC curve for the CS [0.6 * WC-SDS + 0.4 * BMI-SDS] ranged from 0.962 ± 0.0053 (overweight girls) to 0.982 ± 0.0046 (obese boys) and were somewhat greater than the AUCs for either BMI or WC alone. At a given specificity, the sensitivity of the prediction of overweight and obesity based on the CS was higher than that based on either WC or BMI alone, although the improvement was small. Both BMI and WC are good predictors of %BF in primary school children. However, a composite score incorporating both measures increased sensitivity at a constant specificity as compared to the individual measures. It may therefore be a useful tool for clinical and epidemiological studies of pediatric adiposity.
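
The composite score and its ROC comparison can be sketched directly from the formula in the abstract, CS = 0.6 * WC-SDS + 0.4 * BMI-SDS. The data below are synthetic (the class separation is an assumption), and the AUC is computed via the Mann-Whitney identity rather than a full ROC curve.

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney U identity."""
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return float(gt + 0.5 * eq)

# Synthetic standardized scores (SDS) for normal-weight vs. overweight children.
n = 300
wc_neg, bmi_neg = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
wc_pos, bmi_pos = rng.normal(1.8, 1.0, n), rng.normal(1.8, 1.0, n)

# Composite score from the abstract: CS = 0.6 * WC-SDS + 0.4 * BMI-SDS
cs_neg = 0.6 * wc_neg + 0.4 * bmi_neg
cs_pos = 0.6 * wc_pos + 0.4 * bmi_pos

auc_wc = auc(wc_pos, wc_neg)  # WC alone
auc_cs = auc(cs_pos, cs_neg)  # composite score
# Combining two noisy measures raises the AUC slightly, as reported above.
```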

  3. Can computerized tomography accurately stage childhood renal tumors?

    PubMed

    Abdelhalim, Ahmed; Helmy, Tamer E; Harraz, Ahmed M; Abou-El-Ghar, Mohamed E; Dawaba, Mohamed E; Hafez, Ashraf T

    2014-07-01

    Staging of childhood renal tumors is crucial for treatment planning and outcome prediction. We sought to identify whether computerized tomography could accurately predict the local stage of childhood renal tumors. We retrospectively reviewed our database for patients diagnosed with childhood renal tumors and treated surgically between 1990 and 2013. Inability to retrieve preoperative computerized tomography, intraoperative tumor spillage and non-Wilms childhood renal tumors were exclusion criteria. Local computerized tomography stage was assigned by a single experienced pediatric radiologist blinded to the pathological stage, using a consensus similar to the Children's Oncology Group Wilms tumor staging system. Tumors were stratified into up-front surgery and preoperative chemotherapy groups. The radiological stage of each tumor was compared to the pathological stage. A total of 189 tumors in 179 patients met inclusion criteria. Computerized tomography staging matched pathological staging in 68% of up-front surgery (70 of 103), 31.8% of pre-chemotherapy (21 of 66) and 48.8% of post-chemotherapy scans (42 of 86). Computerized tomography overstaged 21.4%, 65.2% and 46.5% of tumors in the up-front surgery, pre-chemotherapy and post-chemotherapy scans, respectively, and understaged 10.7%, 3% and 4.7%. Computerized tomography staging was more accurate in tumors managed by up-front surgery (p <0.001) and those without extracapsular extension (p <0.001). The validity of computerized tomography staging of childhood renal tumors remains doubtful. This staging is more accurate for tumors treated with up-front surgery and those without extracapsular extension. Preoperative computerized tomography can help to exclude capsular breach. Treatment strategy should be based on surgical and pathological staging to avoid the hazards of inaccurate staging. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in an MRF process.
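
The efficiency-coefficient idea can be reduced to a one-parameter rescaling: measure a calibration spot on the new material, then estimate the coefficient k that maps the reference removal function onto it. The profile shape and the coefficient value below are made up for illustration; the actual MRF removal-function model is more involved.

```python
import numpy as np

# Made-up reference removal function (removal rate along one axis of the spot).
x = np.linspace(-2.0, 2.0, 101)
removal_ref = np.exp(-x**2)

# Calibration spot measured on a different workpiece material (synthetic: here
# it is exactly 0.65 times the reference profile).
removal_measured = 0.65 * removal_ref

# Least-squares estimate of the efficiency coefficient k, then prediction.
k = float(np.dot(removal_ref, removal_measured) / np.dot(removal_ref, removal_ref))
removal_predicted = k * removal_ref
```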

  5. Preschoolers can make highly accurate judgments of learning.

    PubMed

    Lipowski, Stacy L; Merriman, William E; Dunlosky, John

    2013-08-01

    Preschoolers' ability to make judgments of learning (JOLs) was examined in 3 experiments in which they were taught proper names for animals. In Experiment 1, when judgments were made immediately after studying, nearly every child predicted subsequent recall of every name. When judgments were made after a delay, fewer showed this response tendency. The delayed JOLs of those who predicted at least 1 recall failure were still overconfident, however, and were not correlated with final recall. In Experiment 2, children received a second study trial with feedback, made JOLs after a delay, and completed an additional forced-choice judgment task. In this task, an animal whose name had been recalled was pitted against an animal whose name had not been recalled, and the children chose the one they were more likely to remember later. Compared with Experiment 1, more children predicted at least 1 recall failure and predictions were moderately accurate. In the forced-choice task, animal names that had just been successfully recalled were typically chosen over ones that had not. Experiment 3 examined the effect of providing an additional retrieval attempt on delayed JOLs. Half of the children received a single study session, and half received an additional study session with feedback. Children in the practice group showed less overconfidence than those in the no-practice group. Taken together, the results suggest that, with minimal task experience, most preschoolers understand that they will not remember everything and that if they cannot recall something at present, they are unlikely to recall it in the future. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  6. Limb Dominance Results from Asymmetries in Predictive and Impedance Control Mechanisms

    PubMed Central

    Yadav, Vivek; Sainburg, Robert L.

    2014-01-01

    Handedness is a pronounced feature of human motor behavior, yet the underlying neural mechanisms remain unclear. We hypothesize that motor lateralization results from asymmetries in predictive control of task dynamics and in control of limb impedance. To test this hypothesis, we present an experiment with two different force field environments, a field with a predictable magnitude that varies with the square of velocity, and a field with a less predictable magnitude that varies linearly with velocity. These fields were designed to be compatible with controllers that are specialized in predicting limb and task dynamics, and modulating position and velocity dependent impedance, respectively. Because the velocity square field does not change the form of the equations of motion for the reaching arm, we reasoned that a forward dynamic-type controller should perform well in this field, while control of linear damping and stiffness terms should be less effective. In contrast, the unpredictable linear field should be most compatible with impedance control, but incompatible with predictive dynamics control. We measured steady state final position accuracy and 3 trajectory features during exposure to these fields: Mean squared jerk, Straightness, and Movement time. Our results confirmed that each arm made straighter, smoother, and quicker movements in its compatible field. Both arms showed similar final position accuracies, which were achieved using more extensive corrective sub-movements when either arm performed in its incompatible field. Finally, each arm showed limited adaptation to its incompatible field. Analysis of the dependence of trajectory errors on field magnitude suggested that dominant arm adaptation occurred by prediction of the mean field, thus exploiting predictive mechanisms for adaptation to the unpredictable field. Overall, our results support the hypothesis that motor lateralization reflects asymmetries in specific motor control mechanisms associated

  7. Summary of Data from the First AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Levy, David W.; Zickuhr, Tom; Vassberg, John; Agrawal, Shreekant; Wahls, Richard A.; Pirzadeh, Shahyar; Hemsch, Michael J.

    2002-01-01

    The results from the first AIAA CFD Drag Prediction Workshop are summarized. The workshop was designed specifically to assess the state-of-the-art of computational fluid dynamics methods for force and moment prediction. An impartial forum was provided to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify areas needing additional research and development. The subject of the study was the DLR-F4 wing-body configuration, which is representative of transport aircraft designed for transonic flight. Specific test cases were required so that valid comparisons could be made. Optional test cases included constant-C(sub L) drag-rise predictions typically used in airplane design by industry. Results are compared to experimental data from three wind tunnel tests. A total of 18 international participants using 14 different codes submitted data to the workshop. No particular grid type or turbulence model was more accurate, when compared to each other, or to wind tunnel data. Most of the results overpredicted C(sub Lo) and C(sub Do), but induced drag (dC(sub D)/dC(sub L)(exp 2)) agreed fairly well. Drag rise at high Mach number was underpredicted, however, especially at high C(sub L). On average, the drag data were fairly accurate, but the scatter was greater than desired. The results show that well-validated Reynolds-Averaged Navier-Stokes CFD methods are sufficiently accurate to make design decisions based on predicted drag.
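
The induced-drag slope dC(sub D)/dC(sub L)(exp 2) mentioned above follows from the classical parabolic drag polar, CD = CD0 + K * CL^2, so it can be extracted with a straight-line fit of CD against CL^2. The numbers below are invented wind-tunnel-style points, not DLR-F4 data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented drag-polar points: CD = CD0 + K * CL^2 plus measurement noise.
cl = np.linspace(0.1, 0.6, 8)
cd_measured = 0.020 + 0.045 * cl**2 + rng.normal(0.0, 1e-4, cl.size)

# Linear fit of CD against CL^2 recovers the induced-drag factor K and CD0.
K, cd0 = np.polyfit(cl**2, cd_measured, 1)
```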

  8. Predictive sensor method and apparatus

    NASA Technical Reports Server (NTRS)

    Cambridge, Vivien J.; Koger, Thomas L.

    1993-01-01

    A microprocessor and electronics package employing predictive methodology was developed to accelerate the response time of slowly responding hydrogen sensors. The system developed improved sensor response time from approximately 90 seconds to 8.5 seconds. The microprocessor works in real-time providing accurate hydrogen concentration corrected for fluctuations in sensor output resulting from changes in atmospheric pressure and temperature. Following the successful development of the hydrogen sensor system, the system and predictive methodology was adapted to a commercial medical thermometer probe. Results of the experiment indicate that, with some customization of hardware and software, response time improvements are possible for medical thermometers as well as other slowly responding sensors.

  9. Discovery of a general method of solving the Schrödinger and Dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  10. Streamlined system for purifying and quantifying a diverse library of compounds and the effect of compound concentration measurements on the accurate interpretation of biological assay results.

    PubMed

    Popa-Burke, Ioana G; Issakova, Olga; Arroway, James D; Bernasconi, Paul; Chen, Min; Coudurier, Louis; Galasinski, Scott; Jadhav, Ajit P; Janzen, William P; Lagasca, Dennis; Liu, Darren; Lewis, Roderic S; Mohney, Robert P; Sepetov, Nikolai; Sparkman, Darren A; Hodge, C Nicholas

    2004-12-15

    As part of an overall systems approach to generating highly accurate screening data across large numbers of compounds and biological targets, we have developed and implemented streamlined methods for purifying and quantitating compounds at various stages of the screening process, coupled with automated "traditional" storage methods (DMSO, -20 degrees C). Specifically, all of the compounds in our druglike library are purified by LC/MS/UV and are then controlled for identity and concentration in their respective DMSO stock solutions by chemiluminescent nitrogen detection (CLND)/evaporative light scattering detection (ELSD) and MS/UV. In addition, the compound-buffer solutions used in the various biological assays are quantitated by LC/UV/CLND to determine the concentration of compound actually present during screening. Our results show that LC/UV/CLND/ELSD/MS is a widely applicable method that can be used to purify, quantitate, and identify most small organic molecules from compound libraries. The LC/UV/CLND technique is a simple and sensitive method that can be easily and cost-effectively employed to rapidly determine the concentrations of even small amounts of any N-containing compound in aqueous solution. We present data to establish error limits for concentration determination that are well within the overall variability of the screening process. This study demonstrates that there is a significant difference between the predicted amount of soluble compound from stock DMSO solutions following dilution into assay buffer and the actual amount present in assay buffer solutions, even at the low concentrations employed for the assays. We also demonstrate that knowledge of the concentrations of compounds to which the biological target is exposed is critical for accurate potency determinations. Accurate potency values are in turn particularly important for drug discovery, for understanding structure-activity relationships, and for building useful empirical models of

  11. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
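
The "standard nudging" baseline that the abstract generalizes can be shown on a small chaotic system. The sketch below uses the Lorenz-63 model rather than the shallow water equations, observes only the x component, and relaxes the model state toward it with a nudging gain; the gain, step size, and initial states are illustrative choices, and the time-delay extension of Rey et al. is not implemented here.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt, gain=0.0, x_obs=None):
    """Forward-Euler step with an optional nudging term gain * (x_obs - x)."""
    ds = lorenz(s)
    if x_obs is not None:
        ds[0] += gain * (x_obs - s[0])
    return s + dt * ds

dt, n_steps = 0.002, 20000
truth = np.array([1.0, 1.0, 1.0])
estimate = np.array([8.0, -5.0, 30.0])  # badly wrong initial state
initial_error = float(np.linalg.norm(estimate - truth))

for _ in range(n_steps):
    estimate = step(estimate, dt, gain=20.0, x_obs=truth[0])  # nudge toward obs
    truth = step(truth, dt)

final_error = float(np.linalg.norm(estimate - truth))
# Observing x alone, the nudged estimate synchronizes with the true state.
```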

  12. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Zhe; Rey, Daniel; Ye, Jingxin

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  13. A multi-objective optimization approach accurately resolves protein domain architectures

    PubMed Central

    Bernardes, J.S.; Vieira, F.R.J.; Zaverucha, G.; Carbone, A.

    2016-01-01

    Motivation: Given a protein sequence and a number of potential domains matching it, what are the domain content and the most likely domain architecture for the sequence? This problem is of fundamental importance in protein annotation, constituting one of the main steps of all predictive annotation strategies. On the other hand, when potential domains are several and in conflict because of overlapping domain boundaries, finding a solution for the problem might become difficult. An accurate prediction of the domain architecture of a multi-domain protein provides important information for function prediction, comparative genomics and molecular evolution. Results: We developed DAMA (Domain Annotation by a Multi-objective Approach), a novel approach that identifies architectures through a multi-objective optimization algorithm combining scores of domain matches, previously observed multi-domain co-occurrence and domain overlapping. DAMA has been validated on a known benchmark dataset based on CATH structural domain assignments and on the set of Plasmodium falciparum proteins. When compared with existing tools on both datasets, it outperforms all of them. Availability and implementation: DAMA software is implemented in C++ and the source code can be found at http://www.lcqb.upmc.fr/DAMA. Contact: juliana.silva_bernardes@upmc.fr or alessandra.carbone@lip6.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26458889

  14. Results from raw milk microbiological tests do not predict the shelf-life performance of commercially pasteurized fluid milk.

    PubMed

    Martin, N H; Ranieri, M L; Murphy, S C; Ralyea, R D; Wiedmann, M; Boor, K J

    2011-03-01

    Analytical tools that accurately predict the performance of raw milk following its manufacture into commercial food products are of economic interest to the dairy industry. To evaluate the ability of currently applied raw milk microbiological tests to predict the quality of commercially pasteurized fluid milk products, samples of raw milk and 2% fat pasteurized milk were obtained from 4 New York State fluid milk processors for a 1-yr period. Raw milk samples were examined using a variety of tests commonly applied to raw milk, including somatic cell count, standard plate count, psychrotrophic bacteria count, ropy milk test, coliform count, preliminary incubation count, laboratory pasteurization count, and spore pasteurization count. Differential and selective media were used to identify groups of bacteria present in raw milk. Pasteurized milk samples were held at 6°C for 21 d and evaluated for standard plate count, coliform count, and sensory quality throughout shelf-life. Bacterial isolates from select raw and pasteurized milk tests were identified using 16S ribosomal DNA sequencing. Linear regression analysis of raw milk test results versus results reflecting pasteurized milk quality consistently showed low R(2) values (<0.45); the majority of R(2) values were <0.25, indicating a weak relationship between the results from the raw milk tests and results from tests used to evaluate pasteurized milk quality. Our findings suggest the need for new raw milk tests that measure the specific biological barriers that limit shelf-life and quality of fluid milk products. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Individuals Achieve More Accurate Results with Meters That Are Codeless and Employ Dynamic Electrochemistry

    PubMed Central

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Background Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Methods Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Results Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Conclusions Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. PMID:20167178

  16. Highway noise measurements for verification of prediction models

    DOT National Transportation Integrated Search

    1978-01-01

    Accurate prediction of highway noise has been a major problem for state highway departments. Many noise models have been proposed to alleviate this problem. Results contained in this report will be used to analyze some of these models, and to determi...

  17. Accurate experimental and theoretical comparisons between superconductor-insulator-superconductor mixers showing weak and strong quantum effects

    NASA Technical Reports Server (NTRS)

    Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.

    1988-01-01

    A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta based, Nb based, and Pb-alloy based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.

  18. Existing equations to estimate lean body mass are not accurate in the critically ill: Results of a multicenter observational study.

    PubMed

    Moisey, Lesley L; Mourtzakis, Marina; Kozar, Rosemary A; Compher, Charlene; Heyland, Daren K

    2017-12-01

Lean body mass (LBM), quantified using computed tomography (CT), is a significant predictor of clinical outcomes in the critically ill. While CT analysis is precise and accurate in measuring body composition, it may not be practical or readily accessible to all patients in the intensive care unit (ICU). Here, we assessed the agreement between LBM measured by CT and four previously developed equations that predict LBM using variables (i.e. age, sex, weight, height) commonly recorded in the ICU. LBM was calculated in 327 critically ill adults using CT scans, taken at ICU admission, and 4 predictive equations (E1-4) that were derived from non-critically ill adults, since there are no ICU-specific equations. Agreement was assessed using paired t-tests, Pearson's correlation coefficients and Bland-Altman plots. Median LBM calculated by CT was 45 kg (IQR 37-53 kg) and was significantly different (p < 0.001) from E1 (52.5 kg; IQR: 42-61 kg), E2 (55 kg; IQR 45-64 kg), E3 (55 kg; IQR 44-64 kg), and E4 (54 kg; IQR 49-61 kg). Pearson correlation coefficients suggested moderate correlation (r = 0.739, 0.756, 0.732, and 0.680, p < 0.001, respectively). Each of the equations overestimated LBM (error ranged from 7.5 to 9.9 kg) compared with LBM calculated by CT, suggesting insufficient agreement. Our data indicate that a large bias is present between the calculation of LBM by CT imaging and the predictive equations compared here. This underscores the need for future research toward the development of ICU-specific equations that reliably estimate LBM in a practical and cost-effective manner. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
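The agreement analysis described above (mean bias plus 95% limits of agreement) can be sketched in a few lines. The `bland_altman` helper and the paired values below are illustrative assumptions, not the study's data.

```python
def bland_altman(reference, estimate):
    """Bland-Altman agreement: mean bias and 95% limits of agreement.

    `reference` holds CT-derived LBM values (kg), `estimate` the
    equation-derived values for the same patients.
    """
    diffs = [e - r for r, e in zip(reference, estimate)]  # estimate minus CT
    n = len(diffs)
    bias = sum(diffs) / n                                 # mean overestimation
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired LBM values in kg (the study itself used n = 327 CT scans)
ct_lbm = [45, 38, 52, 47, 41]
eq_lbm = [53, 46, 60, 55, 50]
bias, (lower, upper) = bland_altman(ct_lbm, eq_lbm)
```

A systematic positive bias, with limits of agreement that are wide relative to clinical tolerance, is exactly the "insufficient agreement" pattern the authors report.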

  19. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    NASA Astrophysics Data System (ADS)

    Moore, John Allan

An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  20. Correlation of chemical shifts predicted by molecular dynamics simulations for partially disordered proteins.

    PubMed

Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David

    2015-01-01

    There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.

  1. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP; all of them had absent FMs in their recordings. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than that from only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.

  2. A Summary of Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1998-01-01

A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different point spacing and time step criteria across general computing platforms. It also differs in the extensive amount of effort undertaken to compare the results in a quantifiable manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The complete set of data used for this comparison is available in a recent contractor report. The result of this comparison shows that the difference between the predicted ice shape from LEWICE 2.0 and the average of the experimental data is 7.2%, while the variability of the experimental data is 2.5%.

  3. Rail-highway crossing accident prediction research results - FY80

    DOT National Transportation Integrated Search

    1981-01-01

This report presents the results of research performed at the Transportation Systems Center (TSC) dealing with mathematical methods of predicting accidents at rail-highway crossings. The work consists of three parts: Part I - Revised DOT Accid...

  4. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and effects of surface curvatures on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparison results of model predictions to experiments for side-drilled holes and discuss the effect of interface curvatures on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses with different sizes and interface curvatures.

  5. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de

  6. LocTree2 predicts localization for all domains of life

    PubMed Central

    Goldberg, Tatyana; Hamp, Tobias; Rost, Burkhard

    2012-01-01

    Motivation: Subcellular localization is one aspect of protein function. Despite advances in high-throughput imaging, localization maps remain incomplete. Several methods accurately predict localization, but many challenges remain to be tackled. Results: In this study, we introduced a framework to predict localization in life's three domains, including globular and membrane proteins (3 classes for archaea; 6 for bacteria and 18 for eukaryota). The resulting method, LocTree2, works well even for protein fragments. It uses a hierarchical system of support vector machines that imitates the cascading mechanism of cellular sorting. The method reaches high levels of sustained performance (eukaryota: Q18=65%, bacteria: Q6=84%). LocTree2 also accurately distinguishes membrane and non-membrane proteins. In our hands, it compared favorably with top methods when tested on new data. Availability: Online through PredictProtein (predictprotein.org); as standalone version at http://www.rostlab.org/services/loctree2. Contact: localization@rostlab.org Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22962467

  7. Using Laboratory Test Results at Hospital Admission to Predict Short-term Survival in Critically Ill Patients With Metastatic or Advanced Cancer.

    PubMed

    Cheng, Lee; DeJesus, Alma Y; Rodriguez, Maria A

    2017-04-01

Accurately estimating the life expectancy of critically ill patients with metastatic or advanced cancer is a crucial step in planning appropriate palliative or supportive care. We evaluated the results of laboratory tests performed within two days of hospital admission to predict the likelihood of death within 14 days. We retrospectively selected patients 18 years or older with metastatic or advanced cancer who were admitted to intensive care units or palliative and supportive care services in our hospital. We evaluated whether the following are independent predictors in a logistic regression model: age, sex, comorbidities, and the results of seven commonly available laboratory tests. The end point was death within 14 days in or out of the hospital. Of the 901 patients in the development cohort, 45% died within 14 days. The risk of death within 14 days after admission increased with increasing age, lactate dehydrogenase levels, and white blood cell counts and decreasing albumin levels and platelet counts (P < 0.01). The model predictions were confirmed using a separate validation cohort. The areas under the receiver operating characteristic curves were 0.74 and 0.70 for the development and validation cohorts, respectively, indicating good discriminatory ability for the model. Our results suggest that the results of laboratory tests performed within two days of admission are valuable in predicting death within 14 days for patients with metastatic or advanced cancer. Such results may provide an objective assessment tool for physicians and help them initiate conversations with patients and families about end-of-life care. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
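A minimal sketch of how a fitted logistic model of this kind turns admission labs into a 14-day mortality probability. The coefficients and example lab values below are invented for illustration; they are not the published model.

```python
import math

# Hypothetical coefficients (log-odds per unit): risk rises with age, LDH and
# WBC and falls with albumin and platelets, matching the direction of effects
# reported in the study. The numeric values are illustrative only.
COEFS = {"age": 0.03, "ldh": 0.002, "wbc": 0.04, "albumin": -0.9, "platelets": -0.004}
INTERCEPT = -1.5

def p_death_14d(labs):
    """Probability of death within 14 days via the logistic (sigmoid) link."""
    z = INTERCEPT + sum(COEFS[k] * labs[k] for k in COEFS)
    return 1.0 / (1.0 + math.exp(-z))

low_risk = p_death_14d({"age": 45, "ldh": 200, "wbc": 7, "albumin": 4.0, "platelets": 250})
high_risk = p_death_14d({"age": 80, "ldh": 900, "wbc": 20, "albumin": 2.0, "platelets": 60})
```

Validating such a model then amounts to computing the area under the ROC curve on a held-out cohort, as the authors do with their development and validation sets.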

  8. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when the information is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.

  9. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    PubMed Central

    Xie, Weihong; Yu, Yang

    2017-01-01

Robot-assisted motion-compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to meet clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at the normal state, the nonlinearity of the heart motion with slow time-variant change dominates the beating process. When an arrhythmia occurs, the irregularity mode, fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach on four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly. PMID:29124062

  10. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach.

    PubMed

    Liang, Fan; Xie, Weihong; Yu, Yang

    2017-01-01

Robot-assisted motion-compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to meet clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at the normal state, the nonlinearity of the heart motion with slow time-variant change dominates the beating process. When an arrhythmia occurs, the irregularity mode, fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively "switch" from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach on four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly.
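One IMM cycle can be sketched for a scalar state with two hand-written motion models. The transition matrix, the unit-variance Gaussian likelihood, and the toy models themselves are simplifying assumptions for illustration, not the paper's filter.

```python
import math

def imm_step(x_prev, mode_probs, trans, models, z):
    """One simplified IMM cycle for a scalar state.

    `trans[i][j]` is the probability of switching from mode i to mode j;
    `models` are callables predicting the next state under each mode;
    `z` is the new measurement.
    """
    n = len(models)
    # Mix: propagate mode probabilities through the Markov transition matrix
    pred_probs = [sum(trans[i][j] * mode_probs[i] for i in range(n)) for j in range(n)]
    preds = [m(x_prev) for m in models]
    # Likelihood of the measurement under each mode (unit-variance Gaussian)
    liks = [math.exp(-0.5 * (z - p) ** 2) for p in preds]
    w = [pp * lk for pp, lk in zip(pred_probs, liks)]
    total = sum(w)
    mode_probs = [wi / total for wi in w]
    # Fused estimate: mode-probability-weighted mix of the model predictions
    x_est = sum(p * mp for p, mp in zip(preds, mode_probs))
    return x_est, mode_probs

# Toy modes: a smooth "normal rhythm" drift and an opposite "irregular" jump
models = [lambda x: x + 1.0, lambda x: x - 1.0]
trans = [[0.9, 0.1], [0.1, 0.9]]
x_est, probs = imm_step(0.0, [0.5, 0.5], trans, models, z=1.0)
```

Because the measurement agrees with the first model, that mode's probability grows and it dominates the fused estimate; the paper additionally drives the transition probabilities with a signal quality index.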

  11. Measurement and prediction of model-rotor flow fields

    NASA Technical Reports Server (NTRS)

    Owen, F. K.; Tauber, M. E.

    1985-01-01

    This paper shows that a laser velocimeter can be used to measure accurately the three-component velocities induced by a model rotor at transonic tip speeds. The measurements, which were made at Mach numbers from 0.85 to 0.95 and at zero advance ratio, yielded high-resolution, orthogonal velocity values. The measured velocities were used to check the ability of the ROT22 full-potential rotor code to predict accurately the transonic flow field in the crucial region around and beyond the tip of a high-speed rotor blade. The good agreement between the calculated and measured velocities established the code's ability to predict the off-blade flow field at transonic tip speeds. This supplements previous comparisons in which surface pressures were shown to be well predicted on two different tips at advance ratios to 0.45, especially at the critical 90 deg azimuthal blade position. These results demonstrate that the ROT22 code can be used with confidence to predict the important tip-region flow field, including the occurrence, strength, and location of shock waves causing high drag and noise.

  12. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ~10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ~5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  13. Probability of Accurate Heart Failure Diagnosis and the Implications for Hospital Readmissions.

    PubMed

    Carey, Sandra A; Bass, Kyle; Saracino, Giovanna; East, Cara A; Felius, Joost; Grayburn, Paul A; Vallabhan, Ravi C; Hall, Shelley A

    2017-04-01

    Heart failure (HF) is a complex syndrome with inherent diagnostic challenges. We studied the scope of possibly inaccurately documented HF in a large health care system among patients assigned a primary diagnosis of HF at discharge. Through a retrospective record review and a classification schema developed from published guidelines, we assessed the probability of the documented HF diagnosis being accurate and determined factors associated with HF-related and non-HF-related hospital readmissions. An arbitration committee of 3 experts reviewed a subset of records to corroborate the results. We assigned a low probability of accurate diagnosis to 133 (19%) of the 712 patients. A subset of patients were also reviewed by an expert panel, which concluded that 13% to 35% of patients probably did not have HF (inter-rater agreement, kappa = 0.35). Low-probability HF was predictive of being readmitted more frequently for non-HF causes (p = 0.018), as well as documented arrhythmias (p = 0.023), and age >60 years (p = 0.006). Documented sleep apnea (p = 0.035), percutaneous coronary intervention (p = 0.006), non-white race (p = 0.047), and B-type natriuretic peptide >400 pg/ml (p = 0.007) were determined to be predictive of HF readmissions in this cohort. In conclusion, approximately 1 in 5 patients documented to have HF were found to have a low probability of actually having it. Moreover, the determination of low-probability HF was twice as likely to result in readmission for non-HF causes and, thus, should be considered a determinant for all-cause readmissions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences.

    PubMed

    Chen, Peng; Li, Jinyan; Wong, Limsoon; Kuwahara, Hiroyuki; Huang, Jianhua Z; Gao, Xin

    2013-08-01

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built based on an ensemble of these classifiers and to work in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. Copyright © 2013 Wiley Periodicals, Inc.
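The per-feature IBk classifiers and the final vote can be sketched as below. The toy training points and the two-feature setup are hypothetical stand-ins for the 132 physicochemical features and the novel encoding schema of the actual predictor.

```python
def knn_predict(train, query, k=3):
    """IBk-style call: majority label among the k nearest 1-D training points."""
    neighbors = sorted(train, key=lambda pt: abs(pt[0] - query))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)  # majority vote, ties broken arbitrarily

def ensemble_predict(feature_sets, query_features, k=3):
    """One k-NN classifier per feature; the ensemble answers by majority vote."""
    votes = [knn_predict(train, q, k) for train, q in zip(feature_sets, query_features)]
    return max(set(votes), key=votes.count)

# Hypothetical data: each feature set maps one physicochemical value of a
# residue to a "hot" (hot spot) or "non" (non-hot spot) label
f1 = [(1.0, "hot"), (1.2, "hot"), (5.0, "non")]
f2 = [(0.5, "hot"), (4.0, "non"), (4.2, "non")]
label = ensemble_predict([f1, f2], [1.1, 0.6], k=1)
```

In the published method, the combination of individual classifiers is itself selected by searching over top-performing subsets rather than fixed in advance as here.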

  15. Accurate Prediction of Drug-Induced Liver Injury Using Stem Cell-Derived Populations

    PubMed Central

    Szkolnicka, Dagmara; Farnworth, Sarah L.; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P.; Flint, Oliver

    2014-01-01

    Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies. PMID:24375539

  16. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
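The four screening statistics quoted throughout this abstract all fall out of a 2×2 confusion table; a sketch with arbitrary hypothetical counts (not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the diseased
        "specificity": tn / (tn + fp),  # true negatives among the healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = diagnostic_metrics(tp=30, fp=5, fn=26, tn=22)  # hypothetical counts
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the studied population, which is why an emergency-cholecystectomy cohort reports such a low NPV.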

  17. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color-rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomena, such as luminescence, promise to yield designs that are more predictive, giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented in which a phosphor formulation and excitation source are first optimized for white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, the performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength-dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  18. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction II: Nonplanar Molecules.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-11-14

The crystal structure prediction (CSP) of a given compound from its molecular diagram is a fundamental challenge in computational chemistry with implications in relevant technological fields. A key component of CSP is the method to calculate the lattice energy of a crystal, which allows the ranking of candidate structures. This work is the second part of our investigation to assess the potential of the exchange-hole dipole moment (XDM) dispersion model for crystal structure prediction. In this article, we study the relatively large, nonplanar, mostly flexible molecules in the first five blind tests held by the Cambridge Crystallographic Data Centre. Four of the seven experimental structures are predicted as the energy minimum, and thermal effects are demonstrated to have a large impact on the ranking of at least one other compound. As in the first part of this series, delocalization error affects the results for a single crystal (compound X), in this case by detrimentally overstabilizing the π-conjugated conformation of the monomer. Overall, B86bPBE-XDM correctly predicts 16 of the 21 compounds in the five blind tests, a result similar to the one obtained using the best CSP method available to date (dispersion-corrected PW91 by Neumann et al.). Perhaps more importantly, the systems for which B86bPBE-XDM fails to predict the experimental structure as the energy minimum are mostly the same as with Neumann's method, which suggests that similar difficulties (absence of vibrational free energy corrections, delocalization error, etc.) are not limited to B86bPBE-XDM but affect GGA-based DFT methods in general. Our work confirms B86bPBE-XDM as an excellent option for crystal energy ranking in CSP and offers a guide to identify crystals (organic salts, conjugated flexible systems) where difficulties may appear.

  19. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
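
The accuracy figures above (r = 0.56 vs. r = 0.49) are Pearson correlations between judged and criterion trait scores. A minimal sketch of that metric, with invented toy scores rather than the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation, the judge-criterion agreement metric used above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical judged vs. criterion trait scores for five targets
judged    = [3.1, 2.4, 4.0, 3.6, 1.9]
criterion = [3.0, 2.6, 3.8, 3.9, 2.1]
print(round(pearson_r(judged, criterion), 2))  # → 0.97
```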

  20. Analyzing online sentiment to predict telephone poll results.

    PubMed

    Fu, King-wa; Chan, Chee-hon

    2013-09-01

The telephone survey is a common social science research method for capturing public opinion, for example, an individual's values or attitudes, or the government's approval rating. However, declining domestic landline usage, rising nonresponse rates, and response bias in interviewees' self-reported data pose methodological challenges to this approach. Because of the labor cost of administration, a phone survey is often conducted on a biweekly or monthly basis, and therefore a daily reflection of public opinion is usually not available. Recently, online sentiment analysis of user-generated content has been deployed to predict public opinion and human behavior. However, its overall effectiveness remains uncertain. This study seeks to examine the temporal association between online sentiment reflected in social media content and phone survey poll results in Hong Kong. Specifically, it aims to find the extent to which online sentiment can predict phone survey results. Using autoregressive integrated moving average (ARIMA) time-series analysis, this study suggested that online sentiment scores can lead phone survey results by about 8-15 days, with correlation coefficients of about 0.16. The finding is significant for the study of social media in social science research, because it supports the conclusion that daily sentiment observed in social media content can serve as a leading predictor of phone survey results, as much as 2 weeks ahead of the monthly announcement of opinion polls. We also discuss the practical and theoretical implications of this study.
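
The reported 8-15 day lead can be probed with a simple lagged cross-correlation: shift the daily sentiment series forward and find the lag at which it best aligns with the poll series. A sketch on toy data (the study itself fits ARIMA models, which additionally account for autocorrelation):

```python
from math import sin, sqrt

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

def best_lead(sentiment, polls, max_lag=15):
    """Lag (in days) at which sentiment correlates most with later polls."""
    scores = {}
    for lag in range(1, max_lag + 1):
        # sentiment on day t vs. poll result on day t + lag
        scores[lag] = corr(sentiment[:-lag], polls[lag:])
    return max(scores, key=scores.get)

# Toy series in which polls echo sentiment ten days later
sentiment = [sin(t / 3.0) for t in range(60)]
polls = [0.0] * 10 + sentiment[:-10]
print(best_lead(sentiment, polls))  # → 10
```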

  1. Analyzing Medical Image Search Behavior: Semantics and Prediction of Query Results.

    PubMed

    De-Arteaga, Maria; Eggel, Ivan; Kahn, Charles E; Müller, Henning

    2015-10-01

Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which make it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search and proposes potential improvements through suggested query modifications. For example, many queries contain only a few terms and therefore are not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will return, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and thus can be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis and data of reformulations done by users in the past can aid the development of better search systems, particularly to improve results for novice users. Therefore, this paper gives important ideas to better understand how people search and how to use this knowledge to improve the performance of specialized medical search engines.
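
An empty-result predictor of this kind can be sketched as a classifier over simple query-level features. The features and training pairs below are hypothetical stand-ins (the paper derives its own features from the query terms); the sketch uses plain logistic regression trained by stochastic gradient descent:

```python
import math

def features(query):
    """Hypothetical query features for illustration only."""
    terms = query.split()
    return [1.0,                                        # bias
            float(len(terms)),                          # number of terms
            sum(len(t) for t in terms) / len(terms)]    # mean term length

def train_logistic(queries, labels, lr=0.1, epochs=500):
    """Fit logistic regression weights by stochastic gradient descent."""
    rows = [features(q) for q in queries]
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def predicts_empty(w, query):
    """True if the model expects the query to return no results."""
    z = sum(wi * xi for wi, xi in zip(w, features(query)))
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy training set: long, over-specific queries tend to return nothing
queries = ["chest xray", "liver mri", "knee mri report",
           "a b c d e", "rare misspeled granulomatus lesion left lobe",
           "q w e r t y"]
labels = [0, 0, 0, 1, 1, 1]
w = train_logistic(queries, labels)
```

A flagged query could then trigger an automatic reformulation suggestion before the empty result list is ever shown.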

  2. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

Background: Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results: In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion: Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030
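
The reported AUC of 0.78 is the probability that a randomly chosen carcinogen receives a higher classifier score than a randomly chosen non-carcinogen. A minimal sketch of that metric in its Mann-Whitney formulation, with toy scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the win rate of positive over negative scores; ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for carcinogens vs. non-carcinogens
print(auc([0.9, 0.7, 0.6], [0.8, 0.3, 0.2]))  # → 0.777...
```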

  3. Prediction of morbidity and mortality in patients with type 2 diabetes.

    PubMed

    Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W

    2013-01-01

    Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes according to an oral hypoglycemic agent. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes" and entitled, "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic, to better inform patients, and to motivate dialogue between physician and patient.
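
The concordance indices quoted above (c-statistics) measure, over comparable patient pairs, how often the model assigns the higher risk to the patient whose event occurred earlier. A sketch of Harrell's c-statistic with toy data (the study's actual models are competing-risk and Cox regressions, not shown here):

```python
def c_index(times, events, risks):
    """Harrell's c-statistic. events[i] is 1 if patient i's event was observed
    (not censored); a pair is comparable when the earlier time is an event."""
    concordant, tied, comparable = 0.0, 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy cohort: follow-up days, event indicator, predicted risk score
print(c_index([100, 250, 400], [1, 1, 0], [0.9, 0.4, 0.2]))  # → 1.0
```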

  4. Can Self-Organizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector-quantization technique called the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using delta(z) = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
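
A self-organizing map can be sketched in a few dozen lines: competitively train a 2-D grid of weight vectors, then calibrate each node with the mean spectroscopic redshift of the training objects it wins. Everything below is a toy stand-in (synthetic "colors" correlated with redshift rather than SDSS photometry), assuming a plain rectangular-grid SOM:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=6, iters=2000, lr0=0.5, sigma0=3.0):
    """Train a grid x grid SOM on feature rows X by competitive learning."""
    W = rng.random((grid, grid, X.shape[1]))
    gy, gx = np.mgrid[0:grid, 0:grid]
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(int(d.argmin()), d.shape)  # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac) + 0.01          # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5     # shrinking neighborhood
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)
    return W

def bmu(W, x):
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(int(d.argmin()), d.shape)

def calibrate(W, X, z_spec):
    """Label each node with the mean spectroscopic z of its members."""
    members = {}
    for x, z in zip(X, z_spec):
        members.setdefault(bmu(W, x), []).append(z)
    return {node: float(np.mean(zs)) for node, zs in members.items()}

# Synthetic photometry: two "colors" that vary smoothly with redshift
z_true = rng.uniform(0.0, 1.0, 400)
X = np.column_stack([z_true + 0.05 * rng.normal(size=400),
                     2.0 * z_true + 0.05 * rng.normal(size=400)])
W = train_som(X)
node_z = calibrate(W, X, z_true)
z_phot = np.array([node_z.get(bmu(W, x), float(z_true.mean())) for x in X])
rmse = float(np.sqrt(np.mean((z_phot - z_true) ** 2)))
```

On this synthetic set the quantization RMSE should land well below the intrinsic spread of z; the paper's delta(z) = z_phot - z_spec statistics play the same role against real spectroscopy.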

  5. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  6. Predicting structured metadata from unstructured metadata.

    PubMed

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data, or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms are predicted most accurately using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.
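
The TF-IDF features behind the first approach can be computed directly. A minimal sketch over toy metadata token lists (real GEO descriptions are longer, and the paper feeds the resulting vectors into trained classifiers):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weight per term per document; docs are lists of tokens."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))      # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: (tf / len(d)) * idf[t] for t, tf in Counter(d).items()}
            for d in docs]

# Toy unstructured metadata from three hypothetical GEO-style samples
docs = [["total", "rna", "from", "liver"],
        ["total", "rna", "from", "brain"],
        ["total", "dna", "microarray", "liver"]]
vecs = tfidf(docs)
```

Terms present in every record ("total") receive weight zero, while rarer, more informative terms ("microarray") are up-weighted, which is exactly what makes these features useful for predicting structured terms.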

  7. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data, or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms are predicted most accurately using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/ PMID:28637268

  8. The prediction of intelligence in preschool children using alternative models to regression.

    PubMed

    Finch, W Holmes; Chang, Mei; Davis, Andrew S; Holden, Jocelyn E; Rothlisberg, Barbara A; McIntosh, David E

    2011-12-01

    Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences. For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Predictions of Stanford-Binet 5 FSIQ scores for preschool-aged children are used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than did the more traditional regression approach. Implications of these results are discussed.

  9. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
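
The single-model building block inside an interacting-multiple-model tracker is an ordinary Kalman filter; AIMM runs several such models (e.g. cruise and maneuver) and adapts their mixing probabilities. A sketch of just the constant-velocity filter producing one-step-ahead position predictions, on toy 1-D data with assumed noise levels (not values from the paper):

```python
import numpy as np

def cv_kalman_predictions(zs, dt=0.1, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter over noisy positions zs; returns the
    one-step-ahead position prediction made before each measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [pos, vel]
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    preds = []
    for z in zs[1:]:
        x, P = F @ x, F @ P @ F.T + Q       # predict ahead one step
        preds.append(float(x[0, 0]))
        S = H @ P @ H.T + R                 # update with the new measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return preds

# A car moving at constant speed: predictions converge onto the true track
zs = [0.1 * i for i in range(50)]
preds = cv_kalman_predictions(zs)
```

An IMM tracker would run a second filter with a maneuver model in parallel and weight the two predictions by their measurement likelihoods.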

  10. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency in radiotherapy system. Accurate prediction of respiratory motion is however a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in fundamental frequency pattern. In this paper, to enhance the accuracy of the respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
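
Stacked regression in its simplest form collects out-of-sample predictions from the level-0 methods and fits a level-1 combiner over them. The sketch below uses an affine least-squares combiner on a synthetic breathing trace; the paper's level-0 algorithms and generalization approach are more sophisticated:

```python
import numpy as np

def fit_stack(level0_preds, y):
    """Learn affine combiner weights over level-0 predictions.
    level0_preds: (n_samples, n_methods) out-of-sample predictions."""
    A = np.column_stack([level0_preds, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def stack_predict(w, level0_preds):
    A = np.column_stack([level0_preds, np.ones(len(level0_preds))])
    return A @ w

# Toy respiratory trace and two biased level-0 predictors
t = np.linspace(0.0, 6.0, 120)
y = np.sin(2 * np.pi * t / 4.0)     # ~4-second breathing cycle
m1 = y + 0.2                        # predictor with a baseline offset
m2 = 0.5 * y                        # predictor with damped amplitude
P = np.column_stack([m1, m2])
w = fit_stack(P, y)
```

Because each level-0 error here is purely systematic, the affine combiner removes it exactly; on real motion traces the combination only reduces, rather than eliminates, the residual.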

  11. Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.

    PubMed

    Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu

    2015-10-01

    Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.

  12. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
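
The multivariate scheme can be sketched with ordinary least squares: predict one sensor's reading from the other sensed quantities instead of from time alone, and transmit only when the prediction misses. The coefficients and sensor relationship below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_mlr(X, y):
    """Multiple linear regression via least squares (with intercept)."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Toy sensor node: humidity depends on co-located temperature and light
temp = rng.uniform(15.0, 35.0, 200)
light = rng.uniform(0.0, 10.0, 200)
humidity = 0.5 * temp - 2.0 * light + 30.0
coef = fit_mlr(np.column_stack([temp, light]), humidity)
```

The sink node holds the same coefficients, so a sensor only needs to report humidity when the shared model's prediction deviates beyond a threshold, saving radio energy.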

  13. High Speed Civil Transport (HSCT) Isolated Nacelle Transonic Boattail Drag Study and Results Using Computational Fluid Dynamics (CFD)

    NASA Technical Reports Server (NTRS)

    Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori

    2005-01-01

    Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.

  14. High Speed Civil Transport (HSCT) Isolated Nacelle Transonic Boattail Drag Study and Results Using Computational Fluid Dynamics (CFD)

    NASA Technical Reports Server (NTRS)

    Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori

    1999-01-01

    Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25% of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.

  15. Investigations of Fluid-Structure-Coupling and Turbulence Model Effects on the DLR Results of the Fifth AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Keye, Stefan; Togiti, Vamish; Eisfeld, Bernhard; Brodersen, Olaf P.; Rivers, Melissa B.

    2013-01-01

    The accurate calculation of aerodynamic forces and moments is of significant importance during the design phase of an aircraft. Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) has developed strongly over the last two decades regarding robustness, efficiency, and capabilities for aerodynamically complex configurations. Incremental aerodynamic coefficients of different designs can be calculated with acceptable reliability at the cruise design point of transonic aircraft for non-separated flows. But regarding absolute values, as well as increments at off-design conditions, significant challenges still exist in computing aerodynamic data and the underlying flow physics with the required accuracy. In addition to drag, pitching moments are difficult to predict because small deviations of the pressure distributions, e.g. due to neglecting wing bending and twisting caused by the aerodynamic loads, can result in large discrepancies compared to experimental data. Flow separations that start to develop at off-design conditions, e.g. in corner flows, at trailing edges, or shock-induced, can also have a strong impact on the prediction of aerodynamic coefficients. Based on these challenges faced by the CFD community, a working group of the AIAA Applied Aerodynamics Technical Committee initiated the CFD Drag Prediction Workshop (DPW) series in 2001, resulting in five international workshops. The results of the participants and the committee are summarized in more than 120 papers. The latest, fifth workshop took place in June 2012 in conjunction with the 30th AIAA Applied Aerodynamics Conference. The results in this paper evaluate the influence of static aeroelastic wing deformations on pressure distributions and overall aerodynamic coefficients based on the NASA finite element structural model and the common grids.

  16. Three-dimensional computed tomographic volumetry precisely predicts the postoperative pulmonary function.

    PubMed

    Kobayashi, Keisuke; Saeki, Yusuke; Kitazawa, Shinsuke; Kobayashi, Naohiro; Kikuchi, Shinji; Goto, Yukinobu; Sakai, Mitsuaki; Sato, Yukio

    2017-11-01

    It is important to accurately predict the patient's postoperative pulmonary function. The aim of this study was to compare the accuracy of predictions of the postoperative residual pulmonary function obtained with three-dimensional computed tomographic (3D-CT) volumetry with that of predictions obtained with the conventional segment-counting method. Fifty-three patients scheduled to undergo lung cancer resection, pulmonary function tests, and computed tomography were enrolled in this study. The postoperative residual pulmonary function was predicted based on the segment-counting and 3D-CT volumetry methods. The predicted postoperative values were compared with the results of postoperative pulmonary function tests. Regarding the linear correlation coefficients between the predicted postoperative values and the measured values, those obtained using the 3D-CT volumetry method tended to be higher than those acquired using the segment-counting method. In addition, the variations between the predicted and measured values were smaller with the 3D-CT volumetry method than with the segment-counting method. These results were more obvious in COPD patients than in non-COPD patients. Our findings suggest that 3D-CT volumetry predicted the residual pulmonary function more accurately than the segment-counting method, especially in patients with COPD. This method might lead to the selection of appropriate candidates for surgery among patients with a marginal pulmonary function.
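
Both estimators share the same form: scale the preoperative value by the fraction of functional lung that remains. A sketch, assuming the common 19-segment convention for segment counting (the 3D-CT method instead measures the volume fractions from imaging; all numbers below are hypothetical):

```python
def ppo_segment_counting(preop_fev1, segments_resected, total_segments=19):
    """Segment-counting estimate of postoperative FEV1 (19-segment convention)."""
    return preop_fev1 * (1.0 - segments_resected / total_segments)

def ppo_volumetry(preop_fev1, resected_volume, total_functional_volume):
    """3D-CT volumetry estimate using measured functional-volume fractions."""
    return preop_fev1 * (1.0 - resected_volume / total_functional_volume)

# Hypothetical lobectomy: 3 of 19 segments, or 450 of 3200 mL functional volume
print(round(ppo_segment_counting(2.40, 3), 2))   # → 2.02
print(round(ppo_volumetry(2.40, 450, 3200), 2))  # → 2.06
```

The study's point is that measured volume fractions track emphysematous, poorly functioning tissue that fixed segment counts miss, which is why volumetry gains the most in COPD patients.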

  17. Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.

    2017-01-01

    With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.

  18. A Prediction Model for Functional Outcomes in Spinal Cord Disorder Patients Using Gaussian Process Regression.

    PubMed

    Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid

    2016-01-01

    Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate posttreatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a prediction method for postoperative functional outcomes based on a novel use of Gaussian process regression. The proposed method specifically considers the restricted value range of the target variables by modeling the Gaussian process based on a truncated Normal distribution, which significantly improves the prediction results. Predictions are made with the assistance of target-tracking examinations using a highly portable and inexpensive handgrip device, which greatly contributes to the prediction performance. The proposed method has been validated on a dataset collected from a clinical cohort pilot involving 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, Oswestry disability index and target tracking scores, based on the patient's preoperative information with a mean absolute error of 0.079 and 0.014 (out of 1.0), respectively.
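
The standard GP regression underlying the method can be sketched in a few lines; the paper's contribution is to replace the unbounded Normal predictive distribution with a truncated one so predictions respect the bounded score range, which is omitted here. The kernel choice and noise level below are assumptions, and the data is a toy curve:

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a GP regression with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_test, x_test) - v.T @ v)
    return mean, var

# Toy preoperative-feature -> outcome curve
x = np.linspace(0.0, 5.0, 30)
y = np.sin(x)
mean, var = gp_predict(x, y, x)
```

Truncating the predictive Normal to the valid score interval (e.g. [0, 1]) shifts the posterior mean toward feasible values, which is where the reported accuracy gain comes from.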

  19. Individuals achieve more accurate results with meters that are codeless and employ dynamic electrochemistry.

    PubMed

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. 2010 Diabetes Technology Society.

  20. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  1. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
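    The Monte Carlo baseline the abstract defends is simple to state: sample uncertain initial conditions, propagate each sample through the dynamics, and read the impact-time distribution off the results. The sketch below uses a hypothetical one-parameter decay model with a closed-form impact time, not the paper's six degrees-of-freedom Delta-K model.

```python
import numpy as np

# Monte Carlo propagation of initial-state uncertainty to an impact-time
# distribution, for a toy decay law h' = -k * (200 / h). Integrating gives
# d(h^2)/dt = -400 k, so t_impact = h0^2 / (400 k) in closed form.
rng = np.random.default_rng(42)
n_samples = 5000

h0 = rng.normal(180.0, 5.0, n_samples)       # uncertain initial altitude (km)
decay = rng.lognormal(mean=np.log(2.0), sigma=0.2, size=n_samples)  # km/day

impact_times = h0 ** 2 / (400.0 * decay)     # days until h = 0, per sample

lo, med, hi = np.percentile(impact_times, [5, 50, 95])
print(f"impact time: median {med:.1f} d, 90% window [{lo:.1f}, {hi:.1f}] d")
```

    A direct PDF-propagation method would instead evolve the density itself; as the abstract notes, the scattered high-dimensional output that produces is much harder to post-process than these samples.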

  2. Predicting Microbial Fuel Cell Biofilm Communities and Bioreactor Performance using Artificial Neural Networks.

    PubMed

    Lesnik, Keaton Larson; Liu, Hong

    2017-09-19

    The complex interactions that occur in mixed-species bioelectrochemical reactors, like microbial fuel cells (MFCs), make accurate predictions of performance outcomes under untested conditions difficult. While direct correlations between any individual waste-stream characteristic or microbial community structure and reactor performance have not been established, the increase in sequencing data and readily available computational power enables the development of alternate approaches. In the current study, 33 MFCs were evaluated under a range of conditions including eight separate substrates and three different wastewaters. Artificial Neural Networks (ANNs) were used to establish mathematical relationships between wastewater/solution characteristics, biofilm communities, and reactor performance. ANN models that incorporated biotic interactions predicted reactor performance outcomes more accurately than those that did not. The average percent error of power density predictions was 16.01 ± 4.35%, while the average percent errors of Coulombic efficiency and COD removal rate predictions were 1.77 ± 0.57% and 4.07 ± 1.06%, respectively. Predictions of power density improved to within 5.76 ± 3.16% error by classifying taxonomic data at the family rather than class level. Results suggest that the microbial communities and performance of bioelectrochemical systems can be accurately predicted using data-mining, machine-learning techniques.
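    As a minimal stand-in for the ANN models described, the sketch below trains a one-hidden-layer network by full-batch gradient descent on synthetic data; the three input features and the target are invented placeholders, not the MFC measurements.

```python
import numpy as np

# One-hidden-layer regression network trained by backpropagation on
# synthetic data (hypothetical stand-ins for wastewater characteristics).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))       # e.g. COD, conductivity, pH proxies
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2    # synthetic "power density" target

W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # hidden layer activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # Backpropagation of the mean-squared-error gradient.
    g_pred = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)      # tanh derivative
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

    In the paper's setting, the "biotic" models simply widen the input vector with community-abundance columns; the network structure is otherwise unchanged.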

  3. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    PubMed

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  4. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods in ranking models and selecting the best model. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.
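    Distance-driven consensus scoring of the kind PconsD accelerates can be illustrated as follows. This CPU sketch with synthetic "models" is an assumed analogue of the idea (score each model by its average distance-matrix agreement with the others), not PconsD's actual GPU implementation or scoring function.

```python
import numpy as np

# Consensus quality assessment over distance matrices: models that agree
# with the ensemble score highly; outliers score low.
def dist_matrix(coords):
    """Pairwise C-alpha distance matrix for an (N, 3) coordinate array."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def similarity(d1, d2, d0=5.0):
    """S-score-style agreement between two distance matrices."""
    return float(np.mean(1.0 / (1.0 + ((d1 - d2) / d0) ** 2)))

rng = np.random.default_rng(7)
native = rng.normal(size=(30, 3)) * 10.0
# Three synthetic "models": small, medium, and large perturbations.
models = [native + rng.normal(scale=s, size=(30, 3)) for s in (0.5, 1.0, 5.0)]
dms = [dist_matrix(m) for m in models]

# Consensus score: mean similarity of each model to all the others.
scores = [np.mean([similarity(dms[i], dms[j])
                   for j in range(len(dms)) if j != i])
          for i in range(len(dms))]
print(scores)
```

    Working on distance matrices avoids any structural superposition, which is part of why this style of comparison streams well on parallel hardware.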

  5. Large arterial occlusive strokes as a medical emergency: need to accurately predict clot location.

    PubMed

    Vanacker, Peter; Faouzi, Mohamed; Eskandari, Ashraf; Maeder, Philippe; Meuli, Reto; Michel, Patrik

    2017-10-01

    Endovascular treatment for acute ischemic stroke with a large intracranial occlusion was recently shown to be effective. Timely knowledge of the presence, site, and extent of arterial occlusions in the ischemic territory has the potential to influence patient selection for endovascular treatment. We aimed to find predictors of large vessel occlusive strokes, on the basis of available demographic, clinical, radiological, and laboratory data in the emergency setting. Patients enrolled in the ASTRAL registry with acute ischemic stroke and computed tomography (CT)-angiography within 12 h of stroke onset were selected and categorized according to occlusion site. Easily accessible variables were used in a multivariate analysis. Of 1645 patients enrolled, a significant proportion (46.2%) had a large vessel occlusion in the ischemic territory. The main clinical predictors of any arterial occlusion were in-hospital stroke [odds ratio (OR) 2.1, 95% confidence interval 1.4-3.1], higher initial National Institute of Health Stroke Scale (OR 1.1, 1.1-1.2), presence of visual field defects (OR 1.9, 1.3-2.6), dysarthria (OR 1.4, 1.0-1.9), or hemineglect (OR 2.0, 1.4-2.8) at admission and atrial fibrillation (OR 1.7, 1.2-2.3). Further, the following radiological predictors were identified: time-to-imaging (OR 0.9, 0.9-1.0), early ischemic changes (OR 2.3, 1.7-3.2), and silent lesions on CT (OR 0.7, 0.5-1.0). The area under the curve for this analysis was 0.85. Looking at different occlusion sites, National Institute of Health Stroke Scale and early ischemic changes on CT were independent predictors in all subgroups. Neurological deficits, stroke risk factors, and CT findings accurately identify acute ischemic stroke patients at risk of symptomatic vessel occlusion. Predicting the presence of these occlusions may impact emergency stroke care in regions with limited access to noninvasive vascular imaging.
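    The statistics quoted above can be reproduced on synthetic data: exponentiated logistic-regression coefficients give odds ratios, and the Mann-Whitney rank identity gives the area under the ROC curve. All predictors, coefficients, and sample sizes below are illustrative, not the ASTRAL data.

```python
import numpy as np

# Synthetic cohort: two hypothetical predictors of large-vessel occlusion.
rng = np.random.default_rng(3)
n = 2000
nihss = rng.uniform(0, 25, n)                  # admission stroke severity
afib = rng.binomial(1, 0.3, n)                 # atrial fibrillation flag
logit = -3.0 + 0.15 * nihss + 0.6 * afib
occlusion = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression by Newton's method (IRLS).
Xd = np.column_stack([np.ones(n), nihss, afib])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd @ beta))
    W = p * (1 - p)
    H = Xd.T @ (Xd * W[:, None]) + 1e-6 * np.eye(3)
    beta += np.linalg.solve(H, Xd.T @ (occlusion - p))

odds_ratios = np.exp(beta[1:])                 # OR per unit of each predictor

# AUC via the rank-sum (Mann-Whitney) identity.
score = Xd @ beta
pos, neg = score[occlusion == 1], score[occlusion == 0]
auc = (np.mean(pos[:, None] > neg[None, :])
       + 0.5 * np.mean(pos[:, None] == neg[None, :]))
print(odds_ratios, float(auc))
```

    With real registry data, each exp(beta_k) reads exactly like the ORs quoted in the abstract: the multiplicative change in occlusion odds per unit of the predictor.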

  6. Accurate prediction of subcellular location of apoptosis proteins combining Chou's PseAAC and PsePSSM based on wavelet denoising.

    PubMed

    Yu, Bin; Li, Shan; Qiu, Wen-Ying; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-12-08

    Subcellular localization information for apoptosis proteins is very important for understanding the mechanism of programmed cell death and for the development of drugs. Predicting the subcellular localization of an apoptosis protein remains a challenging task, and accurate predictions help to clarify protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. First, features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); the extracted feature information is then denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to an SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the proposed method can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, and it will be a useful supplementary tool for future proteomics research.
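    A minimal sketch of the PseAAC feature-extraction step is shown below, using a single physicochemical property (Kyte-Doolittle hydropathy). The paper's full pipeline (PsePSSM, 2-D wavelet denoising, SVM) is omitted, and the weight and lambda values are conventional defaults assumed here, not the authors' settings.

```python
import numpy as np

# Chou-style pseudo amino acid composition: 20 composition terms plus
# lam sequence-order correlation terms from one standardized property.
HYDRO = {  # Kyte-Doolittle hydropathy index
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}
AA = sorted(HYDRO)

def pseaac(seq, lam=3, w=0.05):
    """Return a (20 + lam)-dimensional PseAAC-style feature vector."""
    h = np.array([HYDRO[c] for c in seq])
    h = (h - h.mean()) / (h.std() + 1e-12)        # standardize the property
    comp = np.array([seq.count(a) for a in AA], float) / len(seq)
    # Tier-k correlation: mean squared property difference at lag k.
    theta = np.array([np.mean((h[:-k] - h[k:]) ** 2)
                      for k in range(1, lam + 1)])
    denom = comp.sum() + w * theta.sum()
    return np.concatenate([comp, w * theta]) / denom

vec = pseaac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(vec.shape, float(vec.sum()))
```

    The first 20 entries preserve composition; the trailing entries inject sequence-order information that plain amino acid composition discards.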

  7. Theoretical model predictions and experimental results for a wavelength switchable Tm:YAG laser.

    PubMed

    Niu, Yanxiong; Wang, Caili; Liu, Wenwen; Niu, Haisha; Xu, Bing; Man, Da

    2014-07-01

    We present a theoretical model study of a quasi-three-level laser, with particular attention given to the Tm:YAG laser. The oscillating conditions of this laser were analyzed theoretically from the standpoint of the pump threshold while taking reabsorption loss into account. Laser oscillation at 2.02 μm, where the stimulated emission cross section is large, was suppressed by selecting the appropriate coating for the cavity mirrors; an efficient laser-diode side-pumped continuous-wave Tm:YAG crystal laser operating at 2.07 μm was then realized. Experiments with the Tm:YAG laser confirmed the accuracy of the model, which correctly predicted that the high Stark sub-level within the 3H6 ground-state manifold has a low laser threshold and a long laser wavelength, achieved by decreasing the transmission of the output coupler.

  8. Measured and predicted rotor performance for the SERI advanced wind turbine blades

    NASA Astrophysics Data System (ADS)

    Tangler, J.; Smith, B.; Kelley, N.; Jager, D.

    1992-02-01

    Measured and predicted rotor performance for the Solar Energy Research Institute (SERI) advanced wind turbine blades were compared to assess the accuracy of predictions and to identify the sources of error affecting both predictions and measurements. An awareness of these sources of error contributes to improved prediction and measurement methods that will ultimately benefit future rotor design efforts. Propeller/vane anemometers were found to underestimate the wind speed in turbulent environments such as the San Gorgonio Pass wind farm area. Using sonic or cup anemometers, good agreement was achieved between predicted and measured power output for wind speeds up to 8 m/sec. At higher wind speeds an optimistic predicted power output and the occurrence of peak power at wind speeds lower than measurements resulted from the omission of turbulence and yaw error. In addition, accurate two-dimensional (2-D) airfoil data prior to stall and a post stall airfoil data synthesization method that reflects three-dimensional (3-D) effects were found to be essential for accurate performance prediction.

  9. Prediction of packaging seal life using thermoanalytical techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigrey, P.J.

    1997-11-01

    In this study, Thermogravimetric Analysis (TGA) has been used to study silicone, Viton, and Ethylene Propylene (EPDM) rubber. The studies have shown that TGA accurately predicts the relative order of thermo-oxidative stability of these three materials from the calculated activation energies. As expected, the greatest thermal stability was found in silicone rubber, followed by Viton and EPDM rubber. The calculated lifetimes for these materials were in relatively close agreement with published values. The preliminary results also accurately reflect decreased thermal stability and lifetime for EPDM rubber exposed to radiation and chemicals. These results suggest TGA provides a rapid method to evaluate material stability.
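    TGA lifetime estimates of this kind rest on Arrhenius extrapolation: an activation energy is derived from degradation rates at elevated temperatures, then used to project a lifetime at service temperature. The rates, temperatures, and reference time below are illustrative, not values from the report.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Ea (J/mol) from degradation rates k1, k2 (1/h) at T1, T2 (K)."""
    return R * np.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)

def lifetime(t_ref, T_ref, T_use, Ea):
    """Arrhenius extrapolation of a failure time t_ref (h) at T_ref to T_use."""
    return t_ref * np.exp(Ea / R * (1.0 / T_use - 1.0 / T_ref))

# Illustrative inputs: a tenfold rate drop between 600 K and 550 K.
Ea = activation_energy(k1=1.0e-2, T1=600.0, k2=1.0e-3, T2=550.0)
hours = lifetime(t_ref=10.0, T_ref=600.0, T_use=423.0, Ea=Ea)
years = hours / (24 * 365)
print(f"Ea = {Ea / 1000:.0f} kJ/mol, lifetime at 150 C ~ {years:.0f} years")
```

    The steep exponential in the second function is why small errors in Ea translate into large lifetime uncertainties, and why the report's agreement with published values is notable.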

  10. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    PubMed

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than 1 dB). However, the neuro-fuzzy model (RMSE = 0.53 dB, R2 = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make an optimum decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
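    The comparison metrics quoted above (RMSE in dB and R2) are computed as follows; the measured/predicted noise levels are illustrative stand-ins, not the survey data.

```python
import numpy as np

# RMSE and coefficient of determination for model comparison.
def rmse(y, yhat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

def r2(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return float(1.0 - ss_res / ss_tot)

measured  = [88.1, 91.4, 85.0, 93.2, 90.3, 87.5]   # dB(A), illustrative
predicted = [88.6, 90.9, 85.7, 92.8, 90.0, 88.1]

print(rmse(measured, predicted), r2(measured, predicted))
```

    An RMSE under 1 dB, as in the study, means the typical prediction error is below the smallest level difference most listeners can detect.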

  11. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Diagnostic peritoneal lavage: volume of lavage effluent needed for accurate determination of a negative lavage.

    PubMed

    Sweeney, J F; Albrink, M H; Bischof, E; McAllister, E W; Rosemurgy, A S

    1994-12-01

    While the ability of diagnostic peritoneal lavage (DPL) to 'rule out' occult intra-abdominal injuries has been well established, the volume of lavage effluent necessary for accurate prediction of a negative lavage has not been determined. To address this, 60 injured adults with blunt (N = 45) or penetrating (N = 15) trauma undergoing DPL were evaluated prospectively through protocol. After infusion of 1 l of Ringer's lactate solution, samples of lavage effluent were obtained at 100 cm3, 250 cm3, 500 cm3, and 750 cm3, and when no more effluent could be returned (final sample). DPL was considered negative if the final sample RBC count was < or = 100,000/mm3 for blunt injury and < 50,000/mm3 for penetrating injury. The conclusion is that at 100 cm3 of lavage effluent returned, negative results are highly predictive of a negative DPL (98 per cent), though 250 cm3 of lavage effluent is required to predict a negative DPL uniformly (100 per cent).

  13. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for throughput, as new lots must wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
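    The hybrid idea can be illustrated on a one-dimensional "wafer slice": a smooth global model fitted to sparse overlay samples is corrected by the measured residuals at the sampled sites, so localized errors survive the up-sampling. The fingerprint shape, sampling scheme, and noise level below are all invented for illustration.

```python
import numpy as np

# Global-fit up-sampling vs. a hybrid that re-injects measured residuals.
rng = np.random.default_rng(9)
x_dense = np.linspace(-1, 1, 201)                 # dense evaluation grid
true = 0.8 * x_dense ** 2 - 0.1 * x_dense         # smooth global fingerprint
true[95:105] += 0.3                               # a localized overlay error

idx = np.arange(0, 201, 10)                       # sparse sampling scheme
meas = true[idx] + rng.normal(0, 0.01, idx.size)  # sparse measurements

# Global model: low-order polynomial fit (the computational up-sample).
coef = np.polyfit(x_dense[idx], meas, deg=3)
global_fit = np.polyval(coef, x_dense)

# Hybrid: add back measured-minus-model residuals, interpolated locally.
resid = meas - np.polyval(coef, x_dense[idx])
hybrid = global_fit + np.interp(x_dense, x_dense[idx], resid)

err_global = float(np.abs(global_fit - true).max())
err_hybrid = float(np.abs(hybrid - true).max())
print(err_global, err_hybrid)
```

    The global fit smooths the local bump away; the hybrid recovers it wherever a measurement exists, which is the behavior the paper claims for its fingerprint determination.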

  14. Model-data assimilation of multiple phenological observations to constrain and predict leaf area index.

    PubMed

    Viskari, Toni; Hardiman, Brady; Desai, Ankur R; Dietze, Michael C

    2015-03-01

    Our limited ability to accurately simulate leaf phenology is a leading source of uncertainty in models of ecosystem carbon cycling. We evaluate whether continuously updating canopy state variables with observations is beneficial for predicting phenological events. We employed an ensemble adjustment Kalman filter (EAKF) to update predictions of leaf area index (LAI) and leaf extension using tower-based photosynthetically active radiation (PAR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2002-2005 at Willow Creek, Wisconsin, USA, a mature, even-aged, northern hardwood, deciduous forest. The ecosystem demography model version 2 (ED2) was used as the prediction model, forced by offline climate data. EAKF successfully incorporated information from both the observations and model predictions, weighted by their respective uncertainties. The resulting estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction. These results indicate that during spring the observations contribute most in determining the correct bud-burst date, after which the model performs well, but accurately modeling fall leaf senescence requires continuous model updating from observations. While the predicted net ecosystem exchange (NEE) of CO2 precedes tower observations and unassimilated model predictions in the spring, overall the prediction follows observed NEE better than the model alone. Our results show state data assimilation successfully simulates the evolution of plant leaf phenology and improves model predictions of forest NEE.
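    The ensemble Kalman update at the heart of the study reduces, in the scalar case, to two lines: shift the ensemble mean toward the observation by the Kalman gain, and shrink the ensemble spread. The forecast ensemble, observation value, and uncertainties below are invented, not the Willow Creek data.

```python
import numpy as np

# Toy deterministic (adjustment-style) ensemble Kalman update for one
# directly observed state variable, e.g. LAI.
rng = np.random.default_rng(11)
forecast = rng.normal(3.0, 0.6, size=50)   # ensemble of modeled LAI
obs, obs_var = 3.8, 0.2 ** 2               # synthetic MODIS-like LAI value

f_mean, f_var = forecast.mean(), forecast.var(ddof=1)
K = f_var / (f_var + obs_var)              # Kalman gain for a direct obs

# Shift the mean toward the observation, then shrink the spread so the
# analysis variance equals the Bayesian posterior variance.
a_mean = f_mean + K * (obs - f_mean)
shrink = np.sqrt(obs_var / (f_var + obs_var))
analysis = a_mean + shrink * (forecast - f_mean)

print(f"forecast {f_mean:.2f} -> analysis {analysis.mean():.2f}, "
      f"spread {forecast.std(ddof=1):.2f} -> {analysis.std(ddof=1):.2f}")
```

    Because the update is deterministic (no perturbed observations), each ensemble member keeps its rank, which is the "adjustment" character of the EAKF used in the paper.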

  15. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    PubMed

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
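    The SHAPE-directed part of the energy function referred to above is commonly written as a per-nucleotide pseudo-free-energy change dG_SHAPE(i) = m * ln(reactivity_i + 1) + b applied to paired positions. The slope and intercept below are the widely used literature values, stated here as assumptions rather than read from this abstract.

```python
import numpy as np

# Pseudo-free-energy penalty/bonus (kcal/mol) for pairing nucleotide i,
# given its SHAPE reactivity; m = 2.6, b = -0.8 are commonly cited values.
def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    return m * np.log(np.asarray(reactivity, float) + 1.0) + b

# Low reactivity (flexible probe can't react: likely paired) gives a
# bonus; high reactivity gives a penalty against pairing.
print(shape_pseudo_energy([0.0, 0.2, 1.5]))
```

    Adding this term to a nearest-neighbor thermodynamic model, plus a separate entropic cost for forming a pseudoknot, is the essence of the melded energy function the abstract describes.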

  16. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to the wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
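    The two-stage recipe above (estimate the infection probability, then predict coverage with a mean-field iteration) can be sketched on a synthetic network. The graph model, infection probability, and snapshot below are assumptions for illustration; the paper estimates beta from the observed snapshot rather than fixing it.

```python
import numpy as np

# Mean-field forward prediction of SI spreading from a snapshot:
# p[i] is the probability node i is infected.
rng = np.random.default_rng(5)
n = 200
A = (rng.random((n, n)) < 0.03).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected random network

beta = 0.2                                     # assumed infection probability
p = np.zeros(n); p[:5] = 1.0                   # snapshot: 5 infected seeds

for _ in range(10):                            # predict 10 steps ahead
    # P(node i escapes infection this step) = prod_j (1 - beta*A_ij*p_j),
    # treating neighbor states as independent (the mean-field assumption).
    q = np.prod(1.0 - beta * A * p[None, :], axis=1)
    p = 1.0 - (1.0 - p) * q                    # SI dynamics: no recovery

coverage = float(p.mean())
print(f"predicted spreading coverage after 10 steps: {coverage:.2f}")
```

    The independence assumption tends to overestimate spread on clustered networks, which is why the authors validate against both artificial and real topologies.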

  17. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.Plain Language SummaryDecadal <span class="hlt">predictions</span> aim to <span class="hlt">predict</span> the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds big potential skill. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal <span class="hlt">predictions</span>. Ensembles are another important aspect. 
Applying slightly perturbed <span class="hlt">predictions</span> to trigger the famous butterfly effect <span class="hlt">results</span> in an ensemble. Instead of evaluating one</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..MARE35010F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..MARE35010F"><span><span class="hlt">Predicting</span> community composition from pairwise interactions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Friedman, Jonathan; Higgins, Logan; Gore, Jeff</p> <p></p> <p>The ability to <span class="hlt">predict</span> the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their <span class="hlt">predictive</span> power has rarely been tested experimentally. To directly assess the <span class="hlt">predictive</span> power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to <span class="hlt">predict</span> the composition of communities composed of more than 2 species. The pairwise competitions <span class="hlt">resulted</span> in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be <span class="hlt">predicted</span> for communities composed solely of such pairs. 
Our <span class="hlt">results</span> demonstrate the <span class="hlt">predictive</span> ability and utility of simple phenomenology, which enables <span class="hlt">accurate</span> <span class="hlt">predictions</span> in the absence of mechanistic details.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1190586-occupancy-key-transcription-factors-more-accurate-predictor-enhancer-activity-than-histone-modifications-chromatin-accessibility','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1190586-occupancy-key-transcription-factors-more-accurate-predictor-enhancer-activity-than-histone-modifications-chromatin-accessibility"><span>Occupancy by key transcription factors is a more <span class="hlt">accurate</span> predictor of enhancer activity than histone modifications or chromatin accessibility</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Dogan, Nergiz; Wu, Weisheng; Morrissey, Christapher S.; ...</p> <p>2015-04-23</p> <p>Regulated gene expression controls organismal development, and variation in regulatory patterns has been implicated in complex traits. Thus <span class="hlt">accurate</span> <span class="hlt">prediction</span> of enhancers is important for further understanding of these processes. Genome-wide measurement of epigenetic features, such as histone modifications and occupancy by transcription factors, is improving enhancer <span class="hlt">predictions</span>, but the contribution of these features to <span class="hlt">prediction</span> accuracy is not known. Given the importance of the hematopoietic transcription factor TAL1 for erythroid gene activation, we <span class="hlt">predicted</span> candidate enhancers based on genomic occupancy by TAL1 and measured their activity. Contributions of multiple features to enhancer <span class="hlt">prediction</span> were evaluated based on the resultsmore » of these and other studies. 
<span class="hlt">Results</span>: TAL1-bound DNA segments were active enhancers at a high rate both in transient transfections of cultured cells (39 of 79, or 56%) and transgenic mice (43 of 66, or 65%). The level of binding signal for TAL1 or GATA1 did not help distinguish TAL1-bound DNA segments as active versus inactive enhancers, nor did the density of regulation-related histone modifications. A meta-analysis of <span class="hlt">results</span> from this and other studies (273 tested <span class="hlt">predicted</span> enhancers) showed that the presence of TAL1, GATA1, EP300, SMAD1, H3K4 methylation, H3K27ac, and CAGE tags at DNase hypersensitive sites gave the most <span class="hlt">accurate</span> predictors of enhancer activity, with a success rate over 80% and a median threefold increase in activity. Chromatin accessibility assays and the histone modifications H3K4me1 and H3K27ac were sensitive for finding enhancers, but they have high false positive rates unless transcription factor occupancy is also included. Conclusions: Occupancy by key transcription factors such as TAL1, GATA1, SMAD1, and EP300, along with evidence of transcription, improves the accuracy of enhancer <span class="hlt">predictions</span> based on epigenetic features.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1233.1588A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1233.1588A"><span><span class="hlt">Prediction</span> of Scour below Flip Bucket using Soft Computing Techniques</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Azamathulla, H. 
Md.; Ab Ghani, Aminuddin; Azazi Zakaria, Nor</p> <p>2010-05-01</p> <p>The <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the depth of scour around hydraulic structures (trajectory spillways) has been based on experimental studies, and the equations developed are mainly empirical in nature. This paper evaluates the performance of two soft computing (intelligence) techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Gene Expression Programming (GEP) approach, in the <span class="hlt">prediction</span> of scour below a flip-bucket spillway. The <span class="hlt">results</span> are very promising, which supports the use of these intelligent techniques in the <span class="hlt">prediction</span> of highly non-linear scour parameters.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> 
<p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhDT........17E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhDT........17E"><span>Characterization of normality of chaotic systems including <span class="hlt">prediction</span> and detection of anomalies</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Engler, Joseph John</p> <p></p> <p><span class="hlt">Accurate</span> <span class="hlt">prediction</span> and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to <span class="hlt">accurately</span> <span class="hlt">predict</span> future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for <span class="hlt">accurate</span> short-term <span class="hlt">predictions</span>, given the dynamics of the system are well understood. This fact has been exploited in the research community and has <span class="hlt">resulted</span> in various algorithms for short-term <span class="hlt">predictions</span>. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to <span class="hlt">predict</span> future states. 
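Short-horizon predictability despite long-horizon divergence, as described above, can be demonstrated with the logistic map; the choice of map and parameters here is ours, a standard toy chaotic system rather than anything from the dissertation:

```python
def logistic_traj(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a canonical chaotic system."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories whose initial states differ by one part in a billion.
a = logistic_traj(0.400000000)
b = logistic_traj(0.400000001)

short_term_gap = abs(a[5] - b[5])                                # still tiny
long_term_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order one
```

After five iterations the two trajectories remain nearly indistinguishable (accurate short-term prediction), while by thirty iterations the billionth-part perturbation has grown to an order-one disagreement, which is the sensitive dependence on initial conditions that defeats long-term prediction.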
Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may <span class="hlt">result</span> in bifurcation of the normal states, further complicates the problem. The detection of anomalies and <span class="hlt">prediction</span> of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more <span class="hlt">accurate</span> <span class="hlt">prediction</span> of future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27286683','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27286683"><span><span class="hlt">Predictive</span> modeling of complications.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P</p> <p>2016-09-01</p> <p><span class="hlt">Predictive</span> analytic algorithms are designed to identify patterns in the data that allow for <span class="hlt">accurate</span> <span class="hlt">predictions</span> without the need for a hypothesis. Therefore, <span class="hlt">predictive</span> modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using <span class="hlt">predictive</span> modeling techniques in the adult spine surgery literature. 
These types of studies represent the beginning of the use of <span class="hlt">predictive</span> analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to <span class="hlt">predictive</span> analytics, the controversies surrounding the technique, and the future directions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4313801','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4313801"><span>Computer-based personality judgments are more <span class="hlt">accurate</span> than those made by humans</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Youyou, Wu; Kosinski, Michal; Stillwell, David</p> <p>2015-01-01</p> <p>Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although <span class="hlt">accurate</span> personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. 
We show that (i) computer <span class="hlt">predictions</span> based on a generic digital footprint (Facebook Likes) are more <span class="hlt">accurate</span> (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when <span class="hlt">predicting</span> life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2872218','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2872218"><span>A Multiscale Red Blood Cell Model with <span class="hlt">Accurate</span> Mechanics, Rheology, and Dynamics</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em</p> <p>2010-01-01</p> <p>Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to <span class="hlt">predict</span> RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. 
The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical <span class="hlt">predictions</span>, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot <span class="hlt">accurately</span> represent the RBC's rheological properties and its dynamics, and therefore <span class="hlt">accurate</span> modeling of a viscoelastic membrane is necessary. PMID:20483330</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3242543','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3242543"><span><span class="hlt">Predicting</span> Visual Distraction Using Driving Performance Data</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kircher, Katja; Ahlstrom, Christer</p> <p>2010-01-01</p> <p>Behavioral variables are often used as performance indicators (PIs) of visual or internal distraction induced by secondary tasks. The objective of this study is to investigate whether visual distraction can be <span class="hlt">predicted</span> by driving performance PIs in a naturalistic setting. Visual distraction is here defined by a gaze based real-time distraction detection algorithm called AttenD. Seven drivers used an instrumented vehicle for one month each in a small scale field operational test. For each of the visual distraction events detected by AttenD, seven PIs such as steering wheel reversal rate and throttle hold were calculated. 
Corresponding data were also calculated for time periods during which the drivers were classified as attentive. For each PI, mean values for distracted and attentive states were compared using t-tests for different time-window sizes (2–40 s), and the window width with the smallest <span class="hlt">resulting</span> p-value was selected as optimal. Based on the optimized PIs, logistic regression was used to <span class="hlt">predict</span> whether the drivers were attentive or distracted. The logistic regression <span class="hlt">resulted</span> in <span class="hlt">predictions</span> which were 76% correct (sensitivity = 77% and specificity = 76%). The conclusion is that there is a relationship between behavioral variables and visual distraction, but the relationship is not strong enough to <span class="hlt">accurately</span> <span class="hlt">predict</span> visual driver distraction. Instead, behavioral PIs are probably best suited as complementary to eye-tracking-based algorithms in order to make them more <span class="hlt">accurate</span> and robust. PMID:21050615</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15017491','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15017491"><span><span class="hlt">Predictive</span> value of diminutive colonic adenoma trial: the <span class="hlt">PREDICT</span> trial.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schoenfeld, Philip; Shad, Javaid; Ormseth, Eric; Coyle, Walter; Cash, Brooks; Butler, James; Schindler, William; Kikendall, Walter J; Furlong, Christopher; Sobin, Leslie H; Hobbs, Christine M; Cruess, David; Rex, Douglas</p> <p>2003-05-01</p> <p>Diminutive adenomas (1-9 mm in diameter) are frequently found during colon cancer screening with flexible sigmoidoscopy (FS). 
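The PI-based classification step in the distraction study above, logistic regression from behavioral indicators to an attentive/distracted label, can be sketched on synthetic data; the features, coefficients, and sample below are simulated stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated performance indicators (e.g., steering reversal rate, throttle
# hold) and labels: 1 = distracted, 0 = attentive.
n = 400
X = rng.normal(size=(n, 2))
true_w = np.array([1.5, -1.0])      # assumed ground-truth effect (invented)
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

# Fit logistic regression by gradient descent on the mean log-loss.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

pred = (1 / (1 + np.exp(-(X @ w))) >= 0.5).astype(float)
accuracy = (pred == y).mean()
```

Sensitivity and specificity, the figures the study reports, would be computed the same way by restricting `pred == y` to the distracted and attentive subsets respectively.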
This trial assessed the <span class="hlt">predictive</span> value of these diminutive adenomas for advanced adenomas in the proximal colon. In a multicenter, prospective cohort trial, we matched 200 patients with normal FS and 200 patients with diminutive adenomas on FS for age and gender. All patients underwent colonoscopy. The presence of advanced adenomas (adenoma ≥10 mm in diameter, villous adenoma, adenoma with high-grade dysplasia, and colon cancer) and adenomas (any size) was recorded. Before colonoscopy, patients completed questionnaires about risk factors for adenomas. The prevalence of advanced adenomas in the proximal colon was similar in patients with diminutive adenomas and patients with normal FS (6% vs. 5.5%, respectively) (relative risk, 1.1; 95% confidence interval [CI], 0.5-2.6). Diminutive adenomas on FS did not <span class="hlt">accurately</span> <span class="hlt">predict</span> advanced adenomas in the proximal colon: sensitivity, 52% (95% CI, 32%-72%); specificity, 50% (95% CI, 49%-51%); positive <span class="hlt">predictive</span> value, 6% (95% CI, 4%-8%); and negative <span class="hlt">predictive</span> value, 95% (95% CI, 92%-97%). Male gender (odds ratio, 1.63; 95% CI, 1.01-2.61) was associated with an increased risk of proximal colon adenomas. 
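The predictive values quoted above are fixed by sensitivity, specificity, and prevalence through Bayes' rule; a minimal sketch using the trial's reported figures:

```python
def ppv_npv(sens, spec, prev):
    """Positive and negative predictive values from sensitivity,
    specificity, and disease prevalence, via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# PREDICT trial figures: sensitivity 52%, specificity 50%, and a roughly 6%
# prevalence of advanced proximal adenomas.
ppv, npv = ppv_npv(0.52, 0.50, 0.06)   # -> approximately 0.06 and 0.94
```

With a near-chance test and a 6% prevalence, Bayes' rule forces the PPV down to about 6% and the NPV up to about 94%, consistent with the reported 6% and 95%, and explaining why a negative sigmoidoscopy looks deceptively reassuring.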
Diminutive adenomas on sigmoidoscopy may not <span class="hlt">accurately</span> <span class="hlt">predict</span> advanced adenomas in the proximal colon.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21802568','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21802568"><span>Energy <span class="hlt">prediction</span> equations are inadequate for obese Hispanic youth.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Klein, Catherine J; Villavicencio, Stephan A; Schweitzer, Amy; Bethepu, Joel S; Hoffman, Heather J; Mirza, Nazrat M</p> <p>2011-08-01</p> <p>Assessing energy requirements is a fundamental activity in clinical dietetics practice. A study was designed to determine whether published linear regression equations were <span class="hlt">accurate</span> for <span class="hlt">predicting</span> resting energy expenditure (REE) in fasted Hispanic children with obesity (aged 7 to 15 years). REE was measured using indirect calorimetry; body composition was estimated with whole-body air displacement plethysmography. REE was <span class="hlt">predicted</span> using four equations: Institute of Medicine for healthy-weight children (IOM-HW), IOM for overweight and obese children (IOM-OS), Harris-Benedict, and Schofield. Accuracy of the <span class="hlt">prediction</span> was calculated as the absolute value of the difference between the measured and <span class="hlt">predicted</span> REE divided by the measured REE, expressed as a percentage. <span class="hlt">Predicted</span> values within 85% to 115% of measured were defined as <span class="hlt">accurate</span>. Participants (n=58; 53% boys) were mean age 11.8±2.1 years, had 43.5%±5.1% body fat, and had a body mass index of 31.5±5.8 (98.6±1.1 body mass index percentile). 
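The study's accuracy criterion above translates directly into code; a small sketch using the group means reported in the abstract:

```python
def ree_accurate(measured, predicted, lo=0.85, hi=1.15):
    """True when predicted REE falls within 85%-115% of measured REE,
    the accuracy criterion used in the study."""
    return lo <= predicted / measured <= hi

# Group means from the abstract (kcal/day): measured 2,339 versus the
# IOM-HW (1,815) and Harris-Benedict (1,151) predictions.
print(ree_accurate(2339, 1815))   # False: 1815/2339 is about 0.78
print(ree_accurate(2339, 1151))   # False: 1151/2339 is about 0.49
```

At the group level both equations fall below the 85% cutoff, in line with the abstract's conclusion that the published equations underpredict REE for this population.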
Measured REE was 2,339±680 kcal/day; <span class="hlt">predicted</span> REE was 1,815±401 kcal/day (IOM-HW), 1,794±311 kcal/day (IOM-OS), 1,151±300 kcal/day (Harris-Benedict), and 1,771±316 kcal/day (Schofield). Measured REE adjusted for body weight averaged 32.0±8.4 kcal/kg/day (95% confidence interval 29.8 to 34.2). Published equations <span class="hlt">predicted</span> REE within 15% accuracy for only 36% to 40% of 58 participants, except for Harris-Benedict, which did not achieve accuracy for any participant. The most frequently <span class="hlt">accurate</span> values were obtained using IOM-HW, which <span class="hlt">predicted</span> REE within 15% accuracy for 55% (17/31) of boys. Published equations did not <span class="hlt">accurately</span> <span class="hlt">predict</span> REE for youth in the study sample. Further studies are warranted to formulate <span class="hlt">accurate</span> energy <span class="hlt">prediction</span> equations for this population. Copyright © 2011 American Dietetic Association. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29440215','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29440215"><span>Does exposure to simulated patient cases improve accuracy of clinicians' <span class="hlt">predictive</span> value estimates of diagnostic test <span class="hlt">results</span>? 
A within-subjects experiment at St Michael's Hospital, Toronto, Canada.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav</p> <p>2018-02-13</p> <p>Clinicians often overestimate the probability of a disease given a positive test <span class="hlt">result</span> (positive <span class="hlt">predictive</span> value; PPV) and the probability of no disease given a negative test <span class="hlt">result</span> (negative <span class="hlt">predictive</span> value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more <span class="hlt">accurate</span> PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true <span class="hlt">predictive</span> value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). 
Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes <span class="hlt">accurate</span> estimation of <span class="hlt">predictive</span> values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=517926','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=517926"><span>GASP: Gapped Ancestral Sequence <span class="hlt">Prediction</span> for proteins</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Edwards, Richard J; Shields, Denis C</p> <p>2004-01-01</p> <p>Background The <span class="hlt">prediction</span> of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. <span class="hlt">Predicting</span> ancestral sequences is not a simple procedure and relies on <span class="hlt">accurate</span> alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. <span class="hlt">Results</span> Here we present a new algorithm, GASP (Gapped Ancestral Sequence <span class="hlt">Prediction</span>), for <span class="hlt">predicting</span> ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. 
GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make <span class="hlt">predictions</span>. GASP was tested on a number of simulated datasets based on real phylogenies. <span class="hlt">Prediction</span> accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence <span class="hlt">Prediction</span>) will <span class="hlt">predict</span> ancestral sequences from multiple protein alignments of any size. Although not as <span class="hlt">accurate</span> in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will <span class="hlt">predict</span> ancestral sequences for gapped and ungapped residues alike. 
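GASP itself assigns ancestral residues with a likelihood approach built on substitution matrices, but the Maximum Parsimony alternative mentioned above is easy to sketch for a single alignment column; the tree and residues below are invented for illustration:

```python
def fitch(node):
    """Fitch parsimony for one alignment column. A leaf is a one-character
    state string; an internal node is a (left, right) tuple. Returns the set
    of most-parsimonious states at the node: the intersection of the
    children's state sets when non-empty, otherwise their union."""
    if isinstance(node, str):
        return {node}
    left, right = fitch(node[0]), fitch(node[1])
    return (left & right) or (left | right)

# Residues of four aligned sequences at one column, on tree ((s1,s2),(s3,s4)).
root_states = fitch((("A", "C"), ("A", "A")))   # -> {'A'}
```

A likelihood method like GASP's replaces these set operations with per-state probabilities weighted by an amino acid substitution matrix, which is what allows outgroup information to inform the root-to-tip pass described above.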
PMID:15350199</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1395087','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1395087"><span>Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 <span class="hlt">Accurately</span> <span class="hlt">Predicts</span> Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zuniga, Cristal; Li, Chien-Ting; Huelsman, Tyler</p> <p></p> <p>The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model <span class="hlt">accurately</span> <span class="hlt">predicts</span> phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. 
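Phenotype predictions from genome-scale reconstructions such as iCZ843 are typically obtained by flux balance analysis (FBA): maximize a biomass flux subject to steady-state mass balance and flux bounds. A minimal sketch on an invented three-reaction network (not the iCZ843 model), assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: nutrient uptake (v0), conversion (v1), biomass synthesis (v2).
# Rows of S are metabolites A and B; steady state requires S @ v = 0.
S = np.array([[1.0, -1.0,  0.0],    # A: produced by v0, consumed by v1
              [0.0,  1.0, -1.0]])   # B: produced by v1, consumed by v2
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake capped at 10 flux units

# FBA is a linear program; linprog minimizes, so negate the biomass objective.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
growth = res.x[2]   # limited here by the uptake bound
```

Medium alterations of the kind tested in the study (adding tryptophan or methionine) correspond to relaxing or adding exchange-flux bounds and re-solving, with the predicted growth change given by the shift in the optimum.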
Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model <span class="hlt">prediction</span> of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1395087-genome-scale-metabolic-model-green-alga-chlorella-vulgaris-utex-accurately-predicts-phenotypes-under-autotrophic-heterotrophic-mixotrophic-growth-conditions','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1395087-genome-scale-metabolic-model-green-alga-chlorella-vulgaris-utex-accurately-predicts-phenotypes-under-autotrophic-heterotrophic-mixotrophic-growth-conditions"><span>Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 <span class="hlt">Accurately</span> <span class="hlt">Predicts</span> Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Zuniga, Cristal; Li, Chien-Ting; Huelsman, Tyler; ...</p> <p>2016-07-02</p> <p>The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. 
vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model <span class="hlt">accurately</span> <span class="hlt">predicts</span> phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model <span class="hlt">prediction</span> of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27372244','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27372244"><span>Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 <span class="hlt">Accurately</span> <span class="hlt">Predicts</span> Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten</p> <p>2016-09-01</p> <p>The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic 
versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model <span class="hlt">accurately</span> <span class="hlt">predicts</span> phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model <span class="hlt">prediction</span> of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5870699','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5870699"><span>miRCat2: <span class="hlt">accurate</span> <span class="hlt">prediction</span> of plant and animal microRNAs from next-generation sequencing datasets</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Paicu, Claudia; Mohorianu, Irina; Stocks, Matthew; Xu, Ping; Coince, Aurore; Billmeier, Martina; Dalmay, Tamas; Moulton, Vincent; Moxon, Simon</p> <p>2017-01-01</p> <p>Abstract Motivation MicroRNAs are a class of ∼21–22 nt small RNAs which are excised from a stable hairpin-like secondary structure. They have important gene regulatory functions and are involved in many pathways including developmental timing, organogenesis and development in eukaryotes. There are several computational tools for miRNA detection from next-generation sequencing datasets. However, many of these tools suffer from high false positive and false negative rates. Here we present a novel miRNA <span class="hlt">prediction</span> algorithm, miRCat2. miRCat2 incorporates a new entropy-based approach to detect miRNA loci, which is designed to cope with the high sequencing depth of current next-generation sequencing datasets. It has a user-friendly interface and produces graphical representations of the hairpin structure and plots depicting the alignment of sequences on the secondary structure. <span class="hlt">Results</span> We test miRCat2 on a number of animal and plant datasets and present a comparative analysis with miRCat, miRDeep2, miRPlant and miReap. We also use mutants in the miRNA biogenesis pathway to evaluate the <span class="hlt">predictions</span> of these tools. 
<span class="hlt">Results</span> indicate that miRCat2 has an improved accuracy compared with other methods tested. Moreover, miRCat2 <span class="hlt">predicts</span> several new miRNAs that are differentially expressed in wild-type versus mutants in the miRNA biogenesis pathway. Availability and Implementation miRCat2 is part of the UEA small RNA Workbench and is freely available from http://srna-workbench.cmp.uea.ac.uk/. Contact v.moulton@uea.ac.uk or s.moxon@uea.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:28407097</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5074608','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5074608"><span>Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 <span class="hlt">Accurately</span> <span class="hlt">Predicts</span> Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions1</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten</p> <p>2016-01-01</p> <p>The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. 
The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model <span class="hlt">accurately</span> <span class="hlt">predicts</span> phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model <span class="hlt">prediction</span> of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4999179','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4999179"><span>A Systematic Review of <span class="hlt">Predictions</span> of Survival in Palliative Care: How <span class="hlt">Accurate</span> Are Clinicians and Who Are the Experts?</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Harris, Adam; Harries, Priscilla</p> <p>2016-01-01</p> <p> overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. 
Conclusion Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians’ <span class="hlt">predictions</span> are frequently inaccurate. No sub-group of clinicians was consistently shown to be more <span class="hlt">accurate</span> than any other. Implications of Key Findings Further research is needed to understand how clinical <span class="hlt">predictions</span> are formulated and how their accuracy can be improved. PMID:27560380</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100038315','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100038315"><span>Analysis of Flight Management System <span class="hlt">Predictions</span> of Idle-Thrust Descents</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stell, Laurel</p> <p>2010-01-01</p> <p>To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must <span class="hlt">accurately</span> <span class="hlt">predict</span> descent trajectories. To support development of the predictor and its uncertainty models, descents from cruise to the meter fix were executed using vertical navigation in a B737-700 simulator and a B777-200 simulator, both with commercial FMSs. For both aircraft types, the FMS computed the intended descent path for a specified speed profile assuming idle thrust after top of descent (TOD), and then it controlled the avionics without human intervention. The test matrix varied aircraft weight, descent speed, and wind conditions. The first analysis in this paper determined the effect of the test matrix parameters on the FMS computation of TOD location, and it compared the <span class="hlt">results</span> to those for the current ground predictor in the Efficient Descent Advisor (EDA). 
The second analysis was similar but considered the time to fly a specified distance to the meter fix. The effects of the test matrix variables together with the accuracy requirements for the predictor will determine the allowable error for the predictor inputs. For the B737, the EDA <span class="hlt">prediction</span> of meter fix crossing time agreed well with the FMS; but its <span class="hlt">prediction</span> of TOD location probably was not sufficiently <span class="hlt">accurate</span> to enable idle-thrust descents in congested airspace, even though the FMS and EDA gave similar shapes for TOD location as a function of the test matrix variables. For the B777, the FMS and EDA gave different shapes for the TOD location function, and the EDA <span class="hlt">prediction</span> of the TOD location is not <span class="hlt">accurate</span> enough to fully enable the concept. Furthermore, the differences between the FMS and EDA <span class="hlt">predictions</span> of meter fix crossing time for the B777 indicated that at least one of them was not sufficiently <span class="hlt">accurate</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3492357','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3492357"><span><span class="hlt">Predicting</span> Turns in Proteins with a Unified Model</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan</p> <p>2012-01-01</p> <p>Motivation Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current <span class="hlt">prediction</span> methods are well developed for the <span class="hlt">prediction</span> of individual turn types, including α-turn, β-turn, and γ-turn, etc. 
However, for further protein structure and function <span class="hlt">prediction</span> it is necessary to develop a uniform model that can <span class="hlt">accurately</span> <span class="hlt">predict</span> all types of turns simultaneously. <span class="hlt">Results</span> In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape strings of proteins) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of <span class="hlt">accurate</span> <span class="hlt">prediction</span> of all turns simultaneously for a query. TurnP utilizes <span class="hlt">predicted</span> secondary structures and <span class="hlt">predicted</span> shape strings, both generated with high accuracy by methods previously developed by our group. Then, sequence and structural evolution features (sequence profiles, secondary-structure profiles, and shape-string profiles) are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeds state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests; the <span class="hlt">results</span> we achieved were outstanding for turn <span class="hlt">predictions</span> and confirmed the good performance of TurnP for practical applications.
PMID:23144872</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29466394','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29466394"><span>A hybrid intelligent method for three-dimensional short-term <span class="hlt">prediction</span> of dissolved oxygen content in aquaculture.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang</p> <p>2018-01-01</p> <p>A precise <span class="hlt">predictive</span> model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly <span class="hlt">accurate</span> interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional <span class="hlt">prediction</span> can provide more <span class="hlt">accurate</span> <span class="hlt">results</span> and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content <span class="hlt">prediction</span> model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the <span class="hlt">predicted</span> <span class="hlt">results</span> of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional <span class="hlt">prediction</span> of dissolved oxygen content. 
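The center-placement idea behind such hybrid RBF models can be sketched in a few lines. The toy example below is a minimal illustration, not the authors' implementation: plain K-means stands in for the paper's subtractive-clustering + K-means initialisation, the data are invented, and the output weights are fitted by ridge-regularised least squares.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # plain K-means to place the RBF centers (stand-in for the paper's
    # subtractive-clustering + K-means initialisation)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_features(X, centers, gamma):
    # Gaussian basis responses phi_ij = exp(-gamma * ||x_i - c_j||^2)
    return np.exp(-gamma * ((X[:, None] - centers[None]) ** 2).sum(-1))

def fit_rbf(X, y, k=10, gamma=5.0, ridge=1e-6):
    centers = kmeans(X, k)
    Phi = rbf_features(X, centers, gamma)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(k), Phi.T @ y)
    return centers, w

# invented 1-D stand-in for a dissolved-oxygen curve
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
centers, w = fit_rbf(X, y)
pred = rbf_features(X, centers, 5.0) @ w
```

Clustering only chooses where the basis functions sit; the linear solve for the weights is then cheap, which is why center selection dominates the quality of such hybrid models.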
Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3252946','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3252946"><span><span class="hlt">Accurate</span> van der Waals coefficients from density functional theory</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn</p> <p>2012-01-01</p> <p>The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet <span class="hlt">accurate</span> estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for <span class="hlt">accurate</span> evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory <span class="hlt">predicts</span> dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. 
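The vdW machinery referred to above rests on the Casimir-Polder relation C6 = (3/pi) * Integral_0^inf alpha1(i w) alpha2(i w) dw. A minimal numerical sketch follows, assuming a single-pole (London) model for each dynamic polarizability rather than the authors' density-based construction; the parameters are illustrative, not fitted values.

```python
import numpy as np

def alpha_iw(w, alpha0, w0):
    # single-pole (London) model of the dynamic dipole polarizability
    # at imaginary frequency: alpha(i w) = alpha0 / (1 + (w / w0)^2)
    return alpha0 / (1.0 + (w / w0) ** 2)

def c6_quadrature(a1, w1, a2, w2, n=200):
    # Casimir-Polder: C6 = (3/pi) * Int_0^inf alpha1(i w) alpha2(i w) dw,
    # mapped to t in (0, 1) via w = t / (1 - t), dw = dt / (1 - t)^2
    t, wt = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)      # shift Gauss nodes from (-1, 1) to (0, 1)
    wt = 0.5 * wt
    w = t / (1.0 - t)
    f = alpha_iw(w, a1, w1) * alpha_iw(w, a2, w2) / (1.0 - t) ** 2
    return 3.0 / np.pi * float(np.sum(wt * f))

def c6_london(a1, w1, a2, w2):
    # closed form of the same integral for the single-pole model
    return 1.5 * a1 * a2 * w1 * w2 / (w1 + w2)

# illustrative (assumed) parameters in atomic units
c6_num = c6_quadrature(4.5, 0.57, 4.5, 0.57)
c6_ana = c6_london(4.5, 0.57, 4.5, 0.57)
```

For the single-pole model the quadrature reproduces London's closed form, which is a convenient sanity check before substituting more realistic polarizabilities.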
PMID:22205765</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120006651','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120006651"><span>Method for <span class="hlt">Accurately</span> Calibrating a Spectrometer Using Broadband Light</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Simmons, Stephen; Youngquist, Robert</p> <p>2011-01-01</p> <p>A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more <span class="hlt">accurate</span> calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer s wavelength range. This new method appears to be <span class="hlt">accurate</span> to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. 
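The channelled spectrum produced by such an unbalanced Michelson can be sketched directly: a fixed optical path difference DELTA yields I(lam) proportional to 1 + cos(2*pi*DELTA/lam), with fringe maxima at lam = DELTA/m for integer order m. All numerical values below are assumed for illustration, not the article's instrument parameters.

```python
import numpy as np

# assumed path difference of the unbalanced Michelson, in nm
DELTA = 50_000.0
lam = np.linspace(550.0, 650.0, 20001)           # wavelength grid in nm
intensity = 1.0 + np.cos(2.0 * np.pi * DELTA / lam)

# predicted fringe-peak wavelengths inside the grid: lam = DELTA / m
orders = np.arange(int(DELTA / lam[-1]) + 1, int(DELTA / lam[0]) + 1)
maxima = np.sort(DELTA / orders)

# local fringe period; for adjacent orders this equals lam_m * lam_(m+1) / DELTA,
# i.e. approximately lam^2 / DELTA over a small interval
spacing = np.diff(maxima)
```

Comparing the measured positions of these predicted maxima against the spectrometer's reported wavelengths is what exposes the per-pixel calibration error.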
A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this <span class="hlt">predicted</span> pattern, errors in the wavelength assignment made by the spectrometer can be determined.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29190972','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29190972"><span>Development of estrogen receptor beta binding <span class="hlt">prediction</span> model using large sets of chemicals.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida;
Hong, Huixiao</p> <p>2017-11-03</p> <p>We developed an ER β binding <span class="hlt">prediction</span> model to facilitate, together with our previously developed ER α binding model, identification of chemicals that specifically bind ER β or ER α. Decision Forest was used to train the ER β binding <span class="hlt">prediction</span> model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validations. <span class="hlt">Prediction</span> confidence was analyzed using <span class="hlt">predictions</span> from the cross validations. Informative chemical features for ER β binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess the chance correlation. The average accuracy of the 5-fold cross validations was 93.14% with a standard deviation of 0.64%. <span class="hlt">Prediction</span> confidence analysis indicated that the higher the <span class="hlt">prediction</span> confidence, the more <span class="hlt">accurate</span> the <span class="hlt">predictions</span>. Permutation testing <span class="hlt">results</span> revealed that the <span class="hlt">prediction</span> model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important for ER β binding <span class="hlt">prediction</span>. Application of the <span class="hlt">prediction</span> model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our <span class="hlt">results</span> demonstrated that ER β binding of chemicals could be <span class="hlt">accurately</span> <span class="hlt">predicted</span> using the developed model.
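The cross-validation protocol used to estimate performance above can be sketched generically. Decision Forest itself is not reproduced here; a nearest-centroid classifier and invented separable data stand in, so the numbers are illustrative only.

```python
import numpy as np

def kfold_accuracy(X, y, fit, predict, k=5, seed=0):
    # generic k-fold cross-validation accuracy; `fit`/`predict` are stand-ins
    # for whatever classifier is under evaluation
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs)), float(np.std(accs))

# nearest-centroid stand-in classifier on invented, well-separated toy data
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    cs = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in cs])
    return np.array(cs)[np.argmin(d, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
mean_acc, sd = kfold_accuracy(X, y, fit, predict)
```

Repeating the whole k-fold loop many times with reshuffled folds, as the study does with 1000 iterations, turns the single accuracy figure into a mean with a standard deviation.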
Coupled with our previously developed ER α <span class="hlt">prediction</span> model, this model could be expected to facilitate drug development through identification of chemicals that specifically bind ER β or ER α.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29087949','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29087949"><span>Combining disparate data sources for improved poverty <span class="hlt">prediction</span> and mapping.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pokhriyal, Neeti; Jacques, Damien Christophe</p> <p>2017-11-14</p> <p>More than 330 million people are still living in extreme poverty in Africa. Timely, <span class="hlt">accurate</span>, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of "Big Data" to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to <span class="hlt">accurately</span> <span class="hlt">predict</span> the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity, covering 552 communes in Senegal, using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people). Our framework is based on Gaussian Process regression, a Bayesian learning technique, providing uncertainty associated with <span class="hlt">predictions</span>. We perform model selection using elastic net regularization to prevent overfitting. Our <span class="hlt">results</span> empirically demonstrate the superior accuracy achieved when using disparate data (Pearson correlation of 0.91).
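The property the framework relies on, a predictive mean plus a per-point uncertainty, is intrinsic to Gaussian Process regression. A minimal numpy sketch follows; the RBF kernel, hyperparameters, and data are all invented for illustration and are not the authors' model.

```python
import numpy as np

def gp_regress(Xtr, ytr, Xte, ls=1.0, sf=1.0, noise=1e-2):
    # Gaussian Process regression with an RBF kernel: returns the predictive
    # mean and per-point variance (the uncertainty attached to each prediction)
    def k(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ls ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xte, Xtr)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sf ** 2 - (v ** 2).sum(axis=0)    # k(x*,x*) - k*^T K^-1 k*
    return mean, var

Xtr = np.linspace(0, 5, 30)[:, None]
ytr = np.sin(Xtr[:, 0])
Xte = np.array([[2.5], [10.0]])             # one in-sample point, one far away
mean, var = gp_regress(Xtr, ytr, Xte)
```

The variance grows for test points far from the training data, which is exactly the diagnostic behaviour that makes GP-based poverty maps useful to policy makers.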
Our approach is used to <span class="hlt">accurately</span> <span class="hlt">predict</span> important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84-0.86). All <span class="hlt">predictions</span> are validated using deprivations calculated from the census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policy makers in designing better interventions for poverty eradication. Copyright © 2017 the Author(s). Published by PNAS.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5699027','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5699027"><span>Combining disparate data sources for improved poverty <span class="hlt">prediction</span> and mapping</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p>More than 330 million people are still living in extreme poverty in Africa. Timely, <span class="hlt">accurate</span>, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of “Big Data” to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to <span class="hlt">accurately</span> <span class="hlt">predict</span> the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity, covering 552 communes in Senegal, using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people).
Our framework is based on Gaussian Process regression, a Bayesian learning technique, providing uncertainty associated with <span class="hlt">predictions</span>. We perform model selection using elastic net regularization to prevent overfitting. Our <span class="hlt">results</span> empirically demonstrate the superior accuracy achieved when using disparate data (Pearson correlation of 0.91). Our approach is used to <span class="hlt">accurately</span> <span class="hlt">predict</span> important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84–0.86). All <span class="hlt">predictions</span> are validated using deprivations calculated from the census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policy makers in designing better interventions for poverty eradication. PMID:29087949</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024987','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024987"><span>Remaining Useful Life <span class="hlt">Prediction</span> for Lithium-Ion Batteries Based on Gaussian Processes Mixture</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang</p> <p>2016-01-01</p> <p>The remaining useful life (RUL) <span class="hlt">prediction</span> of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to the self-charging and the capacity regeneration, the trajectories have the property of multimodality. Traditional <span class="hlt">prediction</span> models such as the support vector machines (SVM) or the Gaussian Process regression (GPR) cannot <span class="hlt">accurately</span> characterize this multimodality.
This paper proposes a novel RUL <span class="hlt">prediction</span> method based on the Gaussian Process Mixture (GPM). It handles multimodality by separately fitting different segments of the trajectories with different GPR models, such that the small differences among these segments can be revealed. The method is demonstrated to be effective by the excellent <span class="hlt">predictive</span> <span class="hlt">results</span> of experiments on two commercial rechargeable 18650-type Lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more <span class="hlt">accurate</span> than the SVM and the GPR. In addition, GPM can yield a <span class="hlt">predictive</span> confidence interval, which makes the <span class="hlt">prediction</span> more reliable than that of traditional models. PMID:27632176</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23757401','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23757401"><span>Microvascular remodelling in preeclampsia: quantifying capillary rarefaction <span class="hlt">accurately</span> and independently <span class="hlt">predicts</span> preeclampsia.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Antonios, Tarek F T; Nama, Vivek; Wang, Duolao; Manyonda, Isaac T</p> <p>2013-09-01</p> <p>Preeclampsia is a major cause of maternal and neonatal mortality and morbidity. The incidence of preeclampsia seems to be rising because of increased prevalence of predisposing disorders, such as essential hypertension, diabetes, and obesity, and there is increasing evidence to suggest widespread microcirculatory abnormalities before the onset of preeclampsia.
We hypothesized that quantifying capillary rarefaction could be helpful in the clinical <span class="hlt">prediction</span> of preeclampsia. We measured skin capillary density according to a well-validated protocol at 5 consecutive predetermined visits in 322 consecutive white women, of whom 16 developed preeclampsia. We found that structural capillary rarefaction at 20-24 weeks of gestation yielded a sensitivity of 0.87 and a specificity of 0.50 at a cutoff of 2 capillaries/field, with an area under the receiver operating characteristic curve (AUC) of 0.70, whereas capillary rarefaction at 27-32 weeks of gestation yielded a sensitivity of 0.75 and a higher specificity of 0.77 at a cutoff of 8 capillaries/field, with an AUC of 0.82. Combining capillary rarefaction with the uterine artery Doppler pulsatility index increased the sensitivity and specificity of the <span class="hlt">prediction</span>. Multivariable analysis shows that the odds of preeclampsia are increased in women with a previous history of preeclampsia or chronic hypertension and in those with an increased uterine artery Doppler pulsatility index, but the most powerful and independent predictor of preeclampsia was capillary rarefaction at 27-32 weeks.
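Sensitivity and specificity at a capillary-density cutoff of the kind quoted above follow mechanically from a confusion matrix. The counts below are invented for illustration and are not the study's data.

```python
import numpy as np

def sens_spec(scores, truth, cutoff, positive_below=True):
    # sensitivity/specificity of a "low capillary density" rule: a case is
    # called screen-positive when the score falls at or below the cutoff
    pred = scores <= cutoff if positive_below else scores >= cutoff
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical capillaries-per-field counts and outcomes (not the study's data)
density = np.array([1, 2, 2, 3, 7, 8, 9, 9, 10, 11])
preeclampsia = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0], dtype=bool)
sens, spec = sens_spec(density, preeclampsia, cutoff=2)
```

Sweeping the cutoff over all observed densities and plotting sensitivity against 1 - specificity is what produces the reported receiver operating characteristic curve.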
Quantifying structural rarefaction of skin capillaries in pregnancy is a potentially useful clinical marker for the <span class="hlt">prediction</span> of preeclampsia.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JPS...242..548W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JPS...242..548W"><span>Adaptive on-line <span class="hlt">prediction</span> of the available power of lithium-ion batteries</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Waag, Wladislaw; Fleischer, Christian; Sauer, Dirk Uwe</p> <p>2013-11-01</p> <p>In this paper a new approach for <span class="hlt">prediction</span> of the available power of a lithium-ion battery pack is presented. It is based on a nonlinear battery model that includes current dependency of the battery resistance. It <span class="hlt">results</span> in an <span class="hlt">accurate</span> power <span class="hlt">prediction</span> not only at room temperature, but also at lower temperatures at which the current dependency is substantial. The used model parameters are fully adaptable on-line to the given state of the battery (state of charge, state of health, temperature). This on-line adaption in combination with an explicit consideration of differences between characteristics of individual cells in a battery pack ensures an <span class="hlt">accurate</span> power <span class="hlt">prediction</span> under all possible conditions. The proposed trade-off between the number of used cell parameters and the total accuracy as well as the optimized algorithm <span class="hlt">results</span> in a real-time capability of the method, which is demonstrated on a low-cost 16 bit microcontroller. 
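One way a current-dependent resistance enters an available-power calculation can be sketched as follows. This is a hedged illustration, not the authors' algorithm: the resistance model R(I) = r0 + k*I and all cell values are assumptions made for the example.

```python
import math

def max_discharge_current(ocv, v_min, r0, k):
    # largest current that keeps the terminal voltage at v_min under the
    # assumed current-dependent resistance R(I) = r0 + k*I:
    #   ocv - i*(r0 + k*i) = v_min  ->  k*i^2 + r0*i - (ocv - v_min) = 0
    if k == 0.0:
        return (ocv - v_min) / r0
    return (-r0 + math.sqrt(r0 ** 2 + 4.0 * k * (ocv - v_min))) / (2.0 * k)

def available_power(ocv, v_min, r0, k):
    # discharge power available without crossing the voltage limit
    return v_min * max_discharge_current(ocv, v_min, r0, k)

# illustrative cell values (assumed): 3.7 V open-circuit, 3.0 V cutoff, 2 mOhm
p_linear = available_power(3.7, 3.0, 0.002, 0.0)   # resistance independent of I
p_nonlin = available_power(3.7, 3.0, 0.002, 1e-5)  # resistance rising with I
```

The nonlinear term substantially lowers the admissible current, which is why ignoring the current dependency overestimates available power, especially at low temperatures where that dependency is strongest.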
The verification tests performed on a software-in-the-loop test bench system with four 40 Ah lithium-ion cells show promising <span class="hlt">results</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29481818','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29481818"><span>Can administrative health utilisation data provide an <span class="hlt">accurate</span> diabetes prevalence estimate for a geographical region?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin</p> <p>2018-05-01</p> <p>To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory <span class="hlt">results</span> and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory <span class="hlt">results</span> stored in the Auckland regional laboratory <span class="hlt">result</span> repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive <span class="hlt">predictive</span> value of each of the VDR algorithm rules individually and as a combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive <span class="hlt">predictive</span> value of 76% and negative <span class="hlt">predictive</span> value of 98%.
The modified VDR algorithm has improved the positive <span class="hlt">predictive</span> value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative <span class="hlt">predictive</span> value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more <span class="hlt">accurate</span> diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long term condition register constructed from both laboratory <span class="hlt">results</span> and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3227105','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3227105"><span>Discriminative <span class="hlt">prediction</span> of mammalian enhancers from DNA sequence</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lee, Dongwon; Karchin, Rachel; Beer, Michael A.</p> <p>2011-01-01</p> <p><span class="hlt">Accurately</span> <span class="hlt">predicting</span> regulatory sequences and enhancers in entire genomes is an important but difficult problem, especially in large vertebrate genomes. With the advent of ChIP-seq technology, experimental detection of genome-wide EP300/CREBBP bound regions provides a powerful platform to develop <span class="hlt">predictive</span> tools for regulatory sequences and to study their sequence properties. 
Here, we develop a support vector machine (SVM) framework which can <span class="hlt">accurately</span> identify EP300-bound enhancers using only genomic sequence and an unbiased set of general sequence features. Moreover, we find that the <span class="hlt">predictive</span> sequence features identified by the SVM classifier reveal biologically relevant sequence elements enriched in the enhancers, but we also identify other features that are significantly depleted in enhancers. The <span class="hlt">predictive</span> sequence features are evolutionarily conserved and spatially clustered, providing further support of their functional significance. Although our SVM is trained on experimental data, we also <span class="hlt">predict</span> novel enhancers and show that these putative enhancers are significantly enriched in both ChIP-seq signal and DNase I hypersensitivity signal in the mouse brain and are located near relevant genes. Finally, we present <span class="hlt">results</span> of comparisons between other EP300/CREBBP data sets using our SVM and uncover sequence elements enriched and/or depleted in the different classes of enhancers. Many of these sequence features play a role in specifying tissue-specific or developmental-stage-specific enhancer activity, but our <span class="hlt">results</span> indicate that some features operate in a general or tissue-independent manner. In addition to providing a high confidence list of enhancer targets for subsequent experimental investigation, these <span class="hlt">results</span> contribute to our understanding of the general sequence structure of vertebrate enhancers. 
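Sequence-based SVM classification of this kind typically starts by mapping each sequence to a vector of k-mer counts (the feature map behind a spectrum kernel). The sketch below shows only that featurization step; the study's actual feature set is richer than plain k-mer counts:

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Normalized k-mer count vector for a DNA sequence -- the feature
    map underlying a spectrum-kernel SVM. Illustrative only; not the
    feature set used in the study described above."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts[km] / total for km in kmers]

vec = kmer_features("ACGTACGT", k=2)  # 16-dimensional vector for k = 2
```

Vectors like `vec` would then be fed to any linear classifier; the SVM weight on each k-mer indicates whether that element is enriched or depleted in enhancers.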
PMID:21875935</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19990021235','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19990021235"><span>Validation <span class="hlt">Results</span> for LEWICE 2.0</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wright, William B.; Rutkowski, Adam</p> <p>1999-01-01</p> <p>A research project is underway at NASA Lewis to produce a computer code which can <span class="hlt">accurately</span> <span class="hlt">predict</span> ice growth under any meteorological conditions for any aircraft surface. This report will present <span class="hlt">results</span> from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce <span class="hlt">results</span> <span class="hlt">accurately</span> for different spacing and time step criteria across computing platforms. It also differs in the extensive amount of effort undertaken to compare the <span class="hlt">results</span> in a quantified manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The <span class="hlt">results</span> of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. 
This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2% while the overall variability of the experimental data is 2.5%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFMOS11C1667K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFMOS11C1667K"><span>Improvement of operational <span class="hlt">prediction</span> system applied to the oil spill <span class="hlt">prediction</span> in the Yellow Sea</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, C.; Cho, Y.; Choi, B.; Jung, K.</p> <p>2012-12-01</p> <p>A multi-nested operational <span class="hlt">prediction</span> system for the Yellow Sea (YS) has been developed to <span class="hlt">predict</span> the movement of oil spills. Drifter trajectory simulations were performed to <span class="hlt">predict</span> the path of the oil spill of the MV Hebei Spirit accident, which occurred on 7 December 2007. The oil spill trajectories at the surface <span class="hlt">predicted</span> by the numerical model without tidal forcing were remarkably faster than the observation. However, the speed of drifters <span class="hlt">predicted</span> by the model considering tide was satisfactorily improved, not only for the motion within the tidal cycle but also for the motion with subtidal period. The subtidal flow of the simulation with tide was weaker than that without tide due to tidal stress. Tidal stress decelerated the southward subtidal flows driven by northwesterly wind along the Korean coast of the YS in winter. 
This <span class="hlt">result</span> strongly implies that tide must be included for <span class="hlt">accurate</span> <span class="hlt">prediction</span> of oil spill trajectories, not only for variation within a tidal cycle but also for longer-time-scale advection in tide-dominant areas.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20843843','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20843843"><span>Selective pressures for <span class="hlt">accurate</span> altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Clune, Jeff; Goldsby, Heather J; Ofria, Charles; Pennock, Robert T</p> <p>2011-03-07</p> <p>Inclusive fitness theory <span class="hlt">predicts</span> that natural selection will favour altruist genes that are more <span class="hlt">accurate</span> in targeting altruism only to copies of themselves. In this paper, we provide evidence from digital evolution in support of this <span class="hlt">prediction</span> by competing multiple altruist-targeting mechanisms that vary in their accuracy in determining whether a potential target for altruism carries a copy of the altruist gene. We compete altruism-targeting mechanisms based on (i) kinship (kin targeting), (ii) genetic similarity at a level greater than that expected of kin (similarity targeting), and (iii) perfect knowledge of the presence of an altruist gene (green beard targeting). Natural selection always favoured the most <span class="hlt">accurate</span> targeting mechanism available. Our investigations also revealed that evolution did not increase the altruism level when all green beard altruists used the same phenotypic marker. 
The green beard altruism levels stably increased only when mutations that changed the altruism level also changed the marker (e.g. beard colour), such that beard colour reliably indicated the altruism level. For kin- and similarity-targeting mechanisms, we found that evolution was able to stably adjust altruism levels. Our <span class="hlt">results</span> confirm that natural selection favours altruist genes that are increasingly <span class="hlt">accurate</span> in targeting altruism to only their copies. Our work also emphasizes that the concept of targeting accuracy must include both the presence of an altruist gene and the level of altruism it produces.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPCM...30uLT01A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPCM...30uLT01A"><span><span class="hlt">Accurate</span> quasiparticle calculation of x-ray photoelectron spectra of solids</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aoki, Tsubasa; Ohno, Kaoru</p> <p>2018-05-01</p> <p>It has been highly desired to provide an <span class="hlt">accurate</span> and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. 
Calculated <span class="hlt">results</span> show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA has a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The <span class="hlt">resulting</span> C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within a 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve highly <span class="hlt">accurate</span> <span class="hlt">prediction</span> of CEBEs without any empirical parameter for band-gapped crystals, and provides a more reliable theoretical approach than the conventional ΔSCF-DFT method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29651994','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29651994"><span><span class="hlt">Accurate</span> quasiparticle calculation of x-ray photoelectron spectra of solids.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aoki, Tsubasa; Ohno, Kaoru</p> <p>2018-05-31</p> <p>It has been highly desired to provide an <span class="hlt">accurate</span> and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. 
The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated <span class="hlt">results</span> show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA has a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The <span class="hlt">resulting</span> C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within a 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve highly <span class="hlt">accurate</span> <span class="hlt">prediction</span> of CEBEs without any empirical parameter for band-gapped crystals, and provides a more reliable theoretical approach than the conventional ΔSCF-DFT method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27498635','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27498635"><span>Methods for Efficiently and <span class="hlt">Accurately</span> Computing Quantum Mechanical Free Energies for Enzyme Catalysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L</p> <p>2016-01-01</p> <p>Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to <span class="hlt">predict</span> free energies associated with enzymatic processes is critical to understanding and <span class="hlt">predicting</span> their function. 
Free energy simulation (FES) has historically been a computational challenge as it requires both the <span class="hlt">accurate</span> description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when <span class="hlt">accurate</span> computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to <span class="hlt">accurately</span> describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007PhDT........47R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007PhDT........47R"><span>Helicopter flight dynamics simulation with a time-<span class="hlt">accurate</span> free-vortex wake model</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ribera, Maria</p> <p></p> <p>This dissertation describes the implementation and validation of a coupled rotor-fuselage simulation model with a time-<span class="hlt">accurate</span> free-vortex wake model capable of capturing the response to maneuvers of arbitrary amplitude. 
The <span class="hlt">resulting</span> model has been used to analyze different flight conditions, including both steady and transient maneuvers. The flight dynamics model is based on a system of coupled nonlinear rotor-fuselage differential equations in first-order, state-space form. The rotor model includes flexible blades, with coupled flap-lag-torsion dynamics and swept tips; the rigid body dynamics are modeled with the non-linear Euler equations. The free wake models the rotor flow field by tracking the vortices released at the blade tips. Their behavior is described by the equations of vorticity transport, which are approximated using finite differences and solved using a time-<span class="hlt">accurate</span> numerical scheme. The flight dynamics model can be solved as a system of non-linear algebraic trim equations to determine the steady state solution, or integrated in time in response to pilot-applied controls. This study also implements new approaches to reduce the prohibitive computational costs associated with such complex models without losing accuracy. The mathematical model was validated for trim conditions in level flight, turns, climbs and descents. The <span class="hlt">results</span> obtained correlate well with flight test data, in level flight as well as in turning, climbing, and descending flight. The swept tip model was also found to improve the trim <span class="hlt">predictions</span>, particularly at high speed. The behavior of the rigid body and the rotor blade dynamics were also studied and related to the aerodynamic load distributions obtained with the free wake induced velocities. The model was also validated in a lateral maneuver from hover. 
The <span class="hlt">results</span> show improvements in the on-axis <span class="hlt">prediction</span>, and indicate a possible relation between the off-axis <span class="hlt">prediction</span> and the lack of rotor-body interaction.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4320938','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4320938"><span>Robust and <span class="hlt">Accurate</span> Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sivaraks, Haemwaan</p> <p>2015-01-01</p> <p>Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from real ECG signals, especially in ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and <span class="hlt">accurate</span> in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead ECGs and multilead ECGs. Our experimental <span class="hlt">results</span> on real ECG datasets are interpreted and evaluated by cardiologists. 
Our proposed algorithm can mostly achieve 100% accuracy of detection (AoD), sensitivity, specificity, and positive <span class="hlt">predictive</span> value with a 0% false alarm rate. The <span class="hlt">results</span> demonstrate that our proposed method is highly <span class="hlt">accurate</span> and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70159346','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70159346"><span>Evaluation of wave runup <span class="hlt">predictions</span> from numerical and parametric models</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.</p> <p>2014-01-01</p> <p>Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be <span class="hlt">predicted</span> from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was <span class="hlt">accurately</span> <span class="hlt">predicted</span> by both the parameterized model and numerical simulations. Infragravity swash heights were most <span class="hlt">accurately</span> <span class="hlt">predicted</span> by the parameterized model. 
The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the <span class="hlt">predictions</span> were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at <span class="hlt">predicting</span> incident-band swash heights. An assimilated <span class="hlt">prediction</span> using a weighted average of the parameterized model and the numerical simulations <span class="hlt">resulted</span> in a reduction in <span class="hlt">prediction</span> error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These <span class="hlt">results</span> indicated that the parameterized <span class="hlt">predictions</span> of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized <span class="hlt">predictions</span> of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730014574&hterms=crosstalk&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dcrosstalk','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730014574&hterms=crosstalk&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dcrosstalk"><span>Analytical <span class="hlt">prediction</span> of digital signal crosstalk of FCC</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Belleisle, A. P.</p> <p>1972-01-01</p> <p>The <span class="hlt">results</span> are presented of a study effort whose aim was the development of <span class="hlt">accurate</span> means of analyzing and <span class="hlt">predicting</span> signal cross-talk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, a minimum set of parameter measurements required for the application of the model is presented. Comparisons between cross-talk <span class="hlt">predicted</span> by this model and actual measured cross-talk are shown for a six-conductor ribbon cable.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27543682','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27543682"><span>Genetically informed ecological niche models improve climate change <span class="hlt">predictions</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ikeda, Dana H; Max, Tamara L; Allan, Gerard J; Lau, Matthew K; Shuster, Stephen M; Whitham, Thomas G</p> <p>2017-01-01</p> <p>We examined the hypothesis that ecological niche models (ENMs) more <span class="hlt">accurately</span> <span class="hlt">predict</span> species distributions when they incorporate information on population genetic structure, and concomitantly, local adaptation. Local adaptation is common in species that span a range of environmental gradients (e.g., soils and climate). Moreover, common garden studies have demonstrated a covariance between neutral markers and functional traits associated with a species' ability to adapt to environmental change. 
We therefore <span class="hlt">predicted</span> that genetically distinct populations would respond differently to climate change, <span class="hlt">resulting</span> in <span class="hlt">predicted</span> distributions with little overlap. To test whether genetic information improves our ability to <span class="hlt">predict</span> a species' niche space, we created genetically informed ecological niche models (gENMs) using Populus fremontii (Salicaceae), a widespread tree species in which prior common garden experiments demonstrate strong evidence for local adaptation. Four major findings emerged: (i) gENMs <span class="hlt">predicted</span> population occurrences with up to 12-fold greater accuracy than models without genetic information; (ii) tests of niche similarity revealed that three ecotypes, identified on the basis of neutral genetic markers and locally adapted populations, are associated with differences in climate; (iii) our forecasts indicate that ongoing climate change will likely shift these ecotypes further apart in geographic space, <span class="hlt">resulting</span> in greater niche divergence; (iv) ecotypes that currently exhibit the largest geographic distribution and niche breadth appear to be buffered the most from climate change. As diverse agents of selection shape genetic variability and structure within species, we argue that gENMs will lead to more <span class="hlt">accurate</span> <span class="hlt">predictions</span> of species distributions under climate change. 
© 2016 John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28670532','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28670532"><span>Are <span class="hlt">Predictive</span> Equations for Estimating Resting Energy Expenditure <span class="hlt">Accurate</span> in Asian Indian Male Weightlifters?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Joseph, Mini; Gupta, Riddhi Das; Prema, L; Inbakumari, Mercy; Thomas, Nihal</p> <p>2017-01-01</p> <p>The accuracy of existing <span class="hlt">predictive</span> equations to determine the resting energy expenditure (REE) of professional weightlifters remains scarcely studied. Our study aimed to assess the REE of male Asian Indian weightlifters with indirect calorimetry and to compare the measured REE (mREE) with published equations. A new equation using potential anthropometric variables to <span class="hlt">predict</span> REE was also evaluated. REE was measured on 30 male professional weightlifters aged between 17 and 28 years using indirect calorimetry and compared with eight <span class="hlt">prediction</span> formulas: Harris-Benedict, Mifflin-St. Jeor, FAO/WHO/UNU, ICMR, Cunningham, Owen, Katch-McArdle, and Nelson. Pearson correlation coefficient, intraclass correlation coefficient, and multiple linear regression analysis were carried out to study the agreement between the different methods and the association with anthropometric variables, and to formulate a new <span class="hlt">prediction</span> equation for this population. Pearson correlation coefficients between mREE and the anthropometric variables showed positive significance with suprailiac skinfold thickness, lean body mass (LBM), waist circumference, hip circumference, bone mineral mass, and body mass. 
All eight <span class="hlt">predictive</span> equations underestimated the REE of the weightlifters when compared with the mREE. The highest mean difference was 636 kcal/day (Owen, 1986) and the lowest difference was 375 kcal/day (Cunningham, 1980). Stepwise multiple linear regression showed that LBM was the only significant determinant of REE in this group of sportspersons. A new equation using LBM as the independent variable for calculating REE was computed: REE for weightlifters = -164.065 + 0.039 × LBM (confidence interval: -1122.984 to 794.854). This new equation reduced the mean difference with mREE by 2.36 ± 369.15 kcal/day (standard error = 67.40). The significant finding of this study was that all the <span class="hlt">prediction</span> equations underestimated the REE. LBM was the sole determinant of REE in this population.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> 
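The reported regression can be written as a one-line function. Note an assumption: the abstract does not state the unit of LBM, and the coefficients yield plausible kcal/day values only if LBM is expressed in grams, so grams is assumed here.

```python
def ree_weightlifters(lbm_grams):
    """REE (kcal/day) from lean body mass, using the regression reported
    in the abstract: REE = -164.065 + 0.039 * LBM.
    Assumption: LBM is in grams (the abstract does not state the unit)."""
    return -164.065 + 0.039 * lbm_grams

ree = ree_weightlifters(60000)  # ~2176 kcal/day for 60 kg of lean mass
```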
</div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29228516','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29228516"><span>Refining <span class="hlt">Prediction</span> in Treatment-Resistant Depression: <span class="hlt">Results</span> of Machine Learning Analyses in the TRD III Sample.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kautzky, Alexander; Dold, Markus; Bartova, Lucie; Spies, Marie; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Fabbri, Chiara; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried</p> <p></p> <p>The study objective was to generate a <span class="hlt">prediction</span> model for treatment-resistant depression (TRD) using machine learning featuring a large set of 47 clinical and sociodemographic predictors of treatment outcome. A total of 552 patients diagnosed with major depressive disorder (MDD) according to DSM-IV criteria were enrolled between 2011 and 2016. TRD was defined as failure to reach response to antidepressant treatment, characterized by a Montgomery-Asberg Depression Rating Scale (MADRS) score below 22, after at least 2 antidepressant trials of adequate length and dosage were administered. RandomForest (RF) was used for <span class="hlt">predicting</span> treatment outcome phenotypes in a 10-fold cross-validation. The full model with 47 predictors yielded an accuracy of 75.0%. When the number of predictors was reduced to 15, accuracies between 67.6% and 71.0% were attained for different test sets. 
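The 10-fold cross-validation protocol used to obtain these accuracies can be sketched in a few lines. This is a generic illustration of the evaluation scheme, not the study's pipeline, and a trivial majority-class predictor stands in for the RandomForest classifier:

```python
import random

def k_fold_cv_accuracy(X, y, fit, predict, k=10, seed=0):
    """k-fold cross-validation: shuffle indices, split into k folds,
    train on k-1 folds, test on the held-out fold, average accuracies."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accuracies = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / k

# Trivial majority-class stand-in for a real classifier such as RandomForest:
def fit_majority(X, y):
    return max(set(y), key=y.count)

def predict_majority(model, x):
    return model

acc = k_fold_cv_accuracy([[0]] * 100, [1] * 70 + [0] * 30,
                         fit_majority, predict_majority)
```

With a 70/30 class split, the majority-class baseline scores 0.7; a claim like the study's 75.0% is only meaningful relative to such a baseline.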
The most informative predictors of treatment outcome were baseline MADRS score for the current episode; impairment of family, social, and work life; the timespan between first and last depressive episode; severity; suicidal risk; age; body mass index; and the number of lifetime depressive episodes as well as lifetime duration of hospitalization. With the application of the machine learning algorithm RF, an efficient <span class="hlt">prediction</span> model with an accuracy of 75.0% for forecasting treatment outcome could be generated, thus surpassing the <span class="hlt">predictive</span> capabilities of clinical evaluation. We also supply a simplified algorithm of 15 easily collected clinical and sociodemographic predictors that can be obtained within approximately 10 minutes, which reached an accuracy of 70.6%. Thus, we are confident that our model will be validated within other samples to advance an <span class="hlt">accurate</span> <span class="hlt">prediction</span> model fit for clinical usage in TRD. © Copyright 2017 Physicians Postgraduate Press, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=Base+AND+data+AND+memory&id=EJ831493','ERIC'); return false;" href="https://eric.ed.gov/?q=Base+AND+data+AND+memory&id=EJ831493"><span>Base Rates, Contingencies, and <span class="hlt">Prediction</span> Behavior</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kareev, Yaakov; Fiedler, Klaus; Avrahami, Judith</p> <p>2009-01-01</p> <p>A skew in the base rate of upcoming events can often provide a better cue for <span class="hlt">accurate</span> <span class="hlt">predictions</span> than a contingency between signals and events. 
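The base-rate advantage described here can be made concrete with a small calculation (a generic illustration, not the authors' stimuli): when events are skewed and the signal carries no information, always predicting the frequent event beats probability matching.

```python
# With P(event) = 0.7 and an uninformative signal:
p = 0.7
maximizing = max(p, 1 - p)            # always predict the frequent event
matching = p * p + (1 - p) * (1 - p)  # probability matching

# maximizing = 0.70 vs. matching = 0.58: exploiting the base rate wins.
```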
The authors study <span class="hlt">prediction</span> behavior and test people's sensitivity to both base rate and contingency; they also examine people's ability to compare the benefits of both for <span class="hlt">prediction</span>. They formalize…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25912282','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25912282"><span>Daily FOUR score assessment provides <span class="hlt">accurate</span> prognosis of long-term outcome in out-of-hospital cardiac arrest.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L</p> <p>2015-05-01</p> <p>The <span class="hlt">accurate</span> <span class="hlt">prediction</span> of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) is well adapted to mechanically ventilated patients and does not depend on verbal response. To evaluate the ability of FOUR assessed by intensivists to <span class="hlt">accurately</span> <span class="hlt">predict</span> outcome in OHCA. We prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and evolution of Glasgow Coma Scale, evolution of FOUR score over the three first days was able to <span class="hlt">predict</span> unfavorable outcome more precisely. 
Thus, absence of improvement or worsening from day 1 to day 3 of FOUR had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV to <span class="hlt">predict</span> unfavorable outcome. Similarly, a FOUR brainstem response score of 0 at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV to <span class="hlt">predict</span> unfavorable outcome. The absence of improvement or worsening from day 1 to day 3 of FOUR evaluated by intensivists provides an <span class="hlt">accurate</span> prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4965072','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4965072"><span>Unscented Kalman Filter-Trained Neural Networks for Slip Model <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Zhencai; Wang, Yang; Liu, Zhen</p> <p>2016-01-01</p> <p>The purpose of this work is to investigate the <span class="hlt">accurate</span> trajectory tracking control of a wheeled mobile robot (WMR) based on the slip model <span class="hlt">prediction</span>. Generally, a nonholonomic WMR may increase the slippage risk when traveling on outdoor unstructured terrain (such as longitudinal and lateral slippage of wheels). In order to control a WMR stably and <span class="hlt">accurately</span> under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time.
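The specificity, sensitivity, PPV, and NPV figures quoted for the FOUR score above all follow from a single 2×2 contingency table; a minimal sketch (the counts in the test are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {"sens": sensitivity, "spec": specificity, "ppv": ppv, "npv": npv}
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of the outcome in the cohort, which is why the abstract's high PPV coexists with a modest NPV.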
This method exploits the model approximating capabilities of nonlinear state–space NN, and the unscented Kalman filter is used to train NN’s weights online. The slip parameters can be estimated and used to <span class="hlt">predict</span> the time series of deviation velocity, which can be used to compensate control inputs of a WMR. The <span class="hlt">results</span> of numerical simulation show that the desired trajectory tracking control can be performed by <span class="hlt">predicting</span> the nonlinear slip model. PMID:27467703</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29531428','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29531428"><span><span class="hlt">Accurate</span>, <span class="hlt">predictable</span>, repeatable micro-assembly technology for polymer, microfluidic modules.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Tae Yoon; Han, Kyudong; Barrett, Dwhyte O; Park, Sunggook; Soper, Steven A; Murphy, Michael C</p> <p>2018-01-01</p> <p>A method for the design, construction, and assembly of modular, polymer-based, microfluidic devices using simple micro-assembly technology was demonstrated to build an integrated fluidic system consisting of vertically stacked modules for carrying out multi-step molecular assays. As an example of the utility of the modular system, point mutation detection using the ligase detection reaction (LDR) following amplification by the polymerase chain reaction (PCR) was carried out. Fluid interconnects and standoffs ensured that temperatures in the vertically stacked reactors were within ± 0.2 C° at the center of the temperature zones and ± 1.1 C° overall. The vertical spacing between modules was confirmed using finite element models (ANSYS, Inc., Canonsburg, PA) to simulate the steady-state temperature distribution for the assembly. 
Passive alignment structures, including a hemispherical pin-in-hole, a hemispherical pin-in-slot, and a plate-plate lap joint, were developed using screw theory to enable <span class="hlt">accurate</span> exactly constrained assembly of the microfluidic reactors, cover sheets, and fluid interconnects to facilitate the modular approach. The mean mismatch between the centers of adjacent through holes was 64 ± 7.7 μm, significantly reducing the dead volume necessary to accommodate manufacturing variation. The microfluidic components were easily assembled by hand and the assembly of several different configurations of microfluidic modules for executing the assay was evaluated. Temperatures were measured in the desired range in each reactor. The biochemical performance was comparable to that obtained with benchtop instruments, but took less than 45 min to execute, half the time.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22974047','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22974047"><span>Absolute Hounsfield unit measurement on noncontrast computed tomography cannot <span class="hlt">accurately</span> <span class="hlt">predict</span> struvite stone composition.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj</p> <p>2013-02-01</p> <p>The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can <span class="hlt">accurately</span> <span class="hlt">predict</span> the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. 
A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and also HU/HUD to % of each component within the stone. Significance was set at p<0.05. Forty-four patients met the inclusion criteria. Struvite was the most prevalent component with a mean percentage of 50.1%±17.7%. Mean HU and HUD were 820.2±357.9 and 67.5±54.9, respectively. Struvite component analysis revealed a nonsignificant positive correlation with HU (R=0.017; p=0.912) and a nonsignificant negative correlation with HUD (R=-0.20; p=0.898). Overall, 3 (6.8%) had <20% of struvite component; 11 (25%), 25 (56.8%), and 5 (11.4%) had 21% to 40%, 41% to 60%, and 61% to 80% of struvite, respectively. ANOVA revealed no difference among groups regarding HU (p=0.68) and HUD (p=0.37), with important overlaps. When comparing pure struvite stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09) but HUD was significantly lower for pure stones (27.9±23.6 v 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists.
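The HU density (HUD) used in the struvite study above is simply mean attenuation normalized by stone size; a one-function sketch (the example values are illustrative, not patient measurements):

```python
def hu_density(mean_hu, largest_transverse_diameter_mm):
    """HU density (HUD): mean Hounsfield units divided by the stone's
    largest transverse diameter, per the definition in the abstract."""
    if largest_transverse_diameter_mm <= 0:
        raise ValueError("diameter must be positive")
    return mean_hu / largest_transverse_diameter_mm

# A hypothetical 10 mm stone averaging 800 HU has HUD = 80.0
```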
A low HUD may increase the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010032276','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010032276"><span>Evaluation of Turbulence-Model Performance as Applied to Jet-Noise <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.</p> <p>1998-01-01</p> <p>The <span class="hlt">accurate</span> <span class="hlt">prediction</span> of jet noise is possible only if the jet flow field can be <span class="hlt">predicted</span> <span class="hlt">accurately</span>. <span class="hlt">Predictions</span> for the mean velocity and turbulence quantities in the jet flowfield are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in <span class="hlt">predicting</span> those quantities most important to jet noise <span class="hlt">prediction</span>, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data is available.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2549361','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2549361"><span>Do doctors <span class="hlt">accurately</span> assess coronary risk in their patients? Preliminary <span class="hlt">results</span> of the coronary health assessment study.</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Grover, S. A.; Lowensteyn, I.; Esrey, K. 
L.; Steinert, Y.; Joseph, L.; Abrahamowicz, M.</p> <p>1995-01-01</p> <p>OBJECTIVE--To evaluate the ability of doctors in primary care to assess their patients' risk of coronary heart disease. DESIGN--Questionnaire survey. SETTING--Continuing medical education meetings, Ontario and Quebec, Canada. SUBJECTS--Community based doctors who agreed to enroll in the coronary health assessment study. MAIN OUTCOME MEASURE--Ratings of coronary risk factors and estimates by doctors of relative and absolute coronary risk of two hypothetical patients and the "average" 40 year old Canadian man and 70 year old Canadian woman. <span class="hlt">RESULTS</span>--253 doctors answered the questionnaire. For 30 year olds the doctors rated cigarette smoking as the most important risk factor and raised serum triglyceride concentrations as the least important; for 70 year old patients they rated diabetes as the most important risk factor and raised serum triglyceride concentrations as the least important. They rated each individual risk factor as significantly less important for 70 year olds than for 30 year olds (all risk factors, P < 0.001). They showed a strong understanding of the relative importance of specific risk factors, and most were confident in their ability to estimate coronary risk. While doctors <span class="hlt">accurately</span> estimated the relative risk of a specific patient (compared with the average adult) they systematically overestimated the absolute baseline risk of developing coronary disease and the risk reductions associated with specific interventions. CONCLUSIONS--Despite guidelines on targeting patients at high risk of coronary disease, <span class="hlt">accurate</span> assessment of coronary risk remains difficult for many doctors. Additional strategies must be developed to help doctors better assess their patients' coronary risk.
PMID:7728035</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AGUFM.A41G0204S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AGUFM.A41G0204S"><span><span class="hlt">Predicting</span> Tropical Cyclogenesis with a Global Mesoscale Model: Preliminary <span class="hlt">Results</span> with Very Severe Cyclonic Storm Nargis (2008)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, B.; Tao, W.; Atlas, R.</p> <p>2008-12-01</p> <p>Very Severe Cyclonic Storm Nargis, the deadliest named tropical cyclone (TC) in the North Indian Ocean Basin, devastated Burma (Myanmar) in May 2008, causing tremendous damage and numerous fatalities. An increased lead time in the <span class="hlt">prediction</span> of TC Nargis would have increased the warning time and may therefore have saved lives and reduced economic damage. Recent advances in high-resolution global models and supercomputers have shown the potential for improving TC track and intensity forecasts, presumably by improving multi-scale simulations. The key but challenging questions to be answered include: (1) if and how realistic, in terms of timing, location and TC general structure, the global mesoscale model (GMM) can simulate TC genesis and (2) under what conditions can the model extend the lead time of TC genesis forecasts. In this study, we focus on genesis <span class="hlt">prediction</span> for TCs in the Indian Ocean with the GMM. Preliminary real-data simulations show that the initial formation and intensity variations of TC Nargis can be realistically <span class="hlt">predicted</span> at a lead time of up to 5 days. 
These simulations also suggest that the <span class="hlt">accurate</span> representations of a westerly wind burst (WWB) and an equatorial trough, associated with monsoon circulations and/or a Madden-Julian Oscillation (MJO), are important for <span class="hlt">predicting</span> the formation of this kind of TC. In addition to the WWB and equatorial trough, other favorable environmental conditions will be examined, which include enhanced monsoonal circulation, upper-level outflow, low- and middle-level moistening, and surface fluxes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19760010514&hterms=runoff&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Drunoff','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19760010514&hterms=runoff&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Drunoff"><span>Remote sensing techniques for <span class="hlt">prediction</span> of watershed runoff</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blanchard, B. J.</p> <p>1975-01-01</p> <p>Hydrologic parameters of watersheds for use in mathematical models and as design criteria for flood detention structures are sometimes difficult to quantify using conventional measuring systems. The advent of remote sensing devices developed in the past decade offers the possibility that watershed characteristics such as vegetative cover, soils, soil moisture, etc., may be quantified rapidly and economically. Experiments with visible and near infrared data from the LANDSAT-1 multispectral scanner indicate a simple technique for calibration of runoff equation coefficients is feasible. The technique was tested on 10 watersheds in the Chickasha area and test <span class="hlt">results</span> show more <span class="hlt">accurate</span> runoff coefficients were obtained than with conventional methods. 
The technique worked equally well using a dry fall scene. The runoff equation coefficients were then <span class="hlt">predicted</span> for 22 subwatersheds with flood detention structures. <span class="hlt">Predicted</span> values were again more <span class="hlt">accurate</span> than coefficients produced by conventional methods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1332708','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1332708"><span>Univariate Time Series <span class="hlt">Prediction</span> of Solar Power Using a Hybrid Wavelet-ARMA-NARX <span class="hlt">Prediction</span> Method</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng</p> <p></p> <p>This paper proposes a new hybrid method for super short-term solar power <span class="hlt">prediction</span>. Solar output power usually has a complex, nonstationary, and nonlinear characteristic due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics is fast and nearly inertia-free. An <span class="hlt">accurate</span> super short-term <span class="hlt">prediction</span> is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to <span class="hlt">predict</span> one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), while the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved subseries for <span class="hlt">prediction</span>.
The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA <span class="hlt">prediction</span>. The proposed method is applied to the data captured from UCLA solar PV panels and the <span class="hlt">results</span> are compared with some of the common and most recent solar power <span class="hlt">prediction</span> methods. The <span class="hlt">results</span> validate the effectiveness of the proposed approach and show a considerable improvement in <span class="hlt">prediction</span> precision.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1127084.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1127084.pdf"><span>Early <span class="hlt">Prediction</span> of Student Dropout and Performance in MOOCSs Using Higher Granularity Temporal Information</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ye, Cheng; Biswas, Gautam</p> <p>2014-01-01</p> <p>Our project is motivated by the early dropout and low completion rate problem in MOOCs. We have extended traditional features for MOOC analysis with richer and higher granularity information to make more <span class="hlt">accurate</span> <span class="hlt">predictions</span> of dropout and performance.
The <span class="hlt">results</span> show that finer-grained temporal information increases the <span class="hlt">predictive</span> power in the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27456080','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27456080"><span><span class="hlt">Predicting</span> prolonged dose titration in patients starting warfarin.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Finkelman, Brian S; French, Benjamin; Bershaw, Luanne; Brensinger, Colleen M; Streiff, Michael B; Epstein, Andrew E; Kimmel, Stephen E</p> <p>2016-11-01</p> <p>Patients initiating warfarin therapy generally experience a dose-titration period of weeks to months, during which time they are at higher risk of both thromboembolic and bleeding events. <span class="hlt">Accurate</span> <span class="hlt">prediction</span> of prolonged dose titration could help clinicians determine which patients might be better treated by alternative anticoagulants that, while more costly, do not require dose titration. A <span class="hlt">prediction</span> model was derived in a prospective cohort of patients starting warfarin (n = 390), using Cox regression, and validated in an external cohort (n = 663) from a later time period. Prolonged dose titration was defined as a dose-titration period >12 weeks. Predictor variables were selected using a modified best subsets algorithm, using leave-one-out cross-validation to reduce overfitting. The final model had five variables: warfarin indication, insurance status, number of doctor's visits in the previous year, smoking status, and heart failure. The area under the ROC curve (AUC) in the derivation cohort was 0.66 (95%CI 0.60, 0.74) using leave-one-out cross-validation, but only 0.59 (95%CI 0.54, 0.64) in the external validation cohort, and varied across clinics. 
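The area under the ROC curve reported in the warfarin study above can be computed without any library via its rank interpretation: the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted as one half. A minimal sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank formulation:
    the fraction of positive/negative pairs where the positive case
    scores higher, counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale 0.5 is chance and 1.0 is perfect discrimination, which is why the abstract treats an externally validated AUC of 0.59 as clinically unhelpful.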
Including genetic factors in the model did not improve the area under the ROC curve (0.59; 95%CI 0.54, 0.65). Relative utility curves indicated that the model was unlikely to provide a clinically meaningful benefit compared with no <span class="hlt">prediction</span>. Our <span class="hlt">results</span> suggest that prolonged dose titration cannot be <span class="hlt">accurately</span> <span class="hlt">predicted</span> in warfarin patients using traditional clinical, social, and genetic predictors, and that <span class="hlt">accurate</span> <span class="hlt">prediction</span> will need to accommodate heterogeneities across clinical sites and over time. Copyright © 2016 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25432298','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25432298"><span><span class="hlt">Prediction</span> of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim</p> <p>2015-08-01</p> <p><span class="hlt">Prediction</span> of hearing loss in noisy workplaces is considered to be an important aspect of a hearing conservation program. Artificial intelligence, as a new approach, can be used to <span class="hlt">predict</span> complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the <span class="hlt">prediction</span> of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, the audiometric test was carried out using a calibrated audiometer.
The personal noise exposure was also measured using a noise dosimeter in the workers' workstations. Finally, the data obtained on five variables that can influence hearing loss were used to develop the <span class="hlt">prediction</span> model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. Each network had one hidden layer with approximately 5 to 15 neurons. The best-performing network, with one hidden layer and ten neurons, could <span class="hlt">accurately</span> <span class="hlt">predict</span> the hearing loss threshold with RMSE = 2.6 dB and R(2) = 0.89. The <span class="hlt">results</span> also confirmed that neural networks could provide more <span class="hlt">accurate</span> <span class="hlt">predictions</span> than multiple regressions. Since occupational hearing loss is frequently incurable, <span class="hlt">results</span> of <span class="hlt">accurate</span> <span class="hlt">prediction</span> can be used by occupational health experts to modify and improve noise exposure conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5080976','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5080976"><span>Toward <span class="hlt">Accurate</span> and Quantitative Comparative Metagenomics</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nayfach, Stephen; Pollard, Katherine S.</p> <p>2016-01-01</p> <p>Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies.
Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable <span class="hlt">accurate</span> comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for <span class="hlt">predictive</span> ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4904691','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4904691"><span>Point of Care Ultrasound <span class="hlt">Accurately</span> Distinguishes Inflammatory from Noninflammatory Disease in Patients Presenting with Abdominal Pain and Diarrhea</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo</p> <p>2016-01-01</p> <p>Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, <span class="hlt">accurate</span>, and readily available are desirable. Such approaches may decrease time to diagnosis and better utilize limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy for gastroenterologist performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold standard ileocolonoscopy. Methods. 
A prospective, single-center study was conducted on convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive <span class="hlt">predictive</span> value (PPV), and negative <span class="hlt">predictive</span> value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. <span class="hlt">Results</span>. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can <span class="hlt">accurately</span> be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve allocation of endoscopic resources, and facilitate initiation of appropriate medical therapy. PMID:27446838</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19708729','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19708729"><span>Neural network approach to quantum-chemistry data: <span class="hlt">accurate</span> <span class="hlt">prediction</span> of density functional theory energies.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Balabin, Roman M; Lomakina, Ekaterina I</p> <p>2009-08-21</p> <p>Artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing by applying BLYP, B3LYP, and BMK density functionals. Hartree-Fock <span class="hlt">results</span> were reported for comparison. 
Furthermore, constitutional molecular descriptors (CDs) and quantum-chemical molecular descriptors (QDs) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The usage of several low-level energy values was found to greatly reduce the <span class="hlt">prediction</span> error. The expected error (mean absolute deviation) for the ANN approximation to DFT energies was 0.6+/-0.2 kcal mol(-1). In addition, the comparison of the different density functionals with the basis sets and the comparison of multiple linear regression <span class="hlt">results</span> were also provided. The CDs were found to overcome the limitations of the QDs. Furthermore, the effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and the benchmark <span class="hlt">results</span> were provided.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3926285','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3926285"><span>Nonexposure <span class="hlt">Accurate</span> Location K-Anonymity Algorithm in LBS</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's <span class="hlt">accurate</span> coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the <span class="hlt">accurate</span> locations of all users.
Therefore, location cloaking without exposing the user's <span class="hlt">accurate</span> location to any party is urgently needed. In this paper, we present two such nonexposure <span class="hlt">accurate</span> location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by the users, instead of directly on their <span class="hlt">accurate</span> coordinates. Experimental <span class="hlt">results</span> show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations at all times, and can generate smaller ASRs. PMID:24605060</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920018978','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920018978"><span>Future missions studies: Combining Schatten's solar activity <span class="hlt">prediction</span> model with a chaotic <span class="hlt">prediction</span> model</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ashrafi, S.</p> <p>1991-01-01</p> <p>K. Schatten (1991) recently developed a method for combining his <span class="hlt">prediction</span> model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar <span class="hlt">prediction</span> model (KS) uses a dynamo to mimic solar dynamics, <span class="hlt">accurate</span> <span class="hlt">prediction</span> is limited to long-term solar behavior (10 to 20 years). The chaotic <span class="hlt">prediction</span> model (SA) uses the recently developed techniques of nonlinear dynamics to <span class="hlt">predict</span> solar activity. It can be used to <span class="hlt">predict</span> activity only up to the horizon. 
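The grid-ID cloaking idea in the K-anonymity abstract above can be illustrated with a toy sketch: given only the grid-cell IDs users reported, grow a square region around the querying user's cell until it covers at least K users. This is an illustrative simplification, not the paper's exact algorithm:

```python
from collections import Counter

def cloak(user_cell, reported_cells, k):
    """Expand a square anonymous spatial region (ASR) of grid-cell IDs
    around the user's cell until it covers at least k reported users.
    Cells are (row, col) IDs; exact coordinates are never used."""
    counts = Counter(reported_cells)
    r = 0
    while True:
        region = [(user_cell[0] + dr, user_cell[1] + dc)
                  for dr in range(-r, r + 1) for dc in range(-r, r + 1)]
        if sum(counts.get(c, 0) for c in region) >= k:
            return region
        r += 1

# Five users reported these grid cells (toy data):
cells = [(0, 0), (0, 1), (1, 1), (2, 2), (5, 5)]
asr = cloak((1, 1), cells, k=3)
print(len(asr))   # a 3x3 region already contains 3 users
```

Because the server only ever sees cell IDs, the user's exact coordinates are never exposed, which is the nonexposure property the abstract emphasizes.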
In theory, the chaotic <span class="hlt">prediction</span> should be several orders of magnitude better than statistical <span class="hlt">predictions</span> up to that horizon; beyond the horizon, chaotic <span class="hlt">predictions</span> would theoretically be just as good as statistical <span class="hlt">predictions</span>. Therefore, chaos theory puts a fundamental limit on <span class="hlt">predictability</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NatSR...744649A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NatSR...744649A"><span>Generating highly <span class="hlt">accurate</span> <span class="hlt">prediction</span> hypotheses through collaborative ensemble learning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco</p> <p>2017-03-01</p> <p>Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their <span class="hlt">predictive</span> capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Because the former improves stability through variance reduction and the latter ameliorates overfitting, a multi-model that combines both strives toward a comprehensive balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. 
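The bagging half of the bagged-boosting scheme described above can be sketched in a few lines: train weak learners on bootstrap resamples and combine them by majority vote. This toy uses decision stumps on a 1-D separable problem and omits the boosting and collaboration stages entirely:

```python
import random

def stump_fit(X, y):
    """Best single-feature threshold classifier (decision stump) by 0-1 error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[j] > t else -sign for x in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best[1:]

def stump_predict(model, x):
    j, t, sign = model
    return sign if x[j] > t else -sign

def bagged_predict(models, x):
    """Majority vote over bootstrap-trained stumps (the bagging component only)."""
    return 1 if sum(stump_predict(m, x) for m in models) >= 0 else -1

random.seed(0)
X = [[i / 10.0] for i in range(20)]
y = [-1] * 10 + [1] * 10                  # separable toy labels
models = []
for _ in range(11):                       # 11 bootstrap replicates
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
print(bagged_predict(models, [0.25]), bagged_predict(models, [1.55]))
```

Variance reduction comes from the vote: individual stumps trained on different resamples disagree near the class boundary, but their majority is more stable than any single stump.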
Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/8785690','PUBMED'); return false;" 
href="https://www.ncbi.nlm.nih.gov/pubmed/8785690"><span>Age differences in recall and <span class="hlt">predicting</span> recall of action events and words.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>McDonald-Miszczak, L; Hubley, A M; Hultsch, D F</p> <p>1996-03-01</p> <p>Age differences in recall and <span class="hlt">prediction</span> of recall were examined with different memory tasks. We asked 36 younger (19-28 yrs) and 36 older (60-81 yrs) women to provide both global and item-by-item <span class="hlt">predictions</span> of their recall, and then to recall either (a) Subject Performance Tasks (SPTs), (b) verb-noun word-pairs memorized in list-like fashion (Word-Pairs), or (c) nonsense verb-noun word-pairs (Nonsense-Pairs) over three experimental trials. Based on previous research, we hypothesized that these tasks would vary in relative difficulty and flexibility of encoding. The <span class="hlt">results</span> indicated that (a) age differences in global <span class="hlt">predictions</span> (task-specific self-efficacy) and recall performance across trials were minimized with SPT as compared with verbal materials, (b) global <span class="hlt">predictions</span> were higher and more <span class="hlt">accurate</span> for SPT as compared to verbal materials, and (c) item-by-item <span class="hlt">predictions</span> were most <span class="hlt">accurate</span> for materials encoded with the most flexibility (Nonsense-Pairs). 
The <span class="hlt">results</span> suggest that SPTs may provide some level of environmental support to reduce age differences in performance and task-specific self-efficacy, but that memory monitoring may depend on specific characteristics of the stimuli (i.e., flexibility of encoding) rather than their verbal or nonverbal nature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9704E..05T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9704E..05T"><span>Raman spectroscopy for highly <span class="hlt">accurate</span> estimation of the age of refrigerated porcine muscle</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Timinis, Constantinos; Pitris, Costas</p> <p>2016-03-01</p> <p>The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an <span class="hlt">accurate</span> indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. 
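The classification step named above, linear discriminant analysis over processed spectra, can be illustrated with a two-class Fisher discriminant on synthetic "spectra"; this is a generic stand-in on toy data, not the authors' multi-class pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for processed spectral features of two age groups.
fresh = rng.normal(0.0, 1.0, size=(30, 5))
aged = rng.normal(1.5, 1.0, size=(30, 5))

def fisher_lda(a, b):
    """Two-class Fisher discriminant: w = Sw^-1 (mu_a - mu_b)."""
    mu_a, mu_b = a.mean(0), b.mean(0)
    sw = np.cov(a.T) + np.cov(b.T)          # pooled within-class scatter
    w = np.linalg.solve(sw, mu_a - mu_b)
    thresh = w @ (mu_a + mu_b) / 2          # midpoint decision threshold
    return w, thresh

w, thresh = fisher_lda(fresh, aged)
# Projections above the threshold fall on the "fresh" side.
acc = (np.sum(fresh @ w > thresh) + np.sum(aged @ w <= thresh)) / 60
print(f"training accuracy: {acc:.2f}")
```

The 8-class, 2-day-step problem in the study would extend this to multiple discriminant directions and, as the abstract notes, be evaluated with cross-validation rather than training accuracy.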
Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was <span class="hlt">predicted</span> with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology <span class="hlt">predicted</span> the sample age far more <span class="hlt">accurately</span> than any previous report in the literature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27159856','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27159856"><span>A Systematic Approach to <span class="hlt">Predicting</span> Spring Force for Sagittal Craniosynostosis Surgery.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Guangming; Tan, Hua; Qian, Xiaohua; Zhang, Jian; Li, King; David, Lisa R; Zhou, Xiaobo</p> <p>2016-05-01</p> <p>Spring-assisted surgery (SAS) can effectively treat scaphocephaly by reshaping crania with the appropriate spring force. However, it is difficult to <span class="hlt">accurately</span> estimate spring force without considering the biomechanical properties of tissues. This study presents and validates a reliable system to <span class="hlt">accurately</span> <span class="hlt">predict</span> the spring force for sagittal craniosynostosis surgery. The authors randomly chose 23 patients who underwent SAS and had been followed for at least 2 years. An elastic model was designed to characterize the biomechanical behavior of calvarial bone tissue for each individual. 
After simulating the contact force on the <span class="hlt">accurate</span> position of the skull strip with the springs, the finite element method was applied to calculate the stress of each tissue node based on the elastic model. A support vector regression approach was then used to model the relationships between biomechanical properties generated from spring force, bone thickness, and the change of cephalic index after surgery. Therefore, for a new patient, the optimal spring force can be <span class="hlt">predicted</span> based on the learned model with virtual spring simulation and a dynamic programming approach prior to SAS. Leave-one-out cross-validation was implemented to assess the accuracy of our <span class="hlt">prediction</span>. As a <span class="hlt">result</span>, the mean <span class="hlt">prediction</span> accuracy of this model was 93.35%, demonstrating the great potential of this model as a useful adjunct for preoperative planning.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20483330','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20483330"><span>A multiscale red blood cell model with <span class="hlt">accurate</span> mechanics, rheology, and dynamics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em</p> <p>2010-05-19</p> <p>Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to <span class="hlt">predict</span> RBC mechanics, rheology, and dynamics in agreement with experiments. 
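The leave-one-out protocol used in the spring-force study above is a generic procedure: each of the 23 patients is held out once, the model is fit on the rest, and the held-out error is averaged. A minimal sketch, with linear least squares standing in for their support vector regression and entirely synthetic data:

```python
import numpy as np

def loocv_mae(X, y, fit, predict):
    """Leave-one-out cross-validation: hold each sample out once."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        errs.append(abs(predict(model, X[i]) - y[i]))
    return float(np.mean(errs))

# Linear least squares is an assumed stand-in for the SVR of the abstract.
fit = lambda X, y: np.linalg.lstsq(
    np.column_stack([X, np.ones(len(X))]), y, rcond=None)[0]
predict = lambda coef, x: np.append(x, 1.0) @ coef

rng = np.random.default_rng(2)
X = rng.normal(size=(23, 2))      # e.g. spring force, bone thickness (toy)
y = X @ np.array([0.5, -0.3]) + 1.0 + rng.normal(0, 0.05, 23)
print(f"LOOCV MAE: {loocv_mae(X, y, fit, predict):.3f}")
```

With only 23 patients, leave-one-out is a natural choice: it uses 22 of 23 samples for every fit while still giving an out-of-sample error estimate.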
Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical <span class="hlt">predictions</span>, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot <span class="hlt">accurately</span> represent the RBC's rheological properties and its dynamics, and therefore <span class="hlt">accurate</span> modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014E%26ES...22b2006K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014E%26ES...22b2006K"><span>Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance <span class="hlt">prediction</span> accuracy-</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ko, P.; Kurosawa, S.</p> <p>2014-03-01</p> <p>The understanding and <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. 
In this paper, a high-accuracy turbine and cavitation performance <span class="hlt">prediction</span> method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is <span class="hlt">predicted</span> by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The <span class="hlt">prediction</span> accuracy is evaluated by comparison with the model test <span class="hlt">results</span> of an Ns 400 Kaplan model turbine. As a <span class="hlt">result</span>, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are <span class="hlt">accurately</span> <span class="hlt">predicted</span>. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. 
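For reference, the bubble-dynamics model named above builds on the classical Rayleigh-Plesset equation for the bubble radius R(t) in an incompressible liquid of density ρ and viscosity μ with surface tension σ; the abstract does not specify the paper's modification, so only the standard form is given here:

```latex
\rho \left( R \ddot{R} + \tfrac{3}{2} \dot{R}^{2} \right)
  = p_{B}(t) - p_{\infty}(t) - \frac{2\sigma}{R} - \frac{4 \mu \dot{R}}{R}
```

where p_B is the pressure inside the bubble and p_∞ the far-field liquid pressure; the left side is the liquid's inertial response and the last two terms are the surface-tension and viscous resistances to radial motion.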
The evaluated <span class="hlt">prediction</span> method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4491972','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4491972"><span>Tissue resonance interaction <span class="hlt">accurately</span> detects colon lesions: A double-blind pilot study</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Dore, Maria P; Tufano, Marcello O; Pes, Giovanni M; Cuccu, Marianna; Farina, Valentina; Manca, Alessandra; Graham, David Y</p> <p>2015-01-01</p> <p>AIM: To investigate the performance of the tissue resonance interaction method (TRIM) for the non-invasive detection of colon lesions. METHODS: We performed a prospective single-center blinded pilot study of consecutive adults undergoing colonoscopy at the University Hospital in Sassari, Italy. Before patients underwent colonoscopy, they were examined by the TRIMprob, which detects differences in electromagnetic properties between pathological and normal tissues. All patients had completed the polyethylene glycol-containing bowel prep for the colonoscopy procedure before being screened. During the procedure the subjects remained fully dressed. A hand-held probe was moved over the abdomen and variations in electromagnetic signals were recorded for 3 spectral lines (462-465 MHz, 930 MHz, and 1395 MHz). A single investigator, blind to any clinical information, performed the test using the TRIMprob system. Abnormal signals were identified and recorded as malignant or benign (adenoma or hyperplastic polyps). Findings were compared with those from colonoscopy with histologic confirmation. Statistical analysis was performed by χ2 test. 
<span class="hlt">RESULTS</span>: A total of 305 consecutive patients fulfilling the inclusion criteria were enrolled over a period of 12 months. The most frequent indication for colonoscopy was abdominal pain (33%). The TRIMprob was well accepted by all patients; none spontaneously complained about the procedure, and no adverse effects were observed. TRIM proved inaccurate for polyp detection in patients with inflammatory bowel disease (IBD), and these patients were excluded, leaving 281 subjects (mean age 59 ± 13 years; 107 males). The TRIM detected and <span class="hlt">accurately</span> characterized all 12 adenocarcinomas and 135/137 polyps (98.5%), including all 64 adenomatous polyps (100%). The method identified cancers and polyps with 98.7% sensitivity, 96.2% specificity, and 97.5% diagnostic accuracy, compared to colonoscopy and histology analyses. The positive <span class="hlt">predictive</span> value was 96.7% and the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950017027','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950017027"><span>A time-<span class="hlt">accurate</span> finite volume method valid at all flow velocities</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kim, S.-W.</p> <p>1993-01-01</p> <p>A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. 
A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated <span class="hlt">results</span> show that the ITA is the most stable numerically and yields the most <span class="hlt">accurate</span> <span class="hlt">results</span>. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degenerated numerical <span class="hlt">results</span> obtained using the PISO are attributed to its second corrector step, which causes the numerical <span class="hlt">results</span> to deviate further from a divergence-free velocity field. The <span class="hlt">accurate</span> numerical <span class="hlt">results</span> obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on the Lagrangian-Eulerian coordinates. 
It is shown that the numerical method successfully <span class="hlt">predicts</span> the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5746097','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5746097"><span><span class="hlt">Accurate</span> <span class="hlt">prediction</span> of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan</p> <p>2017-01-01</p> <p>Information on the subcellular localization of apoptosis proteins is very important for understanding the mechanism of programmed cell death and the development of drugs. The <span class="hlt">prediction</span> of the subcellular localization of an apoptosis protein remains a challenging task, and accurate <span class="hlt">predictions</span> can help to understand protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization <span class="hlt">prediction</span>. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); the extracted feature information is then denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to the SVM classifier to <span class="hlt">predict</span> the subcellular location of apoptosis proteins. 
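The core of the feature pipeline just described starts from amino-acid composition; a minimal sketch of that first step, showing only plain composition (Chou's PseAAC additionally appends sequence-order correlation terms, and the PsePSSM and wavelet stages are omitted):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_vector(seq):
    """20-dimensional amino-acid composition: frequency of each residue.
    This is the core that PseAAC extends with sequence-order terms."""
    seq = seq.upper()
    n = len(seq)
    return [seq.count(a) / n for a in AMINO_ACIDS]

# Hypothetical short sequence for illustration:
v = aac_vector("MKVLAAGL")
print(len(v), round(sum(v), 6))
```

Because the 20 frequencies sum to 1 for any sequence of standard residues, the vector is length-normalized, which is what makes proteins of different lengths comparable as SVM inputs.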
Quite promising <span class="hlt">predictions</span> are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The <span class="hlt">results</span> indicate that the method proposed in this paper can remarkably improve the <span class="hlt">prediction</span> accuracy of apoptosis protein subcellular localization, which will be a supplementary tool for future proteomics research. PMID:29296195</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27095264','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27095264"><span>Can blind persons <span class="hlt">accurately</span> assess body size from the voice?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka</p> <p>2016-04-01</p> <p>Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can <span class="hlt">accurately</span> assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the <span class="hlt">prediction</span> that <span class="hlt">accurate</span> voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. 
Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for <span class="hlt">accurate</span> body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4881350','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4881350"><span>Can blind persons <span class="hlt">accurately</span> assess body size from the voice?</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Oleszkiewicz, Anna; Sorokowska, Agnieszka</p> <p>2016-01-01</p> <p>Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can <span class="hlt">accurately</span> assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the <span class="hlt">prediction</span> that <span class="hlt">accurate</span> voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20–65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. 
Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, congenitally blind, or had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for <span class="hlt">accurate</span> body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25821022','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25821022"><span><span class="hlt">Predicting</span> hepatitis B monthly incidence rates using weighted Markov chains and time series methods.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam</p> <p>2015-01-01</p> <p>Hepatitis B (HB) is a major cause of global mortality. <span class="hlt">Accurately</span> <span class="hlt">predicting</span> the trend of the disease can provide an appropriate basis for making health policy for disease prevention. This paper aimed to apply three different methods to <span class="hlt">predict</span> monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. 
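One of the models named above, Holt's exponential smoothing, maintains a smoothed level and trend and extrapolates them forward; a minimal sketch on hypothetical monthly rates (the smoothing parameters here are arbitrary, not the study's fitted values):

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=1):
    """Holt's linear exponential smoothing: update a level and a trend,
    then extrapolate the final pair for the requested horizon."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(steps)]

# Hypothetical declining monthly incidence rates (illustrative only):
rates = [5.0, 4.8, 4.9, 4.5, 4.4, 4.2, 4.1, 3.9]
print([round(f, 2) for f in holt_forecast(rates, steps=3)])
```

Note that plain Holt smoothing has no seasonal component; the Holt-Winters extension or SARIMA, as in the study, would be needed to model the seasonality the abstract mentions.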
The <span class="hlt">results</span> of the different methods were compared by the percentage of correctly <span class="hlt">predicted</span> incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly <span class="hlt">predicted</span> percentages of the first and second clusters for the WMC, HES and SARIMA methods were (100, 0), (84, 67) and (79, 47), respectively. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the <span class="hlt">results</span> of the three models indicated that, with respect to the existing seasonal trend and non-stationarity, HES gave the most <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the incidence rates.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3963956','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3963956"><span>Fast and <span class="hlt">Accurate</span> Multivariate Gaussian Modeling of Protein Families: <span class="hlt">Predicting</span> Residue Contacts and Protein-Interaction Partners</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea</p> <p>2014-01-01</p> <p>In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. 
direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their <span class="hlt">predictions</span> have been successfully implemented into tertiary and quaternary protein structure <span class="hlt">prediction</span> methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The <span class="hlt">resulting</span> statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the <span class="hlt">prediction</span> of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. 
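The key computation in the multivariate Gaussian variant above is that direct couplings appear as off-diagonal entries of the inverse covariance (precision) matrix, which separates direct from merely correlated pairs. A toy sketch on continuous data; real applications add sequence reweighting and regularization:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy "alignment": column 1 is directly coupled to column 0,
# column 2 is independent (a continuous relaxation of amino-acid variables).
n = 2000
x0 = rng.normal(size=n)
x1 = x0 + 0.3 * rng.normal(size=n)       # directly coupled to column 0
x2 = rng.normal(size=n)                  # independent column
data = np.column_stack([x0, x1, x2])

precision = np.linalg.inv(np.cov(data.T))
# Direct-coupling strength ~ magnitude of off-diagonal precision entries.
coupled = abs(precision[0, 1])
uncoupled = abs(precision[0, 2])
print(round(coupled, 3), round(uncoupled, 3))
```

The coupled pair yields a large off-diagonal precision entry while the independent pair's entry is near zero, which is exactly the signal contact-prediction methods rank.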
PMID:24663061</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5821340','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5821340"><span>A hybrid intelligent method for three-dimensional short-term <span class="hlt">prediction</span> of dissolved oxygen content in aquaculture</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang</p> <p>2018-01-01</p> <p>A precise <span class="hlt">predictive</span> model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly <span class="hlt">accurate</span> interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional <span class="hlt">prediction</span> can provide more <span class="hlt">accurate</span> <span class="hlt">results</span> and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content <span class="hlt">prediction</span> model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the <span class="hlt">predicted</span> <span class="hlt">results</span> of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional <span class="hlt">prediction</span> of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. 
PMID:29466394</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110008300','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110008300"><span><span class="hlt">Predicting</span> Airspace Capacity Impacts Using the Consolidated Storm <span class="hlt">Prediction</span> for Aviation</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Russell, Carl</p> <p>2010-01-01</p> <p>Convective weather is currently the largest contributor to air traffic delays in the United States. In order to make effective traffic flow management decisions to mitigate these delays, weather forecasts must be made as early and as <span class="hlt">accurately</span> as possible. A forecast product that could be used to mitigate convective weather impacts is the Consolidated Storm <span class="hlt">Prediction</span> for Aviation. This product provides forecasts of cloud water content and convective top heights at 0- to 8-hour look-ahead times. The objective of this study was to examine a method of <span class="hlt">predicting</span> the impact of convective weather on air traffic sector capacities using these forecasts. Polygons representing forecast convective weather were overlaid at multiple flight levels on a sector map to calculate the fraction of each sector covered by weather. The fractional volume coverage was used as the primary metric to determine convection's impact on sectors.
<span class="hlt">Results</span> reveal that the forecasts can be used to <span class="hlt">predict</span> the probability and magnitude of weather impacts on sector capacity up to eight hours in advance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4654735','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4654735"><span>Neural-scaled entropy <span class="hlt">predicts</span> the effects of nonlinear frequency compression on speech perception</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rallapalli, Varsha H.; Alexander, Joshua M.</p> <p>2015-01-01</p> <p>The Neural-Scaled Entropy (NSE) model quantifies information in the speech signal that has been altered beyond simple gain adjustments by sensorineural hearing loss (SNHL) and various signal processing. An extension of Cochlear-Scaled Entropy (CSE) [Stilp, Kiefte, Alexander, and Kluender (2010). J. Acoust. Soc. Am. 128(4), 2112–2126], NSE quantifies information as the change in 1-ms neural firing patterns across frequency. To evaluate the model, data from a study that examined nonlinear frequency compression (NFC) in listeners with SNHL were used because NFC can recode the same input information in multiple ways in the output, <span class="hlt">resulting</span> in different outcomes for different speech classes. Overall, <span class="hlt">predictions</span> were more <span class="hlt">accurate</span> for NSE than CSE. The NSE model <span class="hlt">accurately</span> described the observed degradation in recognition, and lack thereof, for consonants in a vowel-consonant-vowel context that had been processed in different ways by NFC. 
While NSE <span class="hlt">accurately</span> <span class="hlt">predicted</span> recognition of vowel stimuli processed with NFC, it underestimated them relative to a low-pass control condition without NFC. In addition, without modifications, it could not <span class="hlt">predict</span> the observed improvement in recognition for word final /s/ and /z/. Findings suggest that model modifications that include information from slower modulations might improve <span class="hlt">predictions</span> across a wider variety of conditions. PMID:26627780</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016amos.confE..49L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016amos.confE..49L"><span>Towards Relaxing the Spherical Solar Radiation Pressure Model for <span class="hlt">Accurate</span> Orbit <span class="hlt">Predictions</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lachut, M.; Bennett, J.</p> <p>2016-09-01</p> <p>The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit <span class="hlt">predictions</span> by relaxing the spherical assumption, is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach will account for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. 
Here, simulations are performed that systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus <span class="hlt">resulting</span> in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010050135','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010050135"><span>Towards <span class="hlt">Accurate</span> <span class="hlt">Prediction</span> of Turbulent, Three-Dimensional, Recirculating Flows with the NCC</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.</p> <p>2001-01-01</p> <p>The National Combustion Code (NCC) was used to calculate the steady-state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is considered rotationally periodic every ninety degrees. The final computational grid used was approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC <span class="hlt">results</span> were then compared to time-averaged Laser Doppler Velocimetry (LDV) data.
The LDV measurements were performed on the full geometry, but four ninths of the geometry was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well to the LDV data, but differences exist downstream. The comparison is encouraging, and shows that NCC can be used for future injector design studies. To improve the flow <span class="hlt">prediction</span> accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC, recommendations are given.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AdSpR..61..207Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AdSpR..61..207Z"><span>Motion <span class="hlt">prediction</span> of a non-cooperative space target</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan</p> <p>2018-01-01</p> <p>Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information of the space target is the premise to realize target capture. In this paper, motion <span class="hlt">prediction</span> of a free-floating non-cooperative target in space is studied and a motion <span class="hlt">prediction</span> algorithm is proposed. In order to <span class="hlt">predict</span> the motion of the free-floating non-cooperative target, dynamic parameters of the target must be firstly identified (estimated), such as inertia, angular momentum and kinetic energy and so on; then the <span class="hlt">predicted</span> motion of the target can be acquired by substituting these identified parameters into the Euler's equations of the target. <span class="hlt">Accurate</span> <span class="hlt">prediction</span> needs precise identification. 
This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. This method is based on two steps, (1) the rough estimation of the parameters is computed using the motion observation data to the target, and (2) the best estimation of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the <span class="hlt">predicted</span> motion, and the interior-point method (IPM) is chosen as the optimization algorithm, which starts at the rough estimate obtained in the first step and finds a global minimum to the objective function with the guidance of objective function's gradient. So the speed of IPM searching for the global minimum is fast, and an <span class="hlt">accurate</span> identification can be obtained in time. The numerical <span class="hlt">results</span> show that the proposed motion <span class="hlt">prediction</span> algorithm is able to <span class="hlt">predict</span> the motion of the target.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=62227&keyword=critical+AND+chain&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" 
href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=62227&keyword=critical+AND+chain&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span><span class="hlt">PREDICTING</span> CHEMICAL RESIDUES IN AQUATIC FOOD CHAINS</span></a></p> <p><a target="_blank" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>The need to <span class="hlt">accurately</span> <span class="hlt">predict</span> chemical accumulation in aquatic organisms is critical for a variety of environmental applications including the assessment of contaminated sediments. Approaches for <span class="hlt">predicting</span> chemical residues can be divided into two general classes, empirical an...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27174312','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27174312"><span>Disambiguating past events: <span class="hlt">Accurate</span> source memory for time and context depends on different retrieval processes.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Persson, Bjorn M; Ainge, James A; O'Connor, Akira R</p> <p>2016-07-01</p> <p>Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. 
Recently though, it has been suggested that a more <span class="hlt">accurate</span> model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved <span class="hlt">accurately</span> using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, <span class="hlt">resulting</span> in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of <span class="hlt">results</span> from Experiment 1. Dual process theory <span class="hlt">predicts</span> that it should only be possible to retrieve source context from an event using recollection, and our <span class="hlt">results</span> are consistent with this <span class="hlt">prediction</span>. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. 
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5218827','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5218827"><span>Psychosis <span class="hlt">prediction</span> and clinical utility in familial high-risk studies: Selective review, synthesis, and implications for early detection and intervention</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shah, Jai L.; Tandon, Neeraj; Keshavan, Matcheri S.</p> <p>2016-01-01</p> <p>Aim: <span class="hlt">Accurate</span> <span class="hlt">prediction</span> of which individuals will go on
to develop psychosis would assist early intervention and prevention paradigms. We sought to review investigations of prospective psychosis <span class="hlt">prediction</span> based on markers and variables examined in longitudinal familial high-risk (FHR) studies. Methods: We performed literature searches in MEDLINE, PubMed and PsycINFO for articles assessing performance characteristics of <span class="hlt">predictive</span> clinical tests in FHR studies of psychosis. Studies were included if they reported one or more <span class="hlt">predictive</span> variables in subjects at FHR for psychosis. We complemented this search strategy with references drawn from articles, reviews, book chapters and monographs. <span class="hlt">Results</span>: Across generations of familial high-risk projects, <span class="hlt">predictive</span> studies have investigated behavioral, cognitive, psychometric, clinical, neuroimaging, and other markers. Recent analyses have incorporated multivariate and multi-domain approaches to risk ascertainment, although with still generally modest <span class="hlt">results</span>. Conclusions: While a broad range of risk factors has been identified, no individual marker or combination of markers can at this time enable <span class="hlt">accurate</span> prospective <span class="hlt">prediction</span> of emerging psychosis for individuals at FHR. We outline the complex and multi-level nature of psychotic illness, the myriad of factors influencing its development, and methodological hurdles to <span class="hlt">accurate</span> and reliable <span class="hlt">prediction</span>. Prospects and challenges for future generations of FHR studies are discussed in the context of early detection and intervention strategies.
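One methodological hurdle behind the modest results above is quantifiable: at the low-to-moderate transition rates seen in high-risk cohorts, even a marker with good sensitivity and specificity yields a weak positive predictive value. A small illustration (all numbers are hypothetical, not taken from any specific FHR study):

```python
# Bayes-rule calculation of predictive values from test characteristics
# and a base rate; illustrates why screening accuracy degrades when the
# outcome is uncommon.
def predictive_values(sensitivity, specificity, base_rate):
    tp = sensitivity * base_rate
    fp = (1 - specificity) * (1 - base_rate)
    fn = (1 - sensitivity) * base_rate
    tn = specificity * (1 - base_rate)
    ppv = tp / (tp + fp)   # P(illness | positive test)
    npv = tn / (tn + fn)   # P(no illness | negative test)
    return ppv, npv

# Suppose ~10% of familial high-risk subjects transition (illustrative).
ppv, npv = predictive_values(sensitivity=0.80, specificity=0.80, base_rate=0.10)
print(f"PPV={ppv:.2f}  NPV={npv:.2f}")
```

An 80%-sensitive, 80%-specific marker at a 10% base rate flags mostly false positives, which is why combining markers across domains, as the review discusses, matters so much.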
PMID:23693118</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0068S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0068S"><span>Fatigue crack growth and life <span class="hlt">prediction</span> under mixed-mode loading</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sajith, S.; Murthy, K. S. R. K.; Robi, P. S.</p> <p>2018-04-01</p> <p>Fatigue crack growth life as a function of crack length is essential for the prevention of catastrophic failures from a damage tolerance perspective. In the damage tolerance design approach, principles of fracture mechanics are usually applied to <span class="hlt">predict</span> the fatigue life of structural components. Numerical <span class="hlt">prediction</span> of crack growth versus number of cycles is essential in damage tolerance design. For cracks under mixed mode I/II loading, the modified Paris law (da/dN = C(ΔKeq)^m) along with different equivalent stress intensity factor (ΔKeq) models is used for fatigue crack growth rate <span class="hlt">prediction</span>. A large number of ΔKeq models are available for mixed mode I/II loading, and the selection of the proper ΔKeq model has a significant impact on fatigue life <span class="hlt">prediction</span>. In the present investigation, the performance of ΔKeq models in fatigue life <span class="hlt">prediction</span> is compared against experimental findings, as there are no guidelines/suggestions available on the selection of these models for <span class="hlt">accurate</span> and/or conservative <span class="hlt">predictions</span> of fatigue life.
Within the limitations of availability of experimental data and currently available numerical simulation techniques, the <span class="hlt">results</span> of present study attempt to outline models that would provide <span class="hlt">accurate</span> and conservative life <span class="hlt">predictions</span>. Such a study aid the numerical analysts or engineers in the proper selection of the model for numerical simulation of the fatigue life. Moreover, the present investigation also suggests a procedure to enhance the accuracy of life <span class="hlt">prediction</span> using Paris law.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19970009636','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19970009636"><span><span class="hlt">Accurate</span> Finite Difference Algorithms</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Goodrich, John W.</p> <p>1996-01-01</p> <p>Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. 
Propagation with high order and high resolution algorithms can produce <span class="hlt">accurate</span> <span class="hlt">results</span> after O(10^6) periods of propagation with eight grid points per wavelength.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19610306','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19610306"><span>Commissioning a passive-scattering proton therapy nozzle for <span class="hlt">accurate</span> SOBP delivery.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Engelsman, M; Lu, H M; Herrup, D; Bussiere, M; Kooy, H M</p> <p>2009-06-01</p> <p>Proton radiotherapy centers that currently use passively scattered proton beams do field specific calibrations for a non-negligible fraction of treatment fields, which is time and resource consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for <span class="hlt">accurate</span> delivery of SOBPs of any range and modulation, and to <span class="hlt">predict</span> the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that only require one hour per week.
This manuscript reports on a protocol for generation of desired SOBPs and <span class="hlt">prediction</span> of dose output.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2832065','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2832065"><span>Commissioning a passive-scattering proton therapy nozzle for <span class="hlt">accurate</span> SOBP delivery</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.</p> <p>2009-01-01</p> <p>Proton radiotherapy centers that currently use passively scattered proton beams do field specific calibrations for a non-negligible fraction of treatment fields, which is time and resource consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for <span class="hlt">accurate</span> delivery of SOBPs of any range and modulation, and to <span class="hlt">predict</span> the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that only require one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and <span class="hlt">prediction</span> of dose output. 
PMID:19610306</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28481575','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28481575"><span><span class="hlt">Accurate</span> where it counts: Empathic accuracy on conflict and no-conflict days.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lazarus, Gal; Bar-Kalifa, Eran; Rafaeli, Eshkol</p> <p>2018-03-01</p> <p>When we are <span class="hlt">accurate</span> regarding our partners' negative moods, are we seen as more responsive (and do we see them as such) as a function of the presence/absence of conflict? In 2 daily diary studies, empathic accuracy (EA) was assessed by comparing targets' daily negative moods with perceivers' inferences of these moods. We hypothesized that conflict will be associated with reductions in perceived partner responsiveness (PPR) for both parties; that on no-conflict days, EA will be positively associated with both parties' PPR; that on conflict days, this positive association will be stronger for targets but will become negative for perceivers; and that regardless of conflict, overestimation (vs. underestimation) of negative moods will be tied with higher PPR for targets but with lower PPR for perceivers. Thirty-six (Sample 1) and 77 (Sample 2) committed couples completed daily diaries (for 21 or 35 days, respectively). We utilized multilevel polynomial regression with response surface analyses, a sophisticated approach for studying multisource data of this sort (Edwards & Parry, 1993). 
<span class="hlt">Results</span> partially supported our hypotheses: conflict was tied to reduced PPR; on no-conflict days, EA was not consistently <span class="hlt">predictive</span> of target or perceiver PPR; on conflict days, EA <span class="hlt">predicted</span> increased target PPR but decreased perceiver PPR; finally, overestimation <span class="hlt">predicted</span> increased target PPR on no-conflict days and decreased perceiver PPR regardless of conflict. These <span class="hlt">results</span> highlight the double-edged effects of EA on conflict days, and the importance of investigating dyadic EA in a context-sensitive approach. (PsycINFO Database Record (c) 2018 APA, all rights reserved).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMGC41C1101M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMGC41C1101M"><span>Multi-scale <span class="hlt">predictions</span> of coniferous forest mortality in the northern hemisphere</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McDowell, N. G.</p> <p>2015-12-01</p> <p>Global temperature rise and extremes accompanying drought threaten forests and their associated climatic feedbacks. Our incomplete understanding of the fundamental physiological thresholds of vegetation mortality during drought limits our ability to <span class="hlt">accurately</span> simulate future vegetation distributions and associated climate feedbacks. Here we integrate experimental evidence with models to show potential widespread loss of needleleaf evergreen trees (NET; ~ conifers) within the Southwest USA by 2100; with rising temperature being the primary cause of mortality. 
Experimentally, dominant Southwest USA NET species died when they fell below predawn water potential (Ψpd) thresholds (April-August mean) beyond which photosynthesis, stomatal and hydraulic conductance, and carbohydrate availability approached zero. Empirical and mechanistic models <span class="hlt">accurately</span> <span class="hlt">predicted</span> NET Ψpd, and 91% of <span class="hlt">predictions</span> (10/11) exceeded mortality thresholds within the 21st century due to temperature rise. Completely independent global models <span class="hlt">predicted</span> >50% loss of northern hemisphere NET by 2100, consistent with the findings for Southwest USA. The global models disagreed with the ecosystem process models with regard to future mortality in Southwest USA, however, highlighting the potential underestimates of future NET mortality as simulated by the global models and signifying the importance of improving regional <span class="hlt">predictions</span>. Taken together, these <span class="hlt">results</span> from the validated regional <span class="hlt">predictions</span> and the global simulations <span class="hlt">predict</span> global-scale conifer loss in coming decades under projected global warming.
Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable <span class="hlt">accurate</span> comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for <span class="hlt">predictive</span> ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4206277','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4206277"><span>Combining Physicochemical and Evolutionary Information for Protein Contact <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Schneider, Michael; Brock, Oliver</p> <p>2014-01-01</p> <p>We introduce a novel contact <span class="hlt">prediction</span> method that achieves high <span class="hlt">prediction</span> accuracy by combining evolutionary and physicochemical information about native contacts. We obtain evolutionary information from multiple-sequence alignments and physicochemical information from <span class="hlt">predicted</span> ab initio protein structures. 
These structures represent low-energy states in an energy landscape and thus capture the physicochemical information encoded in the energy function. Such low-energy structures are likely to contain native contacts, even if their overall fold is not native. To differentiate native from non-native contacts in those structures, we develop a graph-based representation of the structural context of contacts. We then use this representation to train a support vector machine classifier to identify the most likely native contacts in otherwise non-native structures. The <span class="hlt">resulting</span> contact <span class="hlt">predictions</span> are highly <span class="hlt">accurate</span>. As a <span class="hlt">result</span> of combining two sources of information—evolutionary and physicochemical—we maintain <span class="hlt">prediction</span> accuracy even when only a few sequence homologs are present. We show that the <span class="hlt">predicted</span> contacts help to improve ab initio structure <span class="hlt">prediction</span>. A web service is available at http://compbio.robotics.tu-berlin.de/epc-map/. PMID:25338092</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5108651','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5108651"><span>Interpretable Decision Sets: A Joint Framework for Description and <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure</p> <p>2016-01-01</p> <p>One of the most important obstacles to deploying <span class="hlt">predictive</span> models is the fact that humans do not understand and trust them.
Knowing which variables are important in a model’s <span class="hlt">prediction</span> and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building <span class="hlt">predictive</span> models that are highly <span class="hlt">accurate</span>, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, <span class="hlt">accurate</span>, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as <span class="hlt">accurate</span> at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, <span class="hlt">results</span> of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more <span class="hlt">accurately</span> than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. 
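A decision set of the kind described above is just an unordered collection of independent if-then rules plus a default class. A minimal sketch in Python; the rules, feature names, and classes below are hypothetical illustrations, not the model learned in the study:

```python
# A decision set: independent if-then rules plus a default class.
# Rules and features here are hypothetical, not the learned model.

RULES = [
    # (human-readable rule, condition, predicted class)
    ("age > 50 and bmi >= 30", lambda x: x["age"] > 50 and x["bmi"] >= 30, "high-risk"),
    ("exercise_hours >= 5",    lambda x: x["exercise_hours"] >= 5,         "low-risk"),
]
DEFAULT_CLASS = "medium-risk"

def predict(x):
    """Collect every rule that fires; because learned rules are
    non-overlapping, at most one should apply. Otherwise fall back
    to the default class."""
    fired = [label for _, cond, label in RULES if cond(x)]
    return fired[0] if fired else DEFAULT_CLASS

print(predict({"age": 62, "bmi": 31, "exercise_hours": 0}))  # high-risk
print(predict({"age": 30, "bmi": 22, "exercise_hours": 0}))  # medium-risk
```

Because each rule can be read (and checked) in isolation, the whole classifier doubles as its own description of the classes it predicts.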
PMID:27853627</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4181496','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4181496"><span>Simulated Annealing Based Hybrid Forecast for Improving Daily Municipal Solid Waste Generation <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei</p> <p>2014-01-01</p> <p>A simulated annealing (SA) based variable weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more <span class="hlt">accurate</span> forecast model. The hybrid model was built and its multistep-ahead <span class="hlt">prediction</span> ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more <span class="hlt">accurate</span> and reliable <span class="hlt">results</span>, and to degrade less over longer <span class="hlt">prediction</span> horizons, than the three individual models. The average one-week-ahead <span class="hlt">prediction</span> error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%.
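The core idea of such an SA-weighted hybrid, searching for convex weights over component forecasts that minimize percentage error, can be sketched as follows. The forecasts, actuals, and annealing schedule are hypothetical, not the study's data or exact algorithm:

```python
import math
import random

# Sketch: anneal convex weights over three component forecasts to
# minimize mean absolute percentage error (MAPE). All data and the
# annealing schedule are illustrative assumptions.

actual = [100.0, 110.0, 105.0, 120.0, 115.0]
preds = {
    "chaotic": [90.0, 118.0, 100.0, 128.0, 110.0],
    "ann":     [108.0, 104.0, 112.0, 113.0, 121.0],
    "pls_svm": [97.0, 115.0, 99.0, 126.0, 119.0],
}
models = list(preds)

def mape(weights):
    """MAPE (in percent) of the weighted combination of forecasts."""
    total = 0.0
    for i, a in enumerate(actual):
        combo = sum(w * preds[m][i] for m, w in zip(models, weights))
        total += abs(combo - a) / a
    return 100.0 * total / len(actual)

def anneal(steps=2000, temp=1.0, cooling=0.995, seed=42):
    rng = random.Random(seed)
    cur = [1.0 / 3.0] * 3                   # start from equal weights
    cur_err = mape(cur)
    best, best_err = cur[:], cur_err
    for _ in range(steps):
        # Perturb and renormalize to keep a convex combination.
        cand = [max(1e-6, w + rng.uniform(-0.1, 0.1)) for w in cur]
        s = sum(cand)
        cand = [w / s for w in cand]
        cand_err = mape(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability that shrinks as the temperature cools.
        if cand_err < cur_err or rng.random() < math.exp((cur_err - cand_err) / temp):
            cur, cur_err = cand, cand_err
            if cur_err < best_err:
                best, best_err = cand[:], cand_err
        temp *= cooling
    return best, best_err

weights, err = anneal()
print([round(w, 3) for w in weights], round(err, 2))
```

By construction the returned error can never exceed the equal-weight baseline, since the best solution is only replaced by improvements.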
PMID:25301508</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..14.1916G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..14.1916G"><span>How <span class="hlt">accurate</span> are the weather forecasts for Bierun (southern Poland)?</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gawor, J.</p> <p>2012-04-01</p> <p>Weather forecast accuracy has increased in recent times, mainly thanks to significant development of numerical weather <span class="hlt">prediction</span> models. Despite the improvements, forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students: it joins natural curiosity about everyday weather with scientific process skills such as problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts was undertaken by a group of 14-year-old students from Bierun (southern Poland) who participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research an automatic weather station was used. The observed data were compared with corresponding forecasts produced by two numerical weather <span class="hlt">prediction</span> models: COAMPS (Coupled Ocean/Atmosphere Mesoscale <span class="hlt">Prediction</span> System), developed by the Naval Research Laboratory, Monterey, USA, and run operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland; and COSMO (the Consortium for Small-scale Modelling), used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure.
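Comparisons of this kind are typically scored with standard forecast verification statistics. A minimal sketch of bias (mean error) for a continuous variable, plus hit rate and false alarm ratio from a 2x2 contingency table; all numbers are hypothetical:

```python
# Forecast verification sketch: mean error (bias) for temperature,
# and hit rate / false alarm ratio from a 2x2 contingency table for
# precipitation occurrence. All numbers are hypothetical.

forecast_t = [2.0, 3.5, -1.0, 0.5]    # forecast air temperature (deg C)
observed_t = [1.5, 4.0, -2.0, 0.0]    # observed air temperature (deg C)

bias = sum(f - o for f, o in zip(forecast_t, observed_t)) / len(forecast_t)

# Contingency counts for "precipitation above a chosen threshold":
hits, misses, false_alarms, correct_negatives = 12, 3, 5, 40

hit_rate = hits / (hits + misses)                     # observed events forecast
false_alarm_ratio = false_alarms / (hits + false_alarms)

print(round(bias, 2), round(hit_rate, 2), round(false_alarm_ratio, 2))
```

A positive bias means the model runs warm on average; hit rate and false alarm ratio can then be recomputed per precipitation threshold to see which threshold is most predictable.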
The <span class="hlt">prediction</span> periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The <span class="hlt">results</span> of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How <span class="hlt">accurate</span> are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most <span class="hlt">predictable</span>? 4) Why</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.339...96C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.339...96C"><span><span class="hlt">Prediction</span> of discretization error using the error transport equation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Celik, Ismail B.; Parsons, Don Roscoe</p> <p>2017-06-01</p> <p>This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately <span class="hlt">predict</span> the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for <span class="hlt">accurately</span> <span class="hlt">predicting</span> the transport of the error. 
In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The <span class="hlt">result</span> is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error <span class="hlt">predictions</span> based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error <span class="hlt">predictions</span> are not quite as <span class="hlt">accurate</span> as those from Richardson extrapolation, the <span class="hlt">results</span> are reasonable and only require one numerical grid. The current <span class="hlt">results</span> indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23280647','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23280647"><span>Are external knee load and EMG measures <span class="hlt">accurate</span> indicators of internal knee contact forces during gait?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Meyer, Andrew J; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Colwell, Clifford W; Fregly, Benjamin J</p> <p>2013-06-01</p> <p>Mechanical loading is believed to be a critical factor in the development and treatment of knee osteoarthritis. 
However, the contact forces to which the knee articular surfaces are subjected during daily activities cannot be measured clinically. Thus, the ability to <span class="hlt">predict</span> internal knee contact forces <span class="hlt">accurately</span> using external measures (i.e., external knee loads and muscle electromyographic [EMG] signals) would be clinically valuable. We quantified how well external knee load and EMG measures <span class="hlt">predict</span> internal knee contact forces during gait. A single subject with a force-measuring tibial prosthesis and post-operative valgus alignment performed four gait patterns (normal, medial thrust, walking pole, and trunk sway) to induce a wide range of external and internal knee joint loads. Linear regression analyses were performed to assess how much of the variability in internal contact forces was accounted for by variability in the external measures. Though the different gait patterns successfully induced significant changes in the external and internal quantities, changes in external measures were generally weak indicators of changes in total, medial, and lateral contact force. Our <span class="hlt">results</span> suggest that when total contact force may be changing, caution should be exercised when inferring changes in knee contact forces based on observed changes in external knee load and EMG measures. Advances in musculoskeletal modeling methods may be needed for <span class="hlt">accurate</span> estimation of in vivo knee contact forces. Copyright © 2012 Orthopaedic Research Society.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23380492','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23380492"><span>Are restrained eaters <span class="hlt">accurate</span> monitors of their intoxication? 
<span class="hlt">Results</span> from a field experiment.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Buchholz, Laura J; Crowther, Janis H; Olds, R Scott; Smith, Kathryn E; Ridolfi, Danielle R</p> <p>2013-04-01</p> <p>Brief interventions encourage college students to eat more before drinking to prevent harm (Dimeff et al., 1999), although many women decrease their caloric intake (Giles et al., 2009) and the number of eating episodes (Luce et al., 2012) prior to drinking alcohol. Participants were 37 undergraduate women (24.3% Caucasian) who were recruited from a local bar district in the Midwest. This study examined whether changes in eating after intending to drink interacted with dietary restraint to <span class="hlt">predict</span> accuracy of one's intoxication. <span class="hlt">Results</span> indicated that changes in eating significantly moderated the relationship between dietary restraint and accuracy of one's intoxication level. After eating more food before intending to drink, women higher in restraint were more likely to overestimate their intoxication than women lower in restraint. There were no differences between women with high levels and low levels of dietary restraint in the accuracy of their intoxication after eating less food before intending to drink. Future research would benefit from examining interoceptive awareness as a possible mechanism involved in this relationship. 
Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=130909&keyword=statistics+AND+levels&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=130909&keyword=statistics+AND+levels&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>SPATIAL <span class="hlt">PREDICTION</span> USING COMBINED SOURCES OF DATA</span></a></p> <p><a target="_blank" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>For improved environmental decision-making, it is important to develop new models for spatial <span class="hlt">prediction</span> that <span class="hlt">accurately</span> characterize important spatial and temporal patterns of air pollution. As the U .S. 
Environmental Protection Agency begins to use spatial <span class="hlt">prediction</span> in the reg...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1211130-toward-fully-silico-melting-point-prediction-using-molecular-simulations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1211130-toward-fully-silico-melting-point-prediction-using-molecular-simulations"><span>Toward Fully in Silico Melting Point <span class="hlt">Prediction</span> Using Molecular Simulations</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zhang, Y; Maginn, EJ</p> <p>2013-03-01</p> <p>Melting point is one of the most fundamental and practically important properties of a compound. Molecular simulation methods have been developed for the <span class="hlt">accurate</span> computation of melting points. However, all of these methods need an experimental crystal structure as input, which means that such calculations are not really <span class="hlt">predictive</span> since the melting point can be measured easily in experiments once a crystal structure is known. On the other hand, crystal structure <span class="hlt">prediction</span> (CSP) has become an active field and significant progress has been made, although challenges still exist. One of the main challenges is the existence of many crystal structures (polymorphs) that are very close in energy. Thermal effects and kinetic factors make the situation even more complicated, such that it is still not trivial to <span class="hlt">predict</span> experimental crystal structures. In this work, we exploit the fact that free energy differences are often small between crystal structures.
We show that <span class="hlt">accurate</span> melting point <span class="hlt">predictions</span> can be made by using a reasonable crystal structure from CSP as a starting point for a free energy-based melting point calculation. The key is that most crystal structures <span class="hlt">predicted</span> by CSP have free energies that are close to that of the experimental structure. The proposed method was tested on two rigid molecules and the <span class="hlt">results</span> suggest that a fully in silico melting point <span class="hlt">prediction</span> method is possible.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810024426','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810024426"><span>Space vehicle acoustics <span class="hlt">prediction</span> improvement for payloads. [space shuttle</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dandridge, R. E.</p> <p>1979-01-01</p> <p>The modal analysis method was extensively modified for the <span class="hlt">prediction</span> of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The <span class="hlt">predicted</span> noise reduction levels for two test cases were compared with experimental <span class="hlt">results</span> to determine the validity of the analytical model for <span class="hlt">predicting</span> space vehicle payload noise environments in the 10 Hz one-third octave band regime. The <span class="hlt">prediction</span> approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra.
The discrepancies in the <span class="hlt">predictions</span> could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more <span class="hlt">accurate</span> assessment of normal modes. Techniques for improving and expanding the noise <span class="hlt">prediction</span> for a payload environment are also suggested.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H33B1539C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H33B1539C"><span>Identify the dominant variables to <span class="hlt">predict</span> stream water temperature</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chien, H.; Flagler, J.</p> <p>2016-12-01</p> <p>Stream water temperature is a critical variable controlling water quality and the health of aquatic ecosystems. <span class="hlt">Accurate</span> <span class="hlt">prediction</span> of water temperature and the assessment of the impacts of environmental variables on water temperature variation are critical for water resources management, particularly in the context of water quality and aquatic ecosystem sustainability. The objective of this study is to measure stream water temperature and air temperature and to examine the importance of streamflow on stream water temperature <span class="hlt">prediction</span>. The measured stream water temperature and air temperature will be used to test two hypotheses: 1) streamflow is a relatively more important factor than air temperature in regulating water temperature, and 2) by combining air temperature and streamflow data stream water temperature can be more <span class="hlt">accurately</span> estimated.
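Testing hypotheses like these amounts to regressing water temperature on candidate predictors. A minimal ordinary least squares sketch with a single predictor (daily mean air temperature) and hypothetical data, not the study's measurements:

```python
# Closed-form OLS with one predictor: regress daily mean water
# temperature on daily mean air temperature. Data are hypothetical.

air   = [5.0, 10.0, 15.0, 20.0, 25.0]   # daily mean air temperature (deg C)
water = [4.0,  7.5, 11.0, 14.5, 18.0]   # daily mean water temperature (deg C)

n = len(air)
mean_a = sum(air) / n
mean_w = sum(water) / n

# slope = cov(air, water) / var(air); intercept from the means
slope = (sum((a - mean_a) * (w - mean_w) for a, w in zip(air, water))
         / sum((a - mean_a) ** 2 for a in air))
intercept = mean_w - slope * mean_a

print(round(slope, 2), round(intercept, 2))  # fitted: water = slope*air + intercept
```

Adding streamflow as a second predictor and comparing fit quality is the single-site analogue of the variable-importance question the study asks.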
Water and air temperature data loggers were placed at two USGS stream gauge stations, #01362357 and #01362370, located in the upper Esopus Creek watershed in Phoenicia, NY. The ARIMA (autoregressive integrated moving average) time series model is used to analyze the measured water temperature data, identify the dominant environmental variables, and <span class="hlt">predict</span> the water temperature from the identified dominant variables. The preliminary <span class="hlt">results</span> show that streamflow is not a significant variable in <span class="hlt">predicting</span> stream water temperature at either USGS gauge station. Daily mean air temperature is sufficient to <span class="hlt">predict</span> stream water temperature at this site scale.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/10090798','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/10090798"><span>Frequency, probability, and <span class="hlt">prediction</span>: easy solutions to cognitive illusions?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Griffin, D; Buehler, R</p> <p>1999-02-01</p> <p>Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical <span class="hlt">results</span> imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic <span class="hlt">predictions</span> about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based <span class="hlt">predictions</span> are different from, but no better than, case-specific judgments of probability.
First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in <span class="hlt">predictions</span> of students' real-life projects. When aggregate and single-case <span class="hlt">predictions</span> are collected from different respondents, there is little difference between the two: Both are overly optimistic and show little <span class="hlt">predictive</span> validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case <span class="hlt">predictions</span>, though still optimistically biased. <span class="hlt">Results</span> from studies of overconfidence in general knowledge and base rate neglect in categorical <span class="hlt">prediction</span> underline a general conclusion. Frequentistic <span class="hlt">predictions</span> made for sets of events are no more statistically sophisticated, nor more <span class="hlt">accurate</span>, than <span class="hlt">predictions</span> made for individual events using subjective probability.
Copyright 1999 Academic Press.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29078793','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29078793"><span><span class="hlt">Prediction</span> of clinical response to drugs in ovarian cancer using the chemotherapy resistance test (CTR-test).</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kischkel, Frank Christian; Meyer, Carina; Eich, Julia; Nassir, Mani; Mentze, Monika; Braicu, Ioana; Kopp-Schneider, Annette; Sehouli, Jalid</p> <p>2017-10-27</p> <p>In order to validate whether the test <span class="hlt">result</span> of the Chemotherapy Resistance Test (CTR-Test) is able to <span
class="hlt">predict</span> the resistances or sensitivities of tumors in ovarian cancer patients to drugs, the CTR-Test <span class="hlt">result</span> and the corresponding clinical response of individual patients were correlated retrospectively. <span class="hlt">Results</span> were compared to previously recorded correlations. The CTR-Test was performed on tumor samples from 52 ovarian cancer patients for specific chemotherapeutic drugs. Patients were treated with monotherapies or drug combinations. Resistances were classified as extreme (ER), medium (MR) or slight (SR) resistance in the CTR-Test. Combination treatment resistances were transformed by a scoring system into these classifications. <span class="hlt">Accurate</span> sensitivity <span class="hlt">prediction</span> was accomplished in 79% of the cases and <span class="hlt">accurate</span> <span class="hlt">prediction</span> of resistance in 100% of the cases in the total data set. The data sets for single-agent treatment and drug-combination treatment were also analyzed individually. Single-agent treatment led to <span class="hlt">accurate</span> sensitivity prediction in 44% of the cases, and drug combinations to 95% accuracy. In both cases, the detection of resistance was 100% correct. ROC curve analysis indicates that the CTR-Test <span class="hlt">result</span> correlates with the clinical response, at least for combination chemotherapy. These values are similar to or better than those from a 1990 publication. Chemotherapy resistance testing in vitro via the CTR-Test is able to <span class="hlt">accurately</span> detect resistances in ovarian cancer patients. These numbers confirm and even exceed <span class="hlt">results</span> published in 1990. Better sensitivity detection might be caused by the higher percentage of drug combinations tested in 2012 compared to 1990.
Our study confirms the functionality of the CTR-Test to plan an efficient chemotherapeutic treatment for ovarian cancer patients.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28339854','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28339854"><span><span class="hlt">Prediction</span> versus aetiology: common pitfalls and how to avoid them.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W</p> <p>2017-04-01</p> <p><span class="hlt">Prediction</span> research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both <span class="hlt">prediction</span> and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of <span class="hlt">results</span> are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, <span class="hlt">prediction</span> aims at <span class="hlt">accurately</span> <span class="hlt">predicting</span> the risk of an outcome using multiple predictors collectively, where the final <span class="hlt">prediction</span> model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, <span class="hlt">resulting</span> in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of <span class="hlt">prediction</span> <span class="hlt">results</span>, where individual variables in a <span class="hlt">prediction</span> model are attributed causal meaning.
This article stresses the differences in use and interpretation of aetiological and <span class="hlt">prediction</span> studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4509804','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4509804"><span>External validation of a simple clinical tool used to <span class="hlt">predict</span> falls in people with Parkinson disease</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Duncan, Ryan P.; Cavanaugh, James T.; Earhart, Gammon M.; Ellis, Terry D.; Ford, Matthew P.; Foreman, K. Bo; Leddy, Abigail L.; Paul, Serene S.; Canning, Colleen G.; Thackeray, Anne; Dibble, Leland E.</p> <p>2015-01-01</p> <p>Background Assessment of fall risk in an individual with Parkinson disease (PD) is a critical yet often time consuming component of patient care. Recently a simple clinical <span class="hlt">prediction</span> tool based only on fall history in the previous year, freezing of gait in the past month, and gait velocity <1.1 m/s was developed and <span class="hlt">accurately</span> <span class="hlt">predicted</span> future falls in a sample of individuals with PD. METHODS We sought to externally validate the utility of the tool by administering it to a different cohort of 171 individuals with PD. Falls were monitored prospectively for 6 months following predictor assessment. <span class="hlt">RESULTS</span> The tool <span class="hlt">accurately</span> discriminated future fallers from non-fallers (area under the curve [AUC] = 0.83; 95% CI 0.76 –0.89), comparable to the developmental study. 
CONCLUSION The <span class="hlt">results</span> validated the utility of the tool for allowing clinicians to quickly and <span class="hlt">accurately</span> identify an individual’s risk of an impending fall. PMID:26003412</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26167085','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26167085"><span>Tissue resonance interaction <span class="hlt">accurately</span> detects colon lesions: A double-blind pilot study.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dore, Maria P; Tufano, Marcello O; Pes, Giovanni M; Cuccu, Marianna; Farina, Valentina; Manca, Alessandra; Graham, David Y</p> <p>2015-07-07</p> <p>To investigate the performance of the tissue resonance interaction method (TRIM) for the non-invasive detection of colon lesions.
We performed a prospective single-center blinded pilot study of consecutive adults undergoing colonoscopy at the University Hospital in Sassari, Italy. Before patients underwent colonoscopy, they were examined by the TRIMprob, which detects differences in electromagnetic properties between pathological and normal tissues. All patients had completed the polyethylene glycol-containing bowel prep for the colonoscopy procedure before being screened. During the procedure the subjects remained fully dressed. A hand-held probe was moved over the abdomen and variations in electromagnetic signals were recorded for 3 spectral lines (462-465 MHz, 930 MHz, and 1395 MHz). A single investigator, blind to any clinical information, performed the test using the TRIMprob system. Abnormal signals were identified and recorded as malignant or benign (adenoma or hyperplastic polyps). Findings were compared with those from colonoscopy with histologic confirmation. Statistical analysis was performed by χ(2) test. A total of 305 consecutive patients fulfilling the inclusion criteria were enrolled over a period of 12 months. The most frequent indication for colonoscopy was abdominal pain (33%). The TRIMprob was well accepted by all patients; none spontaneously complained about the procedure, and no adverse effects were observed. TRIM proved inaccurate for polyp detection in patients with inflammatory bowel disease (IBD), so these patients were excluded, leaving 281 subjects (mean age 59 ± 13 years; 107 males). The TRIM detected and <span class="hlt">accurately</span> characterized all 12 adenocarcinomas and 135 of 137 polyps (98.5%), including all 64 adenomatous polyps (100%). The method identified cancers and polyps with 98.7% sensitivity, 96.2% specificity, and 97.5% diagnostic accuracy, compared to colonoscopy and histology analyses.
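Sensitivity, specificity, diagnostic accuracy, and the predictive values reported in studies like this one all derive from a 2x2 confusion matrix. A minimal sketch; the counts are hypothetical, chosen only to illustrate the formulas (they roughly reproduce percentages of this magnitude):

```python
# Diagnostic test metrics from a 2x2 confusion matrix.
# Counts are hypothetical illustrations, not the study's raw data.

tp, fn = 147, 2     # lesions detected / lesions missed
tn, fp = 128, 5     # healthy correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
ppv         = tp / (tp + fp)    # positive predictive value
npv         = tn / (tn + fn)    # negative predictive value

print(round(sensitivity, 3), round(specificity, 3),
      round(accuracy, 3), round(ppv, 3), round(npv, 3))
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common disease is in the studied population, so they do not transfer directly to populations with different prevalence.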
The positive <span class="hlt">predictive</span> value was 96.7% and the negative <span class="hlt">predictive</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1020a2003R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1020a2003R"><span>A Comparative Study of Data Mining Techniques on Football Match <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rosli, Che Mohamad Firdaus Che Mohd; Zainuri Saringat, Mohd; Razali, Nazim; Mustapha, Aida</p> <p>2018-05-01</p> <p>Data <span class="hlt">prediction</span> has become a trend in today’s businesses and organizations. This paper is set to <span class="hlt">predict</span> match outcomes for association football from the perspective of football club managers and coaches. This paper explored different data mining techniques used for <span class="hlt">predicting</span> the match outcomes, where the target classes are win, draw, and lose. The main objective of this research is to find the most <span class="hlt">accurate</span> data mining technique that fits the nature of football data. The techniques tested are Decision Trees, Neural Networks, Bayesian Network, and k-Nearest Neighbors. 
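As a side note on one of the techniques listed above, here is a minimal pure-Python sketch of k-Nearest-Neighbors classification over win/draw/lose labels. The features and matches are invented for illustration; this is not the paper's implementation or its football dataset.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify one feature vector by majority vote among its k nearest
    training examples (Euclidean distance). `train` is a list of
    (features, label) pairs with labels 'win'/'draw'/'lose'."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features: (goals scored, goals conceded) in recent form.
matches = [((3, 0), 'win'), ((2, 1), 'win'), ((1, 1), 'draw'),
           ((0, 0), 'draw'), ((0, 2), 'lose'), ((1, 3), 'lose')]
print(knn_predict(matches, (2, 0)))  # → win
```

The same train/query interface could back any of the compared classifiers, which is what makes a head-to-head accuracy comparison straightforward.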
The <span class="hlt">results</span> from the comparative experiments showed that Decision Trees produced the highest average <span class="hlt">prediction</span> accuracy in the domain of football match <span class="hlt">prediction</span>, of 99.56%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18940251','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18940251"><span><span class="hlt">Predicting</span> drug hydrolysis based on moisture uptake in various packaging designs.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Naversnik, Klemen; Bohanec, Simona</p> <p>2008-12-18</p> <p>An attempt was made to <span class="hlt">predict</span> the stability of a moisture-sensitive drug product based on the knowledge of the dependence of the degradation rate on tablet moisture. The moisture increase inside an HDPE bottle with the drug formulation was simulated with the sorption-desorption moisture transfer model, which, in turn, allowed an <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the drug degradation kinetics. The stability <span class="hlt">prediction</span>, obtained by computer simulation, was made in a considerably shorter time frame and required few resources compared to a conventional stability study. The <span class="hlt">prediction</span> was finally upgraded to a stochastic Monte Carlo simulation, which allowed quantitative incorporation of uncertainty, stemming from various sources. 
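The stochastic upgrade described above can be illustrated with a generic Monte Carlo sketch: draw an uncertain degradation rate, propagate each draw to expiry, and collect the resulting distribution. The first-order kinetics and all rate values below are assumptions for illustration, not the paper's sorption-desorption model or data.

```python
import math
import random

def simulate_degradation(k_mean, k_sd, t_expiry, n=10000, seed=0):
    """Monte Carlo sketch: sample a first-order degradation rate constant
    from a normal distribution and propagate each draw through
    D(t) = 100 * (1 - exp(-k * t)), the percent degradation at time t.
    Returns the simulated distribution at expiry."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        k = max(rng.gauss(k_mean, k_sd), 0.0)  # a rate cannot be negative
        draws.append(100.0 * (1.0 - math.exp(-k * t_expiry)))
    return draws

# Illustrative numbers only: k in 1/day, two-year shelf life.
dist = simulate_degradation(k_mean=2e-4, k_sd=5e-5, t_expiry=730)
```

Reporting the full distribution (e.g., its 95th percentile against the specification limit) is what makes this richer than a single-value estimate.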
The <span class="hlt">resulting</span> distribution of the outcome of interest (amount of degradation product at expiry) is a comprehensive way of communicating the <span class="hlt">result</span> along with its uncertainty, superior to single-value <span class="hlt">results</span> or confidence intervals.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1369123-radiometer-calibration-methods-resulting-irradiance-differences-radiometer-calibration-methods-resulting-irradiance-differences','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1369123-radiometer-calibration-methods-resulting-irradiance-differences-radiometer-calibration-methods-resulting-irradiance-differences"><span>Radiometer calibration methods and <span class="hlt">resulting</span> irradiance differences: Radiometer calibration methods and <span class="hlt">resulting</span> irradiance differences</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Habte, Aron; Sengupta, Manajit; Andreas, Afshin</p> <p></p> <p><span class="hlt">Accurate</span> solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and <span class="hlt">resulting</span> differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. 
These different methods of calibration <span class="hlt">resulted</span> in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the output of planned solar conversion projects and improve the bankability of financing solar projects.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA533151&Location=U2&doc=GetTRDoc.pdf','USGSPUBS'); return false;" href="http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA533151&Location=U2&doc=GetTRDoc.pdf"><span><span class="hlt">Prediction</span> and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Plant, Nathaniel G.; Holland, K. Todd</p> <p>2011-01-01</p> <p><span class="hlt">Prediction</span> of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from <span class="hlt">accurate</span> numerical <span class="hlt">predictions</span>. However, the <span class="hlt">predictions</span> are only as <span class="hlt">accurate</span> as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful <span class="hlt">predictions</span> require corresponding estimates of uncertainty. 
We demonstrate how a Bayesian-network model can be used to provide <span class="hlt">accurate</span> <span class="hlt">predictions</span> of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that <span class="hlt">predictions</span> of a detailed geophysical model of the wave evolution are reproduced <span class="hlt">accurately</span> using a Bayesian approach. In this surf-zone application, forward <span class="hlt">prediction</span> skill was 83%, and uncertainties in the model inputs were <span class="hlt">accurately</span> transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic <span class="hlt">prediction</span> uncertainties were computed. More consistent <span class="hlt">predictions</span> and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. 
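A toy two-node version of such a network shows how uncertainty in a noisy input observation is carried through to the predicted variable rather than discarded. The states and probabilities below are invented; the study's actual network covers many more variables (waves, currents, bathymetry).

```python
def posterior_surf(prior, cpt_surf, like_obs):
    """Tiny discrete Bayesian-network sketch.
    prior[h]       = P(offshore wave height h)
    cpt_surf[h][s] = P(surf-zone height s | offshore height h)
    like_obs[h]    = P(sensor reading | offshore height h)
    Returns P(surf-zone height | reading), i.e. input uncertainty
    propagated into the output prediction."""
    # Bayes update on the offshore node given the noisy observation
    joint = {h: prior[h] * like_obs[h] for h in prior}
    z = sum(joint.values())
    post_off = {h: p / z for h, p in joint.items()}
    # Marginalize over the offshore node to predict the surf-zone node
    states = next(iter(cpt_surf.values()))
    return {s: sum(post_off[h] * cpt_surf[h][s] for h in post_off)
            for s in states}

p = posterior_surf({'low': 0.5, 'high': 0.5},
                   {'low': {'low': 0.9, 'high': 0.1},
                    'high': {'low': 0.3, 'high': 0.7}},
                   {'low': 0.2, 'high': 0.8})  # P(surf 'high') ≈ 0.58
```

Because the output is a distribution, not a point value, overconfident inputs (e.g., a likelihood of exactly 0 or 1) immediately produce the overly optimistic output uncertainties the passage warns about.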
Improved <span class="hlt">predictions</span> (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while <span class="hlt">predicting</span> wave heights.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/403630-predicting-overload-affected-fatigue-crack-growth-steels','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/403630-predicting-overload-affected-fatigue-crack-growth-steels"><span><span class="hlt">Predicting</span> overload-affected fatigue crack growth in steels</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Skorupa, M.; Skorupa, A.; Ladecki, B.</p> <p>1996-12-01</p> <p>The ability of semi-empirical crack closure models to <span class="hlt">predict</span> the effect of overloads on fatigue crack growth in low-alloy steels has been investigated. With this purpose, the CORPUS model developed for aircraft metals and spectra has been checked first through comparisons between the simulated and observed <span class="hlt">results</span> for a low-alloy steel. The CORPUS <span class="hlt">predictions</span> of crack growth under several types of simple load histories containing overloads appeared generally unconservative which prompted the authors to formulate a new model, more suitable for steels. With the latter approach, the assumed evolution of the crack opening stress during the delayed retardation stage has been based on experimental <span class="hlt">results</span> reported for various steels. For all the load sequences considered, the <span class="hlt">predictions</span> from the proposed model appeared to be by far more <span class="hlt">accurate</span> than those from CORPUS. 
Based on the analysis <span class="hlt">results</span>, the capability of semi-empirical <span class="hlt">prediction</span> concepts to cover experimentally observed trends that have been reported for sequences with overloads is discussed. Finally, possibilities of improving the model performance are considered.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19840059489&hterms=military&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dmilitary','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19840059489&hterms=military&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dmilitary"><span>Evaluation of ride quality <span class="hlt">prediction</span> methods for operational military helicopters</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.</p> <p>1984-01-01</p> <p>The <span class="hlt">results</span> of a simulator study conducted to compare and validate various ride quality <span class="hlt">prediction</span> methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are <span class="hlt">results</span> quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. <span class="hlt">Results</span> of the study indicated that crew discomfort <span class="hlt">results</span> from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. 
<span class="hlt">Accurate</span> <span class="hlt">prediction</span> required a metric incorporating the interactive effects of both noise and vibration. The best metric for <span class="hlt">predicting</span> crew comfort to the combined noise and vibration environment was the NASA discomfort index.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29076175','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29076175"><span>Molecular acidity: An <span class="hlt">accurate</span> description with information-theoretic approach in density functional reactivity theory.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin</p> <p>2018-01-15</p> <p>Molecular acidity is one of the important physiochemical properties of a molecular system, yet its <span class="hlt">accurate</span> calculation and <span class="hlt">prediction</span> are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an <span class="hlt">accurate</span> description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously <span class="hlt">predict</span> experimental pKa values of these different categories of compounds. 
Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and be applicable to other systems as well. © 2017 Wiley Periodicals, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5690697','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5690697"><span>Base pair probability estimates improve the <span class="hlt">prediction</span> accuracy of RNA non-canonical base pairs</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p><span class="hlt">Prediction</span> of RNA tertiary structure from sequence is an important problem, but generating <span class="hlt">accurate</span> structure models for even short sequences remains difficult. <span class="hlt">Predictions</span> of RNA tertiary structure tend to be least <span class="hlt">accurate</span> in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be <span class="hlt">predicted</span> using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are <span class="hlt">predicted</span> to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by <span class="hlt">predicting</span> the structure conserved across multiple homologous sequences using the TurboFold algorithm. 
These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow <span class="hlt">accurate</span> inference of non-canonical pairs, an important step towards <span class="hlt">accurate</span> <span class="hlt">prediction</span> of the full tertiary structure. Software to <span class="hlt">predict</span> non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package. PMID:29107980</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29104362','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29104362"><span>Scoring Coreference Partitions of <span class="hlt">Predicted</span> Mentions: A Reference Implementation.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pradhan, Sameer; Luo, Xiaoqiang; Recasens, Marta; Hovy, Eduard; Ng, Vincent; Strube, Michael</p> <p>2014-06-01</p> <p>The definitions of two coreference scoring metrics- B 3 and CEAF-are underspecified with respect to <span class="hlt">predicted</span> , as opposed to key (or gold ) mentions. Several variations have been proposed that manipulate either, or both, the key and <span class="hlt">predicted</span> mentions in order to get a one-to-one mapping. On the other hand, the metric BLANC was, until recently, limited to scoring partitions of key mentions. 
In this paper, we (i) argue that mention manipulation for scoring <span class="hlt">predicted</span> mentions is unnecessary, and potentially harmful as it could produce unintuitive <span class="hlt">results</span>; (ii) illustrate the application of all these measures to scoring <span class="hlt">predicted</span> mentions; (iii) make available an open-source, thoroughly-tested reference implementation of the main coreference evaluation measures; and (iv) rescore the <span class="hlt">results</span> of the CoNLL-2011/2012 shared task systems with this implementation. This will help the community <span class="hlt">accurately</span> measure and compare new end-to-end coreference resolution algorithms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5514335','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5514335"><span>DrugECs: An Ensemble System with Feature Subspaces for <span class="hlt">Accurate</span> Drug-Target Interaction <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jiang, Jinjian; Wang, Nian; Zhang, Jun</p> <p>2017-01-01</p> <p>Background Drug-target interaction is key in drug discovery, especially in the design of a new lead compound. However, the work to find a new lead compound for a specific target is complicated and laborious, and it is prone to error. Therefore, computational techniques are commonly adopted in drug design, which can save time and costs to a significant extent. <span class="hlt">Results</span> To address the issue, a new <span class="hlt">prediction</span> system is proposed in this work to identify drug-target interaction. 
First, drug-target pairs are encoded with a fragment technique and the software “PaDEL-Descriptor.” The fragment technique is for encoding target proteins, which divides each protein sequence into several fragments in order and encodes each fragment with several physiochemical properties of amino acids. The software “PaDEL-Descriptor” creates encoding vectors for drug molecules. Second, the dataset of drug-target pairs is resampled and several overlapped subsets are obtained, which are then input into a kNN (k-Nearest Neighbor) classifier to build an ensemble system. Conclusion Experimental <span class="hlt">results</span> on the drug-target dataset showed that our method performs better and runs faster than the state-of-the-art predictors. PMID:28744468</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1007a2039S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1007a2039S"><span>Increasing <span class="hlt">Prediction</span> the Original Final Year Project of Student Using Genetic Algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saragih, Rijois Iboy Erwin; Turnip, Mardi; Sitanggang, Delima; Aritonang, Mendarissan; Harianja, Eva</p> <p>2018-04-01</p> <p>The final year project is very important for a student's graduation. Unfortunately, many students do not take their final projects seriously; many ask someone else to do the work for them. In this paper, an application of genetic algorithms to <span class="hlt">predict</span> whether a student's final year project is original is proposed. In the simulation, final project data from the last 5 years were collected. The genetic algorithm has several operators, namely population, selection, crossover, and mutation. 
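As an aside, the four operators named above compose as in this minimal genetic-algorithm sketch. The OneMax bit-counting fitness is a stand-in assumption for illustration; the paper's actual fitness over project data is not specified here.

```python
import random

def genetic_search(fitness, length=20, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Minimal GA with the four classic operators: random initial
    population, tournament selection, one-point crossover, and bit-flip
    mutation. Maximizes `fitness` over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def select():  # size-2 tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_search(sum)  # OneMax: fitness = number of 1-bits
```

Swapping `sum` for a domain fitness (e.g., a similarity score between a submitted project and past projects) changes the search target without touching the operators.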
The <span class="hlt">results</span> suggest that the genetic algorithm gives better <span class="hlt">predictions</span> than other comparable models. Experimental <span class="hlt">results</span> showed a <span class="hlt">prediction</span> accuracy of 70%, more <span class="hlt">accurate</span> than previous research.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2099502','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2099502"><span>Parturition <span class="hlt">prediction</span> and timing of canine pregnancy</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kim, YeunHee; Travis, Alexander J.; Meyers-Wallen, Vicki N.</p> <p>2007-01-01</p> <p>An <span class="hlt">accurate</span> method of <span class="hlt">predicting</span> the date of parturition in the bitch is clinically useful to minimize or prevent reproductive losses by timely intervention. Similarly, an <span class="hlt">accurate</span> method of timing canine ovulation and gestation is critical for development of assisted reproductive technologies, e.g. estrous synchronization and embryo transfer. This review discusses present methods for <span class="hlt">accurately</span> timing canine gestational age and outlines their use in clinical management of high-risk pregnancies and embryo transfer research. 
PMID:17904630</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28219971','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28219971"><span><span class="hlt">Predicting</span> human olfactory perception from chemical features of odor molecules.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Keller, Andreas; Gerkin, Richard C; Guan, Yuanfang; Dhurandhar, Amit; Turu, Gabor; Szalai, Bence; Mainland, Joel D; Ihara, Yusuke; Yu, Chung Wen; Wolfinger, Russ; Vens, Celine; Schietgat, Leander; De Grave, Kurt; Norel, Raquel; Stolovitzky, Gustavo; Cecchi, Guillermo A; Vosshall, Leslie B; Meyer, Pablo</p> <p>2017-02-24</p> <p>It is still not possible to <span class="hlt">predict</span> whether a given molecule will have a perceived odor or what olfactory percept it will produce. We therefore organized the crowd-sourced DREAM Olfaction <span class="hlt">Prediction</span> Challenge. Using a large olfactory psychophysical data set, teams developed machine-learning algorithms to <span class="hlt">predict</span> sensory attributes of molecules based on their chemoinformatic features. The <span class="hlt">resulting</span> models <span class="hlt">accurately</span> <span class="hlt">predicted</span> odor intensity and pleasantness and also successfully <span class="hlt">predicted</span> 8 among 19 rated semantic descriptors ("garlic," "fish," "sweet," "fruit," "burnt," "spices," "flower," and "sour"). Regularized linear models performed nearly as well as random forest-based ones, with a <span class="hlt">predictive</span> accuracy that closely approaches a key theoretical limit. These models help to <span class="hlt">predict</span> the perceptual qualities of virtually any molecule with high accuracy and also reverse-engineer the smell of a molecule. 
Copyright © 2017, American Association for the Advancement of Science.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdSpR..60.2855W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdSpR..60.2855W"><span>Impacts of Earth rotation parameters on GNSS ultra-rapid orbit <span class="hlt">prediction</span>: Derivation and real-time correction</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto</p> <p>2017-12-01</p> <p>Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot <span class="hlt">accurately</span> obtain real-time Earth rotation parameters (ERPs). Thus, the <span class="hlt">prediction</span> of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the <span class="hlt">predicted</span> ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs <span class="hlt">predicted</span> by IERS and IGS is analyzed. The error of the ERPs <span class="hlt">predicted</span> for one day can reach 0.15 mas and 0.053 ms in polar motion and UT1-UTC direction, respectively. Then, the impact of ERP errors on ultra-rapid orbit <span class="hlt">prediction</span> by GNSS is studied. The methods for orbit integration and frame transformation in orbit <span class="hlt">prediction</span> with introduced ERP errors dominate the accuracy of the <span class="hlt">predicted</span> orbit. Experimental <span class="hlt">results</span> show that the transformation from the geocentric celestial references system (GCRS) to ITRS exerts the strongest effect on the accuracy of the <span class="hlt">predicted</span> ultra-rapid orbit. 
To obtain the most <span class="hlt">accurate</span> <span class="hlt">predicted</span> ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are <span class="hlt">predicted</span> on the basis of ITRS observed part of ultra-rapid orbit for use as reference. Then, the corresponding <span class="hlt">predicted</span> orbit is transformed from GCRS to ITRS to adjust for the <span class="hlt">predicted</span> ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the <span class="hlt">predicted</span> orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with <span class="hlt">predicted</span> ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental <span class="hlt">results</span> show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit <span class="hlt">prediction</span> (except the geosynchronous BeiDou orbits). The accuracy of orbit <span class="hlt">prediction</span> is enhanced by at least 50</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1903f0010P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1903f0010P"><span>Developing a stochastic traffic volume <span class="hlt">prediction</span> model for public-private partnership projects</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu</p> <p>2017-11-01</p> <p>Transportation projects require an enormous amount of capital investment <span class="hlt">resulting</span> from their tremendous size, complexity, and risk. 
Due to limited public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects in the form of a Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. There are many factors affecting the success of PPP projects. The <span class="hlt">accurate</span> <span class="hlt">prediction</span> of traffic volume is considered one of the key success factors of PPP transportation projects. However, only a few studies have investigated how to <span class="hlt">predict</span> traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models, which <span class="hlt">predict</span> a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues <span class="hlt">accurately</span>. The objective of this paper is to develop a probabilistic traffic volume <span class="hlt">prediction</span> model. First, traffic volumes are estimated following the Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. 
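A generic sketch of the GBM-plus-Monte-Carlo idea, with annual steps: the drift, volatility, and starting volume below are invented for illustration, not the paper's calibrated values.

```python
import math
import random

def simulate_gbm_traffic(v0, mu, sigma, years, n_paths=5000, seed=0):
    """Monte Carlo over geometric Brownian motion sampled annually:
    V_{t+1} = V_t * exp(mu - sigma^2/2 + sigma * Z), with Z ~ N(0, 1).
    Returns the simulated distribution of final-year traffic volume."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        v = v0
        for _ in range(years):
            v *= math.exp(mu - 0.5 * sigma ** 2 + sigma * rng.gauss(0.0, 1.0))
        finals.append(v)
    return finals

# Illustrative: 10,000 vehicles/day, 3% drift, 10% volatility, 20-year concession.
finals = simulate_gbm_traffic(v0=10000, mu=0.03, sigma=0.10, years=20)
```

Unlike a deterministic forecast, the output is a whole distribution, so a concessionaire can read off downside percentiles of revenue rather than a single point estimate.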
The <span class="hlt">results</span> show that this stochastic approach can systematically analyze variations in the traffic volume and yield more reliable estimates for PPP projects.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12611089','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12611089"><span>Ultrasonic <span class="hlt">prediction</span> of term birth weight in Hispanic women. 
Accuracy in an outpatient clinic.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nahum, Gerard G; Pham, Krystle Q; McHugh, John P</p> <p>2003-01-01</p> <p>To investigate the accuracy of ultrasonic fetal biometric algorithms for estimating term fetal weight. Ultrasonographic fetal biometric assessments were made in 74 Hispanic women who delivered at 37-42 weeks of gestation. Measurements were taken of the fetal biparietal diameter, head circumference, abdominal circumference and femur length. Twenty-seven standard fetal biometric algorithms were assessed for their accuracy in <span class="hlt">predicting</span> fetal weight. <span class="hlt">Results</span> were compared to those obtained by merely guessing the mean term birth weight in each case. The correlation between ultrasonically <span class="hlt">predicted</span> and actual birth weights ranged from 0.52 to 0.79. The different ultrasonic algorithms estimated fetal weight to within +/- 8.6-15.0% (+/- 295-520 g) of actual birth weight as compared with +/- 13.6% (+/- 449 g) for guessing the mean birth weight in each case (mean +/- SD). The mean absolute <span class="hlt">prediction</span> errors for 17 of the ultrasonic equations (63%) were superior to those obtained by guessing the mean birth weight by 3.2-5.0% (96-154 g) (P < .05). Fourteen algorithms (52%) were more <span class="hlt">accurate</span> for <span class="hlt">predicting</span> fetal weight to within +/- 15%, and 20 algorithms (74%) were more <span class="hlt">accurate</span> for <span class="hlt">predicting</span> fetal weight to within +/- 10% of actual birth weight than simply guessing the mean birth weight (P < .05). Ten ultrasonic equations (37%) showed significant utility for <span class="hlt">predicting</span> fetal weight > 4,000 g (likelihood ratio > 5.0). 
Term fetal weight <span class="hlt">predictions</span> using the majority of sonographic fetal biometric equations are more <span class="hlt">accurate</span>, by up to 154 g and 5%, than simply guessing the population-specific mean birth weight.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070035071','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070035071"><span><span class="hlt">Prediction</span> of Size Effects in Notched Laminates Using Continuum Damage Mechanics</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Camanho, D. P.; Maimi, P.; Davila, C. G.</p> <p>2007-01-01</p> <p>This paper examines the use of a continuum damage model to <span class="hlt">predict</span> strength and size effects in notched carbon-epoxy laminates. The effects of size and the development of a fracture process zone before final failure are identified in an experimental program. The continuum damage model is described and the <span class="hlt">resulting</span> <span class="hlt">predictions</span> of size effects are compared with alternative approaches: the point stress and the inherent flaw models, the Linear-Elastic Fracture Mechanics approach, and the strength of materials approach. The <span class="hlt">results</span> indicate that the continuum damage model is the most <span class="hlt">accurate</span> technique to <span class="hlt">predict</span> size effects in composites. 
Furthermore, the continuum damage model does not require any calibration and it is applicable to general geometries and boundary conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29116458','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29116458"><span>3D gut-liver chip with a PK model for <span class="hlt">prediction</span> of first-pass metabolism.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Dong Wook; Ha, Sang Keun; Choi, Inwook; Sung, Jong Hwan</p> <p>2017-11-07</p> <p><span class="hlt">Accurate</span> <span class="hlt">prediction</span> of first-pass metabolism is essential for improving the time and cost efficiency of the drug development process. Here, we have developed a microfluidic gut-liver co-culture chip that aims to reproduce the first-pass metabolism of oral drugs. This chip consists of two separate layers for gut (Caco-2) and liver (HepG2) cell lines, where cells can be co-cultured in both 2D and 3D forms. Both cell lines were maintained well in the chip, as verified by confocal microscopy and measurement of hepatic enzyme activity. We investigated the PK profile of paracetamol in the chip, and a corresponding PK model was constructed, which was used to <span class="hlt">predict</span> PK profiles for different chip design parameters. Simulation <span class="hlt">results</span> implied that a larger absorption surface area and a higher metabolic capacity are required to reproduce the in vivo PK profile of paracetamol more <span class="hlt">accurately</span>. 
Our study suggests the possibility of reproducing the human PK profile on a chip, contributing to <span class="hlt">accurate</span> <span class="hlt">prediction</span> of pharmacological effect of drugs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5656435','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5656435"><span>Development and Preliminary Performance of a Risk Factor Screen to <span class="hlt">Predict</span> Posttraumatic Psychological Disorder After Trauma Exposure</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Carlson, Eve B.; Palmieri, Patrick A.; Spain, David A.</p> <p>2017-01-01</p> <p>Objective We examined data from a prospective study of risk factors that increase vulnerability or resilience, exacerbate distress, or foster recovery to determine whether risk factors <span class="hlt">accurately</span> <span class="hlt">predict</span> which individuals will later have high posttraumatic (PT) symptom levels and whether brief measures of risk factors also <span class="hlt">accurately</span> <span class="hlt">predict</span> later symptom elevations. Method Using data from 129 adults exposed to traumatic injury of self or a loved one, we conducted receiver operating characteristic (ROC) analyses of 14 risk factors assessed by full-length measures, determined optimal cutoff scores and calculated <span class="hlt">predictive</span> performance for the nine that were most <span class="hlt">predictive</span>. For five risk factors, we identified sets of items that accounted for 90% of variance in total scores and calculated <span class="hlt">predictive</span> performance for sets of brief risk measures. 
<span class="hlt">Results</span> A set of nine risk factors assessed by full measures identified 89% of those who later had elevated PT symptoms (sensitivity) and 78% of those who did not (specificity). A set of four brief risk factor measures assessed soon after injury identified 86% of those who later had elevated PT symptoms and 72% of those who did not. Conclusions Use of sets of brief risk factor measures shows promise of <span class="hlt">accurate</span> <span class="hlt">prediction</span> of PT psychological disorder and probable PTSD or depression. Replication of <span class="hlt">predictive</span> accuracy is needed in a new and larger sample. PMID:28622811</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10408E..0TH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10408E..0TH"><span>Determination of <span class="hlt">accurate</span> vertical atmospheric profiles of extinction and turbulence</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hammel, Steve; Campbell, James; Hallenborg, Eric</p> <p>2017-09-01</p> <p>Our ability to generate an <span class="hlt">accurate</span> vertical profile characterizing the atmosphere from the surface to a point above the boundary layer top is quite rudimentary. The region from a land or sea surface to an altitude of 3000 meters is dynamic and particularly important to the performance of many active optical systems. <span class="hlt">Accurate</span> and agile instruments are necessary to provide measurements in various conditions, and models are needed to provide the framework and <span class="hlt">predictive</span> capability necessary for system design and optimization. We introduce some of the path characterization instruments and describe the first work to calibrate and validate them. 
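The sensitivity and specificity figures quoted for the risk-factor screen above follow directly from a confusion matrix. The counts below are illustrative values chosen to reproduce the quoted 89%/78% rates; they are not the study's actual cell counts.

```python
# Illustrative confusion-matrix counts (not the study's data):
tp, fn = 89, 11   # participants with later elevated PT symptoms
tn, fp = 78, 22   # participants without elevated PT symptoms

sensitivity = tp / (tp + fn)   # fraction of true positives the screen detects
specificity = tn / (tn + fp)   # fraction of true negatives correctly screened out
print(sensitivity, specificity)
```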
Along with a verification of measurement accuracy, the tests must also establish each instrument's performance envelope. Measurement of these profiles in the field is a problem, and we will present a discussion of recent field test activity to address this issue. The Comprehensive Atmospheric Boundary Layer Extinction/Turbulence Resolution Analysis eXperiment (CABLE/TRAX) was conducted in late June 2017. There were two distinct objectives for the experiment: 1) a comparison test of various scintillometers and transmissometers on a homogeneous horizontal path; 2) a vertical profile experiment. In this paper we discuss only the vertical profiling effort, and we describe the instruments that generated data for vertical profiles of absorption, scattering, and turbulence. These three profiles are the core requirements for an <span class="hlt">accurate</span> assessment of laser beam propagation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8788E..1AM','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8788E..1AM"><span>Highly <span class="hlt">accurate</span> surface maps from profilometer measurements</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.</p> <p>2013-04-01</p> <p>Many aspheres and free-form optical surfaces are measured using a single line trace profilometer, which is limiting because <span class="hlt">accurate</span> 3D corrections are not possible with the single trace. We show a method to produce an <span class="hlt">accurate</span> fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains only low-order form error, the first 36 Zernikes. 
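The trace-based fitting idea in this profilometer abstract can be sketched as a toy least-squares reconstruction: sample heights along a few rotated diametral traces, then fit low-order surface terms. The three trace angles, the 4-term basis (piston, tilts, defocus, standing in for the lowest Zernike terms), and all coefficients below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Sample points along 3 diametral traces at different rotation angles
angles = np.deg2rad([0, 60, 120])
r = np.linspace(-1, 1, 21)
x = np.concatenate([r * np.cos(a) for a in angles])
y = np.concatenate([r * np.sin(a) for a in angles])

# Synthetic "measured" heights generated from a known low-order surface
true_coeffs = np.array([0.5, 0.1, -0.2, 0.3])  # piston, tilt-x, tilt-y, defocus
A = np.column_stack([np.ones_like(x), x, y, 2 * (x**2 + y**2) - 1])
z = A @ true_coeffs

# Least-squares fit of the low-order terms from the trace samples
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
```

Because the synthetic data are noise-free, the fit recovers the generating coefficients exactly; real traces would add noise, centering, and angular-positioning errors, which the paper quantifies.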
The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving <span class="hlt">accurate</span> measurements. The part must be centered to within 0.5% of the diameter to achieve <span class="hlt">accurate</span> <span class="hlt">results</span>. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28705497','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28705497"><span>Quicksilver: Fast <span class="hlt">predictive</span> image registration - A deep learning approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc</p> <p>2017-09-01</p> <p>This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise <span class="hlt">prediction</span> of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the <span class="hlt">prediction</span> model. While the <span class="hlt">prediction</span> strategy is general, we focus on <span class="hlt">predictions</span> for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. 
Specifically, we <span class="hlt">predict</span> the momentum-parameterization of LDDMM, which facilitates a patch-wise <span class="hlt">prediction</span> strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our <span class="hlt">prediction</span> network which can be sampled at test time to calculate uncertainties in the <span class="hlt">predicted</span> deformations. Finally, we introduce a new correction network which greatly increases the <span class="hlt">prediction</span> accuracy of an already existing <span class="hlt">prediction</span> network. We show experimental <span class="hlt">results</span> for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method <span class="hlt">accurately</span> <span class="hlt">predicts</span> registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration <span class="hlt">results</span> on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120013447','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120013447"><span>Comparison of <span class="hlt">Predictive</span> Modeling Methods of Aircraft Landing Speed</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Diallo, Ousmane H.</p> <p>2012-01-01</p> <p>Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in <span class="hlt">accurately</span> and precisely spacing aircraft landing at congested airports. 
Such tools will require an <span class="hlt">accurate</span> landing-speed <span class="hlt">prediction</span> to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an <span class="hlt">accurate</span> landing-speed model that has acceptable <span class="hlt">prediction</span> errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed <span class="hlt">prediction</span> model are used to build a multi-regression response surface equation (RSE) model. Data obtained from operations of a major airline for a passenger transport aircraft type to the Dallas/Fort Worth International Airport are used to <span class="hlt">predict</span> the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error <span class="hlt">prediction</span> by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline <span class="hlt">predicted</span> landing-speed error standard deviation. 
Overall, the constructed models <span class="hlt">predict</span> the landing-speed more <span class="hlt">accurately</span> and precisely than the current state-of-the-art.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2242818','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2242818"><span><span class="hlt">Accurate</span> Structural Correlations from Maximum Likelihood Superpositions</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Theobald, Douglas L; Wuttke, Deborah S</p> <p>2008-01-01</p> <p>The cores of globular proteins are densely packed, <span class="hlt">resulting</span> in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. <span class="hlt">Accurate</span> analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly <span class="hlt">accurate</span> technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. 
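The core step described in this abstract, principal components analysis of an estimated correlation matrix, can be illustrated with a minimal numpy sketch. The 3×3 correlation matrix below is invented for illustration and is unrelated to the study's maximum likelihood estimator.

```python
import numpy as np

# Toy correlation matrix for three structural positions (illustrative values)
C = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.2],
              [0.3, 0.2, 1.0]])

# Eigendecomposition of the (symmetric) correlation matrix = PCA
eigvals, eigvecs = np.linalg.eigh(C)          # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # reorder to descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

dominant_mode = eigvecs[:, 0]                 # leading mode of correlated motion
explained = eigvals[0] / eigvals.sum()        # fraction of correlation captured
```

The leading eigenvector is what a "PCA plot" would color-code onto the structure; positions with large same-sign loadings move together in the dominant mode.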
We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the <span class="hlt">results</span>, will facilitate the <span class="hlt">accurate</span> determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1344948-dft-based-method-more-accurate-adsorption-energies-adaptive-sum-energies-from-rpbe-vdw-density-functionals','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1344948-dft-based-method-more-accurate-adsorption-energies-adaptive-sum-energies-from-rpbe-vdw-density-functionals"><span>DFT-based method for more <span class="hlt">accurate</span> adsorption energies: An adaptive sum of energies from RPBE and vdW density functionals</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hensley, Alyssa J. R.; Ghale, Kushal; Rieg, Carolin</p> <p></p> <p>In recent years, the popularity of density functional theory with periodic boundary conditions (DFT) has surged for the design and optimization of functional materials. However, no single DFT exchange–correlation functional currently available gives <span class="hlt">accurate</span> adsorption energies on transition metals both when bonding to the surface is dominated by strong covalent or ionic bonding and when it has strong contributions from van der Waals interactions (i.e., dispersion forces). 
Here we present a new, simple method for <span class="hlt">accurately</span> <span class="hlt">predicting</span> adsorption energies on transition-metal surfaces based on DFT calculations, using an adaptively weighted sum of energies from RPBE and optB86b-vdW (or optB88-vdW) density functionals. This method has been benchmarked against a set of 39 reliable experimental energies for adsorption reactions. Our <span class="hlt">results</span> show that this method has a mean absolute error and root mean squared error relative to experiments of 13.4 and 19.3 kJ/mol, respectively, compared to 20.4 and 26.4 kJ/mol for the BEEF-vdW functional. For systems with large van der Waals contributions, this method decreases these errors to 11.6 and 17.5 kJ/mol. Furthermore, this method provides <span class="hlt">predictions</span> of adsorption energies both for processes dominated by strong covalent or ionic bonding and for those dominated by dispersion forces that are more <span class="hlt">accurate</span> than those of any current standard DFT functional alone.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1344948-dft-based-method-more-accurate-adsorption-energies-adaptive-sum-energies-from-rpbe-vdw-density-functionals','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1344948-dft-based-method-more-accurate-adsorption-energies-adaptive-sum-energies-from-rpbe-vdw-density-functionals"><span>DFT-based method for more <span class="hlt">accurate</span> adsorption energies: An adaptive sum of energies from RPBE and vdW density functionals</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Hensley, Alyssa J. R.; Ghale, Kushal; Rieg, Carolin; ...</p> <p>2017-01-26</p> <p>In recent years, the popularity of density functional theory with periodic boundary conditions (DFT) has surged for the design and optimization of functional materials. 
However, no single DFT exchange–correlation functional currently available gives <span class="hlt">accurate</span> adsorption energies on transition metals both when bonding to the surface is dominated by strong covalent or ionic bonding and when it has strong contributions from van der Waals interactions (i.e., dispersion forces). Here we present a new, simple method for <span class="hlt">accurately</span> <span class="hlt">predicting</span> adsorption energies on transition-metal surfaces based on DFT calculations, using an adaptively weighted sum of energies from RPBE and optB86b-vdW (or optB88-vdW) density functionals. This method has been benchmarked against a set of 39 reliable experimental energies for adsorption reactions. Our <span class="hlt">results</span> show that this method has a mean absolute error and root mean squared error relative to experiments of 13.4 and 19.3 kJ/mol, respectively, compared to 20.4 and 26.4 kJ/mol for the BEEF-vdW functional. For systems with large van der Waals contributions, this method decreases these errors to 11.6 and 17.5 kJ/mol. 
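For a single adsorption system, the adaptively weighted sum described above reduces to a convex combination of the two functionals' energies. A minimal sketch with hypothetical energies and weight; the abstract does not specify how the adaptive weight is chosen per system, so it is left as an input here.

```python
def adaptive_adsorption_energy(e_rpbe, e_vdw, w):
    """Weighted sum of RPBE and vdW-functional adsorption energies (kJ/mol).

    w is the adaptive weight in [0, 1]; the published method's rule for
    choosing w per system is not given in this abstract, so w is an
    illustrative input rather than a computed quantity.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return w * e_rpbe + (1.0 - w) * e_vdw

# Hypothetical adsorption energies for one adsorbate/surface pair
combined = adaptive_adsorption_energy(-95.0, -120.0, 0.4)
print(combined)
```

Intuitively, w near 1 recovers RPBE (covalent/ionic-dominated systems), while w near 0 recovers the vdW functional (dispersion-dominated systems).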
Furthermore, this method provides <span class="hlt">predictions</span> of adsorption energies both for processes dominated by strong covalent or ionic bonding and for those dominated by dispersion forces that are more <span class="hlt">accurate</span> than those of any current standard DFT functional alone.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4646514','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4646514"><span><span class="hlt">Accurate</span> determination of segmented X-ray detector geometry</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton</p> <p>2015-01-01</p> <p>Recent advances in X-ray detector technology have <span class="hlt">resulted</span> in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. <span class="hlt">Accurate</span> determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. 
We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations <span class="hlt">predicted</span> from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the <span class="hlt">results</span> of experiments. PMID:26561117</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1234724-accurate-determination-segmented-ray-detector-geometry','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1234724-accurate-determination-segmented-ray-detector-geometry"><span><span class="hlt">Accurate</span> determination of segmented X-ray detector geometry</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; ...</p> <p>2015-10-22</p> <p>Recent advances in X-ray detector technology have <span class="hlt">resulted</span> in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. <span class="hlt">Accurate</span> determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. 
For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations <span class="hlt">predicted</span> from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. Furthermore, we show that the refined detector geometry greatly improves the <span class="hlt">results</span> of experiments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29795026','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29795026"><span><span class="hlt">Accurate</span> Traffic Flow <span class="hlt">Prediction</span> in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>El-Sayed, Hesham; Sankar, Sharmi; Daraghmi, Yousef-Awwad; Tiwari, Prayag; Rattagan, Ekarat; Mohanty, Manoranjan; Puthal, Deepak; Prasad, Mukesh</p> <p>2018-05-24</p> <p>Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. 
Several methodologies and frameworks have been devised by researchers to address QoS-<span class="hlt">prediction</span> service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better <span class="hlt">results</span> than SVMs, and outperforms other <span class="hlt">prediction</span> methods used in a traffic context, as it has lower computational complexity and higher <span class="hlt">prediction</span> accuracy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A33L..01D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A33L..01D"><span>Postprocessing for Air Quality <span class="hlt">Predictions</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Delle Monache, L.</p> <p>2017-12-01</p> <p>In recent years, air quality (AQ) forecasting has made significant progress towards better <span class="hlt">predictions</span> with the goal of protecting the public from harmful pollutants. This progress is the <span class="hlt">result</span> of improvements in weather and chemical transport models, their coupling, and more <span class="hlt">accurate</span> emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ <span class="hlt">predictions</span> are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the <span class="hlt">result</span> of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. 
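At the core of the SVM-with-RBF-kernel model mentioned in the traffic-prediction abstract above is a Gaussian similarity between feature vectors. A minimal sketch of that kernel; the traffic feature vectors and the gamma value are invented for illustration, not taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian RBF kernel: exp(-gamma * squared Euclidean distance)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

# Two illustrative traffic-feature vectors (e.g., speed, density, flow rate)
x1 = [60.0, 0.3, 1800.0]
x2 = [55.0, 0.4, 1750.0]
similarity = rbf_kernel(x1, x2, gamma=1e-4)
```

An SVM classifies a new pattern from a weighted sum of such kernel evaluations against its support vectors; identical inputs give similarity 1, and similarity decays toward 0 as patterns diverge.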
Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainties in AQ <span class="hlt">predictions</span>. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ <span class="hlt">predictions</span>, and improve the correlation between <span class="hlt">predictions</span> and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic <span class="hlt">predictions</span>, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and strengths and weaknesses of each will be discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2769639','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2769639"><span><span class="hlt">Predictive</span> Monitoring for Improved Management of Glucose Levels</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Reifman, Jaques; Rajaraman, Srinivasan; Gribok, Andrei; Ward, W. 
Kenneth</p> <p>2007-01-01</p> <p>Background Recent developments and expected near-future improvements in continuous glucose monitoring (CGM) devices provide opportunities to couple them with mathematical forecasting models to produce <span class="hlt">predictive</span> monitoring systems for early, proactive glycemia management of diabetes mellitus patients before glucose levels drift to undesirable levels. This article assesses the feasibility of data-driven models to serve as the forecasting engine of <span class="hlt">predictive</span> monitoring systems. Methods We investigated the capabilities of data-driven autoregressive (AR) models to (1) capture the correlations in glucose time-series data, (2) make <span class="hlt">accurate</span> <span class="hlt">predictions</span> as a function of <span class="hlt">prediction</span> horizon, and (3) be made portable from individual to individual without any need for model tuning. The investigation is performed by employing CGM data from nine type 1 diabetic subjects collected over a continuous 5-day period. <span class="hlt">Results</span> With CGM data serving as the gold standard, AR model-based <span class="hlt">predictions</span> of glucose levels assessed over nine subjects with Clarke error grid analysis indicated that, for a 30-minute <span class="hlt">prediction</span> horizon, individually tuned models yield 97.6 to 100.0% of data in the clinically acceptable zones A and B, whereas cross-subject, portable models yield 95.8 to 99.7% of data in zones A and B. Conclusions This study shows that, for a 30-minute <span class="hlt">prediction</span> horizon, data-driven AR models provide sufficiently-<span class="hlt">accurate</span> and clinically-acceptable estimates of glucose levels for timely, proactive therapy and should be considered as the modeling engine for <span class="hlt">predictive</span> monitoring of patients with type 1 diabetes mellitus. 
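The autoregressive modeling described in this glucose-monitoring abstract can be sketched in a few lines: fit lag coefficients by ordinary least squares, then extrapolate one step ahead. The series below is synthetic and noise-free so the fit can be checked exactly; real CGM data are noisy, and the authors' model order and tuning details differ.

```python
import numpy as np

# Noise-free synthetic series driven by a known AR(2) rule (illustrative only)
a1, a2 = 0.6, 0.3
y = [100.0, 104.0]
for _ in range(12):
    y.append(a1 * y[-1] + a2 * y[-2])
y = np.array(y)

# Fit AR(2) coefficients by ordinary least squares on lagged values
X = np.column_stack([y[1:-1], y[:-2]])   # columns: lag-1 and lag-2 values
target = y[2:]
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)

# One-step-ahead prediction from the fitted model
next_value = coeffs[0] * y[-1] + coeffs[1] * y[-2]
```

Because the generating rule is deterministic here, least squares recovers the lag coefficients exactly; with real data, the fitted coefficients absorb noise and the prediction horizon governs how quickly accuracy degrades.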
It also suggests that AR models can be made portable from individual to individual with minor performance penalties, while greatly reducing the burden associated with model tuning and data collection for model development. PMID:19885110</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23144872','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23144872"><span><span class="hlt">Predicting</span> turns in proteins with a unified model.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan</p> <p>2012-01-01</p> <p>Turns are a critical element of protein structure, playing a crucial role in loops, folds, and interactions. Current <span class="hlt">prediction</span> methods are well developed for the <span class="hlt">prediction</span> of individual turn types, such as α-turns, β-turns, and γ-turns. However, for further protein structure and function <span class="hlt">prediction</span> it is necessary to develop a uniform model that can <span class="hlt">accurately</span> <span class="hlt">predict</span> all types of turns simultaneously. In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of <span class="hlt">accurate</span> <span class="hlt">prediction</span> of all turns simultaneously for a query. 
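Evaluating such a unified predictor means scoring every residue across all turn classes at once. The abstract does not define its sensitivity measure, so the sketch below makes an assumption: overall accuracy over all residues, plus per-type recall macro-averaged across the turn classes; the label names and example arrays are hypothetical.

```python
TURN_TYPES = ("alpha-turn", "beta-turn", "gamma-turn")

def accuracy_and_sensitivity(y_true, y_pred, turn_types=TURN_TYPES):
    """Overall accuracy, plus sensitivity as macro-averaged recall over
    the turn types (non-turn residues count toward accuracy only)."""
    assert len(y_true) == len(y_pred)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    recalls = []
    for turn in turn_types:
        support = sum(1 for t in y_true if t == turn)
        if support:
            tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == turn)
            recalls.append(tp / support)
    sensitivity = sum(recalls) / len(recalls)
    return accuracy, sensitivity

# Toy per-residue labels for one short query (hypothetical).
y_true = ["beta-turn", "beta-turn", "non-turn", "gamma-turn", "non-turn", "alpha-turn"]
y_pred = ["beta-turn", "non-turn", "non-turn", "gamma-turn", "non-turn", "alpha-turn"]
acc, sens = accuracy_and_sensitivity(y_true, y_pred)
```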
TurnP utilizes <span class="hlt">predicted</span> secondary structures and <span class="hlt">predicted</span> shape strings, both obtained with high accuracy by methods previously developed by our group. Sequence and structural evolution features, namely profiles of the sequence, secondary structure, and shape strings, are then generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, it achieved an accuracy of 88.8% and a sensitivity of 71.8%, exceeding the best state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests; the <span class="hlt">results</span> were outstanding for turn <span class="hlt">predictions</span> and confirmed the good performance of TurnP in practical applications.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090034484','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090034484"><span>Atomic Oxygen Erosion Yield <span class="hlt">Prediction</span> for Spacecraft Polymers in Low Earth Orbit</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Banks, Bruce A.; Backus, Jane A.; Manno, Michael V.; Waters, Deborah L.; Cameron, Kevin C.; deGroh, Kim K.</p> <p>2009-01-01</p> <p>The ability to <span class="hlt">predict</span> the atomic oxygen erosion yield of polymers based on their chemistry and physical properties has been only partially successful because of a lack of reliable low Earth orbit (LEO) erosion yield data. 
Unfortunately, many of the early experiments did not utilize dehydrated mass loss measurements for erosion yield determination, and the <span class="hlt">resulting</span> mass loss due to atomic oxygen exposure may have been compromised because samples were often not in consistent states of dehydration during the pre-flight and post-flight mass measurements. This is a particular problem for short duration mission exposures or low erosion yield materials. However, as a <span class="hlt">result</span> of the retrieval of the Polymer Erosion and Contamination Experiment (PEACE) flown as part of the Materials International Space Station Experiment 2 (MISSE 2), the erosion yields of 38 polymers and pyrolytic graphite were <span class="hlt">accurately</span> measured. The experiment was exposed to the LEO environment for 3.95 years, from August 16, 2001 to July 30, 2005, and was successfully retrieved during a spacewalk on July 30, 2005, during Discovery's STS-114 Return to Flight mission. The 40 different materials tested (including Kapton H fluence witness samples) were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The MISSE 2 PEACE Polymers experiment used carefully dehydrated mass measurements, as well as <span class="hlt">accurate</span> density measurements, to obtain <span class="hlt">accurate</span> erosion yield data for a high-fluence exposure (8.43 × 10<sup>21</sup> atoms/cm<sup>2</sup>). The <span class="hlt">resulting</span> data were used to develop an erosion yield <span class="hlt">predictive</span> tool with a correlation coefficient of 0.895 and an uncertainty of ±6.3 × 10<sup>-25</sup> cm<sup>3</sup>/atom. The <span class="hlt">predictive</span> tool utilizes the chemical structures and physical properties of polymers to <span class="hlt">predict</span> in-space atomic oxygen erosion yields. 
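The erosion yield behind these measurements is conventionally computed from dehydrated mass loss as Ey = ΔM / (A · ρ · F). A minimal sketch follows; the fluence is the MISSE 2 value quoted above, while the sample area, mass loss, and the Kapton-like density (~1.43 g/cm³) are hypothetical illustration values.

```python
def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Erosion yield Ey in cm^3/atom from dehydrated mass loss:
    Ey = dM / (A * rho * F)."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

# Fluence from the abstract (8.43e21 atoms/cm^2); the other inputs are
# hypothetical round numbers for a Kapton-like witness sample.
ey = erosion_yield(mass_loss_g=0.126,
                   area_cm2=3.5,
                   density_g_cm3=1.43,
                   fluence_atoms_cm2=8.43e21)
# ey comes out on the order of 3e-24 cm^3/atom for these inputs
```

Dividing by density converts mass loss to recession volume, which is why accurate density measurements matter as much as the dehydrated mass measurements the abstract emphasizes.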
A <span class="hlt">predictive</span> tool concept (September 2009 version) is presented which represents an improvement over an earlier (December 2008) version.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT........46M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT........46M"><span><span class="hlt">Predictive</span> Temperature Equations for Three Sites at the Grand Canyon</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McLaughlin, Katrina Marie Neitzel</p> <p></p> <p>Climate data collected at a number of automated weather stations were used to create a series of <span class="hlt">predictive</span> equations spanning December 2009 to May 2010 in order to better <span class="hlt">predict</span> the temperatures along hiking trails within the Grand Canyon. The central focus of this project is how atmospheric variables interact and can be combined to <span class="hlt">predict</span> the weather in the Grand Canyon at the Indian Gardens, Phantom Ranch, and Bright Angel sites. Through the use of statistical analysis software and data regression, <span class="hlt">predictive</span> equations were determined. The <span class="hlt">predictive</span> equations are simple or multivariable best fits that reflect the curvilinear nature of the data. Using data analysis software, curves <span class="hlt">resulting</span> from the <span class="hlt">predictive</span> equations were plotted along with the observed data. Each equation's reduced χ<sup>2</sup> was determined to aid the visual examination of the <span class="hlt">predictive</span> equations' ability to reproduce the observed data. From this information, an equation or pair of equations was determined to be the best of the <span class="hlt">predictive</span> equations. 
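The reduced chi-squared screening used above to rank candidate equations can be sketched as follows. The diurnal temperature series, the candidate fits (linear vs. cubic), and the measurement uncertainty sigma are all hypothetical stand-ins for the station data.

```python
import numpy as np

def reduced_chi2(observed, predicted, sigma, n_fit_params):
    """Reduced chi-squared: sum(((obs - model)/sigma)^2) / (N - m),
    i.e. chi^2 per degree of freedom."""
    resid = (np.asarray(observed) - np.asarray(predicted)) / sigma
    return float(np.sum(resid ** 2) / (len(observed) - n_fit_params))

# Hypothetical diurnal temperatures (deg C) sampled every 3 hours.
hours = np.arange(0.0, 24.0, 3.0)
observed = 18.0 + 9.0 * np.sin(2.0 * np.pi * (hours - 9.0) / 24.0)

# Two candidate "predictive equations": a straight line and a cubic,
# standing in for the simple/multivariable best fits of the study.
linear = np.polyval(np.polyfit(hours, observed, deg=1), hours)
cubic = np.polyval(np.polyfit(hours, observed, deg=3), hours)

rchi2_linear = reduced_chi2(observed, linear, sigma=0.5, n_fit_params=2)
rchi2_cubic = reduced_chi2(observed, cubic, sigma=0.5, n_fit_params=4)
```

The curvilinear fit wins here because its residuals shrink faster than its extra parameters reduce the degrees of freedom, which is exactly the trade-off reduced χ² is meant to expose.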
Although a best <span class="hlt">predictive</span> equation for each month and season was determined for each site, future work may refine these equations to <span class="hlt">result</span> in a more <span class="hlt">accurate</span> <span class="hlt">predictive</span> equation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005AIPC..778..241Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005AIPC..778..241Z"><span>An Anisotropic Hardening Model for Springback <span class="hlt">Prediction</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeng, Danielle; Xia, Z. Cedric</p> <p>2005-08-01</p> <p>As Advanced High-Strength Steels (AHSS) are increasingly used for automotive body structures and closure panels, <span class="hlt">accurate</span> springback <span class="hlt">prediction</span> for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect under reverse loading, such as when material passes through die radii or drawbeads during the sheet-metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to <span class="hlt">accurately</span> represent the Bauschinger effect. 
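The modified Mroz model itself is not specified in this abstract, but the Bauschinger effect it targets is easy to illustrate with a minimal 1D return-mapping update using linear combined isotropic/kinematic hardening; the elastic modulus, yield stress, and hardening moduli below are hypothetical, loosely DP600-like numbers.

```python
def stress_update(strain_path, E=200e3, sy0=350.0, h_iso=0.0, h_kin=10e3):
    """1D small-strain return mapping with combined isotropic/kinematic
    hardening (linear backstress). Units: MPa. Pure kinematic hardening
    (h_iso = 0) reproduces the Bauschinger effect: early re-yield on
    load reversal."""
    eps_p = 0.0   # plastic strain
    alpha = 0.0   # backstress (kinematic hardening)
    p = 0.0       # accumulated plastic strain (isotropic hardening)
    stresses = []
    for eps in strain_path:
        s_trial = E * (eps - eps_p)          # elastic predictor
        xi = s_trial - alpha                 # relative (shifted) stress
        f = abs(xi) - (sy0 + h_iso * p)      # yield function
        if f > 0.0:                          # plastic corrector
            dgamma = f / (E + h_iso + h_kin)
            sign = 1.0 if xi > 0.0 else -1.0
            eps_p += dgamma * sign
            alpha += h_kin * dgamma * sign
            p += dgamma
        stresses.append(E * (eps - eps_p))
    return stresses

# Tension to 1% strain, then reversal back to zero strain.
path = [i * 0.0005 for i in range(21)] + [0.01 - i * 0.0005 for i in range(1, 21)]
sigma = stress_update(path)
```

On this tension-then-reversal path, the backstress shifts the elastic range, so reverse yielding begins well below the forward flow stress; that early re-yield is the Bauschinger signature the tension/compression tests are used to calibrate.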
The effectiveness of the model is demonstrated by comparison of numerical and experimental springback <span class="hlt">results</span> for a DP600 straight U-channel test.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>