Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges, including feature redundancy, unbalanced data, and small sample sizes, have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving the predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of the prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were the optimal predictive modeling and feature selection methods, respectively, for achieving high prognostic performance. To address unbalanced data, the Synthetic Minority Over-sampling Technique (SMOTE) was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
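The pipeline this abstract describes (PCA to curb feature redundancy, a Random Forest classifier, and SMOTE for the unbalanced endpoint) can be sketched with scikit-learn and imbalanced-learn; the feature matrix, labels, and hyperparameters below are placeholders, not the study's configuration:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # applies SMOTE only inside training folds
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 400))           # 112 patients x 400 radiomic features (placeholder)
y = (rng.random(112) < 0.25).astype(int)  # unbalanced binary endpoint, e.g. recurrence (placeholder)

model = Pipeline([
    ("smote", SMOTE(random_state=0)),     # oversample the minority class
    ("pca", PCA(n_components=10)),        # reduce redundant, correlated features
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])

auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f}")
```

Placing SMOTE before PCA inside the pipeline is one reasonable ordering; the key point is that oversampling must happen within each training fold to avoid optimistic bias.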
Stringent DDI-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.
Zhou, Hufeng; Rezaei, Javad; Hugo, Willy; Gao, Shangzhi; Jin, Jingjing; Fan, Mengyuan; Yong, Chern-Han; Wozniak, Michal; Wong, Limsoon
2013-01-01
H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data provide important information for illuminating the infection mechanism of M. tuberculosis H37Rv. However, current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce. This seriously limits the study of the interaction between this important pathogen and its host H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill this gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches for predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs, based on gold-standard intra-species PPIs and coherent informative Gene Ontology term assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some important properties of domains involved in host-pathogen PPIs. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to have more domains than proteins involved in intra-species PPIs, and these domains have more interaction partners than domains on proteins involved in intra-species PPIs. The stringent DDI-based prediction approach reported in this work provides a stringent strategy for predicting host-pathogen PPIs, and it performs better than a conventional DDI-based approach. We have predicted a small set of accurate H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. PMID:24564941
Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao
2016-01-01
ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data-based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis-based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q(2) of 0.5416, indicating that ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed that some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understanding ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping to fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.
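Nebula's neighbor-edges leverage algorithm is not spelled out in the abstract, but the leave-one-out Q(2) it reports is a standard quantity; a minimal sketch of computing it, with a generic nearest-neighbor regressor standing in for Nebula and made-up data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsRegressor  # stand-in for Nebula, which is not public API

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))                       # chemicals x descriptor features (placeholder)
y = X[:, 0] * 0.8 + rng.normal(scale=0.3, size=60) # one assay's bioactivity values (placeholder)

press, ss = 0.0, np.sum((y - y.mean()) ** 2)
for train, test in LeaveOneOut().split(X):
    model = KNeighborsRegressor(n_neighbors=5).fit(X[train], y[train])
    press += float((y[test][0] - model.predict(X[test])[0]) ** 2)

q2 = 1.0 - press / ss   # Q^2 = 1 - PRESS / total sum of squares
print(f"LOO Q^2 = {q2:.3f}")
```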
Stringent homology-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions
Zhou, Hufeng; Gao, Shangzhi; Nguyen, Nam Ninh; Fan, Mengyuan; Jin, Jingjing; Liu, Bing; Zhao, Liang; Xiong, Geng; Tan, Min; Li, Shijun; Wong, Limsoon
2014-01-01
Background: H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. Results: We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result and clearly show that our approach performs better in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs which might be useful for many related studies. Based on our analysis of the predicted H. sapiens-M. tuberculosis H37Rv PPI network, we have discovered several interesting properties which are reported here for the first time. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to be hubs in their own intra-species PPI networks. Both also tend to have longer primary sequences, more domains, and greater hydrophilicity, and the protein domains from both host and pathogen proteins involved in host-pathogen PPIs tend to have lower charge and to be more hydrophilic. Conclusions: Our stringent homology-based prediction approach provides a better strategy for predicting PPIs between eukaryotic hosts and prokaryotic pathogens than a conventional homology-based approach. The properties we have observed from the predicted H. sapiens-M. tuberculosis H37Rv PPI network are useful for understanding inter-species host-pathogen PPI networks and provide novel insights for host-pathogen interaction studies. Reviewers: This article was reviewed by Michael Gromiha, Narayanaswamy Srinivasan and Thomas Dandekar. PMID:24708540
A micromechanics-based strength prediction methodology for notched metal matrix composites
NASA Technical Reports Server (NTRS)
Bigelow, C. A.
1992-01-01
An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.
Evaluation of predictive capacities of biomarkers based on research synthesis.
Hattori, Satoshi; Zhou, Xiao-Hua
2016-11-10
The objective of diagnostic and prognostic studies is to evaluate and compare the predictive capacities of biomarkers. Suppose we are interested in evaluating and comparing the predictive capacities of continuous biomarkers for a binary outcome based on research synthesis. In the analysis of each study, subjects are often classified into high-expression and low-expression groups according to a cut-off value, and statistical analysis is based on a 2 × 2 table defined by the response and the high or low expression of the biomarker. Because the cut-off is study-specific, it is difficult to interpret a combined summary measure such as an odds ratio based on standard meta-analysis techniques. The summary receiver operating characteristic curve is a useful method for meta-analysis of diagnostic studies in the presence of heterogeneity of cut-off values to examine the discriminative capacities of biomarkers. We develop a method to estimate positive or negative predictive curves, which are an alternative to the receiver operating characteristic curve, based on information reported in the published papers of each study. These predictive curves provide a useful graphical presentation of pairs of positive and negative predictive values and allow us to compare the predictive capacities of biomarkers of different scales in the presence of heterogeneity in cut-off values among studies. Copyright © 2016 John Wiley & Sons, Ltd.
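Each study contributes a 2 × 2 table, from which the positive and negative predictive values that the proposed curves summarize follow by Bayes' rule; a worked sketch with a made-up table:

```python
# Hypothetical 2x2 table from one study: rows = biomarker high/low, columns = response yes/no
tp, fp = 30, 20   # high-expression group
fn, tn = 10, 40   # low-expression group

sens = tp / (tp + fn)                   # sensitivity
spec = tn / (tn + fp)                   # specificity
prev = (tp + fn) / (tp + fp + fn + tn)  # outcome prevalence in this study

# Positive and negative predictive values via Bayes' rule
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

Because each study uses its own cut-off, the (PPV, NPV) pairs trace out a curve across studies rather than a single point, which is what the proposed predictive curves estimate.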
Text Mining Improves Prediction of Protein Functional Sites
Cohn, Judith D.; Ravikumar, Komandur E.
2012-01-01
We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions. PMID:22393388
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
In order to select effective samples from many years of PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes forecasting models based on neural networks. Based on three different types of weather (sunny, cloudy and rainy days), this research screens samples of historical data using the clustering analysis method. After screening, it establishes BP neural network prediction models using the screened data as training data. The six types of photovoltaic power generation prediction models before and after data screening are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective method for improving the precision of photovoltaic power generation forecasting.
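A minimal sketch of the screen-then-train scheme (k-means weather clustering followed by a BP-style feed-forward network per weather type); scikit-learn estimators and the weather feature layout are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))  # daily weather features: irradiance, temperature, humidity, cloud cover (placeholder)
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=1000)  # daily PV output (placeholder)

# Screen historical days into three weather types (sunny / cloudy / rainy)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Train one BP-style network per weather type on its screened samples only
models = {}
for k in range(3):
    mask = labels == k
    models[k] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X[mask], y[mask])
```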
A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
In this paper, we address the problem of technical analysis information fusion for improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The presented predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization, where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks shows that the presented ensemble-based technical indicators fusion system significantly improves forecasting accuracy in comparison with a single NN. It also outperforms the classical neural network trained with index-level lagged values and an NN trained with stationary wavelet transform details and approximation coefficients. As a result, technical information fusion in an NN ensemble architecture helps improve prediction accuracy.
Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng
2014-01-01
Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA) and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. The mlDNA first used a ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate DNA sequences, or users can upload sequences of their own interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available at http://www.cemb.edu.pk/sw.html. Abbreviations: RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language. PMID:23055611
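For reference, a compact sketch of the core Nussinov recursion the tool builds on, scoring Watson-Crick DNA pairs (A-T, G-C) with a minimum loop length; the traceback and RDNAnalyzer's extensions are omitted:

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pairing dynamic program (Nussinov); returns the DP matrix."""
    pairs = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # grow subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j left unpaired
            for k in range(i, j - min_loop):     # case 2: pair k with j
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp

dp = nussinov("ATGCGCAT")
print("maximum number of base pairs:", dp[0][-1])
```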
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation for the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 x 10^6 based on the mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that the predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
Analysis of Biaxial Stress Fields in Plates Cracking at Elevated Temperatures
1989-10-19
... because of limitations in the commercial thermographic equipment used. Based on the enhanced theory, it is predicted that the minimum resolvable stress amplitude using thermographic stress analysis will be approximately independent of temperature, provided relevant thermal and mechanical material ...
A study of microstructural characteristics and differential thermal analysis of Ni-based superalloys
NASA Technical Reports Server (NTRS)
Aggarwal, M. D.; Lal, R. B.; Oyekenu, Samuel A.; Parr, Richard; Gentz, Stephen
1989-01-01
The objective of this work is to correlate the mechanical properties of the Ni-based superalloy MAR-M246(Hf) used in the Space Shuttle Main Engine with its structural characteristics through a systematic study of optical photomicrographs and differential thermal analysis. The authors developed a method of predicting the liquidus and solidus temperatures of various nickel-based superalloys (MAR-M247, Waspaloy, Udimet-41, and polycrystalline and single crystals of CMSX-2 and CMSX-3) and compared the predictions with experimental differential thermal analysis (DTA) curves obtained using a Perkin-Elmer DTA 1700. The method of predicting these temperatures is based on the additive effect of the components dissolved in nickel. The results were compared with the experimental values.
Automated Clinical Assessment from Smart home-based Behavior Data
Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen
2016-01-01
Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behaviour in the home and predicting standard clinical assessment scores of the residents. To accomplish this goal, we propose a Clinical Assessment using Activity Behavior (CAAB) approach to model a smart home resident’s daily behavior and predict the corresponding standard clinical assessment scores. CAAB uses statistical features that describe characteristics of a resident’s daily activity performance to train machine learning algorithms that predict the clinical assessment scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years using prediction and classification-based experiments. In the prediction-based experiments, we obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive assessment scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. Similarly, for the classification-based experiments, we find CAAB has a classification accuracy of 72% while classifying cognitive assessment scores and 76% while classifying mobility scores. These prediction and classification results suggest that it is feasible to predict standard clinical scores using smart home sensor data and learning-based data analysis. PMID:26292348
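A sketch of the CAAB idea (per-resident statistical features of daily activity performance feeding a learned predictor of clinical scores, evaluated by correlation); the features, regressor, and data below are placeholders, not the authors' exact configuration:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
# Per-resident features, e.g. mean/variance of daily sleep, cooking, and leave-home durations
X = rng.normal(size=(18, 8))                        # 18 smart-home residents (placeholder)
y = X[:, 0] * 2.0 + rng.normal(scale=1.0, size=18)  # clinician-provided assessment score (placeholder)

pred = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=6)
r, p = pearsonr(pred, y)  # correlation between predicted and clinician-provided scores
print(f"r = {r:.2f} (p = {p:.3f})")
```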
Space shuttle booster multi-engine base flow analysis
NASA Technical Reports Server (NTRS)
Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.
1972-01-01
A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. A preliminary prediction for a 12-engine space shuttle booster base thermal environment was obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.
Predicting epileptic seizures from scalp EEG based on attractor state analysis.
Chu, Hyunho; Chung, Chun Kee; Jeong, Woorim; Cho, Kwang-Hyun
2017-05-01
Epilepsy is the second most common disease of the brain. Epilepsy makes it difficult for patients to live a normal life because it is difficult to predict when seizures will occur. In this regard, if seizures could be predicted a reasonable period of time before their occurrence, epilepsy patients could take precautions against them and improve their safety and quality of life. In this paper, we investigate a novel seizure precursor based on attractor state analysis for seizure prediction. We analyze the transition process from normal to seizure attractor state and investigate a precursor phenomenon seen before reaching the seizure attractor state. From the result of an analysis, we define a quantified spectral measure in scalp EEG for seizure prediction. From scalp EEG recordings, the Fourier coefficients of six EEG frequency bands are extracted, and the defined spectral measure is computed based on the coefficients for each half-overlapped 20-second-long window. The computed spectral measure is applied to seizure prediction using a low-complexity methodology. Within scalp EEG, we identified an early-warning indicator before an epileptic seizure occurs. Getting closer to the bifurcation point that triggers the transition from normal to seizure state, the power spectral density of low frequency bands of the perturbation of an attractor in the EEG showed a relative increase. A low-complexity seizure prediction algorithm using this feature was evaluated, using approximately 583 h of scalp EEG in which 143 seizures in 16 patients were recorded. With the test dataset, the proposed method showed high sensitivity (86.67%) with a false prediction rate of 0.367 h^-1 and average prediction time of 45.3 min. A novel seizure prediction method using scalp EEG, based on attractor state analysis, shows potential for application with real epilepsy patients. This is the first study in which the seizure-precursor phenomenon of an epileptic seizure is investigated based on attractor-based analysis of the macroscopic dynamics of the brain. With the scalp EEG, we first propose use of a spectral feature identified for seizure prediction, in which the dynamics of an attractor are excluded, and only the perturbation dynamics from the attractor are considered. Copyright © 2017 Elsevier B.V. All rights reserved.
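The spectral measure is computed from Fourier coefficients of six EEG bands over half-overlapped 20-second windows; a generic band-power sketch of that shape (band edges, sampling rate, and the low-frequency ratio are illustrative assumptions, not the paper's exact definition):

```python
import numpy as np
from scipy.signal import welch

fs = 256                                              # sampling rate in Hz (assumed)
eeg = np.random.default_rng(4).normal(size=fs * 60)   # 1 minute of scalp EEG (placeholder)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma1": (30, 48), "gamma2": (52, 98)}

win, step = 20 * fs, 10 * fs                          # 20 s windows, half-overlapped
for start in range(0, len(eeg) - win + 1, step):
    f, psd = welch(eeg[start:start + win], fs=fs, nperseg=4 * fs)
    power = {name: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
             for name, (lo, hi) in bands.items()}
    low_ratio = (power["delta"] + power["theta"]) / sum(power.values())
    # A rising low-frequency share is the kind of early-warning indicator described above
    print(f"t = {start / fs:4.0f} s: low-frequency power share = {low_ratio:.2f}")
```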
Sea surface temperature predictions using a multi-ocean analysis ensemble scheme
NASA Astrophysics Data System (ADS)
Zhang, Ying; Zhu, Jieshun; Li, Zhongxian; Chen, Haishan; Zeng, Gang
2017-08-01
This study examined the global sea surface temperature (SST) predictions by a so-called multiple-ocean analysis ensemble (MAE) initialization method which was applied in the National Centers for Environmental Prediction (NCEP) Climate Forecast System Version 2 (CFSv2). Different from most operational climate prediction practices which are initialized by a specific ocean analysis system, the MAE method is based on multiple ocean analyses. In the paper, the MAE method was first justified by analyzing the ocean temperature variability in four ocean analyses which all are/were applied for operational climate predictions either at the European Centre for Medium-range Weather Forecasts or at NCEP. It was found that these systems exhibit substantial uncertainties in estimating the ocean states, especially at the deep layers. Further, a set of MAE hindcasts was conducted based on the four ocean analyses with CFSv2, starting from each April during 1982-2007. The MAE hindcasts were verified against a subset of hindcasts from the NCEP CFS Reanalysis and Reforecast (CFSRR) Project. Comparisons suggested that MAE shows better SST predictions than CFSRR over most regions where ocean dynamics plays a vital role in SST evolutions, such as the El Niño and Atlantic Niño regions. Furthermore, significant improvements were also found in summer precipitation predictions over the equatorial eastern Pacific and Atlantic oceans, for which the local SST prediction improvements should be responsible. The prediction improvements by MAE imply a problem for most current climate predictions which are based on a specific ocean analysis system. That is, their predictions would drift towards states biased by errors inherent in their ocean initialization system, and thus have large prediction errors. In contrast, MAE arguably has an advantage by sampling such structural uncertainties, and could efficiently cancel these errors out in their predictions.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
2013-01-01
Background: The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods: We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. The performance of these prediction models was evaluated in the validation set. Results: Univariate analysis indicated that 14 risk factors showed statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330 and 0.924, respectively. All Hosmer-Lemeshow (HL) statistics were less than 15.0. Conclusion: ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963
Prediction of Recidivism in Juvenile Offenders Based on Discriminant Analysis.
ERIC Educational Resources Information Center
Proefrock, David W.
The recent development of strong statistical techniques has made accurate predictions of recidivism possible. To investigate the utility of discriminant analysis methodology in making predictions of recidivism in juvenile offenders, the court records of 271 male and female juvenile offenders, aged 12-16, were reviewed. A cross validation group…
Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.
2012-01-01
The aim of this work is to propose an alternative way of wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
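The ICA + PLS + ANN chain maps directly onto standard estimators; a minimal sketch with made-up e-nose responses (component counts and layer sizes are assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(26, 16))         # 26 wines x 16 e-nose sensor responses (placeholder)
y_desc = rng.normal(size=(26, 3))     # sensory-panel descriptors (placeholder)
y_cls = rng.integers(0, 3, size=26)   # wine region/variety labels (placeholder)

Z = FastICA(n_components=5, random_state=0).fit_transform(X)  # ICA dimensionality reduction
pls = PLSRegression(n_components=3).fit(Z, y_desc)            # PLS predicts sensory descriptors
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(Z, y_cls)             # ANN classifies the wines
```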
ERIC Educational Resources Information Center
Ousley, Chris
2010-01-01
This study sought to provide empirical evidence regarding the use of spatial analysis in enrollment management to predict persistence and graduation. The research utilized data from the 2000 U.S. Census and applicant records from The University of Arizona to study the spatial distributions of enrollments. Based on the initial results, stepwise…
Cloud Based Metalearning System for Predictive Modeling of Biomedical Data
Vukićević, Milan
2014-01-01
Rapid growth and storage of biomedical data have enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework, for ranking and selection of the best predictive algorithms for the data at hand, with open source big data technologies for the analysis of biomedical data. PMID:24892101
Statistical prediction of space motion sickness
NASA Technical Reports Server (NTRS)
Reschke, Millard F.
1990-01-01
Studies designed to empirically examine the etiology of motion sickness to develop a foundation for enhancing its prediction are discussed. Topics addressed include early attempts to predict space motion sickness; a multiple-test database that uses provocative and vestibular function tests, and the database subjects; the reliability of provocative tests of motion sickness susceptibility; prediction of space motion sickness using linear discriminant analysis; and prediction of space motion sickness susceptibility using the logistic model.
Automated Cognitive Health Assessment From Smart Home-Based Behavior Data.
Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen
2016-07-01
Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.
Kwon, Andrew T.; Chou, Alice Yi; Arenillas, David J.; Wasserman, Wyeth W.
2011-01-01
We performed a genome-wide scan for muscle-specific cis-regulatory modules (CRMs) using three computational prediction programs. Based on the predictions, 339 candidate CRMs were tested in cell culture with NIH3T3 fibroblasts and C2C12 myoblasts for the capacity to direct selective reporter gene expression to differentiated C2C12 myotubes. A subset of 19 CRMs was validated as functional in the assay. The rate of predictive success reveals striking limitations of computational regulatory sequence analysis methods for CRM discovery. Motif-based methods performed no better than predictions based only on sequence conservation. Analysis of the properties of the functional sequences relative to inactive sequences indicates that nucleotide sequence composition can be an important characteristic to incorporate in future methods for improved predictive specificity. Muscle-related TFBSs predicted within the functional sequences display greater sequence conservation than non-TFBS flanking regions. Comparison with recent MyoD and histone modification ChIP-Seq data supports the validity of the functional regions. PMID:22144875
Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo
2013-01-01
Background: This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models built with the two approaches. Methods and Materials: We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. The performances of these prediction models were then compared. Results: Univariate analysis indicated that 14 risk factors showed statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion: Prediction models for CA dysfunction were developed using ANN and LR. Both are effective tools for developing prediction models based on our dataset. PMID:23940593
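Comparing ANN and LR models by area under the ROC curve, as described, takes a few lines with scikit-learn; the data, architecture, and split below are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(2092, 14))                         # 14 significant risk factors (placeholder)
y = (X[:, 0] + rng.normal(size=2092) > 1).astype(int)   # CA dysfunction label (placeholder)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(10,),
                                          max_iter=2000, random_state=0))]:
    auc = roc_auc_score(yte, model.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```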
Detector location selection based on VIP analysis in near-infrared detection of dural hematoma.
Sun, Qiuming; Zhang, Yanjun; Ma, Jun; Tian, Feng; Wang, Huiquan; Liu, Dongyuan
2018-03-01
Detection of dural hematoma based on multi-channel near-infrared differential absorbance has the advantages of rapid and non-invasive detection. The locations and number of detectors around the light source are critical for the accuracy with which the prediction model captures the pathological characteristics of dural hematoma degree. Therefore, rational selection of detector numbers and their distances from the light source is very important. In this paper, a detector position screening method based on Variable Importance in the Projection (VIP) analysis is proposed. A preliminary model based on the partial least squares (PLS) method for predicting the dural-position absorption coefficient μa was established using light absorbance information from 30 detectors located 2.0-5.0 cm from the light source at 0.1 cm intervals. The mean relative error (MRE) of the dural-position μa prediction model was 4.08%. After VIP analysis, the number of detectors was reduced from 30 to 4, and the MRE of the dural-position μa prediction was reduced from 4.08% to 2.06%. The prediction model after VIP detector screening still showed good prediction of the dural-position μa. This study provides a new approach and an important reference for the selection of detector locations in near-infrared dural hematoma detection.
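VIP scores from a fitted PLS model rank the detectors; a sketch of the standard VIP computation (data are placeholders, and the scikit-learn attribute names are assumed):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 30))  # absorbance at 30 source-detector separations (placeholder)
y = X[:, 5] - 0.5 * X[:, 12] + rng.normal(scale=0.1, size=200)  # mu_a at the dural position (placeholder)

pls = PLSRegression(n_components=4).fit(X, y)
W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_

# VIP_j = sqrt( p * sum_a s_a * (w_ja / ||w_a||)^2 / sum_a s_a ), with s_a = q_a^2 * t_a' t_a
s = (Q.ravel() ** 2) * np.sum(T ** 2, axis=0)   # explained variance per PLS component
wnorm = (W / np.linalg.norm(W, axis=0)) ** 2    # squared normalized weights
vip = np.sqrt(X.shape[1] * (wnorm @ s) / s.sum())

keep = np.argsort(vip)[::-1][:4]                # retain the four most informative detectors
print("selected detector indices:", sorted(keep.tolist()))
```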
Henne, Melinda B; Stegmann, Barbara J; Neithardt, Adrienne B; Catherino, William H; Armstrong, Alicia Y; Kao, Tzu-Cheg; Segars, James H
2008-01-01
To predict the cost of a delivery following assisted reproductive technologies (ART). Cost analysis based on retrospective chart analysis. University-based ART program. Women aged ≥26 and ...
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.
On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ibraheem, S. O.; Demuren, A. O.
1994-01-01
A procedure is presented for utilizing bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving the Euler and Navier-Stokes equations. Model problems based on the convection, diffusion, and Burgers equations are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.
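For contrast with the bi-grid analysis, the conventional von Neumann smoothing factor mentioned above is easy to compute; a sketch for damped Jacobi on the 1-D Poisson model problem (an illustration of the single-grid measure, not one of the paper's schemes):

```python
import numpy as np

# Von Neumann smoothing analysis for damped Jacobi on 1-D Poisson:
# amplification factor g(theta) = 1 - omega * (1 - cos(theta)).
# The smoothing factor is max |g| over the high-frequency (oscillatory) modes.
theta = np.linspace(np.pi / 2, np.pi, 2001)

for omega in (1.0, 2.0 / 3.0, 0.5):
    g = np.abs(1.0 - omega * (1.0 - np.cos(theta)))
    print(f"omega = {omega:.3f}: smoothing factor = {g.max():.3f}")
# omega = 2/3 gives the classical optimum of 1/3 for this model problem
```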
Enhancing emotional-based target prediction
NASA Astrophysics Data System (ADS)
Gosnell, Michael; Woodley, Robert
2008-04-01
This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states, or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction framework also allows steady state transitions through catastrophic change and individual steady states could be used in an offline analysis with additional modeling efforts to better predict anticipated target reactions.
MetaDP: a comprehensive web server for disease prediction of 16S rRNA metagenomic datasets.
Xu, Xilin; Wu, Aiping; Zhang, Xinlei; Su, Mingming; Jiang, Taijiao; Yuan, Zhe-Ming
2016-01-01
High-throughput sequencing-based metagenomics has garnered considerable interest in recent years. Numerous methods and tools have been developed for the analysis of metagenomic data. However, it is still a daunting task to install a large number of tools and complete a complicated analysis, especially for researchers with minimal bioinformatics backgrounds. To address this problem, we constructed an automated software package named MetaDP for 16S rRNA sequencing data analysis, including data quality control, operational taxonomic unit clustering, diversity analysis, and disease risk prediction modeling. Furthermore, a support vector machine-based prediction model for irritable bowel syndrome (IBS) was built by applying MetaDP to microbial 16S sequencing data from 108 children. The success of the IBS prediction model suggests that the platform may also be applied to other diseases related to gut microbes, such as obesity, metabolic syndrome, or intestinal cancer, among others (http://metadp.cn:7001/).
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions based on shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
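One plausible reading of the reconstruction-error depth rule, sketched with stacked RBMs; scikit-learn's BernoulliRBM, the stopping tolerance, layer sizes, and the one-step Gibbs reconstruction error are all assumptions, not the paper's procedure:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(8)
X = (rng.random((300, 13)) > 0.5).astype(float)  # binarized clinical features (placeholder)

# Grow the DBN layer by layer; stop when the reconstruction-error gain levels off
layers, data, prev_err = [], X, np.inf
for depth in range(1, 6):
    rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30,
                       random_state=0).fit(data)
    err = np.mean((data - rbm.gibbs(data)) ** 2)  # one-step Gibbs reconstruction error
    if depth > 1 and prev_err - err < 1e-3:       # depth chosen from reconstruction error
        break
    layers.append(rbm)
    data, prev_err = rbm.transform(data), err

print(f"selected depth: {len(layers)} RBM layer(s)")
# Supervised fine-tuning (e.g. a logistic output layer) would follow, as in the paper
```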
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
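The paper's premise, that the surrogate only needs to predict the sign of the performance function correctly, is the idea behind U-function-style active learning for kriging; a sketch in that spirit with a made-up performance function (the U criterion and its threshold follow the common AK-MCS recipe, not necessarily the authors' exact method):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):  # hypothetical performance function: g < 0 means failure
    return x[:, 0] ** 2 + x[:, 1] - 3.0

rng = np.random.default_rng(9)
pool = rng.normal(size=(5000, 2))                      # candidate points (e.g. interval MC samples)
idx = list(rng.choice(len(pool), 12, replace=False))   # initial design of experiments

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(pool[idx], g(pool[idx]))
    mu, sigma = gp.predict(pool, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)  # small U = sign of g is uncertain there
    best = int(np.argmin(U))
    if U[best] > 2.0:                          # sign predicted reliably everywhere; stop
        break
    idx.append(best)                           # evaluate g where the sign is least certain

print(f"surrogate built with {len(idx)} evaluations; "
      f"predicted failure fraction = {(mu < 0).mean():.4f}")
```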
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a k-epsilon based CFD code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level (NO(x))e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
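A sketch of the per-node rate evaluation and residence-time integration described above; the Arrhenius constants are typical literature values for O + N2 -> NO + N, not taken from the paper, and the node values in the example are invented.

```python
import numpy as np

def k1(T):
    """Illustrative rate constant for O + N2 -> NO + N (cm^3 mol^-1 s^-1);
    pre-exponential and activation temperature are common literature values."""
    return 1.8e14 * np.exp(-38370.0 / T)

def thermal_no_rate(T, O, N2):
    """Two-step Zeldovich NO formation rate (mol cm^-3 s^-1); the second
    step N + O2 -> NO + O doubles the rate of the limiting first step."""
    return 2.0 * k1(T) * O * N2

def nodal_no(T, O, N2, tau, NO_eq):
    """Per-node NO: rate integrated over residence time tau, capped at the
    equilibrium level (NO)e, mirroring the decoupled post-processing."""
    return min(thermal_no_rate(T, O, N2) * tau, NO_eq)

# e.g. one node: T = 2200 K, trace O, air-level N2, 5 ms residence time
print(nodal_no(2200.0, 1e-9, 8e-6, 5e-3, 2e-7))
```

The total NO(x) level is then the sum of `nodal_no` over all nodes of the converged CFD field, as the abstract describes.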
Consideration of Moving Tooth Load in Gear Crack Propagation Predictions
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Handschuh, Robert F.; Spievak, Lisa E.; Wawrzynek, Paul A.; Ingraffea, Anthony R.
2001-01-01
Robust gear designs consider not only crack initiation, but crack propagation trajectories for a fail-safe design. In actual gear operation, the magnitude as well as the position of the force changes as the gear rotates through the mesh. A study to determine the effect of moving gear tooth load on crack propagation predictions was performed. Two-dimensional analysis of an involute spur gear and three-dimensional analysis of a spiral-bevel pinion gear using the finite element method and boundary element method were studied and compared to experiments. A modified theory for predicting gear crack propagation paths based on the criteria of Erdogan and Sih was investigated. Crack simulation based on calculated stress intensity factors and mixed mode crack angle prediction techniques using a simple static analysis in which the tooth load was located at the highest point of single tooth contact was validated. For three-dimensional analysis, however, the analysis was valid only as long as the crack did not approach the contact region on the tooth.
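The Erdogan and Sih (maximum tangential stress) criterion referenced above predicts the crack kink angle from the mode I and mode II stress intensity factors; a sketch that solves the criterion equation numerically, with an invented mixed-mode example.

```python
import numpy as np
from scipy.optimize import brentq

def mts_crack_angle(KI, KII):
    """Kink angle (radians) from the maximum tangential stress
    (Erdogan-Sih) criterion: KI sin(t) + KII (3 cos(t) - 1) = 0.
    Assumes KI >= 0 (physically required for an open crack)."""
    if KII == 0.0:
        return 0.0                        # pure mode I: straight-ahead growth
    f = lambda t: KI * np.sin(t) + KII * (3.0 * np.cos(t) - 1.0)
    # The kink is opposite in sign to KII and bounded by ~70.5 degrees
    # (the pure mode II limit); bracket slightly beyond that.
    lo, hi = (-1.3, 0.0) if KII > 0 else (0.0, 1.3)
    return brentq(f, lo, hi)

print(np.degrees(mts_crack_angle(1.0, 0.3)))  # mixed-mode example, degrees
```

In a crack-growth simulation of the kind described, KI and KII come from the finite or boundary element solution at each step, and the crack is extended by a small increment along the predicted angle.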
Clinical Trials for Predictive Medicine—New Challenges and Paradigms*
Simon, Richard
2014-01-01
Background Developments in biotechnology and genomics have increased the focus of biostatisticians on prediction problems. This has led to many exciting developments for predictive modeling where the number of variables is larger than the number of cases. Heterogeneity of human diseases and new technology for characterizing them presents new opportunities and challenges for the design and analysis of clinical trials. Purpose In oncology, treatment of broad populations with regimens that do not benefit most patients is less economically sustainable with expensive molecularly targeted therapeutics. The established molecular heterogeneity of human diseases requires the development of new paradigms for the design and analysis of randomized clinical trials as a reliable basis for predictive medicine [1, 2]. Results We have reviewed prospective designs for the development of new therapeutics with candidate predictive biomarkers. We have also outlined a prediction-based approach to the analysis of randomized clinical trials that both preserves the type I error and provides a reliable internally validated basis for predicting which patients are most likely or unlikely to benefit from the new regimen. Conclusions Developing new treatments with predictive biomarkers for identifying the patients who are most likely or least likely to benefit makes drug development more complex. But for many new oncology drugs it is the only science-based approach and should increase the chance of success. It may also lead to more consistency in results among trials and has obvious benefits for reducing the number of patients who ultimately receive expensive drugs that expose them to risks of adverse events with no benefit. This approach also has great potential value for controlling societal expenditures on health care. Development of treatments with predictive biomarkers requires major changes in the standard paradigms for the design and analysis of clinical trials. Some of the key assumptions upon which current methods are based are no longer valid. In addition to reviewing a variety of new clinical trial designs for co-development of treatments and predictive biomarkers, we have outlined a prediction-based approach to the analysis of randomized clinical trials. This is a very structured approach whose use requires careful prospective planning. It requires further development but may serve as a basis for a new generation of predictive clinical trials that provide the kinds of reliable individualized information which physicians and patients have long sought, but which has not been available from the past use of post-hoc subset analysis. PMID:20338899
Study of Earthquake Disaster Prediction System of Langfang city Based on GIS
NASA Astrophysics Data System (ADS)
Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei
2017-07-01
In view of China's need to improve its capacity for earthquake disaster prevention, this paper puts forward an implementation plan for a GIS-based earthquake disaster prediction system for Langfang city. Built on a GIS spatial database and using coordinate transformation technology, GIS spatial analysis technology, and PHP development technology, the system applies a seismic damage factor algorithm to predict the damage to the city under earthquake disasters of different intensities. The system adopts a B/S (browser/server) architecture and provides two-dimensional visualization of damage degree and spatial distribution, comprehensive query and analysis, and efficient decision-support functions, so that seismically weak areas of the city can be identified and rapid warning issued. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management and improved the city's earthquake and disaster prevention capability.
NASA Technical Reports Server (NTRS)
Egolf, T. A.; Landgrebe, A. J.
1982-01-01
A user's manual is provided which includes the technical approach for the Prescribed Wake Rotor Inflow and Flow Field Prediction Analysis. The analysis is used to provide the rotor wake induced velocities at the rotor blades for use in blade airloads and response analyses and to provide induced velocities at arbitrary field points such as at a tail surface. This analysis calculates the distribution of rotor wake induced velocities based on a prescribed wake model. Section operating conditions are prescribed from blade motion and controls determined by a separate blade response analysis. The analysis represents each blade by a segmented lifting line, and the rotor wake by discrete segmented trailing vortex filaments. Blade loading and circulation distributions are calculated based on blade element strip theory including the local induced velocity predicted by the numerical integration of the Biot-Savart Law applied to the vortex wake model.
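The Biot-Savart integration over a discrete trailing vortex segment mentioned above has a standard closed form; a sketch, with a numerical sanity check against the infinite-line result (the geometry and circulation values are illustrative).

```python
import numpy as np

def segment_induced_velocity(P, A, B, gamma):
    """Velocity induced at point P by a straight vortex filament segment
    A->B of circulation gamma (Biot-Savart law for a finite segment)."""
    r1, r2 = P - A, P - B
    r0 = B - A
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < 1e-12:                     # P on the filament axis
        return np.zeros(3)
    coef = gamma / (4.0 * np.pi) / denom
    return coef * cross * np.dot(
        r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))

# Sanity check: many short segments of a long straight line should
# recover the 2D vortex result gamma / (2 pi h) at distance h = 1.
P = np.array([0.0, 1.0, 0.0])
v = sum(segment_induced_velocity(P, np.array([z, 0.0, 0.0]),
                                 np.array([z + 0.1, 0.0, 0.0]), 1.0)
        for z in np.arange(-50.0, 50.0, 0.1))
print(v, 1.0 / (2.0 * np.pi))             # z-component approaches 1/(2 pi)
```

In the prescribed-wake analysis, this kernel is summed over every trailing filament segment to obtain the induced velocity at each blade segment or arbitrary field point.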
Prediction of advertisement preference by fusing EEG response and sentiment analysis.
Gauba, Himaanshu; Kumar, Pradeep; Roy, Partha Pratim; Singh, Priyanka; Dogra, Debi Prosad; Raman, Balasubramanian
2017-08-01
This paper presents a novel approach to predicting the rating of video advertisements based on a multimodal framework combining physiological analysis of the user and the global sentiment rating available on the internet. We fused the user's electroencephalogram (EEG) waves and the corresponding global textual comments on the video to understand the user's preference more precisely. In our framework, users were asked to watch a video advertisement while EEG signals were recorded simultaneously. Valence scores were obtained by self-report for each video; a higher valence corresponds to greater intrinsic attractiveness for the user. Furthermore, the multimedia data comprising the comments posted by global viewers were retrieved and processed using Natural Language Processing (NLP) techniques for sentiment analysis. Textual content from the review comments was analyzed to obtain a score reflecting the sentiment of the video. A regression technique based on Random Forest was used to predict the rating of an advertisement from the EEG data. Finally, the EEG-based rating is combined with the NLP-based sentiment score to improve the overall prediction. The study was carried out using 15 video clips of advertisements available online, and twenty-five participants were involved in evaluating our proposed system. The results are encouraging and suggest that the proposed multimodal approach can achieve a lower RMSE in rating prediction than prediction using EEG data alone. Copyright © 2017 Elsevier Ltd. All rights reserved.
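A sketch of the late-fusion idea: a Random Forest rating predictor trained on EEG features, combined with a comment-derived sentiment score. The data, feature counts, and the fusion weight here are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: EEG feature vectors per (user, ad) pair, one
# sentiment score per ad from NLP on viewer comments, valence ratings.
rng = np.random.default_rng(7)
eeg = rng.normal(size=(375, 40))            # 25 users x 15 ads, 40 EEG features
sentiment = rng.uniform(-1, 1, 375)         # comment-derived sentiment per ad
rating = eeg[:, 0] + sentiment + rng.normal(0, 0.3, 375)

Xtr, Xte, s_tr, s_te, ytr, yte = train_test_split(
    eeg, sentiment, rating, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
eeg_pred = rf.predict(Xte)

# Late fusion: weighted combination of the EEG-based rating and the
# sentiment score (the 0.7/0.3 weights are arbitrary here).
fused = 0.7 * eeg_pred + 0.3 * s_te
for name, pred in [("EEG only", eeg_pred), ("fused", fused)]:
    print(name, "RMSE:", mean_squared_error(yte, pred) ** 0.5)
```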
Particle Pollution Estimation Based on Image Analysis
Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian
2016-01-01
Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
Testing and analysis of internal hardwood log defect prediction models
R. Edward Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Business Planning in the Light of Neuro-fuzzy and Predictive Forecasting
NASA Astrophysics Data System (ADS)
Chakrabarti, Prasun; Basu, Jayanta Kumar; Kim, Tai-Hoon
In this paper we consider gain sensing based on forecasting techniques. We present an idea of neural-network-based gain forecasting. The sequence of gain patterns is also verified using statistical analysis of fuzzy value assignment. The paper also suggests realization of stable gain conditions using K-means clustering from data mining. A new concept of 3D-based gain sensing is pointed out, and the paper also reveals what type of trend analysis can be observed for probabilistic gain prediction.
Ganga, G M D; Esposto, K F; Braatz, D
2012-01-01
The occupational exposure limits of different risk factors for development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown phenomena. Industrial ergonomists' role becomes further complicated because the potential risk factors that may contribute towards the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to develop a comparative study between predictions based on the neural network-based model proposed by Zurada, Karwowski & Marras (1997) and a linear discriminant analysis model, for making predictions about industrial jobs according to their potential risk of low back disorders due to workplace design. The results obtained through applying the discriminant analysis-based model proved that it is as effective as the neural network-based model. Moreover, the discriminant analysis-based model proved to be more advantageous regarding cost and time savings for future data gathering.
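A minimal sketch of the kind of comparison performed, linear discriminant analysis versus a neural network on synthetic stand-ins for trunk-motion risk factors; the data generation is invented, and scikit-learn models stand in for the models actually compared.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for workplace risk factors (lift rate, trunk
# angles, moments, etc.); label = high vs low LBD risk.
rng = np.random.default_rng(3)
X = rng.normal(size=(235, 5))
y = (X @ np.array([0.8, 0.5, 0.4, 0.3, 0.2])
     + rng.normal(0, 1, 235) > 0).astype(int)

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("neural net", MLPClassifier(max_iter=2000, random_state=0))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```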
NASA Astrophysics Data System (ADS)
Davis, S. J.; Egolf, T. A.
1980-07-01
Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free-field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
[Proximate analysis of straw by near infrared spectroscopy (NIRS)].
Huang, Cai-jin; Han, Lu-jia; Liu, Xian; Yang, Zeng-ling
2009-04-01
Proximate analysis is one of the routine analysis procedures in utilization of straw for biomass energy use. The present paper studied the applicability of rapid proximate analysis of straw by near infrared spectroscopy (NIRS) technology, in which the authors constructed the first NIRS models to predict volatile matter and fixed carbon contents of straw. NIRS models were developed using a Foss 6500 spectrometer with spectra in the range of 1,108-2,492 nm to predict the contents of moisture, ash, volatile matter and fixed carbon in directly cut straw samples, and to predict ash, volatile matter and fixed carbon in dried milled straw samples. For the models based on directly cut straw samples, the determination coefficient of independent validation (R2v) and standard error of prediction (SEP) were 0.92 and 0.76% for moisture, 0.94 and 0.84% for ash, 0.88 and 0.82% for volatile matter, and 0.75 and 0.65% for fixed carbon, respectively. For the models based on dried milled straw samples, R2v and SEP were 0.98 and 0.54% for ash, 0.95 and 0.57% for volatile matter, and 0.78 and 0.61% for fixed carbon, respectively. It was concluded that NIRS models can predict these components accurately enough to serve as an alternative analysis method; rapid and simultaneous analysis of multiple components can therefore be achieved by NIRS technology, decreasing the cost of proximate analysis of straw.
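A sketch of NIRS-style PLS calibration with SEP and R2v computed on a held-out set; the spectra here are synthetic, with an invented water absorption band near 1940 nm, so only the workflow (not the numbers) mirrors the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (1108-2492 nm) and a moisture
# reference value (percent) per straw sample.
rng = np.random.default_rng(5)
wavelengths = np.linspace(1108, 2492, 347)
moisture = rng.uniform(5, 15, 120)
peak = np.exp(-((wavelengths - 1940) / 40) ** 2)   # invented water band
spectra = moisture[:, None] * peak + rng.normal(0, 0.05, (120, 347))

Xtr, Xte, ytr, yte = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()

sep = np.sqrt(np.mean((pred - yte) ** 2))          # standard error of prediction
r2v = np.corrcoef(pred, yte)[0, 1] ** 2            # validation R^2
print(f"SEP = {sep:.3f}%, R2v = {r2v:.3f}")
```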
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies not only show enhanced conditional power and predictive power but also provide a framework for a more efficient futility interim analysis in terms of improved estimator accuracy, a smaller inflation in type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
modPDZpep: a web resource for structure based analysis of human PDZ-mediated interaction networks.
Sain, Neetu; Mohanty, Debasisa
2016-09-21
PDZ domains recognize short sequence stretches usually present in the C-terminus of their interaction partners. Because of the involvement of PDZ domains in many important biological processes, several attempts have been made to develop bioinformatics tools for genome-wide identification of PDZ interaction networks. Currently available tools for prediction of interaction partners of PDZ domains utilize a machine learning approach. Since they have been trained using experimental substrate specificity data for specific PDZ families, their applicability is limited to PDZ families closely related to the training set. These tools also do not allow analysis of PDZ-peptide interaction interfaces. We have used a structure based approach to develop modPDZpep, a program to predict the interaction partners of human PDZ domains and analyze structural details of PDZ interaction interfaces. modPDZpep predicts interaction partners by using structural models of PDZ-peptide complexes and evaluating binding energy scores using residue based statistical pair potentials. Since it does not require training using experimental data on peptide binding affinity, it can predict substrates for diverse PDZ families. Because of the use of a simple scoring function for binding energy, it is also fast enough for genome scale structure based analysis of PDZ interaction networks. Benchmarking using artificial as well as real negative datasets indicates good predictive power with ROC-AUC values in the range of 0.7 to 0.9 for a large number of human PDZ domains. Another novel feature of modPDZpep is its ability to map novel PDZ mediated interactions in human protein-protein interaction networks, either by utilizing available experimental phage display data or by structure based predictions. In summary, we have developed modPDZpep, a web-server for structure based analysis of human PDZ domains. It is freely available at http://www.nii.ac.in/modPDZpep.html or http://202.54.226.235/modPDZpep.html . This article was reviewed by Michael Gromiha and Zoltán Gáspári.
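A toy version of residue-based statistical pair-potential scoring of a domain-peptide interface; the energy table, distance cutoff, and coordinates below are invented placeholders (modPDZpep derives its potentials from structural statistics, and scores full modeled complexes).

```python
import itertools
import numpy as np

# Invented contact energies (kcal/mol-like units); unknown pairs score 0.
PAIR_ENERGY = {("LEU", "VAL"): -0.6, ("GLU", "LYS"): -0.9,
               ("ASP", "ARG"): -0.8, ("PHE", "LEU"): -0.5}

def pair_energy(a, b):
    return PAIR_ENERGY.get((a, b), PAIR_ENERGY.get((b, a), 0.0))

def interface_score(domain, peptide, cutoff=6.0):
    """domain/peptide: lists of (residue_name, CA_coordinate) tuples.
    Sums pair energies over residue pairs within the distance cutoff;
    lower totals indicate more favorable predicted binding."""
    score = 0.0
    for (ra, xa), (rb, xb) in itertools.product(domain, peptide):
        if np.linalg.norm(np.asarray(xa) - np.asarray(xb)) <= cutoff:
            score += pair_energy(ra, rb)
    return score

domain = [("LEU", (0, 0, 0)), ("GLU", (4, 0, 0))]
peptide = [("VAL", (2, 2, 0)), ("LYS", (5, 2, 0))]
print(interface_score(domain, peptide))   # -1.5 for this toy geometry
```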
NASA Astrophysics Data System (ADS)
Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong
2016-11-01
The amount of coke deposition on catalyst pellets is one of the most important indexes of catalytic property and service life. As a result, it is essential to measure it and analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposition on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and feature extraction, twelve effective features are selected and two best feature sets are determined by the prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount based on various datasets. The root mean square errors of the predicted values are all below 0.021, and the coefficients of determination R2 for the models are all above 78.71%. Therefore, a feasible, effective and precise method is demonstrated, which may be applied to realize the real-time measurement of coke deposition based on on-line sampling and fast image analysis.
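A compact sketch of a neural network whose weights are optimized by plain global-best particle swarm optimization, on toy data; the swarm constants, network size, and regression target are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target standing in for coke amount vs. image features.
X = rng.uniform(-1, 1, (150, 3))
y = np.tanh(X @ np.array([1.0, -0.7, 0.4])) + 0.05 * rng.normal(size=150)

H = 6                                    # hidden units of a 3-H-1 network
dim = 3 * H + H + H + 1                  # total weight count

def predict(theta, X):
    W1 = theta[:3 * H].reshape(3, H)
    b1 = theta[3 * H:4 * H]
    w2 = theta[4 * H:5 * H]
    b2 = theta[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def loss(theta):
    return np.mean((predict(theta, X) - y) ** 2)

# Global-best PSO over the network weights.
n = 30
pos = rng.normal(0, 0.5, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pval)]
for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = np.array([loss(p) for p in pos])
    improved = val < pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pval)]
print("RMSE:", loss(gbest) ** 0.5)
```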
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2003-01-01
The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method; it showed an accuracy of 65.7%. However, the proposed method predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.
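A sketch of the two middle stages named above, t-test feature selection followed by likelihood-ratio score fusion, on synthetic features; the CNN feature extractor is omitted, and Gaussian class-conditional densities are our simplifying assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic stand-in for per-trial EEG-derived feature vectors, two classes.
X0 = rng.normal(0.0, 1.0, (200, 50))
X1 = rng.normal(0.3, 1.0, (200, 50))
X, y = np.vstack([X0, X1]), np.r_[np.zeros(200), np.ones(200)]

# 1) t-test feature selection: keep features that separate the classes.
_, p = stats.ttest_ind(X[y == 0], X[y == 1])
Xs = X[:, p < 0.01]

# 2) Likelihood-ratio score fusion: sum per-feature Gaussian log-LRs.
mu0, sd0 = Xs[y == 0].mean(0), Xs[y == 0].std(0)
mu1, sd1 = Xs[y == 1].mean(0), Xs[y == 1].std(0)

def log_lr(x):
    return (stats.norm.logpdf(x, mu1, sd1)
            - stats.norm.logpdf(x, mu0, sd0)).sum(-1)

pred = (log_lr(Xs) > 0).astype(int)
print("training accuracy:", np.mean(pred == y))
```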
Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song
2013-09-01
Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis, and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of identifying the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, their health-state degradation processes cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting by SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained from the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecast by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed approach.
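A bootstrap particle filter sketch of the multi-step-ahead RUL idea, with an invented exponential wear-metal degradation model, noise levels, and failure threshold; the paper's actual SOA state model is not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative degradation model: wear-metal concentration grows as
# x_k = x_{k-1} * (1 + a) + process noise; SOA reading adds noise.
a, q, r, threshold = 0.05, 0.05, 0.5, 20.0
true_x, obs = 1.0, []
for _ in range(40):
    true_x = true_x * (1 + a) + rng.normal(0, q)
    obs.append(true_x + rng.normal(0, r))

N = 2000
particles = rng.uniform(0.5, 2.0, N)
for z in obs:                                             # bootstrap PF
    particles = particles * (1 + a) + rng.normal(0, q, N)  # propagate
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)          # likelihood
    w /= w.sum()
    particles = rng.choice(particles, N, p=w)              # resample

# Multi-step-ahead prediction: run each particle to the threshold.
rul, sim = np.zeros(N), particles.copy()
for k in range(1, 200):
    sim = sim * (1 + a) + rng.normal(0, q, N)
    rul[(rul == 0) & (sim >= threshold)] = k
print("median predicted RUL (steps):", np.median(rul[rul > 0]))
```

Running the particles forward yields a full RUL distribution rather than a point estimate, which is what makes the PF approach attractive for maintenance planning.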
Flight test derived heating math models for critical locations on the orbiter during reentry
NASA Technical Reports Server (NTRS)
Hertzler, E. K.; Phillips, P. W.
1983-01-01
An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.
Study on SOC wavelet analysis for LiFePO4 battery
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Improving the accuracy of SOC prediction can reduce the conservatism and complexity of control strategies for LiFePO4 battery systems, such as scheduling, optimization, and planning. Based on analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the estimation (forecast) step, while measured external stress data are used to update the parameter estimates in the model in the correction step, so that the forecast model can adapt to the varying operating point of the LiFePO4 battery over its operating region under rated charge and discharge conditions. The test results show that the method can obtain a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
Analysis of factors influencing hydration site prediction based on molecular dynamics simulations.
Yang, Ying; Hu, Bingjie; Lill, Markus A
2014-10-27
Water contributes significantly to the binding of small molecules to proteins in biochemical systems. Molecular dynamics (MD) simulation based programs such as WaterMap and WATsite have been used to probe the locations and thermodynamic properties of hydration sites at the surface or in the binding site of proteins generating important information for structure-based drug design. However, questions associated with the influence of the simulation protocol on hydration site analysis remain. In this study, we use WATsite to investigate the influence of factors such as simulation length and variations in initial protein conformations on hydration site prediction. We find that 4 ns MD simulation is appropriate to obtain a reliable prediction of the locations and thermodynamic properties of hydration sites. In addition, hydration site prediction can be largely affected by the initial protein conformations used for MD simulations. Here, we provide a first quantification of this effect and further indicate that similar conformations of binding site residues (RMSD < 0.5 Å) are required to obtain consistent hydration site predictions.
Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.
Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P
2010-01-01
In this paper, we present a novel method of analyzing retinal vasculature using the Fourier Fractal Dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern recognition performance evaluation, receiver operating characteristic (ROC) analysis, and a medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.
Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger
2012-09-01
Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time of flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48% with thermal gravimetric analysis used as reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q2 of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
NASA Astrophysics Data System (ADS)
Tárnok, Attila; Mittag, Anja; Lenz, Dominik
2006-02-01
The goal of predictive medicine is the detection of changes in a patient's state prior to the clinical manifestation of a deterioration of the patient's current status. Therefore, both the diagnosis of diseases like cancer, coronary atherosclerosis, or congenital heart failure and the prognosis of the effect of specific therapeutics on patient outcome are the main fields of predictive medicine. Clinical Cytomics is based on the analysis of specimens from the patient by cytomic technologies, which are mainly imaging-based techniques and their combinations with other assays. Predictive medicine aims at the recognition of the "fate" of each individual patient in order to yield unequivocal indications for decision making (i.e., how the patient responds to therapy, reacts to medication, etc.). This individualized prediction is based on the Predictive Medicine by Clinical Cytomics concept. These considerations have recently stimulated the idea of the Human Cytome Project. A major focus of the Human Cytome Project is multiplexed cytomic analysis of individual cells of the patient, extraction of predictive information, and individual prediction that merges into individualized therapy. Although still at the beginning, Clinical Cytomics is a promising new field that may change therapy in the near future for the benefit of patients.
NASA Astrophysics Data System (ADS)
Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun
2018-05-01
Aiming at the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet neural network method for predicting the displacement of a bridge steel box girder under load. Compared with traditional forecasting methods, this scheme has better local characteristics and learning ability, which greatly improves the prediction of deformation. Analysis of a worked example shows that the wavelet neural network predicts girder deformation with higher accuracy than traditional prediction methods and outperforms BP neural network predictions, meeting the practical demands of engineering design.
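A minimal wavelet neural network sketch: Morlet "wavelons" with trainable dilations and translations, fit by gradient descent on toy load-displacement data. The architecture, learning constants, and data are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy load-displacement record standing in for girder deformation data.
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

def morlet(t):                       # Morlet mother wavelet
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2)

def morlet_grad(t):
    return -1.75 * np.sin(1.75 * t) * np.exp(-t ** 2 / 2) - t * morlet(t)

H = 12                               # number of wavelons
a = rng.uniform(0.5, 2.0, H)         # dilations
b = rng.uniform(-1.0, 1.0, H)        # translations
w = rng.normal(0, 0.1, H)            # output weights

lr = 0.05
for epoch in range(3000):            # plain gradient descent on squared error
    t = (X - b) / a                  # (200, H) via broadcasting
    phi = morlet(t)
    err = phi @ w - y
    g = err[:, None] * morlet_grad(t) * w
    w -= lr * (phi.T @ err) / len(y)
    b -= lr * (g * (-1.0 / a)).mean(0)
    a -= lr * (g * (-t / a)).mean(0)
    a = np.clip(a, 0.1, None)        # keep dilations away from zero

print("train RMSE:", np.sqrt(np.mean((morlet((X - b) / a) @ w - y) ** 2)))
```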
Tertiary structure-based analysis of microRNA–target interactions
Gan, Hin Hark; Gunsalus, Kristin C.
2013-01-01
Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009
Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.
Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel
2016-01-01
One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
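OSA's ANN method produces discrete-time, individual-level survival models; a minimal sketch of the general discrete-time approach (person-period expansion plus a neural hazard model), using synthetic data and scikit-learn rather than OSA's actual implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Synthetic survival data: covariates, observed time in discrete
# intervals, and event indicator (1 = event, 0 = censored).
n = 300
X = rng.normal(size=(n, 4))
time = rng.integers(1, 11, n)
event = rng.integers(0, 2, n)

# Person-period expansion: one row per subject per interval at risk;
# the label is 1 only in the interval where the event occurs.
rows, labels = [], []
for xi, ti, ei in zip(X, time, event):
    for k in range(1, ti + 1):
        rows.append(np.r_[xi, k])          # covariates + interval index
        labels.append(1 if (ei == 1 and k == ti) else 0)
rows, labels = np.array(rows), np.array(labels)

# The ANN estimates the discrete-time hazard h(k | x); survival is the
# cumulative product of (1 - hazard) over intervals.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(rows, labels)
x_new = np.zeros(4)
hazard = net.predict_proba(np.c_[np.tile(x_new, (10, 1)),
                                 np.arange(1, 11)])[:, 1]
print("S(k):", np.cumprod(1 - hazard))
```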
Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong
2010-01-18
The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.
A method for identifying EMI critical circuits during development of a large C3
NASA Astrophysics Data System (ADS)
Barr, Douglas H.
The circuit analysis methods and process Boeing Aerospace used on a large, ground-based military command, control, and communications (C3) system are described. This analysis was designed to help identify electromagnetic interference (EMI) critical circuits. The methodology used the MIL-E-6051 equipment criticality categories as the basis for defining critical circuits, relational database technology to help sort through and account for all of the approximately 5000 system signal cables, and Macintosh Plus personal computers to predict critical circuits based on safety margin analysis. The EMI circuit analysis process systematically examined all system circuits to identify which ones were likely to be EMI critical. The process used two separate, sequential safety margin analyses to identify critical circuits (conservative safety margin analysis, and detailed safety margin analysis). These analyses used field-to-wire and wire-to-wire coupling models using both worst-case and detailed circuit parameters (physical and electrical) to predict circuit safety margins. This process identified the predicted critical circuits that could then be verified by test.
A Comparison of Three Strategies for Scale Construction to Predict a Specific Behavioral Outcome
ERIC Educational Resources Information Center
Garb, Howard N.; Wood, James M.; Fiedler, Edna R.
2011-01-01
Using 65 items from a mental health screening questionnaire, the History Opinion Inventory-Revised (HOI-R), the present study compared three strategies of scale construction--(1) internal (based on factor analysis), (2) external (based on empirical performance) and (3) intuitive (based on clinicians' opinion)--to predict whether 203,595 U.S. Air…
The influence of weather on migraine – are migraine attacks predictable?
Hoffmann, Jan; Schirra, Tonio; Lo, Hendra; Neeb, Lars; Reuter, Uwe; Martus, Peter
2015-01-01
Objective The study aimed at elucidating a potential correlation between specific meteorological variables and the prevalence and intensity of migraine attacks as well as exploring a potential individual predictability of a migraine attack based on meteorological variables and their changes. Methods Attack prevalence and intensity of 100 migraineurs were correlated with atmospheric pressure, relative air humidity, and ambient temperature in 4-h intervals over 12 consecutive months. For each correlation, meteorological parameters at the time of the migraine attack as well as their variation within the preceding 24 h were analyzed. For migraineurs showing a positive correlation, logistic regression analysis was used to assess the predictability of a migraine attack based on meteorological information. Results In a subgroup of migraineurs, a significant weather sensitivity could be observed. In contrast, pooled analysis of all patients did not reveal a significant association. An individual prediction of a migraine attack based on meteorological data was not possible, mainly as a result of the small prevalence of attacks. Interpretation The results suggest that only a subgroup of migraineurs is sensitive to specific weather conditions. Our findings may provide an explanation as to why previous studies, which commonly rely on a pooled analysis, show inconclusive results. The lack of individual attack predictability indicates that the use of preventive measures based on meteorological conditions is not feasible. PMID:25642431
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R2), Akaike's Information Criterion (AIC), Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations, and contingency tables. A test for field significance, considering the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the more complex model selection criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall prediction over the GHA makes use of simple model selection procedures, e.g., R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM, and the correlations of ENSO indices with rainfall are statistically significant for both seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO is robust on a decadal scale for OND rainfall compared to MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) anticipates enhanced wet conditions.
Yang, Jing; Jin, Qi-Yu; Zhang, Biao; Shen, Hong-Bin
2016-08-15
Inter-residue contacts in proteins dictate the topology of protein structures. They are crucial for protein folding and structural stability. Accurate prediction of residue contacts, especially long-range contacts, is important to the quality of ab initio structure modeling since they can enforce strong restraints on structure assembly. In this paper, we present a new Residue-Residue Contact predictor called R2C that combines machine learning-based and correlated mutation analysis-based methods, together with a two-dimensional Gaussian noise filter to enhance long-range residue contact prediction. Our results show that the outputs from the machine learning-based method are concentrated, with better performance on short-range contacts, while for the correlated mutation analysis-based approach the predictions are widespread, with higher accuracy on long-range contacts. An effective query-driven dynamic fusion strategy proposed here takes full advantage of the two different methods, resulting in an impressive overall accuracy improvement. We also show that the contact map directly from the prediction model contains interesting Gaussian noise, which has not been observed before. Different from recent studies that tried to further enhance the quality of the contact map by removing its transitive noise, we designed a new two-dimensional Gaussian noise filter, which was especially helpful for reinforcing long-range residue contact prediction. Tested on recent CASP10/11 datasets, the overall top L/5 accuracy of our final R2C predictor is 17.6%/15.5% higher than the pure machine learning-based method and 7.8%/8.3% higher than the correlated mutation analysis-based approach for long-range residue contact prediction. http://www.csbio.sjtu.edu.cn/bioinf/R2C/ Contact: hbshen@sjtu.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
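A sketch of the kind of two-dimensional Gaussian noise filtering described, applied to a synthetic contact map; the blob positions, the sigma, and the top-L/5 long-range ranking convention follow common practice, and all data here are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(9)

# Synthetic contact-probability map for a 100-residue protein: a few
# "true" contact blobs plus pixel noise standing in for the Gaussian
# noise observed in raw predictor output.
L = 100
cmap = rng.normal(0.1, 0.05, (L, L))
for i, j in [(10, 80), (25, 60), (40, 95)]:
    cmap[i - 2:i + 3, j - 2:j + 3] += 0.8
cmap = np.clip((cmap + cmap.T) / 2, 0, 1)      # symmetrize

smoothed = gaussian_filter(cmap, sigma=1.0)    # 2D Gaussian noise filter

# Rank long-range pairs (|i - j| >= 24) by smoothed score; take top L/5.
i, j = np.triu_indices(L, k=24)
order = np.argsort(smoothed[i, j])[::-1][:L // 5]
print(list(zip(i[order][:5], j[order][:5])))
```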
Fatigue criterion to system design, life and reliability
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.
1985-01-01
A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for life prediction and component or structural renewal rates with reasonable statistical certainty.
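The Lundberg-Palmgren/Weibull combination implied above can be stated concretely; a minimal sketch, assuming the usual strict-series form in which element lives combine through the Weibull slope e (the example lives and slope are invented).

```python
import numpy as np

def system_life(component_lives, weibull_slope):
    """L10 system life from component L10 lives via the strict-series
    Weibull relation: (1/L_sys)^e = sum_i (1/L_i)^e."""
    L = np.asarray(component_lives, dtype=float)
    return np.sum((1.0 / L) ** weibull_slope) ** (-1.0 / weibull_slope)

# e.g. three elements with L10 lives of 5000, 8000 and 12000 hours and
# a Weibull slope of 1.1 (a value typical of rolling-element fatigue)
print(system_life([5000.0, 8000.0, 12000.0], 1.1))
```

The system life is always shorter than the shortest component life, which is the essential design consequence of the fatigue-criterion approach.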
Saad, E W; Prokhorov, D V; Wunsch, D C
1998-01-01
Three networks are compared for low false alarm stock trend predictions. Short-term trends, particularly attractive for neural network analysis, can be used profitably in scenarios such as option trading, but only with significant risk. Therefore, we focus on limiting false alarms, which improves the risk/reward ratio by preventing losses. To predict stock trends, we exploit time delay, recurrent, and probabilistic neural networks (TDNN, RNN, and PNN, respectively), utilizing conjugate gradient and multistream extended Kalman filter training for TDNN and RNN. We also discuss different predictability analysis techniques and perform an analysis of predictability based on a history of daily closing price. Our results indicate that all the networks are feasible, the primary preference being one of convenience.
Literature-based condition-specific miRNA-mRNA target prediction.
Oh, Minsik; Rhee, Sungmin; Moon, Ji Hwan; Chae, Heejoon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2017-01-01
miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy to utilize expression data is to leverage the negative control roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case, these methods tend to reject many true target relationships, i.e., false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. In another test on whether prediction methods can re-produce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction methods. In summary, Context-MMIA allows the user to specify a context of the experimental data to predict miRNA targets, and we believe that Context-MMIA is very useful for predicting condition-specific miRNA targets.
Turbine blade forced response prediction using FREPS
NASA Technical Reports Server (NTRS)
Murthy, Durbha, V.; Morel, Michael R.
1993-01-01
This paper describes a software system called FREPS (Forced REsponse Prediction System) that integrates structural dynamic, steady aerodynamic, and unsteady aerodynamic analyses to efficiently predict the forced-response dynamic stresses in axial-flow turbomachinery blades due to aerodynamic and mechanical excitations. A flutter analysis capability is also incorporated into the system. The FREPS system performs aeroelastic analysis by modeling the motion of the blade in terms of its normal modes. The structural dynamic analysis is performed by a finite element code such as MSC/NASTRAN. The steady aerodynamic analysis is based on nonlinear potential theory, and the unsteady aerodynamic analysis is based on a linearization about the nonuniform potential mean flow. The program description and a presentation of its capabilities are reported herein. The effectiveness of the FREPS package is demonstrated on the High Pressure Oxygen Turbopump turbine of the Space Shuttle Main Engine. Both flutter and forced response analyses are performed and typical results are illustrated.
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
Control and prediction of the course of brewery fermentations by gravimetric analysis.
Kosín, P; Savel, J; Broz, A; Sigler, K
2008-01-01
A simple, fast and cheap test suitable for predicting the course of brewery fermentations based on mass analysis is described and its efficiency is evaluated. Compared to commonly used yeast vitality tests, this analysis takes into account wort composition and other factors that influence fermentation performance. It can be used to predict the shape of the fermentation curve in brewery fermentations and in research and development projects concerning yeast vitality, fermentation conditions and wort composition. It can also be a useful tool for homebrewers to control their fermentations.
Prediction of gas-liquid two-phase flow regime in microgravity
NASA Technical Reports Server (NTRS)
Lee, Jinho; Platt, Jonathan A.
1993-01-01
An attempt is made to predict gas-liquid two-phase flow regime in a pipe in a microgravity environment through scaling analysis based on dominant physical mechanisms. Simple inlet geometry is adopted in the analysis to see the effect of inlet configuration on flow regime transitions. Comparison of the prediction with the existing experimental data shows good agreement, though more work is required to better define some physical parameters. The analysis clarifies much of the physics involved in this problem and can be applied to other configurations.
Sun Series program for the REEDA System. [predicting orbital lifetime using sunspot values]
NASA Technical Reports Server (NTRS)
Shankle, R. W.
1980-01-01
Modifications made to databases and to four programs in a series of computer programs (Sun Series) which run on the REEDA HP minicomputer system to aid NASA's solar activity predictions used in orbital lifetime predictions are described. These programs utilize various mathematical smoothing techniques and perform statistical and graphical analyses of various solar activity databases residing on the REEDA System.
Pretreatment data is highly predictive of liver chemistry signals in clinical trials.
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones.
Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Elrod, David Alan
1988-01-01
The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.
Numerical Simulation of Monitoring Corrosion in Reinforced Concrete Based on Ultrasonic Guided Waves
Zheng, Zhupeng; Lei, Ying; Xue, Xin
2014-01-01
Numerical simulation based on the finite element method is conducted to predict the location of pitting corrosion in reinforced concrete. Simulation results show that it is feasible to monitor corrosion in reinforced concrete based on ultrasonic guided waves, and that wavelet analysis can be used to recover the extremely weak guided-wave signal that results from energy leaking into the concrete. The time-frequency localization property of the wavelet transform is adopted in the corrosion monitoring of reinforced concrete. Guided waves can be successfully used to identify corrosion defects in reinforced concrete through analysis with a suitable wavelet basis function and its scale. PMID:25013865
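A minimal sketch of the wavelet step: a continuous wavelet transform localizes a weak tone burst in time and scale. The sampling rate, burst parameters, and the Morlet wavelet are assumptions, not the study's settings.

```python
import numpy as np
import pywt

# Synthetic guided-wave trace: a weak tone burst buried in noise,
# mimicking energy leakage into the surrounding concrete.
fs = 1e6                                    # 1 MHz sampling (assumed)
t = np.arange(0, 2e-3, 1 / fs)
burst = np.sin(2 * np.pi * 60e3 * t) * np.exp(-((t - 8e-4) / 5e-5) ** 2)
signal = 0.05 * burst + np.random.default_rng(2).normal(scale=0.02, size=t.size)

# CWT: time-frequency localization lets the weak arrival stand out at the
# scale matched to the burst frequency.
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)
peak_scale = scales[np.argmax(np.abs(coeffs).max(axis=1))]
print("most energetic scale:", peak_scale)
```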
A web server for analysis, comparison and prediction of protein ligand binding sites.
Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S
2016-03-25
One of the major challenges in the field of systems biology is to understand the interactions between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named 'LPIcom' to facilitate users in understanding protein-ligand interaction. Analysis, comparison and prediction modules are available in the 'LPIcom' server to predict protein-ligand interacting residues for 824 ligands. Each ligand must have at least 30 protein binding sites in PDB. The analysis module of the server can identify residues preferred in interaction and the binding motif for a given ligand; for example, the residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module of the server allows comparing protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that the ATP, ADP and GTP ligands fall in the same cluster and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos of ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for the analysis of ligand-interacting residues. This server is available for public use from the URL http://crdd.osdd.net/raghava/lpicom.
Development of a percutaneous penetration predictive model by SR-FTIR.
Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A
2013-01-30
This work focused on developing a new evaluation criterion of percutaneous penetration, complementary to Log Pow and MW, based on high spatial resolution Fourier transform infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run, and after 22 h the distribution of the molecule in skin was determined either by HPLC or by SR-FTIR. The HPLC data served as reference. HPLC and SR-FTIR results were compared, and a new predictive criterion based on the SR-FTIR results, named S(index), was determined using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to the OECD predictive cartography. This new criterion S(index) and the cartography using SR-FTIR/HPLC results provide relevant information for risk analysis regarding prediction of percutaneous penetration and could be used to build a new mathematical model.
Chemical and protein structural basis for biological crosstalk between PPAR α and COX enzymes
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2015-02-01
We have previously validated a probabilistic framework that combined computational approaches for predicting the biological activities of small molecule drugs. Molecule comparison methods included molecular structural similarity metrics and similarity computed from lexical analysis of text in drug package inserts. Here we present an analysis of novel drug/target predictions, focusing on those that were not obvious based on known pharmacological crosstalk. Considering those cases where the predicted target was an enzyme with known 3D structure allowed incorporation of information from molecular docking and protein binding pocket similarity in addition to ligand-based comparisons. Taken together, the combination of orthogonal information sources led to investigation of a surprising predicted relationship between a transcription factor and an enzyme, specifically, PPAR α and the cyclooxygenase enzymes. These predictions were confirmed by direct biochemical experiments which validate the approach and show for the first time that PPAR α agonists are cyclooxygenase inhibitors.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild
2010-08-01
The aim of this study was to investigate the predictive value of computer-based video analysis for the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31wks, SD 6wks; range 23-42wks) between 10 and 15 weeks post term, when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three of unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.
Improvement of Quench Factor Analysis in Phase and Hardness Prediction of a Quenched Steel
NASA Astrophysics Data System (ADS)
Kianezhad, M.; Sajjadi, S. A.
2013-05-01
The accurate prediction of alloy properties resulting from heat treatment has been considered by many researchers. The advantages of such predictions are the reduction of test trials and material consumption as well as time and energy savings. One of the most important methods to predict hardness in quenched steel parts is Quench Factor Analysis (QFA). Classical QFA is based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation. In this study, a modified form of QFA based on the work by Rometsch et al. is compared with classical QFA, and both are applied to the prediction of hardness of steels. For this purpose, samples of CK60 steel were utilized as raw material. They were austenitized at 1103 K (830 °C). After quenching in different environments, they were cut and their hardness was determined. In addition, the hardness values of the samples were fitted using the classical and modified equations for the quench factor analysis and the results were compared. The results showed a significant improvement in the fitted hardness values and proved the higher efficiency of the new method.
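A compact sketch of the classical JMAK-based QFA calculation: accumulate the quench factor along a cooling curve and map it to hardness. The exponential cooling curve and the C-curve coefficients k1..k5 are purely illustrative, not fitted CK60 values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def c_t(T, k1, k2, k3, k4, k5):
    # Critical time of the TTT C-curve in the classical QFA formulation
    return -k1 * k2 * np.exp(k3 * k4**2 / (R * T * (k4 - T)**2)) * np.exp(k5 / (R * T))

def hardness_qfa(times, temps, k, h_min, h_max):
    # Quench factor Q = sum(dt / C_T) along the cooling curve, mapped to
    # hardness via H = H_min + (H_max - H_min) * exp(k1 * Q), k1 = ln(0.995) < 0.
    dt = np.diff(times)
    T_mid = 0.5 * (temps[:-1] + temps[1:])
    with np.errstate(over='ignore', divide='ignore'):
        Q = np.sum(dt / c_t(T_mid, *k))
    return h_min + (h_max - h_min) * np.exp(k[0] * Q)

times = np.linspace(0.0, 30.0, 301)                      # s
temps = 300.0 + (1103.0 - 300.0) * np.exp(-times / 5.0)  # cooldown from 1103 K
k = (np.log(0.995), 0.1, 1.0, 850.0, 1.0e5)              # illustrative only
print(hardness_qfa(times, temps, k, h_min=20.0, h_max=65.0))
```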
Dolled-Filhart, Marisa P; Gustavson, Mark D
2012-11-01
Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.
Diagnostic value of highly-sensitive chimerism analysis after allogeneic stem cell transplantation.
Sellmann, Lea; Rabe, Kim; Bünting, Ivonne; Dammann, Elke; Göhring, Gudrun; Ganser, Arnold; Stadler, Michael; Weissinger, Eva M; Hambach, Lothar
2018-05-02
Conventional analysis of host chimerism (HC) frequently fails to detect relapse before its clinical manifestation in patients with hematological malignancies after allogeneic stem cell transplantation (allo-SCT). Quantitative PCR (qPCR)-based highly-sensitive chimerism analysis extends the detection limit of conventional (short tandem repeat-based) chimerism analysis from 1 to 0.01% host cells in whole blood. To date, the diagnostic value of highly-sensitive chimerism analysis is hardly defined. Here, we applied qPCR-based chimerism analysis to 901 blood samples of 71 out-patients with hematological malignancies after allo-SCT. Receiver operating characteristic (ROC) curves were calculated for absolute HC values and for the increments of HC before relapse. Using the best cut-offs, relapse was detected with sensitivities of 74 or 85% and specificities of 69 or 75%, respectively. Positive predictive values (PPVs) were only 12 or 18%, but the respective negative predictive values were 98 or 99%. Relapse was detected a median of 38 or 45 days prior to clinical diagnosis, respectively. Considering also durations of steadily increasing HC of more than 28 days improved PPVs to 28 or 59%, respectively. Overall, highly-sensitive chimerism analysis excludes relapses with high certainty and predicts relapses with high sensitivity and specificity more than a month prior to clinical diagnosis.
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Lark, R. F.; Sinclair, J. H.
1977-01-01
An integrated theory is developed for predicting the hydrothermomechanical (HDTM) response of fiber composite components. The integrated theory is based on a combined theoretical and experimental investigation. In addition to predicting the HDTM response of components, the theory is structured to assess the combined hydrothermal effects on the mechanical properties of unidirectional composites loaded along the material axis and off-axis, and those of angleplied laminates. The theory developed predicts values which are in good agreement with measured data at the micromechanics, macromechanics, laminate analysis and structural analysis levels.
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular feature extraction and prediction method currently in use; it showed an accuracy of 65.7%. The proposed method, however, predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
NASA Astrophysics Data System (ADS)
Sasmita, Yoga; Darmawan, Gumgum
2017-08-01
This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. These methods are applied to predicting the volume of motorcycle sales in Indonesia from January 2005 to December 2016 (monthly). Both models are suitable for data with seasonal and trend components. Technically, FSA describes the series as the sum of trend and seasonal components at different frequencies, which are difficult to identify in time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components. The reconstruction process starts by grouping the decomposition results based on the similar periods of the components in the trajectory matrix. With the optimal window length (L = 53) and grouping (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated based on the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that over the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, the volume of motorcycle sales in the next period should be predicted with the SSA method, which has the better performance in terms of accuracy.
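A minimal numpy sketch of the two SSA stages described above, embedding/SVD for decomposition and grouping plus diagonal averaging for reconstruction, run on a synthetic monthly series; the window length and grouping are user choices (shown with L = 53 and the four leading components, echoing the paper's L = 53, r = 4).

```python
import numpy as np

def ssa_reconstruct(x, L, groups):
    """Embed the series in a trajectory matrix, SVD it, and reconstruct
    the components in `groups` by diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in groups)
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):            # average Xg over its anti-diagonals
        for j in range(K):
            rec[i + j] += Xg[i, j]
            counts[i + j] += 1
    return rec / counts

t = np.arange(144)                # 12 years of monthly data (synthetic)
series = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) \
         + np.random.default_rng(3).normal(size=144)
trend_seasonal = ssa_reconstruct(series, L=53, groups=[0, 1, 2, 3])
print(trend_seasonal[:5])
```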
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship (comparative molecular field analysis and comparative molecular similarity indices) models for atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends of the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
Automated lithology prediction from PGNAA and other geophysical logs.
Borsaru, M; Zhou, B; Aizawa, T; Karashima, H; Hashimoto, T
2006-02-01
Different methods of lithology prediction from geophysical data have been developed in the last 15 years. The geophysical logs used for predicting lithology are the conventional logs: sonic, neutron-neutron, gamma (total natural gamma) and density (backscattered gamma-gamma). Prompt gamma neutron activation analysis (PGNAA) is another established geophysical logging technique for in situ element analysis of rocks in boreholes. The work described in this paper was carried out to investigate the application of PGNAA to lithology interpretation. The data interpretation was conducted using the automatic interpretation program LogTrans, which is based on statistical analysis. Limited testing suggests that PGNAA logging data can be used to predict lithology: a success rate of 73% for lithology prediction was achieved from PGNAA logging data alone. PGNAA can also be used in conjunction with the conventional geophysical logs to enhance the lithology prediction.
Wan, Cen; Lees, Jonathan G; Minneci, Federico; Orengo, Christine A; Jones, David T
2017-10-01
Accurate gene or protein function prediction is a key challenge in the post-genome era. Most current methods perform well on molecular function prediction, but struggle to provide useful annotations relating to biological process functions due to the limited power of sequence-based features in that functional domain. In this work, we systematically evaluate the predictive power of temporal transcription expression profiles for protein function prediction in Drosophila melanogaster. Our results show significantly better performance on predicting protein function when transcription expression profile-based features are integrated with sequence-derived features, compared with the sequence-derived features alone. We also observe that the combination of expression-based and sequence-based features leads to further improvement of accuracy on predicting all three domains of gene function. Based on the optimal feature combinations, we then propose a novel multi-classifier-based function prediction method for Drosophila melanogaster proteins, FFPred-fly+. Interpreting our machine learning models also allows us to identify some of the underlying links between biological processes and developmental stages of Drosophila melanogaster.
A Predictive Model of Anesthesia Depth Based on SVM in the Primary Visual Cortex
Shi, Li; Li, Xiaoyuan; Wan, Hong
2013-01-01
In this paper, a novel model for predicting anesthesia depth is put forward based on local field potentials (LFPs) in the primary visual cortex (V1 area) of rats. The model is constructed using a Support Vector Machine (SVM) to realize online prediction and classification of anesthesia depth. The raw LFP signal was first decomposed into scaling components by wavelet transform; among these components, those containing higher-frequency information were well suited for more precise analysis of anesthetic depth. Secondly, the characteristics of the anesthetized states were extracted by complexity analysis. In addition, two frequency-domain parameters were selected. The extracted features were used as the input vector of the prediction model. Finally, we collected anesthesia samples from LFP recordings under visual stimulus experiments on Long Evans rats. Our results indicate that the predictive model is accurate and computationally fast, and that it is also well suited for online prediction. PMID:24044024
Cranial base evolution within the hominin clade
Nevell, L; Wood, B
2008-01-01
The base of the cranium (i.e. the basioccipital, the sphenoid and the temporal bones) is of particular interest because it undergoes significant morphological change within the hominin clade, and because basicranial morphology features in several hominin species diagnoses. We use a parsimony analysis of published cranial and dental data to predict the cranial base morphology expected in the hypothetical last common ancestor of the Pan–Homo clade. We also predict the primitive condition of the cranial base for the hominin clade, and document the evolution of the cranial base within the major subclades within the hominin clade. This analysis suggests that cranial base morphology has continued to evolve in the hominin clade, both before and after the emergence of the genus Homo. PMID:18380865
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics, information retrieval, etc. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network, such as shortest paths and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
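For intuition, a toy draw from a DDCRP prior over customer-link assignments: each node links to another with probability proportional to a decaying function of their distance, or to itself with mass alpha, and clusters emerge as connected components of the link graph. The exponential decay, its parameter, and the line-graph distances are illustrative assumptions; the paper works with network-derived distances such as shortest paths.

```python
import numpy as np

def ddcrp_sample_links(D, alpha, decay, seed=4):
    """One draw of link assignments from a DDCRP prior with
    f(d) = exp(-d / decay); a self-link (mass alpha) opens a new table."""
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    links = np.zeros(n, dtype=int)
    for i in range(n):
        w = np.exp(-D[i] / decay)
        w[i] = alpha                       # self-link mass
        links[i] = rng.choice(n, p=w / w.sum())
    return links

# D could be shortest-path distances on the observed network; here a toy
# line graph where distance equals index difference.
D = np.abs(np.subtract.outer(np.arange(10), np.arange(10))).astype(float)
print(ddcrp_sample_links(D, alpha=1.0, decay=2.0))
```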
Kondo, M; Nagao, Y; Mahbub, M H; Tanabe, T; Tanizawa, Y
2018-04-29
To identify factors predicting early postpartum glucose intolerance in Japanese women with gestational diabetes mellitus, using decision-curve analysis. A retrospective cohort study was performed. The participants were 123 Japanese women with gestational diabetes who underwent 75-g oral glucose tolerance tests at 8-12 weeks after delivery. They were divided into a glucose intolerance and a normal glucose tolerance group based on postpartum oral glucose tolerance test results. Analysis of the pregnancy oral glucose tolerance test results showed predictive factors for postpartum glucose intolerance. We also evaluated the clinical usefulness of the prediction model based on decision-curve analysis. Of 123 women, 78 (63.4%) had normoglycaemia and 45 (36.6%) had glucose intolerance. Multivariable logistic regression analysis showed insulinogenic index/fasting immunoreactive insulin and summation of glucose levels, assessed during pregnancy oral glucose tolerance tests (total glucose), to be independent risk factors for postpartum glucose intolerance. Evaluating the regression models, the best discrimination (area under the curve 0.725) was obtained using the basic model (i.e. age, family history of diabetes, BMI ≥25 kg/m 2 and use of insulin during pregnancy) plus insulinogenic index/fasting immunoreactive insulin <1.1. Decision-curve analysis showed that combining insulinogenic index/fasting immunoreactive insulin <1.1 with basic clinical information resulted in superior net benefits for prediction of postpartum glucose intolerance. Insulinogenic index/fasting immunoreactive insulin calculated using oral glucose tolerance test results during pregnancy is potentially useful for predicting early postpartum glucose intolerance in Japanese women with gestational diabetes.
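Decision-curve analysis scores a model by its net benefit across threshold probabilities, compared with treat-all and treat-none strategies; a small sketch on toy data using the standard net-benefit formula.

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt:
    NB = TP/n - FP/n * pt / (1 - pt)."""
    n = len(y_true)
    pred = y_prob >= pt
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Toy example: compare a model against the treat-all strategy.
rng = np.random.default_rng(5)
y = rng.integers(0, 2, 200)
p = np.clip(y * 0.3 + rng.uniform(size=200) * 0.7, 0, 1)
for pt in (0.2, 0.3, 0.4):
    treat_all = np.mean(y) - (1 - np.mean(y)) * pt / (1 - pt)
    print(pt, round(net_benefit(y, p, pt), 3), round(treat_all, 3))
```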
A burnout prediction model based around char morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao Wu; Edward Lester; Michael Cloke
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site
Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.
2016-01-01
Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites. PMID:27996043
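The Sobol-index idea can be illustrated with a plain Monte Carlo pick-freeze estimator on a toy response; the aPC approach in the paper instead reads the indices directly off the expansion coefficients, and the model below is only a stand-in with inputs of unequal influence.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=6):
    """Pick-freeze Monte Carlo estimate (Saltelli et al. 2010) of the
    first-order Sobol indices of f with d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # replace only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy stand-in for the above-zone pressure response (names illustrative).
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]
print(first_order_sobol(model, d=3))
```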
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are widely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on mixing two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists of estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features and results, with performance varying significantly with the number of steps predicted ahead. It will be shown that the best results, in terms of prediction, are achieved when predicting one step ahead in the future. A 7-day prediction can also be performed with a slightly greater error, offering a larger prediction range than the single-day-ahead model.
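A compact sketch of the residual-hybrid idea on a synthetic daily noise series: a least-squares trend-plus-weekly-seasonality fit stands in for the TSA stage, and a small scikit-learn MLP learns one-step-ahead residual dynamics; the lag width, network size, and series are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 365
t = np.arange(n)
# Synthetic daily equivalent noise level with trend + weekly seasonality
leq = 65 + 0.01 * t + 4 * np.sin(2 * np.pi * t / 7) + rng.normal(scale=1.5, size=n)

# TSA step: deterministic trend and weekly seasonality by least squares
X_tsa = np.column_stack([np.ones(n), t,
                         np.sin(2 * np.pi * t / 7), np.cos(2 * np.pi * t / 7)])
beta, *_ = np.linalg.lstsq(X_tsa, leq, rcond=None)
resid = leq - X_tsa @ beta

# ANN step: learn one-step-ahead residual dynamics from the last 7 days
lags = 7
Xr = np.column_stack([resid[i:n - lags + i] for i in range(lags)])
yr = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(Xr, yr)

# Hybrid forecast for day n: TSA component + ANN residual correction
x_next = np.array([1.0, n, np.sin(2 * np.pi * n / 7), np.cos(2 * np.pi * n / 7)])
print(x_next @ beta + ann.predict(resid[-lags:][None, :])[0])
```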
Analysis of Factors Influencing Hydration Site Prediction Based on Molecular Dynamics Simulations
2015-01-01
Water contributes significantly to the binding of small molecules to proteins in biochemical systems. Molecular dynamics (MD) simulation based programs such as WaterMap and WATsite have been used to probe the locations and thermodynamic properties of hydration sites at the surface or in the binding site of proteins, generating important information for structure-based drug design. However, questions associated with the influence of the simulation protocol on hydration site analysis remain. In this study, we use WATsite to investigate the influence of factors such as simulation length and variations in initial protein conformations on hydration site prediction. We find that a 4 ns MD simulation is appropriate to obtain a reliable prediction of the locations and thermodynamic properties of hydration sites. In addition, hydration site prediction can be largely affected by the initial protein conformations used for MD simulation. Here, we provide a first quantification of this effect and further indicate that similar conformations of binding site residues (RMSD < 0.5 Å) are required to obtain consistent hydration site predictions. PMID:25252619
Haitsma, Jack J.; Furmli, Suleiman; Masoom, Hussain; Liu, Mingyao; Imai, Yumiko; Slutsky, Arthur S.; Beyene, Joseph; Greenwood, Celia M. T.; dos Santos, Claudia
2012-01-01
Objectives To perform a meta-analysis of gene expression microarray data from animal studies of lung injury, and to identify an injury-specific gene expression signature capable of predicting the development of lung injury in humans. Methods We performed a microarray meta-analysis using 77 microarray chips across six platforms, two species and different animal lung injury models exposed to lung injury with or without mechanical ventilation. Individual gene chips were classified and grouped based on the strategy used to induce lung injury. Effect size (change in gene expression) was calculated between non-injurious and injurious conditions, comparing two main strategies to pool chips: (1) one-hit and (2) two-hit lung injury models. A random effects model was used to integrate the individual effect sizes calculated from each experiment. Classification models were built using the gene expression signatures generated by the meta-analysis to predict the development of lung injury in human lung transplant recipients. Results Two injury-specific lists of differentially expressed genes generated from our meta-analysis of lung injury models were validated using external data sets and prospective data from animal models of ventilator-induced lung injury (VILI). Pathway analysis of the gene sets revealed that both new and previously implicated VILI-related pathways are enriched with differentially regulated genes. A classification model based on the gene expression signatures identified in animal models of lung injury predicted the development of primary graft failure (PGF) in lung transplant recipients with greater than 80% accuracy based upon injury profiles from transplant donors. We also found that better classifier performance can be achieved by using meta-analysis to identify differentially expressed genes than by using single study-based differential analysis. Conclusion Taken together, our data suggest that microarray analysis of gene expression data allows for the detection of "injury" gene predictors that can classify lung injury samples and identify patients at risk for clinically relevant lung injury complications. PMID:23071521
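The random-effects integration step can be illustrated with a DerSimonian-Laird estimator pooling one gene's effect sizes across studies; the numbers below are toy values, not data from the paper.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects
    model; returns the pooled effect and its standard error."""
    e, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * e) / np.sum(w)
    Q = np.sum(w * (e - mu_fe) ** 2)          # heterogeneity statistic
    df = len(e) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)             # between-study variance
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * e) / np.sum(w_re)
    return mu_re, np.sqrt(1.0 / np.sum(w_re))

# One gene's expression change measured in five injury experiments (toy).
print(dersimonian_laird([1.2, 0.8, 1.5, 0.9, 1.1],
                        [0.05, 0.08, 0.2, 0.1, 0.07]))
```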
Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation.
Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2017-01-02
Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics, and non-rigid motion dynamics to predict and analyze the thermal distribution, thermal-induced mechanical deformation, and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparison analysis demonstrate that the proposed methodology, based on non-Fourier bio-heat transfer, can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model.
Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.
2007-01-01
To improve the accuracy of prediction, a Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on fuzzy clustering analysis, which ensures the diversity as well as the accuracy of the individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of the individual NNs, a GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than a Radial Basis Function Neural Network (RBFNN), a Bagging NN ensemble, and ANNE.
Complex networks as a unified framework for descriptive analysis and predictive modeling in climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinhaeuser, Karsten J K; Chawla, Nitesh; Ganguly, Auroop R
The analysis of climate data has relied heavily on hypothesis-driven statistical methods, while projections of future climate are based primarily on physics-based computational models. However, in recent years a wealth of new datasets has become available. Therefore, we take a more data-centric approach and propose a unified framework for studying climate, with an aim towards characterizing observed phenomena as well as discovering new knowledge in the climate domain. Specifically, we posit that complex networks are well-suited for both descriptive analysis and predictive modeling tasks. We show that the structural properties of climate networks have useful interpretation within the domain. Further, we extract clusters from these networks and demonstrate their predictive power as climate indices. Our experimental results establish that the network clusters are statistically significantly better predictors than clusters derived using a more traditional clustering approach. Using complex networks as data representation thus enables the unique opportunity for descriptive and predictive modeling to inform each other.
Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Watson, Leela R.; Bauman, William H., III; Hoeth, Brian
2009-01-01
This abstract describes work that will be done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting "wind cycling" cases at Edwards Air Force Base, CA (EAFB), in which the wind speeds and directions oscillate among towers near the EAFB runway. The Weather Research and Forecasting (WRF) model allows users to choose between two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. The goal of this project is to assess the different configurations available and determine which configuration will best predict surface wind speed and direction at EAFB.
Dana, Alexandra; Tuller, Tamir
2014-12-01
Gene translation modeling and prediction is a fundamental problem with numerous biomedical applications. In this work we present a novel, user-friendly tool/index for calculating the mean of the typical decoding rates, which enables predicting the translation elongation efficiency of protein-coding genes for different tissue types, developmental stages, and experimental conditions. The suggested translation efficiency index is based on the analysis of the organism's ribosome profiling data. This index could be used, for example, to predict changes in the translation elongation efficiency of lowly expressed genes that usually have relatively low and/or biased ribosomal densities and protein level measurements, or for predicting the translation efficiency of newly engineered genes. We demonstrate the usability of this index via the analysis of six organisms in different tissues and developmental stages. A distributable cross-platform application and guideline are available for download at: http://www.cs.tau.ac.il/~tamirtul/MTDR/MTDR_Install.html.
NASA Technical Reports Server (NTRS)
Bourgeois, S. V.
1973-01-01
This report describes an analysis of Skylab Experiments M551 (Metals Melting), M552 (Exothermic Brazing), and M553 (Sphere Forming). The primary objective is the study of convection in the molten metals and the attendant solidification theory. Particular attention is given to clarifying the effects of reduced gravity on molten metal flow and solidification. Based on an analysis of the physical forces and solidification behavior expected for ground-based and Skylab processing, low-g variations were predicted for each experiment. A comparison was then made with the Skylab results available to date. Both the metallurgical analyses of other investigators and movies of ground-based and Skylab samples were utilized. Several low-g variations in Skylab-processed materials were successfully predicted based on expected variations in physical forces and fluid convection. The same analysis also successfully predicted several features of the Skylab-processed materials that were identical to terrestrially processed materials. These results are summarized in the conclusion section for each experiment.
Lienemann, Kai; Plötz, Thomas; Pestel, Sabine
2008-01-01
The aim of safety pharmacology is the early detection of compound-induced side effects. NMR-based urine analysis followed by multivariate data analysis (metabonomics) efficiently identifies differences between toxic and non-toxic compounds, but in most cases multiple administrations of the test compound are necessary. We tested the feasibility of detecting proximal tubule kidney toxicity and phospholipidosis with metabonomics techniques after a single compound administration as an early safety pharmacology approach. Rats were treated orally, intravenously, inhalatively or intraperitoneally with different test compounds. Urine was collected at 0-8 h and 8-24 h after compound administration, and (1)H NMR patterns were recorded from the samples. Variation of post-processing and feature extraction methods led to different views on the data. Support Vector Machines were trained on these different data sets and then aggregated as experts in an ensemble. Finally, validity was monitored with a cross-validation study using training, validation, and test data sets. Proximal tubule kidney toxicity could be predicted with reasonable total classification accuracy (85%), specificity (88%) and sensitivity (78%). In comparison to alternative histological studies, results were obtained more quickly, compound need was reduced, and, very importantly, fewer animals were needed. In contrast, the induction of phospholipidosis by the test compounds could not be predicted using NMR-based urine analysis or the previously published biomarker PAG. NMR-based urine analysis was shown to effectively predict proximal tubule kidney toxicity after single compound administration in rats. Thus, this experimental design allows early detection of toxicity risks with relatively low amounts of compound in a reasonably short period of time.
Based on BP Neural Network Stock Prediction
ERIC Educational Resources Information Center
Liu, Xiangwei; Ma, Xin
2012-01-01
The stock market has high-profit and high-risk features, so stock market analysis and prediction research has long attracted attention. The stock price trend is a complex nonlinear function, and thus the price has a certain degree of predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…
Smart Extraction and Analysis System for Clinical Research.
Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung
2017-05-01
With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile survival statistics and predict the future chance of survival. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS was evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.
Wu, Jia; Gong, Guanghua; Cui, Yi; Li, Ruijiang
2016-11-01
To predict the pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer.
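A small sketch of the GLCM texture step on a toy quantized patch, using scikit-image (graycomatrix/graycoprops, spelled greycomatrix in releases before 0.19); the distances, angles, gray-level count, and the four properties shown are assumptions, not necessarily the paper's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit "tumor subregion" patch; in the study these would be voxels of
# one k-means-defined DCE-MRI subregion, quantized to a few gray levels.
rng = np.random.default_rng(8)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
# Four Haralick-style properties averaged over the two angles.
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).mean())
```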
Evolutionary Game Theory Analysis of Tumor Progression
NASA Astrophysics Data System (ADS)
Wu, Amy; Liao, David; Sturm, James; Austin, Robert
2014-03-01
Evolutionary game theory applied to two interacting cell populations can yield quantitative predictions of the future densities of the two cell populations based on the initial interaction terms. We will discuss how, in a complex ecology, evolutionary game theory successfully predicts the future densities of strains of stromal and cancer cells (multiple myeloma), and discuss the possible clinical use of such analysis for predicting cancer progression. Supported by the National Science Foundation and the National Cancer Institute.
Toward Hypertension Prediction Based on PPG-Derived HRV Signals: a Feasibility Study.
Lan, Kun-Chan; Raknim, Paweeya; Kao, Wei-Fong; Huang, Jyh-How
2018-04-21
Heart rate variability (HRV) is often used to assess the risk of cardiovascular disease, and data on this can be obtained via electrocardiography (ECG). However, collecting heart rate data via photoplethysmography (PPG) is now a lot easier. We investigate the feasibility of using the PPG-based heart rate to estimate HRV and predict diseases. We obtain three months of PPG-based heart rate data from subjects with and without hypertension, and calculate the HRV based on various forms of time and frequency domain analysis. We then apply a data mining technique to this estimated HRV data, to see if it is possible to correctly identify patients with hypertension. We use six HRV parameters to predict hypertension, and find SDNN has the best predictive power. We show that early disease prediction is possible through collecting one's PPG-based heart rate information.
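For reference, the standard time-domain HRV measures can be computed directly from PPG-derived RR (peak-to-peak) intervals; a minimal sketch with toy interval values, SDNN being the measure this study found most predictive.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV measures from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                     # overall variability
        "RMSSD": np.sqrt(np.mean(diff ** 2)),       # short-term variability
        "pNN50": np.mean(np.abs(diff) > 50) * 100,  # % successive diffs > 50 ms
    }

# RR intervals derived from PPG peak-to-peak times (toy values, ms).
rr = 800 + 40 * np.random.default_rng(9).standard_normal(300)
print(hrv_time_domain(rr))
```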
NASA Astrophysics Data System (ADS)
Sun, Yuxing
2018-05-01
In this paper, a grey prediction model is used to predict carbon emissions in Hebei province, and an impact analysis model based on TermCo2 is established. At the same time, drawing on an extensive review of the CGE literature, we study how to construct scenarios, select key parameters, and perform sensitivity analysis of application scenarios, providing a reference for industry.
Li, Zai-Shang; Chen, Peng; Yao, Kai; Wang, Bin; Li, Jing; Mi, Qi-Wu; Chen, Xiao-Feng; Zhao, Qi; Li, Yong-Hong; Chen, Jie-Ping; Deng, Chuang-Zhong; Ye, Yun-Lin; Zhong, Ming-Zhu; Liu, Zhuo-Wei; Qin, Zi-Ke; Lin, Xiang-Tian; Liang, Wei-Cong; Han, Hui; Zhou, Fang-Jian
2016-04-12
To determine the predictive value and feasibility of a new outcome prediction model for Chinese patients with penile squamous cell carcinoma. The 3-year disease-specific survival (DSS) was 92.3% in patients with CRP < 8.70 mg/L and 54.9% in those with elevated CRP (P < 0.001). The 3-year DSS was 86.5% in patients with a BMI < 22.6 kg/m2 and 69.9% in those with a higher BMI (P = 0.025). In a multivariate analysis, pathological T stage (P < 0.001), pathological N stage (P = 0.002), BMI (P = 0.002), and CRP (P = 0.004) were independent predictors of DSS. A new scoring model was developed, consisting of BMI, CRP, and tumor T and N classification. In our study, we found that the addition of the above-mentioned parameters significantly increased the predictive accuracy of the American Joint Committee on Cancer (AJCC) anatomic stage group system. The accuracy of the new prediction category was verified. A total of 172 Chinese patients with penile squamous cell cancer were analyzed retrospectively between November 2005 and November 2014. Statistical data analysis was conducted using nonparametric methods. Survival analysis was performed with the log-rank test and the Cox proportional hazards model. Based on regression estimates of significant parameters in the multivariate analysis, a new BMI-, CRP- and pathologic factor-based scoring model was developed to predict disease-specific outcomes. The predictive accuracy of the model was evaluated using internal and external validation. The present study demonstrates that the TNCB score group system may be a precise and easy-to-use tool for predicting outcomes in Chinese penile squamous cell carcinoma patients.
Fink, Günther; Victora, Cesar G; Harttgen, Kenneth; Vollmer, Sebastian; Vidaletti, Luís Paulo; Barros, Aluisio J D
2017-04-01
To compare the predictive power of synthetic absolute income measures with that of asset-based wealth quintiles in low- and middle-income countries (LMICs) using child stunting as an outcome. We pooled data from 239 nationally representative household surveys from LMICs and computed absolute incomes in US dollars based on households' asset rank as well as data on national consumption and inequality levels. We used multivariable regression models to compare the predictive power of the created income measure with the predictive power of existing asset indicator measures. In cross-country analysis, log absolute income predicted 54.5% of stunting variation observed, compared with 20% of variation explained by wealth quintiles. For within-survey analysis, we also found absolute income gaps to be predictive of the gaps between stunting in the wealthiest and poorest households (P < .001). Our results suggest that absolute income levels can greatly improve the prediction of stunting levels across and within countries over time, compared with models that rely solely on relative wealth quintiles.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function, since a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from both its past and its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over conventional linear prediction and QCP methods.
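A minimal sketch of the forward-backward prediction idea: each frame sample contributes both a forward and a backward prediction equation, so the least-squares fit sees roughly twice as many equations. This is an unweighted illustration under assumed parameters; the temporal weighting that defines QCP proper is omitted.

```python
# Forward-backward linear prediction by stacked least squares (unweighted).
import numpy as np

def lp_forward_backward(x, p):
    """Return order-p LP coefficients a such that x[n] ~ sum_k a[k-1] * x[n-k]."""
    x = np.asarray(x, dtype=float)
    rows, targets = [], []
    for n in range(p, len(x)):
        rows.append(x[n - p:n][::-1])   # forward: predict x[n] from p past samples
        targets.append(x[n])
        rows.append(x[n - p + 1:n + 1]) # backward: predict x[n-p] from p future samples
        targets.append(x[n - p])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return a

# Toy signal: a noisy sinusoid standing in for one analysis frame
x = np.sin(2 * np.pi * 0.07 * np.arange(400)) + 0.01 * np.random.randn(400)
print(lp_forward_backward(x, p=10))
```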
Modeling of short fiber reinforced injection moulded composite
NASA Astrophysics Data System (ADS)
Kulkarni, A.; Aswini, N.; Dandekar, C. R.; Makhe, S.
2012-09-01
A micromechanics-based finite element model (FEM) is developed to facilitate the design of a new production-quality fiber reinforced plastic injection molded part. The composite part under study is composed of a polyetheretherketone (PEEK) matrix reinforced with short carbon fibers at a 30% volume fraction. The constitutive material models are obtained using micromechanics-based homogenization theories. The analysis is carried out by coupling two commercial codes, Moldflow and ANSYS. Moldflow is used to predict the fiber orientation by considering the flow kinetics and molding parameters; the material models are then input into ANSYS according to the predicted fiber orientation, and the structural analysis is carried out. The coupling thus established between Moldflow and ANSYS enables the analysis of short fiber reinforced injection moulded composite parts. The load-deflection curve is obtained for three constitutive material models, namely isotropy, transverse isotropy, and orthotropy. Average values of the predicted quantities compare well with experimental results. In this manner, the coupled Moldflow-ANSYS model successfully predicts the load-deflection curve of a composite injection molded part.
ASME V&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
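As an illustration of the surrogate idea, the sketch below fits a Gaussian-process surrogate (one of the two surrogate families named above) to a handful of runs of a stand-in "simulator" and then propagates input uncertainty through the cheap surrogate; the function, sample sizes, and distributions are toy assumptions.

```python
# Surrogate-based uncertainty propagation: train on few expensive runs,
# then Monte Carlo over the cheap surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):            # placeholder for a long-running code
    return np.sin(3 * x) + 0.5 * x**2

X_train = np.linspace(0, 2, 12).reshape(-1, 1)   # only 12 simulator runs
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# 10^5 surrogate evaluations are cheap compared to 10^5 simulator runs
inputs = np.random.normal(1.0, 0.2, size=(100_000, 1))  # uncertain input
outputs = gp.predict(inputs)
print(outputs.mean(), outputs.std())
```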
NASA Astrophysics Data System (ADS)
Monfared, Vahid
2016-12-01
An analytical model is presented for behavioral analysis of plastic deformations in reinforced materials using circular (trigonometric) functions. The analytical method is proposed to predict the creep behavior of fibrous composites based on basic and constitutive equations under a tensile axial stress. A new insight of this work is the prediction of several important behaviors of the creeping matrix, and in the present model this prediction is simpler than with available methods. Principal creep strain rate behavior is particularly important for designing fibrous composites under creep, and its analysis in reinforced materials is necessary for failure, fracture, and fatigue studies in the creep of short fiber composites. Shuttles, spaceships, turbine blades and discs, and nozzle guide vanes are commonly subjected to creep effects, and predicting creep behavior is also significant for designing optoelectronic and photonic advanced composites with optical fibers. As a result, uniform behavior with constant gradient is seen in the principal creep strain rate, and creep rupture may occur at the fiber end. Finally, good agreement is found between the obtained analytical and FEM results.
NASA Astrophysics Data System (ADS)
Murrill, Steven R.; Jacobs, Eddie L.; Franck, Charmaine C.; Petkie, Douglas T.; De Lucia, Frank C.
2015-10-01
The U.S. Army Research Laboratory (ARL) has continued to develop and enhance a millimeter-wave (MMW) and submillimeter-wave (SMMW)/terahertz (THz)-band imaging system performance prediction and analysis tool for both the detection and identification of concealed weaponry, and for pilotage obstacle avoidance. The details of the MATLAB-based model, which accounts for the effects of all critical sensor and display components, for the effects of atmospheric attenuation, concealment material attenuation, and active illumination, were reported on at the 2005 SPIE Europe Security and Defence Symposium (Brugge). An advanced version of the base model that accounts for both the dramatic impact that target and background orientation can have on target observability as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported on at the 2007 SPIE Defense and Security Symposium (Orlando). Further development of this tool that includes a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs that allow for straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures was reported on at the 2011 SPIE Europe Security and Defence Symposium (Prague). This paper provides a comprehensive review of a newly enhanced MMW and SMMW/THz imaging system analysis and design tool that now includes an improved noise sub-model for more accurate and reliable performance predictions, the capability to account for post-capture image contrast enhancement, and the capability to account for concealment material backscatter with active-illumination-based systems. Present plans for additional expansion of the model's predictive capabilities are also outlined.
PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages
Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi
2017-01-01
Direct and indirect functional links between proteins, as well as their interactions as part of larger protein complexes or common signaling pathways, may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrence model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
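The underlying scoring idea, correlating binary presence/absence profiles across genomes, can be illustrated in a few lines; the protein names and profiles below are synthetic, and this is a generic correlation score, not the PrePhyloPro scoring function itself.

```python
# Toy phylogenetic-profile linkage scoring: correlated presence/absence
# patterns across genomes suggest a functional link.
import numpy as np
from scipy.stats import pearsonr

genomes = 40
rng = np.random.default_rng(0)
profiles = {                        # 1 = ortholog present in genome, 0 = absent
    "protA": rng.integers(0, 2, genomes),
    "protC": rng.integers(0, 2, genomes),
}
# protB: a near-copy of protA's profile with ~10% flips (a "linked" pair)
profiles["protB"] = profiles["protA"] ^ (rng.random(genomes) < 0.1)

names = sorted(profiles)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = pearsonr(profiles[names[i]], profiles[names[j]])
        print(f"{names[i]}-{names[j]}: r={r:+.2f} p={p:.3g}")
```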
Aaron Weiskittel; Jereme Frank; David Walker; Phil Radtke; David Macfarlane; James Westfall
2015-01-01
Prediction of forest biomass and carbon is becoming an important issue in the United States. However, estimating forest biomass and carbon is difficult and relies on empirically derived regression equations. Based on recent findings from a national gap analysis and a comprehensive assessment of the USDA Forest Service Forest Inventory and Analysis (USFS-FIA) component...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Taraj; Brasseur, James; Vijayakumar, Ganesh
2016-01-04
This study is aimed at gaining insight into the nonsteady transitional boundary layer dynamics of wind turbine blades, and into the predictive capabilities of URANS-based transition and turbulence models for such physics, through the analysis of a controlled flow with similar nonsteady parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magome, T; Haga, A; Igaki, H
Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected in this study. All patients were treated with a prescribed dose of 30–80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on a multiple linear regression analysis and an artificial neural network (ANN) using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected by a stepwise method using the 30 training cases. The prediction accuracy was evaluated by the coefficient of determination (R²) between the predicted and actual survival times for the training and test datasets. Results: In the multiple regression analysis, R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. In the ANN analysis, R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a larger number of patients would be needed for more accurate and robust prediction, our preliminary result showed the potential to predict the outcome in patients with high-grade glioma. This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and a Grant-in-Aid for JSPS Fellows.
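A sketch of the comparison performed above, ordinary least squares versus a small neural network scored by R² on held-out cases, is shown below on synthetic stand-in data (the real study used 204 clinical and DVH features with stepwise selection, which is omitted here).

```python
# Linear regression vs. a small ANN for survival-time regression on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(35, 10))            # 35 "patients", 10 selected features
y = X @ rng.normal(size=10) + np.tanh(X[:, 0] * X[:, 1]) + 0.3 * rng.normal(size=35)

# 30 training cases and 5 test cases, mirroring the study's split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=5, random_state=0)

for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__,
          "train R2 = %.3f" % r2_score(y_tr, model.predict(X_tr)),
          "test R2 = %.3f" % r2_score(y_te, model.predict(X_te)))
```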
A new method to predict anatomical outcome after idiopathic macular hole surgery.
Liu, Peipei; Sun, Yaoyao; Dong, Chongya; Song, Dan; Jiang, Yanrong; Liang, Jianhong; Yin, Hong; Li, Xiaoxin; Zhao, Mingwei
2016-04-01
To investigate whether a new macular hole closure index (MHCI) could predict the anatomic outcome of macular hole surgery. A vitrectomy with internal limiting membrane peeling, air-fluid exchange, and gas tamponade was performed on all patients. The postoperative anatomic status of the macular hole was defined by spectral-domain OCT. MHCI was calculated as (M+N)/BASE from the preoperative OCT, where M and N were the curve lengths of the detached photoreceptor arms and BASE was the length of the retinal pigment epithelial (RPE) layer detached from the photoreceptors. Postoperative anatomical outcomes were divided into three grades: A (bridge-like closure), B (good closure), and C (poor closure or no closure). Correlation analysis was performed between anatomical outcomes and MHCI. Receiver operating characteristic (ROC) curves were derived for MHCI, indicating good model discrimination; the ROC curves were assessed by the area under the curve, and cut-offs were calculated. Other previously reported predictive parameters, including the MH minimum, the MH height, the macular hole index (MHI), the diameter hole index (DHI), and the tractional hole index (THI), were compared as well. MHCI correlated significantly with postoperative anatomical outcomes (r = 0.543, p < 0.001), but the other predictive parameters did not. The areas under the curves indicated that MHCI could be used as an effective predictor of anatomical outcome. Cut-off values of 0.7 and 1.0 were obtained for MHCI from the ROC curve analysis. MHCI demonstrated a better predictive effect than the other parameters in both the correlation and ROC analyses. MHCI could be an easily measured and accurate predictive index for postoperative anatomical outcomes.
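The ROC cut-off derivation for an index such as MHCI can be sketched as follows; the index values and outcomes are synthetic, and the Youden J statistic is one common criterion for picking the cut-off (the abstract does not state which criterion was used).

```python
# ROC curve, AUC, and a Youden-J cutoff for a synthetic predictive index.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

mhci = np.array([0.4, 0.55, 0.65, 0.72, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])
closed = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])    # 1 = good closure

fpr, tpr, thresholds = roc_curve(closed, mhci)
youden = tpr - fpr
best = thresholds[np.argmax(youden)]                 # cutoff maximizing J = tpr - fpr
print("AUC =", roc_auc_score(closed, mhci), "cutoff =", best)
```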
Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa
2013-02-01
A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.
NASA Astrophysics Data System (ADS)
Ghaemi, Z.; Farnaghi, M.; Alimohammadi, A.
2015-12-01
The critical impact of air pollution on human health and the environment on the one hand, and the complexity of pollutant concentration behavior on the other, lead scientists to look for advanced techniques for monitoring and predicting urban air quality. Additionally, recent developments in measurement techniques have led to the collection of various types of data about air quality. Such data are extremely voluminous and, to be useful, must be processed at high velocity. Due to the complexity of big data analysis, especially for dynamic applications, online forecasting of pollutant concentration trends within a reasonable processing time is still an open problem. The purpose of this paper is to present an online forecasting approach based on the Support Vector Machine (SVM) to predict air quality one day in advance. In order to meet the computational requirements of large-scale data analysis, distributed computing based on the Hadoop platform has been employed to leverage the processing power of multiple processing units, and the MapReduce programming model is adopted for massive parallel processing. Based on the online algorithm and the Hadoop framework, an online forecasting system is designed to predict the air pollution of Tehran for the next 24 hours. The results have been assessed in terms of processing time and efficiency. Quite accurate predictions of air pollutant indicator levels within an acceptable processing time prove that the presented approach is well suited to tackling large-scale air pollution prediction problems.
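A single-machine sketch of the SVM forecasting step is shown below (the paper's contribution is distributing this over Hadoop/MapReduce, which is omitted here); the pollutant series and lag structure are assumptions made for illustration.

```python
# Next-day forecasting with support vector regression on lagged readings.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
series = 50 + 10 * np.sin(np.arange(400) / 7.0) + rng.normal(0, 2, 400)  # AQI-like

lags = 7
X = np.array([series[t - lags:t] for t in range(lags, len(series))])
y = series[lags:]                          # the value one step ahead of each window

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X[:-30], y[:-30])                # hold out the last 30 days
print("forecast:", model.predict(X[-1:])[0], "actual:", y[-1])
```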
NASA Technical Reports Server (NTRS)
Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.
1997-01-01
A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.
Pretreatment data is highly predictive of liver chemistry signals in clinical trials
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
Purpose: The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Patients and methods: Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Results: Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. Conclusion: It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones. PMID:23226004
NASA Astrophysics Data System (ADS)
Ozheredov, V. A.; Breus, T. K.; Obridko, V. N.
2012-12-01
As follows from the statement of the Third Official Solar Cycle 24 Prediction Panel created by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and the International Space Environment Service (ISES) based on the results of an analysis of many solar cycle 24 predictions, there has been no consensus on the amplitude and time of the maximum. There are two different scenarios: 90 units and August 2012 or 140 units and October 2011. The aim of our study is to revise the solar cycle 24 predictions by a comparative analysis of data obtained by three different methods: the singular spectral method, the nonlinear neural-based method, and the precursor method. As a precursor for solar cycle 24, we used the dynamics of the solar magnetic fields forming solar spots with Wolf numbers Rz. According to the prediction on the basis of the neural-based approach, it was established that the maximum of solar cycle 24 is expected to be 70. The precursor method predicted 50 units for the amplitude and April of 2012 for the time of the maximum. In view of the fact that the data used in the precursor method were averaged over 4.4 years, the amplitude of the maximum can be 20-30% larger (i.e., around 60-70 units), which is close to the values predicted by the neural-based method. The protracted minimum of solar cycle 23 and predicted low values of the maximum of solar cycle 24 are reminiscent of the historical Dalton minimum.
US Intergroup Anal Carcinoma Trial: Tumor Diameter Predicts for Colostomy
Ajani, Jaffer A.; Winter, Kathryn A.; Gunderson, Leonard L.; Pedersen, John; Benson, Al B.; Thomas, Charles R.; Mayer, Robert J.; Haddock, Michael G.; Rich, Tyvin A.; Willett, Christopher G.
2009-01-01
Purpose: The US Gastrointestinal Intergroup Radiation Therapy Oncology Group 98-11 anal carcinoma trial showed that cisplatin-based concurrent chemoradiotherapy resulted in a significantly higher rate of colostomy compared with mitomycin-based therapy. Established prognostic variables for patients with anal carcinoma include tumor diameter, clinical nodal status, and sex, but pretreatment variables that would predict the likelihood of colostomy are unknown. Methods: A secondary analysis was performed by combining patients in the two treatment arms to evaluate whether new predictive and prognostic variables would emerge. Univariate and multivariate analyses were carried out to correlate overall survival (OS), disease-free survival, and time to colostomy (TTC) with pretreatment and treatment variables. Results: Of 682 patients enrolled, 644 patients were assessable and analyzed. In the multivariate analysis, tumor-related prognosticators for poorer OS included node-positive cancer (P ≤ .0001), large (> 5 cm) tumor diameter (P = .01), and male sex (P = .016). In the treatment-related categories, cisplatin-based therapy was statistically significantly associated with a higher rate of colostomy (P = .03) than was mitomycin-based therapy. In the pretreatment variables category, only large tumor diameter independently predicted for TTC (P = .008). Similarly, the cumulative 5-year colostomy rate was statistically significantly higher for large tumor diameter than for small tumor diameter (Gray's test; P = .0074). Clinical nodal status and sex were not predictive of TTC. Conclusion: The combined analysis of the two arms of RTOG 98-11, representing the largest prospective database, reveals that tumor diameter (irrespective of the nodal status) is the only independent pretreatment variable that predicts TTC and 5-year colostomy rate in patients with anal carcinoma. PMID:19139424
NASA Astrophysics Data System (ADS)
Srivastava, P. K.; Han, D.; Rico-Ramirez, M. A.; Bray, M.; Islam, T.; Petropoulos, G.; Gupta, M.
2015-12-01
Hydro-meteorological variables such as precipitation and reference evapotranspiration (ETo) are the most important variables for discharge prediction. However, it is not always possible to obtain them from ground-based measurements, particularly in ungauged catchments. The mesoscale WRF (Weather Research & Forecasting) model can be used to predict hydro-meteorological variables, but hydro-meteorologists need to know how well the downscaled global data products compare with ground-based measurements and whether the downscaled data can be used for ungauged catchments. Even in gauged catchments, most stations have only rain and flow gauges installed; measurements of other hydro-meteorological variables such as solar radiation, wind speed, air temperature, and dew point are usually missing, which complicates the problem. In this study, for downscaling the global datasets, the WRF model is set up over the Brue catchment with three nested domains (D1, D2 and D3) with horizontal grid spacings of 81 km, 27 km and 9 km. The hydro-meteorological variables are downscaled using the WRF model from the National Centers for Environmental Prediction (NCEP) reanalysis datasets and subsequently used for ETo estimation with the Penman-Monteith equation. The downscaled weather variables and precipitation are compared against ground-based datasets; the comparison indicates that the datasets agree with the observations over the complete monitoring period as well as seasonally, except for precipitation, whose performance is poorer relative to the measured rainfall. The WRF-estimated precipitation and ETo are then used as input to the Probability Distributed Model (PDM) for discharge prediction. Input data and model parameter sensitivity analysis and uncertainty estimation are also taken into account for the PDM calibration and prediction, following the Generalised Likelihood Uncertainty Estimation (GLUE) approach. The overall analysis suggests that the uncertainty estimates in discharge predicted using WRF-downscaled ETo are comparable in performance to those based on ground-based observations, which is promising for discharge prediction in the absence of ground-based measurements.
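The ETo step can be sketched with the standard FAO-56 daily Penman-Monteith form, here simplified to use mean temperature and mean humidity; the input values are illustrative, not Brue catchment data.

```python
# Simplified FAO-56 daily Penman-Monteith reference evapotranspiration.
import math

def eto_fao56(t_mean, rn, u2, rh_mean, g=0.0, pressure=101.3):
    """Daily reference evapotranspiration (mm/day).

    t_mean  : mean air temperature (deg C)
    rn      : net radiation (MJ m-2 day-1)
    u2      : wind speed at 2 m (m/s)
    rh_mean : mean relative humidity (%)
    g       : soil heat flux (MJ m-2 day-1), ~0 for daily steps
    pressure: atmospheric pressure (kPa)
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # kPa
    ea = es * rh_mean / 100.0                                   # kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # kPa/degC
    gamma = 0.000665 * pressure                                 # kPa/degC
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

print(eto_fao56(t_mean=16.0, rn=12.5, u2=2.1, rh_mean=78.0))
```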
Barbieri, Christopher E; Cha, Eugene K; Chromecki, Thomas F; Dunning, Allison; Lotan, Yair; Svatek, Robert S; Scherr, Douglas S; Karakiewicz, Pierre I; Sun, Maxine; Mazumdar, Madhu; Shariat, Shahrokh F
2012-03-01
• To employ decision curve analysis to determine the impact of nuclear matrix protein 22 (NMP22) on clinical decision making in the detection of bladder cancer using data from a prospective trial. • The study included 1303 patients at risk for bladder cancer who underwent cystoscopy, urine cytology and measurement of urinary NMP22 levels. • We constructed several prediction models to estimate risk of bladder cancer. The base model was generated using patient characteristics (age, gender, race, smoking and haematuria); cytology and NMP22 were added to the base model to determine effects on predictive accuracy. • Clinical net benefit was calculated by summing the benefits and subtracting the harms and weighting these by the threshold probability at which a patient or clinician would opt for cystoscopy. • In all, 72 patients were found to have bladder cancer (5.5%). In univariate analyses, NMP22 was the strongest predictor of bladder cancer presence (predictive accuracy 71.3%), followed by age (67.5%) and cytology (64.3%). • In multivariable prediction models, NMP22 improved the predictive accuracy of the base model by 8.2% (area under the curve 70.2-78.4%) and of the base model plus cytology by 4.2% (area under the curve 75.9-80.1%). • Decision curve analysis revealed that adding NMP22 to other models increased clinical benefit, particularly at higher threshold probabilities. • NMP22 is a strong, independent predictor of bladder cancer. • Addition of NMP22 improves the accuracy of standard predictors by a statistically and clinically significant margin. • Decision curve analysis suggests that integration of NMP22 into clinical decision making helps avoid unnecessary cystoscopies, with minimal increased risk of missing a cancer.
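The net-benefit quantity underlying decision curve analysis has a simple closed form, sketched below on synthetic probabilities with roughly the study's 5.5% prevalence; the thresholds and model outputs are assumptions.

```python
# Net benefit at threshold probability pt, vs. "act on everyone" / "no one".
import numpy as np

def net_benefit(y, prob, pt):
    act = prob >= pt
    tp = np.sum(act & (y == 1))
    fp = np.sum(act & (y == 0))
    n = len(y)
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.055, 1303)                 # ~5.5% cancer prevalence
prob = np.clip(0.05 + 0.4 * y + rng.normal(0, 0.1, 1303), 0.001, 0.999)

for pt in (0.05, 0.10, 0.20):
    treat_all = np.mean(y) - (1 - np.mean(y)) * pt / (1 - pt)
    print(f"pt={pt:.2f} model NB={net_benefit(y, prob, pt):+.4f} "
          f"all NB={treat_all:+.4f} none NB=+0.0000")
```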
Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach
Cheein, Fernando Auat; Scaglia, Gustavo
2012-01-01
Stationary range laser sensors are extensively used in risky environments for intruder monitoring, restricted space violation detection and workspace determination. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-based restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experimental results presented herein, carried out in different scenarios, our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
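A minimal sketch of the Taylor-expansion predictor: velocity and acceleration are estimated from consecutive laser detections by finite differences, and the position is extrapolated a short horizon ahead; the geometry and values are invented for illustration.

```python
# Second-order Taylor extrapolation of an agent's position from laser fixes.
import numpy as np

def predict_position(p, dt, horizon):
    """p: (3, 2) array of the last three 2-D positions, oldest first."""
    v = (p[2] - p[1]) / dt                        # first-order (velocity) term
    a = (p[2] - 2 * p[1] + p[0]) / dt**2          # second-order (acceleration) term
    return p[2] + v * horizon + 0.5 * a * horizon**2

positions = np.array([[4.0, 0.0], [3.6, 0.1], [3.1, 0.2]])  # approaching agent
p_pred = predict_position(positions, dt=0.1, horizon=0.5)

restricted_radius = 2.0                           # laser-defined boundary
if np.linalg.norm(p_pred) < restricted_radius:
    print("trespass predicted at", p_pred)
```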
Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele
2016-11-01
In recent years, physically-based numerical models have frequently been used in the framework of early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. In this work we therefore describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. To test the reliability of this model, a back analysis of recent landslide events that occurred in the study area (located SW of Messina, northeastern Sicily, Italy) on October 1st, 2009 was performed. The simulation results have been compared with those obtained for the same event using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year period was performed for the same area, with the aim of evaluating the performance of SLIP as an early warning tool. The results confirm the good predictive capability of the model, in terms of both spatial and temporal prediction of the instability phenomena. We therefore recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach.
Axisymmetric computational fluid dynamics analysis of a film/dump-cooled rocket nozzle plume
NASA Technical Reports Server (NTRS)
Tucker, P. K.; Warsi, S. A.
1993-01-01
Prediction of convective base heating rates for a new launch vehicle presents significant challenges to analysts concerned with base environments. The present effort seeks to augment classical base heating scaling techniques via a detailed investigation of the exhaust plume shear layer of a single H2/O2 Space Transportation Main Engine (STME). Use of fuel-rich turbine exhaust to cool the STME nozzle presented concerns regarding potential recirculation of these gases to the base region with attendant increase in the base heating rate. A pressure-based full Navier-Stokes computational fluid dynamics (CFD) code with finite rate chemistry is used to predict plumes for vehicle altitudes of 10 kft and 50 kft. Levels of combustible species within the plume shear layers are calculated in order to assess assumptions made in the base heating analysis.
Salvaggio, C N; Forman, E J; Garnsey, H M; Treff, N R; Scott, R T
2014-09-01
Polar body biopsy represents one possible solution for performing comprehensive chromosome screening (CCS). This study adds to what is known about the predictive value of polar body based testing for the genetic status of the resulting embryo and, more importantly, provides the first evaluation of its predictive value for actual clinical outcomes after embryo transfer. SNP array analysis was performed on the first polar body, the second polar body, and either a blastomere or trophectoderm biopsy, or the entire arrested embryo. Concordance of the polar body-based prediction with the observed diagnoses in the embryos was assessed. In addition, the predictive value of the polar body-based diagnosis for the specific clinical outcome of transferred embryos was evaluated through the use of DNA fingerprinting to track individual embryos. There were 459 embryos analyzed from 96 patients with a mean maternal age of 35.3 years. The polar body-based predictive value for the embryo-based diagnosis was 70.3%. The blastocyst implantation predictive value of a euploid trophectoderm was higher than that of euploid polar bodies (51% versus 40%). The cleavage stage embryo implantation predictive value of a euploid blastomere was also higher than that of euploid polar bodies (31% versus 22%). Polar body based aneuploidy screening results were less predictive of actual clinical outcomes than direct embryo assessment and may not be adequate to improve sustained implantation rates. In nearly one-third of cases the polar body based analysis failed to predict the ploidy of the embryo. This imprecision may hinder efforts for polar body based CCS to improve IVF clinical outcomes.
RNA-based determination of ESR1 and HER2 expression and response to neoadjuvant chemotherapy.
Denkert, C; Loibl, S; Kronenwett, R; Budczies, J; von Törne, C; Nekljudova, V; Darb-Esfahani, S; Solbach, C; Sinn, B V; Petry, C; Müller, B M; Hilfrich, J; Altmann, G; Staebler, A; Roth, C; Ataseven, B; Kirchner, T; Dietel, M; Untch, M; von Minckwitz, G
2013-03-01
Hormone and human epidermal growth factor receptor 2 (HER2) receptors are the most important breast cancer biomarkers, and additional objective and quantitative test methods such as messenger RNA (mRNA)-based quantitative analysis are urgently needed. In this study, we investigated the clinical validity of RT-PCR-based evaluation of estrogen receptor (ESR1) and HER2 mRNA expression. A total of 1050 core biopsies from two retrospective (GeparTrio, GeparQuattro) and one prospective (PREDICT) neoadjuvant studies were evaluated by quantitative RT-PCR for ESR1 and HER2. ESR1 mRNA was significantly predictive of reduced response to neoadjuvant chemotherapy in univariate and multivariate analysis in all three cohorts. The pathological complete response (pCR) rate for ESR1+/HER2- tumors was 7.3%, 8.0% and 8.6%; for ESR1-/HER2- tumors it was 34.4%, 33.7% and 37.3% in GeparTrio, GeparQuattro and PREDICT, respectively (P < 0.001 in each cohort). In the Kaplan-Meier analysis in GeparTrio, patients with ESR1+/HER2- tumors had the best prognosis, compared with ESR1-/HER2- and ESR1-/HER2+ tumors [disease-free survival (DFS): P < 0.0005, overall survival (OS): P < 0.0005]. Our results suggest that mRNA levels of ESR1 and HER2 predict response to neoadjuvant chemotherapy and are significantly associated with long-term outcome. As an additional option to standard immunohistochemistry and gene-array-based analysis, quantitative RT-PCR analysis might be useful for determining receptor status in breast cancer.
Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.
Davidich, Maria; Köster, Gerta
2013-01-01
Building a reliable predictive model of pedestrian motion is very challenging: ideally, such models should be based on observations made both in controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: for instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such a realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario, and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.
Bonetti, Debbie; Johnston, Marie; Clarkson, Jan E; Grimshaw, Jeremy; Pitts, Nigel B; Eccles, Martin; Steen, Nick; Thomas, Ruth; Maclennan, Graeme; Glidewell, Liz; Walker, Anne
2010-04-08
Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour: the placing of fissure sealants. Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making), and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. For behavioural simulation, theory-level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. For behavioural intention, theory-level variance explained was: TPB 30%; SCT 24%; OLT 58%; CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for identifying factors that may predict clinical behaviour and so provide possible targets for knowledge translation interventions. Results suggest that more evidence-based behaviour may be achieved by influencing beliefs about the positive outcomes of placing fissure sealants and building a habit of placing them as part of patient management. However a number of conceptual and methodological challenges remain.
Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.
2010-12-01
The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All the aforementioned tedious jobs are done automatically by the computer. Moreover, multiprocessing was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.
Copula based prediction models: an application to an aortic regurgitation study
Kumar, Pranesh; Shoukri, Mohamed M
2007-01-01
Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for slope coefficient (0.4810, 1.3003). The predicted post-operative ejection fractions from the two models differ considerably in the lower range of pre-operative ejection measurements, and the prediction errors of the copula model are smaller. To validate the copula methodology we have re-sampled with replacement fifty independent bootstrap samples and have estimated concordance statistics 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of accuracy components are 0.9233 and 0.8654 for the copula and correlation models respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models are not appropriate to model the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
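A sketch of the copula construction described above: sampling a Clayton copula (one member of the Archimedean family) by conditional inversion and mapping the uniforms to gamma margins, as the paper's q-q plots suggest. The parameter values are illustrative, not the study's estimates.

```python
# Clayton copula sampling via conditional inversion, with gamma margins.
import numpy as np
from scipy.stats import gamma

def sample_clayton(n, theta, rng):
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    # Invert the conditional distribution C(v | u) of the Clayton copula
    v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

rng = np.random.default_rng(0)
u, v = sample_clayton(5000, theta=2.0, rng=rng)

pre = gamma.ppf(u, a=20, scale=0.55 / 20)    # pre-operative ejection fraction
post = gamma.ppf(v, a=18, scale=0.52 / 18)   # post-operative ejection fraction
print(np.corrcoef(pre, post)[0, 1])          # induced dependence between margins
```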
Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na
2016-11-23
A risk prediction model for non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and controls were obtained from 52 birth defect surveillance hospitals in Hunan Province, China. A questionnaire was administered in face-to-face interviews to collect the variables relevant to NSCL/P. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 were retained as predictors in the discriminant prediction model: family income, maternal occupational hazard exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazard exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8, P < 0.001). Self-verification showed that 83.8% of the participants were correctly classified as NSCL/P cases or controls, with a sensitivity of 74.3% and a specificity of 88.5%. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model established using the risk factors of NSCL/P can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model and confirm its validity and reliability.
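The discriminant step can be sketched with Fisher linear discriminant analysis on eight predictors; the data below are synthetic stand-ins shaped to the study's 113-case/226-control design, not the questionnaire variables themselves.

```python
# Fisher LDA classification with sensitivity/specificity, on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_case, n_ctrl, p = 113, 226, 8                  # study sizes, 8 predictors
X = np.vstack([rng.normal(0.4, 1, (n_case, p)),  # cases shifted on risk axes
               rng.normal(0.0, 1, (n_ctrl, p))])
y = np.r_[np.ones(n_case), np.zeros(n_ctrl)]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
pred = lda.predict(X)                            # self-verification, as in the study
sens = np.mean(pred[y == 1] == 1)
spec = np.mean(pred[y == 0] == 0)
print(f"accuracy={np.mean(pred == y):.3f} sens={sens:.3f} spec={spec:.3f}")
```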
Beltrame, T.; Amelard, R.; Wong, A.; Hughson, R. L.
2017-01-01
Currently, oxygen uptake (VO2) is the most precise means of investigating aerobic fitness and level of physical activity; however, VO2 can only be directly measured in supervised conditions. With the advancement of new wearable sensor technologies and data processing approaches, it is possible to accurately infer work rate and predict VO2 during activities of daily living (ADL). The main objective of this study was to develop and verify the methods required to predict VO2 and investigate its dynamics during ADL. The variables derived from the wearable sensors were used to create a VO2 predictor based on a random forest method. The temporal dynamics were assessed by the mean normalized gain amplitude (MNG) obtained from frequency domain analysis; the MNG provides a means to assess aerobic fitness. The predicted VO2 during ADL was strongly correlated (r = 0.87, P < 0.001) with the measured VO2, and the prediction bias was 0.2 ml·min−1·kg−1. The MNG calculated from predicted VO2 was strongly correlated (r = 0.71, P < 0.001) with the MNG calculated from measured data. This new technology provides an important advance in ambulatory and continuous assessment of aerobic fitness, with potential for future applications such as the early detection of deterioration of physical health. PMID:28378815
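A sketch of the random-forest prediction step on wearable-derived inputs; the feature set, data, and generating relation below are hypothetical stand-ins for the study's sensor variables.

```python
# Random-forest regression of VO2 from synthetic wearable features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000
heart_rate = rng.uniform(60, 160, n)             # beats per minute
accel_mag = rng.uniform(0.0, 2.5, n)             # body acceleration (g)
breath_rate = rng.uniform(10, 40, n)             # breaths per minute
X = np.column_stack([heart_rate, accel_mag, breath_rate])
vo2 = 3.5 + 0.12 * (heart_rate - 60) + 6.0 * accel_mag + rng.normal(0, 1, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:1500], vo2[:1500])
pred = rf.predict(X[1500:])
print("r =", np.corrcoef(pred, vo2[1500:])[0, 1],
      "bias =", np.mean(pred - vo2[1500:]))       # cf. 0.2 ml/min/kg in the study
```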
SGP-1: Prediction and Validation of Homologous Genes Based on Sequence Alignments
Wiehe, Thomas; Gebauer-Jung, Steffi; Mitchell-Olds, Thomas; Guigó, Roderic
2001-01-01
Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors. PMID:11544202
Design and experiment of vehicular charger AC/DC system based on predictive control algorithm
NASA Astrophysics Data System (ADS)
He, Guangbi; Quan, Shuhai; Lu, Yuzhang
2018-06-01
For the uncontrolled rectifier stage of a vehicular charging system, this paper proposes a predictive control algorithm for the DC/DC converter. The prediction model is established by the state-space averaging method, its optimal mathematical description is obtained by calculation, and the prediction algorithm is analyzed through Simulink simulation. The structure of the vehicle charger is designed to meet the requirements of rated output power and adjustable output voltage: the first stage is a three-phase uncontrolled rectifier producing a DC voltage Ud through a filter capacitor, followed by a two-phase interleaved buck-boost circuit that delivers the required wide-range output voltage; its working principle is analyzed and the component parameters are designed and selected. Analysis of the current ripple shows that the two-phase interleaved parallel connection reduces the output current ripple and the losses. A software simulation of the complete charging circuit meets the design requirements of the system. Finally, the charging system is realized by combining the software with the hardware circuit, and an experimental platform verifies the feasibility and effectiveness of the proposed predictive control algorithm for vehicle charging, consistent with the simulation results.
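One predictive-control step of the kind described, derived from a state-space averaged inductor equation and solved deadbeat-style for the duty cycle, can be sketched as follows; the averaged model form and all component values are assumptions for illustration, not the paper's design.

```python
# One deadbeat predictive-control step for a buck-boost stage, assuming the
# averaged inductor dynamics L*di/dt = d*Vin - (1-d)*Vout.
def predict_duty(i_now, i_ref, v_in, v_out, L, Ts):
    # Solve i_ref = i_now + Ts/L * (d*v_in - (1-d)*v_out) for the duty cycle d
    d = (L * (i_ref - i_now) / Ts + v_out) / (v_in + v_out)
    return min(max(d, 0.0), 1.0)              # saturate to a valid duty cycle

d = predict_duty(i_now=8.0, i_ref=10.0, v_in=537.0, v_out=300.0,
                 L=1.2e-3, Ts=1e-4)           # 10 kHz switching, per phase
print(f"duty = {d:.3f}")
```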
Large-scale optimization-based classification models in medicine and biology.
Lee, Eva K
2007-06-01
We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
Manikandan, Narayanan; Subha, Srinivasan
2016-01-01
Software development life cycles have been characterized by destructive disconnects between activities like planning, analysis, design, and programming, and software built around prediction-based results is a particular challenge for designers. Time series forecasting, as in currency exchange rates, stock prices, and weather, is an area where extensive research has been going on for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods; over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate results in predicting future trends and prices. This paper addresses some architectural design issues for performance improvement by vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used. PMID:26881271
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of non-coarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
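As a rough illustration of the IPCW idea discussed above, the sketch below estimates the censoring survival function with a Kaplan-Meier step function and weights comparable pairs by 1/G(T_i-)^2, in the spirit of Uno's IPCW c-statistic. It is a simplified sketch on synthetic data (ties and edge cases are not handled carefully), not the estimator studied in the paper.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t).
    For G we treat censored observations (event == 0) as the 'events'."""
    order = np.argsort(time)
    t, d = time[order], 1 - event[order]   # d = 1 where observation was censored
    uniq = np.unique(t)
    surv, G = 1.0, {}
    for u in uniq:
        at_risk = np.sum(t >= u)
        cens_here = np.sum(d[t == u])
        surv *= 1.0 - cens_here / at_risk
        G[u] = surv
    def G_minus(s):                        # left-continuous version G(s-)
        past = [G[u] for u in uniq if u < s]
        return past[-1] if past else 1.0
    return G_minus

def ipcw_cstat(time, event, risk):
    """IPCW c-statistic: concordance of risk scores among usable pairs,
    each pair weighted by 1/G(T_i-)^2 to undo censoring-induced selection."""
    G = km_censoring_survival(time, event)
    num = den = 0.0
    for i in range(len(time)):
        if event[i] != 1:
            continue                       # only observed events anchor a pair
        w = 1.0 / G(time[i]) ** 2
        for j in range(len(time)):
            if time[j] > time[i]:
                den += w
                num += w * (risk[i] > risk[j])
    return num / den

rng = np.random.default_rng(0)
n = 300
true_risk = rng.normal(size=n)
T = rng.exponential(np.exp(-true_risk))    # event times depend on risk
C = rng.exponential(2.0, size=n)           # independent censoring
time, event = np.minimum(T, C), (T <= C).astype(int)
print(f"IPCW c-statistic: {ipcw_cstat(time, event, true_risk):.3f}")
```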
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core-Concrete Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell T.; Francis, Matthew W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for the analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH. For the MELCOR-based melt pour scenario, CORQUENCH predicted the melt would readily cool within 2.5 h after the pour, and the sumps would experience limited ablation (approximately 18 cm) under water-flooded conditions. Finally, for the MAAP-based melt pour scenarios, CORQUENCH predicted that the melt would cool in approximately 22.5 h, and the sumps would experience approximately 65 cm of concrete ablation under water-flooded conditions.
NASA Technical Reports Server (NTRS)
Love, Eugene S.
1957-01-01
An analysis has been made of available experimental data to show the effects of the variables that are most important in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins, and is restricted to turbulent boundary layers. The present status of available experimental information is summarized, as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with the separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimations are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimations give fair predictions of experimental results.
KFC Server: interactive forecasting of protein interaction hot spots
Darnell, Steven J.; LeGault, Laura; Mitchell, Julie C.
2008-01-01
The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model—a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein–protein or protein–DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots, and predicts whether the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org. PMID:18539611
NASA Astrophysics Data System (ADS)
Faramarzi, Farhad; Mansouri, Hamid; Farsangi, Mohammad Ali Ebrahimi
2014-07-01
The environmental effects of blasting must be controlled in order to comply with regulatory limits. Because of safety concerns, the risk of damage to infrastructure, equipment, and property, and the need for good fragmentation, flyrock control is crucial in blasting operations. If measures to decrease flyrock are taken, the flyrock distance is limited and, in return, the risk of damage can be reduced or eliminated. This paper deals with modeling the level of risk associated with flyrock, and with flyrock distance prediction, based on the rock engineering systems (RES) methodology. In the proposed models, 13 parameters affecting flyrock due to blasting are considered as inputs, and the flyrock distance and associated level of risk as outputs. In selecting input data, the simplicity of measuring each parameter was taken into account as well. The data for 47 blasts carried out at the Sungun copper mine, northwestern Iran, were used to predict the level of risk and flyrock distance corresponding to each blast. The results showed that, for the 47 blasts, the estimated risk levels are mostly in accordance with the measured flyrock distances. Furthermore, a comparison was made between the results of the RES-based flyrock distance model, the multivariate regression analysis model (MVRM), and the dimensional analysis model. For the RES-based model, R² and root mean square error (RMSE) are 0.86 and 10.01, respectively, whereas for the MVRM and dimensional analysis, R² and RMSE are (0.84 and 12.20) and (0.76 and 13.75), respectively. These results confirm the better performance of the RES-based model over the other proposed models.
NASA Astrophysics Data System (ADS)
Norinder, Ulf
1990-12-01
An experimental-design-based 3-D QSAR analysis, using a combination of principal component analysis and PLS, is presented and applied to human corticosteroid-binding globulin complexes. The predictive capability of the resulting model is good. The technique can also be used as guidance when selecting new compounds to be investigated.
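A minimal sketch of the two-step idea above (principal components to compress highly collinear field descriptors, then PLS to regress activity on them), using scikit-learn and synthetic data in place of the corticosteroid-binding set; the component counts are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(31, 120))          # e.g., 3-D field descriptors per compound
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=31)  # synthetic affinity

# Step 1: principal components compress the collinear descriptor block.
X_pc = PCA(n_components=10).fit_transform(X)

# Step 2: PLS regresses binding affinity on the compressed descriptors.
pls = PLSRegression(n_components=3)
q2 = cross_val_score(pls, X_pc, y, cv=5, scoring="r2")
print(f"cross-validated R^2 per fold: {np.round(q2, 2)}")
```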
77 FR 66149 - Significant New Use Rules on Certain Chemical Substances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-02
... ecological structural activity relationship (EcoSAR) analysis of test data on analogous esters, EPA predicts... milligram/cubic meter (mg/m\\3\\) as an 8-hour time-weighted average. In addition, based on EcoSAR analysis of... the PMN substance via the inhalation route. In addition, based on EcoSAR analysis of test data on...
Xiong, Zheng; He, Yinyan; Hattrick-Simpers, Jason R; Hu, Jianjun
2017-03-13
The creation of composition-processing-structure relationships currently represents a key bottleneck in data analysis for high-throughput experimental (HTE) materials studies. Here we propose an automated phase diagram attribution algorithm for HTE data analysis that uses a graph-based segmentation algorithm and Delaunay tessellation to create a crystal phase diagram from high-throughput libraries of X-ray diffraction (XRD) patterns. We also propose sample-pair-based objective evaluation measures for the phase diagram prediction problem. Our approach was validated using 278 diffraction patterns from a Fe-Ga-Pd composition spread sample, with a prediction precision of 0.934 and a Matthews Correlation Coefficient of 0.823. The algorithm was then applied to the open Ni-Mn-Al thin-film composition spread sample to obtain the first predicted phase diagram mapping for that sample.
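The sample-pair evaluation proposed above can be sketched as follows: every pair of library samples becomes a binary instance (same phase vs. different phase), and precision and the Matthews Correlation Coefficient are computed over all pairs. The labels below are synthetic stand-ins, not the Fe-Ga-Pd data.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import matthews_corrcoef, precision_score

# Hypothetical phase labels for a composition-spread library: 'true' from
# expert annotation, 'pred' from an automated attribution algorithm.
rng = np.random.default_rng(0)
true = rng.integers(0, 4, size=278)
pred = true.copy()
pred[rng.choice(278, size=25, replace=False)] = rng.integers(0, 4, size=25)

# Sample-pair encoding: a pair is positive if both samples share a phase label.
pairs = list(combinations(range(278), 2))
y_true = [int(true[i] == true[j]) for i, j in pairs]
y_pred = [int(pred[i] == pred[j]) for i, j in pairs]

print(f"pair precision: {precision_score(y_true, y_pred):.3f}")
print(f"pair MCC:       {matthews_corrcoef(y_true, y_pred):.3f}")
```

A pairwise encoding like this makes the score invariant to how the phase regions happen to be numbered, which is why it suits unsupervised phase attribution.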
Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.
2001-01-01
Issues and design requirements for environmental barrier coating (EBC)/thermal barrier coating (TBC) life, both general and specific to the NASA Ultra-Efficient Engine Technology (UEET) development program, are described. The current state and trends of research, methods in vogue for failure analysis, and the long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach, based on oxidation kinetics using the Arrhenius theory, is proposed to develop a life prediction model for EBC/TBC systems. A stochastic process-based reliability approach that includes physical variables such as gas pressure, temperature, velocity, moisture content, crack density, and oxygen content is suggested. Benefits of the reliability-based approach are also discussed in the report.
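A toy sketch of the kind of Arrhenius-based life estimate such an approach would build on: assume parabolic oxide growth with an Arrhenius rate constant and define life as the time for the scale to reach a critical thickness. All constants below are illustrative assumptions, not calibrated EBC/TBC values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def parabolic_oxidation_life(T_kelvin, A=1.0e-6, Ea=250e3, h_crit=10e-6):
    """Hours until the oxide scale reaches a critical thickness h_crit (m),
    assuming parabolic growth h^2 = kp * t with an Arrhenius rate constant
    kp = A * exp(-Ea / (R * T)). A, Ea, and h_crit are illustrative only."""
    kp = A * np.exp(-Ea / (R * T_kelvin))   # m^2 per hour
    return h_crit**2 / kp

for T in (1400.0, 1500.0, 1600.0):
    print(f"T = {T:.0f} K -> predicted life ~ {parabolic_oxidation_life(T):.2e} h")
```

The exponential temperature dependence is the key point: modest temperature increases shorten the predicted life by orders of magnitude, which is why temperature uncertainty dominates a stochastic life model.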
NASA Technical Reports Server (NTRS)
Cull, R. C.; Eltimsahy, A. H.
1983-01-01
The present investigation is concerned with the formulation of energy management strategies for stand-alone photovoltaic (PV) systems, taking into account a basic control algorithm for a possible predictive (and adaptive) controller. The control system controls the flow of energy in the system according to the amount of energy available, and predicts the appropriate control set-points based on the energy (insolation) available by using an appropriate system model. Aspects of adaptation to the conditions of the system are also considered. Attention is given to a statistical analysis technique, the analysis inputs, the analysis procedure, and details regarding the basic control algorithm.
Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1995-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs, such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitude is therefore very important during the vehicle design phase; otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold-flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot-flow and combustion-flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to multiengine clustered nozzle base flow physics were deduced from the analysis.
Srinivasulu, Yerukala Sathipati; Wang, Jyun-Rong; Hsu, Kai-Ti; Tsai, Ming-Ju; Charoenkwan, Phasit; Huang, Wen-Lin; Huang, Hui-Ling; Ho, Shinn-Ying
2015-01-01
Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanisms of these interactions play a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. It proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (pKd) of 200 heterodimeric protein complexes. In a jackknife test, prediction performance was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyze three informative physicochemical properties according to their contribution to prediction performance. The results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict the binding affinity of heterodimeric protein complexes. The characterization analysis revealed that the average numbers of beta turns and hydrogen bonds at protein-protein interfaces in high binding affinity complexes are higher than those in low binding affinity complexes. PMID:26681483
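A rough sketch of the SVM-BAC recipe (filter 580 descriptors down to 14, then classify with an SVM), here approximated with a univariate F-score filter and an RBF-kernel SVM on synthetic data; the paper's actual feature selection method and dataset are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 580 sequence-derived descriptors of 216 complexes;
# labels 1/0 for high/low binding affinity.
rng = np.random.default_rng(0)
X = rng.normal(size=(216, 580))
y = (X[:, :14].mean(axis=1) + rng.normal(scale=0.5, size=216) > 0).astype(int)

# Filter down to 14 descriptors, then classify with an RBF-kernel SVM.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=14),
                      SVC(kernel="rbf", C=1.0, gamma="scale"))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {acc.mean():.3f}")
```

Keeping the selector inside the pipeline ensures the 14 features are re-chosen within each cross-validation fold, avoiding selection bias in the reported accuracy.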
NASA Astrophysics Data System (ADS)
Hardinata, Lingga; Warsito, Budi; Suparti
2018-05-01
The complexity of bankruptcy makes accurate bankruptcy prediction models difficult to achieve. Various prediction models have been developed to improve the accuracy of bankruptcy predictions. Machine learning has been widely used for prediction because of its adaptive capabilities. Artificial Neural Networks (ANNs) are one machine learning approach proven able to perform inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback interconnections in a JRNN allow the network to retain important information, enabling it to work more effectively. The results showed that the JRNN works very well in bankruptcy prediction, with an average success rate of 81.3785%.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that it outperforms the other two similar models in prediction. PMID:24971455
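The three-stage pipeline above (ICA features, CCA fusion, support vector forecasting) might be sketched as below on synthetic prices; the window length, component counts, and use of SVR for the regression step are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_days = 500
close = np.cumsum(rng.normal(size=n_days)) + 100.0   # synthetic closing prices
tech = rng.normal(size=(n_days, 39))                  # stand-in technical vars

# View 1: lagged price windows. View 2: ICA components of technical variables.
window = 10
Xp = np.array([close[i - window:i] for i in range(window, n_days)])
Xt = FastICA(n_components=8, random_state=0).fit_transform(tech[window:])

# CCA fuses the two views into a shared low-dimensional feature space.
cca = CCA(n_components=4)
Zp, Zt = cca.fit_transform(Xp, Xt)
Z = np.hstack([Zp, Zt])

y = close[window:]                     # next-day target, aligned with Z
split = int(0.8 * len(Z))
svr = SVR(kernel="rbf", C=10.0).fit(Z[:split], y[:split])
pred = svr.predict(Z[split:])
print(f"test RMSE: {np.sqrt(np.mean((pred - y[split:]) ** 2)):.2f}")
```

Note that for a fair evaluation the ICA and CCA transforms would be fit on the training span only; fitting them on the full series, as in this sketch, leaks information.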
NASA Technical Reports Server (NTRS)
Sobel, Larry; Buttitta, Claudio; Suarez, James
1993-01-01
Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
NASA Astrophysics Data System (ADS)
Sabeerali, C. T.; Ajayamohan, R. S.; Giannakis, Dimitrios; Majda, Andrew J.
2017-11-01
An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISOs) is introduced using the recently developed nonlinear Laplacian spectral analysis (NLSA) technique. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions are extracted from unfiltered daily rainfall data from the Global Precipitation Climatology Project over the south Asian monsoon region. Two modes representing the full life cycle of the northeastward-propagating boreal summer MISO are identified from the hierarchy of LB eigenfunctions. These modes have a number of advantages over MISO modes extracted via extended empirical orthogonal function analysis including higher memory and predictability, stronger amplitude and higher fractional explained variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions, and more realistic representation of the regional heat sources over the Indian and Pacific Oceans. Real-time prediction of NLSA-derived MISO indices is demonstrated via extended-range hindcasts based on NCEP Coupled Forecast System version 2 operational output. It is shown that in these hindcasts the NLSA MISO indices remain predictable out to ~3 weeks.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained much attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with many degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set shows that most of the observed flows lie within the constructed interval, and therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean is taken as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point ANN forecasts.
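Two of the calibration objectives above, coverage of the measured points and width of the interval, are easy to compute for any ensemble; the sketch below does so for a hypothetical ensemble of flow forecasts, with all numbers synthetic.

```python
import numpy as np

def interval_diagnostics(y_obs, lower, upper):
    """Two of the objectives used when calibrating prediction bands:
    coverage (share of observations inside the band) and mean band width."""
    inside = (y_obs >= lower) & (y_obs <= upper)
    return inside.mean(), (upper - lower).mean()

# Hypothetical ensemble of flow forecasts (rows = ensemble members).
rng = np.random.default_rng(0)
y_obs = rng.gamma(shape=2.0, scale=50.0, size=200)          # observed flows, m3/s
ensemble = y_obs + rng.normal(scale=15.0, size=(50, 200))   # member forecasts

lower = np.percentile(ensemble, 2.5, axis=0)
upper = np.percentile(ensemble, 97.5, axis=0)
coverage, width = interval_diagnostics(y_obs, lower, upper)
print(f"coverage: {coverage:.1%}, mean interval width: {width:.1f} m3/s")
```

Coverage and width pull in opposite directions, which is why the paper treats interval construction as a multi-objective optimization rather than a single score.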
Liu, Xiu-ying; Wang, Li; Chang, Qing-rui; Wang, Xiao-xing; Shang, Yan
2015-07-01
Wuqi County, Shaanxi Province, where vegetation recovery measures have been carried out for years, was taken as the study area. A total of 100 loess samples from 24 different profiles were collected. Total nitrogen (TN) and alkali-hydrolysable nitrogen (AHN) contents of the soil samples were analyzed, and the samples were scanned in the visible/near-infrared (VNIR) region of 350-2500 nm in the laboratory. Calibration models were developed between TN and AHN contents and VNIR values based on correlation analysis (CA) and partial least squares regression (PLS), and were validated with independent samples. The results indicated that the optimum model for predicting TN of loess was established using the first derivative of reflectance, while the best model for predicting AHN was established using normal derivative spectra. The optimum TN model could effectively predict TN in loess from 0 to 40 cm depth, but the optimum AHN model could only roughly predict AHN at the same depth. This study provided a good method for rapidly predicting TN of loess where vegetation recovery measures have been adopted, but prediction of AHN needs further study.
Estimation of the Driving Style Based on the Users' Activity and Environment Influence.
Sysoev, Mikhail; Kos, Andrej; Guna, Jože; Pogačnik, Matevž
2017-10-21
New models and methods have been designed to predict the influence of the user's environment and activity information on driving style in standard automotive environments. For these purposes, an experiment was conducted providing two types of analysis: (i) evaluation of a self-assessment of driving style; and (ii) prediction of aggressive driving style based on drivers' activity and environment parameters. Sixty-seven hours of driving data from 10 drivers were collected for analysis in this study. New parameters used in the experiment are the car door opening and closing manner, which were applied to improve prediction accuracy. An Android application called Sensoric was developed to collect low-level smartphone data about the users' activity. The driving style was predicted from the user's environment and activity data collected before driving, and the prediction was tested against the actual driving style, calculated from objective driving data. The prediction showed encouraging results, with precision values ranging from 0.727 to 0.909 for aggressive driving recognition. The obtained results support the hypothesis that a user's environment and activity data can be used to predict an aggressive driving style in advance, before driving starts.
Denisova, Galina F; Denisov, Dimitri A; Yeung, Jeffrey; Loeb, Mark B; Diamond, Michael S; Bramson, Jonathan L
2008-11-01
Understanding antibody function is often enhanced by knowledge of the specific binding epitope. Here, we describe a computer algorithm that permits epitope prediction based on a collection of random peptide epitopes (mimotopes) isolated by antibody affinity purification. We applied this methodology to the prediction of epitopes for five monoclonal antibodies against the West Nile virus (WNV) E protein, two of which exhibit therapeutic activity in vivo. This strategy was validated by comparison of our results with existing F(ab)-E protein crystal structures and with mutational analysis by yeast surface display. We demonstrate that by combining the results of the mimotope method with our data from mutational analysis, epitopes could be predicted with greater certainty. The two methods displayed great complementarity: the mutational analysis facilitated epitope prediction when the results of the mimotope method were equivocal, and the mimotope method revealed a larger set of residues within the epitope than the mutational analysis. Our results demonstrate that the combination of these two prediction strategies provides a robust platform for epitope characterization.
Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin
2017-03-01
Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is put forward, combined with Kernel Principal Component Analysis for dimension reduction of the bloom influence factors. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method can better compensate the multivariate time series prediction model, effectively improving the description accuracy of algae growth and the prediction precision of water blooms.
Predictive analysis effectiveness in determining the epidemic disease infected area
NASA Astrophysics Data System (ADS)
Ibrahim, Najihah; Akhir, Nur Shazwani Md.; Hassan, Fadratul Hafinaz
2017-10-01
Epidemic disease outbreaks have raised great public concern over methods for controlling, preventing, and handling infectious disease, so as to diminish the dissemination rate and the infected area. The backpropagation method was used for countermeasure and prediction analysis of epidemic disease. Predictive analysis based on backpropagation is realized via a machine learning process that applies artificial intelligence to pattern recognition, statistics, and feature selection. This computational learning process is integrated with data mining by treating the score output as a classifier over the given set of input features. The classification step selects, as features, the disease dissemination factors that are likely to be strongly interconnected in causing outbreaks. This preliminary study introduces predictive analysis of epidemic disease for determining the infected area, using the backpropagation method and drawing on others' findings. The study classifies the epidemic disease dissemination factors as features for weight adjustment in predicting outbreaks. Through this preliminary study, predictive analysis is shown to be an effective method for determining the epidemic disease infected area, minimizing the error value through feature classification.
Yude Pan; John Hom; Jennifer Jenkins; Richard Birdsey
2004-01-01
To assess what difference it might make to include spatially defined estimates of foliar nitrogen in the regional application of a forest ecosystem model (PnET-II), we compared model predictions of wood production with extensive ground-based forest inventory analysis data across the Mid-Atlantic region. Spatial variation in foliar N concentration was assigned based on...
ERIC Educational Resources Information Center
Fletcher, Edward C., Jr.
2012-01-01
The purpose of this study was to predict occupational choices based on demographic variables and high school curriculum tracks. Based on an analysis of the 1997 National Longitudinal Survey of Youth (NLSY) data set that examined high school graduates' occupational choices in 2006, findings indicated that CTE graduates were 2.7 times more likely to…
Lawrence, Stephen J.
2012-01-01
Regression analyses show that E. coli density in samples was strongly related to turbidity, streamflow characteristics, and season at both sites. The regression equation chosen for the Norcross data showed that 78 percent of the variability in E. coli density (in log base 10 units) was explained by the variability in turbidity values (in log base 10 units), streamflow event (dry-weather flow or stormflow), season (cool or warm), and an interaction term that is the cross product of streamflow event and turbidity. The regression equation chosen for the Atlanta data showed that 76 percent of the variability in E. coli density (in log base 10 units) was explained by the variability in turbidity values (in log base 10 units), water temperature, streamflow event, and an interaction term that is the cross product of streamflow event and turbidity. Residual analysis and model confirmation using new data indicated the regression equations selected at both sites predicted E. coli density within the 90 percent prediction intervals of the equations and could be used to predict E. coli density in real time at both sites.
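A regression of the shape described above (log-transformed response, a storm/dry-flow indicator, a season term, and a flow-by-turbidity interaction) can be written directly as a statsmodels formula; the data below are synthetic stand-ins, and the coefficients are not those of the Norcross or Atlanta models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: log10 E. coli density explained by log10 turbidity,
# a stormflow indicator, a warm-season indicator, and an interaction term.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "log_turb": rng.normal(1.0, 0.5, n),
    "storm": rng.integers(0, 2, n),
    "warm": rng.integers(0, 2, n),
})
df["log_ecoli"] = (1.2 + 0.8 * df.log_turb + 0.5 * df.storm
                   + 0.3 * df.warm + 0.4 * df.storm * df.log_turb
                   + rng.normal(0, 0.3, n))

# 'log_turb * storm' expands to both main effects plus their cross product,
# mirroring the streamflow-by-turbidity interaction in the study's equations.
fit = smf.ols("log_ecoli ~ log_turb * storm + warm", data=df).fit()
print(fit.summary().tables[1])
print(f"R-squared: {fit.rsquared:.2f}")
```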
NASA Technical Reports Server (NTRS)
Starnes, James H., Jr.; Newman, James C., Jr.; Harris, Charles E.; Piascik, Robert S.; Young, Richard D.; Rose, Cheryl A.
2003-01-01
Analysis methodologies for predicting fatigue-crack growth from rivet holes in panels subjected to cyclic loads, and for predicting the residual strength of aluminum fuselage structures with cracks subjected to combined internal pressure and mechanical loads, are described. The fatigue-crack growth analysis methodology is based on small-crack theory and a plasticity-induced crack-closure model, and the effect of a corrosive environment on crack-growth rate is included. The residual strength analysis methodology is based on the critical crack-tip-opening-angle fracture criterion, which characterizes the fracture behavior of a material of interest, and a geometric and material nonlinear finite element shell analysis code that performs the structural analysis of the fuselage structure of interest. The methodologies have been verified experimentally for structures ranging from laboratory coupons to full-scale structural components. Analytical and experimental results based on these methodologies are described and compared for laboratory coupons and flat panels, small-scale pressurized shells, and full-scale curved stiffened panels. The residual strength analysis methodology is sufficiently general to include the effects of multiple-site damage on structural behavior.
Computer-based analysis of general movements reveals stereotypies predicting cerebral palsy.
Philippi, Heike; Karch, Dominik; Kang, Keun-Sun; Wochner, Katarzyna; Pietz, Joachim; Dickhaus, Hartmut; Hadders-Algra, Mijna
2014-10-01
To evaluate a kinematic paradigm of automatic general movements analysis in comparison to clinical assessment in 3-month-old infants and its prediction for neurodevelopmental outcome. Preterm infants at high risk (n=49; 26 males, 23 females) and term infants at low risk (n=18; eight males, 10 females) of developmental impairment were recruited from hospitals around Heidelberg, Germany. Kinematic analysis of general movements by magnet tracking and clinical video-based assessment of general movements were performed at 3 months of age. Neurodevelopmental outcome was evaluated at 2 years. By comparing the general movements of small samples of children with and without cerebral palsy (CP), we developed a kinematic paradigm typical for infants at risk of developing CP. We tested the validity of this paradigm as a tool to predict CP and neurodevelopmental impairment. Clinical assessment correctly identified almost all infants with neurodevelopmental impairment including CP, but did not predict if the infant would be affected by CP or not. The kinematic analysis, in particular the stereotypy score of arm movements, was an excellent predictor of CP, whereas stereotyped repetitive movements of the legs predicted any neurodevelopmental impairment. The automatic assessment of the stereotypy score by magnet tracking in 3-month-old spontaneously moving infants at high risk of developmental abnormalities allowed a valid detection of infants affected and unaffected by CP. © 2014 Mac Keith Press.
File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.-S.
1987-01-01
A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values; the coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small: some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
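A simplified sketch of the state-transition idea: cluster past processes into resource regions, count transitions between regions across successive executions, and predict the next process's expected usage as a transition-weighted mix of region centroids. The clustering granularity and the synthetic log are assumptions, and the per-program conditioning of the thesis is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic log of per-process resource usage: CPU seconds, file I/O, memory.
rng = np.random.default_rng(0)
usage = np.exp(rng.normal(size=(1000, 3)))          # skewed, like real usage

# States = resource regions found by clustering past processes (off-line step).
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.log(usage))
states = km.labels_

# First-order state-transition model over successive executions.
P = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Prediction: expected resource vector = transition-weighted mix of centroids.
centroids = np.exp(km.cluster_centers_)             # back to original scale
current = states[-1]
expected = P[current] @ centroids
print(f"current state: {current}, expected next usage "
      f"(cpu, io, mem): {np.round(expected, 2)}")
```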
Gagné, Mathieu; Moore, Lynne; Beaudoin, Claudia; Batomen Kuimi, Brice Lionel; Sirois, Marie-Josée
2016-03-01
The International Classification of Diseases (ICD) is the main classification system used for population-based injury surveillance activities but does not contain information on injury severity. ICD-based injury severity measures can be empirically derived or mapped, but no single approach has been formally recommended. This study aimed to compare the performance of ICD-based injury severity measures to predict in-hospital mortality among injury-related admissions. A systematic review and a meta-analysis were conducted. MEDLINE, EMBASE, and Global Health databases were searched from their inception through September 2014. Observational studies that assessed the performance of ICD-based injury severity measures to predict in-hospital mortality and reported discriminative ability using the area under a receiver operating characteristic curve (AUC) were included. Metrics of model performance were extracted. Pooled AUC were estimated under random-effects models. Twenty-two eligible studies reported 72 assessments of discrimination on ICD-based injury severity measures. Reported AUC ranged from 0.681 to 0.958. Of the 72 assessments, 46 showed excellent (0.80 ≤ AUC < 0.90) and 6 outstanding (AUC ≥ 0.90) discriminative ability. Pooled AUC for ICD-based Injury Severity Score (ICISS) based on the product of traditional survival proportions was significantly higher than measures based on ICD mapped to Abbreviated Injury Scale (AIS) scores (0.863 vs. 0.825 for ICDMAP-ISS [p = 0.005] and ICDMAP-NISS [p = 0.016]). Similar results were observed when studies were stratified by the type of data used (trauma registry or hospital discharge) or the provenance of survival proportions (internally or externally derived). However, among studies published after 2003 the Trauma Mortality Prediction Model based on ICD-9 codes (TMPM-9) demonstrated superior discriminative ability than ICISS using the product of traditional survival proportions (0.850 vs. 0.802, p = 0.002). Models generally showed poor calibration. ICISS using the product of traditional survival proportions and TMPM-9 predict mortality more accurately than those mapped to AIS codes and should be preferred for describing injury severity when ICD is used to record injury diagnoses. Systematic review and meta-analysis, level III.
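The ICISS construct compared above is straightforward to compute: each ICD code gets a survival risk ratio (SRR) estimated from a reference registry, and a patient's score is the product of the SRRs of their recorded diagnoses. The registry below is a toy stand-in with made-up codes; real SRRs come from large trauma or hospital-discharge datasets.

```python
import numpy as np
import pandas as pd

# Toy reference registry: one row per recorded injury diagnosis,
# with the admission's survival outcome.
registry = pd.DataFrame({
    "code": ["S06.0", "S27.0", "S72.0", "S06.0", "S27.0", "S06.0"],
    "survived": [1, 0, 1, 1, 1, 0],
})
srr = registry.groupby("code")["survived"].mean()   # SRR = P(survive | code)

def iciss(codes):
    """ICISS: product of the SRRs of all injury codes for one admission."""
    return float(np.prod([srr[c] for c in codes]))

patient = ["S06.0", "S72.0"]   # hypothetical multi-injury admission
print(f"ICISS = {iciss(patient):.3f}  (predicted survival probability)")
```

The multiplicative form is what the phrase "product of traditional survival proportions" refers to: each additional serious diagnosis pulls the predicted survival probability down.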
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, and maximum wind speed; and other pollutant concentrations, including CO, NO2, SO2, and PM10) and social media data (microblog data) was proposed, based on multivariate statistical analysis. The study found that, among these factors, average wind speed, the concentrations of CO, NO2, and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high mathematical correlation with PM2.5 concentrations. The correlation analysis was further studied with a machine learning model, the Back Propagation Neural Network (BPNN), which was found to perform better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study helps realize real-time monitoring, analysis, and pre-warning of PM2.5, and it also helps broaden the application of big data and multi-source data mining methods.
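As a small illustration of the ARIMA step, the sketch below fits an ARIMA(2,1,2) model to a synthetic daily PM2.5 series and scores a 7-day-ahead forecast; the model order and the series itself are assumptions, not the Beijing data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily PM2.5 series standing in for the Beijing record.
rng = np.random.default_rng(0)
n = 365
pm25 = pd.Series(60 + 0.5 * np.cumsum(rng.normal(scale=3.0, size=n))
                 + 20 * np.sin(np.arange(n) * 2 * np.pi / 30),
                 index=pd.date_range("2016-01-01", periods=n, freq="D"))
pm25 = pm25.clip(lower=5)

train, test = pm25[:-7], pm25[-7:]
model = ARIMA(train, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=7)
mae = np.mean(np.abs(forecast.values - test.values))
print(f"7-day-ahead MAE: {mae:.1f} ug/m3")
```

In practice the (p, d, q) order would be chosen from the autocorrelation structure or an information criterion rather than fixed as here.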
Prediction of properties of intraply hybrid composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1979-01-01
Equations based on the rule of mixtures are presented for predicting the physical, thermal, hygral, and mechanical properties of unidirectional intraply hybrid composites (UIHC) from the corresponding properties of their constituent composites. Bounds were derived for the uniaxial longitudinal strengths (tension, compression, and flexure) of UIHC. The equations predict shear and flexural properties which agree with experimental data from UIHC. Use of these equations in a composite mechanics computer code predicted flexural moduli which agree with experimental data from various intraply hybrid angleplied laminates (IHAL). It is briefly indicated how these equations can be used in conjunction with composite mechanics and structural analysis during the analysis/design process.
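The rule-of-mixtures prediction at the core of such equations reduces, in its simplest linear form, to a volume-fraction-weighted average of constituent properties; the constituent moduli below are illustrative values only, not data from the paper.

```python
def rule_of_mixtures(prop_primary, prop_secondary, vol_frac_secondary):
    """Linear rule of mixtures for a unidirectional intraply hybrid:
    P_hybrid = (1 - Vs) * P_primary + Vs * P_secondary."""
    return (1 - vol_frac_secondary) * prop_primary \
        + vol_frac_secondary * prop_secondary

# Illustrative values (GPa): a graphite/epoxy primary composite blended with
# a glass/epoxy secondary composite at 20% volume fraction.
E_primary, E_secondary, Vs = 145.0, 55.0, 0.20
print(f"predicted hybrid modulus: "
      f"{rule_of_mixtures(E_primary, E_secondary, Vs):.1f} GPa")
```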
Anti-inflammatory drugs and prediction of new structures by comparative analysis.
Bartzatt, Ronald
2012-01-01
Nonsteroidal anti-inflammatory drugs (NSAIDs) are a group of agents important for their analgesic, anti-inflammatory, and antipyretic properties. This study presents several approaches to predict and elucidate new molecular structures of NSAIDs based on 36 known and proven anti-inflammatory compounds. For the 36 known NSAIDs, the mean value of Log P is 3.338 (standard deviation = 1.237), the mean polar surface area is 63.176 Å² (standard deviation = 20.951 Å²), and the mean molecular weight is 292.665 (standard deviation = 55.627). Nine molecular properties are determined for these 36 NSAID agents, including Log P, the number of -OH and -NHn groups, violations of the Rule of 5, the number of rotatable bonds, and the numbers of oxygens and nitrogens. Statistical analysis of these nine molecular properties provides numerical parameters to conform to in the design of novel NSAID drug candidates. Multiple regression analysis is performed using these properties of the 36 agents, followed by examples of predicted molecular weight based on minimum and maximum property values. Hierarchical cluster analysis indicated that licofelone, tolfenamic acid, meclofenamic acid, droxicam, and aspirin are substantially distinct from all remaining NSAIDs. Analysis of similarity (ANOSIM) produced R = 0.4947, which indicates a low to moderate level of dissimilarity between these 36 NSAIDs. Non-hierarchical K-means cluster analysis separated the 36 NSAIDs into four groups having members of greatest similarity. Likewise, discriminant analysis divided the 36 agents into two groups indicating the greatest level of distinction (discrimination) based on the nine properties. Together, these two multivariate methods provide investigators a means to compare novel drug designs to the 36 proven compounds and ascertain which of those are most analogous in pharmacodynamics. In addition, artificial neural network modeling is demonstrated as an approach to predict numerous molecular properties of new drug designs, based on training from the 36 proven NSAIDs. Comprehensive and effective approaches are presented in this study for the design of new NSAID-type agents, which are important for inhibition of the COX-1 and COX-2 isoenzymes.
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Jackson, Wade C.
2008-01-01
A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.
Polygenic risk score analysis of pathologically confirmed Alzheimer disease.
Escott-Price, Valentina; Myers, Amanda J; Huentelman, Matt; Hardy, John
2017-08-01
Previous estimates of the utility of polygenic risk score analysis for the prediction of Alzheimer disease have given area under the curve (AUC) estimates of <80%. However, these have been based on the genetic analysis of clinical case-control series. Here, we apply the same analytic approaches to a pathological case-control series and show a predictive AUC of 84%. We suggest that this analysis has clinical utility and that there is limited room for further improvement using genetic data. Ann Neurol 2017;82:311-314. © 2017 American Neurological Association.
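A bare-bones sketch of the polygenic-risk-score computation behind such AUC estimates: score each subject as a weighted sum of risk-allele counts, then measure discrimination against case status. The genotypes, weights, and liability model below are all synthetic assumptions, not Alzheimer data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Polygenic risk score: weighted count of risk alleles per subject, with
# weights playing the role of per-SNP effect sizes from a reference GWAS.
rng = np.random.default_rng(0)
n_subj, n_snp = 2000, 500
genotypes = rng.integers(0, 3, size=(n_subj, n_snp))      # 0/1/2 risk alleles
weights = rng.normal(scale=0.05, size=n_snp)              # hypothetical betas

prs = genotypes @ weights
liability = prs + rng.normal(scale=1.0, size=n_subj)      # noise = non-genetic
cases = liability > np.quantile(liability, 0.8)           # top 20% affected

print(f"AUC of the PRS for case status: {roc_auc_score(cases, prs):.2f}")
```

The non-genetic noise term is what caps the achievable AUC, which mirrors the paper's point that there is limited room for further improvement using genetic data alone.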
Are EMS call volume predictions based on demand pattern analysis accurate?
Brown, Lawrence H; Lerner, E Brooke; Larmon, Baxter; LeGassick, Todd; Taigman, Michael
2007-01-01
Most EMS systems determine the number of crews they will deploy in their communities, and when those crews will be scheduled, based on anticipated call volumes. Many systems use historical data to calculate their anticipated call volumes, a method of prediction known as demand pattern analysis. To evaluate the accuracy of call volume predictions calculated using demand pattern analysis, seven EMS systems provided 73 consecutive weeks of hourly call volume data. The first 20 weeks of data were used to calculate three common demand pattern analysis constructs for call volume prediction: average peak demand (AP), smoothed average peak demand (SAP), and 90th percentile rank (90%R). The 21st week served as a buffer. Actual call volumes in the last 52 weeks were then compared to the predicted call volumes using descriptive statistics. There were 61,152 hourly observations in the test period. All three constructs accurately predicted peaks and troughs in call volume but not exact call volume. Predictions were accurate (+/-1 call) 13% of the time using AP, 10% using SAP, and 19% using 90%R. Call volumes were overestimated 83% of the time using AP, 86% using SAP, and 74% using 90%R. When call volumes were overestimated, predictions exceeded actual call volume by a median (interquartile range) of 4 (2-6) calls for AP, 4 (2-6) for SAP, and 3 (2-5) for 90%R. Call volumes were underestimated 4% of the time using AP, 4% using SAP, and 7% using 90%R. When call volumes were underestimated, call volumes exceeded predictions by a median (interquartile range; maximum underestimation) of 1 (1-2; 18) call for AP, 1 (1-2; 18) for SAP, and 2 (1-3; 20) for 90%R. Results did not vary between systems. Generally, demand pattern analysis estimated or overestimated call volume, making it a reasonable predictor for ambulance staffing patterns; however, it underestimated call volume between 4% and 7% of the time. Communities need to determine whether these rates of over- and underestimation are acceptable given their resources and local priorities.
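Simplified versions of the three demand-pattern constructs can be written in a few lines: per hour-of-week slot, take the historical mean (AP), a smoothed mean (SAP), or the 90th percentile (90%R) over the training weeks, and compare against a new week. The Poisson demand model below is a synthetic stand-in for real call logs, and these construct definitions are simplified relative to operational practice.

```python
import numpy as np

# Synthetic 20 weeks of hourly EMS call counts: rows = weeks,
# columns = 168 hour-of-week slots.
rng = np.random.default_rng(0)
base = 3 + 2 * np.sin(np.arange(168) * 2 * np.pi / 24)    # daily rhythm
history = rng.poisson(lam=np.tile(base, (20, 1)))

# Three demand-pattern constructs, one value per hour-of-week slot.
ap = history.mean(axis=0)                           # average demand
sap = np.convolve(ap, np.ones(3) / 3, mode="same")  # smoothed average
p90 = np.percentile(history, 90, axis=0)            # 90th percentile rank

new_week = rng.poisson(lam=base)                    # the week being predicted
for name, pred in [("AP", ap), ("SAP", sap), ("90%R", p90)]:
    within_one = np.mean(np.abs(pred - new_week) <= 1)
    over = np.mean(pred > new_week + 1)
    print(f"{name:>4}: within +/-1 call {within_one:.0%}, "
          f"overestimates {over:.0%}")
```

Because 90%R sits near the top of the historical distribution, it overestimates by construction most hours, which matches the bias pattern reported above.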
The predictive value of mean serum uric acid levels for developing prediabetes.
Zhang, Qing; Bao, Xue; Meng, Ge; Liu, Li; Wu, Hongmei; Du, Huanmin; Shi, Hongbin; Xia, Yang; Guo, Xiaoyan; Liu, Xing; Li, Chunlei; Su, Qian; Gu, Yeqing; Fang, Liyun; Yu, Fei; Yang, Huijun; Yu, Bin; Sun, Shaomei; Wang, Xing; Zhou, Ming; Jia, Qiyu; Zhao, Honglin; Huang, Guowei; Song, Kun; Niu, Kaijun
2016-08-01
We aimed to assess the predictive value of mean serum uric acid (SUA) levels for incident prediabetes. Normoglycemic adults (n=39,353) were followed for a median of 3.0 years. Prediabetes is defined as impaired fasting glucose (IFG), impaired glucose tolerance (IGT), or impaired HbA1c (IA1c), based on the American Diabetes Association criteria. SUA levels were measured annually. Four diagnostic strategies were used to detect prediabetes in four separate analyses (Analysis 1: IFG. Analysis 2: IFG+IGT. Analysis 3: IFG+IA1c. Analysis 4: IFG+IGT+IA1c). Cox proportional hazards regression models were used to assess the relationship between SUA quintiles and prediabetes. The C-statistic was additionally used in the final analysis to compare the accuracy of predictions based on baseline SUA and mean SUA. After adjustment for potential confounders, the hazard ratios (95% confidence interval) of prediabetes for the highest versus lowest quintile of mean SUA were 1.22 (1.10, 1.36) in analysis 1; 1.59 (1.23, 2.05) in analysis 2; 1.62 (1.34, 1.95) in analysis 3; and 1.67 (1.31, 2.13) in analysis 4. In contrast, for baseline SUA, significance was only reached in analyses 3 and 4. Moreover, compared with baseline SUA, the mean SUA value was associated with a significant increase in the C-statistic (P<0.001). Mean SUA was strongly and positively related to prediabetes risk, and showed better predictive ability for prediabetes than baseline SUA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan
2018-02-01
Networks can represent a wide range of complex systems, such as social, biological, and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques, and clustering information plays an important role in solving it. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node toward different node pairs. In this paper, we shift our focus from nodes to links and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship, and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
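For contrast with the paper's ALC coefficient, the node-clustering baseline it improves upon can be sketched in a few lines with networkx: score each candidate link by the summed clustering coefficients of its common neighbors. The link-specific ALC coefficient itself is not reproduced here; this is only the node-level baseline it generalizes.

```python
import networkx as nx

G = nx.karate_club_graph()
cc = nx.clustering(G)   # node clustering coefficient of every node

def node_clustering_score(g, u, v):
    """Node-clustering-based link score: sum of the clustering coefficients
    of the common neighbors of u and v (the baseline the ALC work refines)."""
    return sum(cc[w] for w in nx.common_neighbors(g, u, v))

candidates = list(nx.non_edges(G))
ranked = sorted(candidates,
                key=lambda e: node_clustering_score(G, *e), reverse=True)
for u, v in ranked[:5]:
    print(f"predicted link {u}-{v}: score {node_clustering_score(G, u, v):.2f}")
```

The limitation the paper targets is visible here: each common neighbor w contributes the same cc[w] to every candidate pair, regardless of how w's triangles are distributed around that particular pair.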
Finite element prediction on the chassis design of UniART4 racing car
NASA Astrophysics Data System (ADS)
Zaman, Z. I.; Basaruddin, K. S.; Basha, M. H.; Rahman, M. T. Abd; Daud, R.
2017-09-01
This paper presents the analysis and evaluation of the chassis design for the University Automotive Racing Team No. 4 (UniART4) car based on finite element analysis. The existing UniART4 car chassis was measured and modelled geometrically in SolidWorks before being analysed in FEA software (ANSYS). Four types of static structural analysis were used to predict the chassis design capability under four different loading conditions: vertical bending, lateral bending, lateral torsion, and horizontal lozenging. The results showed that the chassis was subjected to the highest stress and strain under horizontal lozenging, whereas the minimum stress and strain response was obtained under lateral bending. The present analysis could provide valuable information for predicting the sustainability of the current UniART car chassis design.
CABS-fold: Server for the de novo and consensus-based prediction of protein structure.
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-07-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures, ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. In addition to template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation, and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.
NASA Astrophysics Data System (ADS)
Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman
2018-03-01
The age of 4-6 years is a good age for optimizing aspects of development, in particular psychomotor development. Psychomotor ability is broad and difficult to monitor, but it has meaningful value for a child's life because it directly affects behavior and deeds. The problem addressed here is therefore to predict a child's ability level based on psychomotor aspects. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor ability, producing predictions with a mean squared error (MSE) of 0.001 at the end of training. The results indicate that 30% of children aged 4-6 years have a good level of psychomotor ability, with the remainder rated excellent, less good, or good enough.
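The sketch below illustrates the kind of backpropagation training the abstract describes: a small feed-forward network trained until the mean squared error reaches the reported 0.001. The architecture, indicators, and data are illustrative assumptions, not the authors' dataset.

```python
# Backpropagation training of a tiny MLP until MSE <= 0.001 (toy data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 3))                 # e.g. 3 psychomotor indicator scores
y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)  # toy ability label

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y
    mse = float(np.mean(err ** 2))
    if mse <= 0.001:                    # training target from the abstract
        break
    # backward pass: gradients of MSE through the sigmoid layers
    d_out = 2 * err * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(f"stopped at epoch {epoch}, MSE = {mse:.4f}")
```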
Prediction and analysis of beta-turns in proteins by support vector machine.
Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao
2003-01-01
The tight turn has long been recognized as one of the three important features of proteins, after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns. Analysis and prediction of beta-turns in particular, and tight turns in general, are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to the prediction and analysis of beta-turns. We have investigated two aspects of applying SVMs to this problem. First, we developed a new SVM method, called BTSVM, which predicts the beta-turns of a protein from its sequence. The prediction results on a dataset of 426 non-homologous protein chains, using a sevenfold cross-validation technique, showed that our method is superior to previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than those of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
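A minimal sketch of the BTSVM idea, under assumed details: a linear SVM over one-hot encoded residue windows whose per-position weights can be inspected to see which positions support or prevent beta-turn formation. The window length, encoding, and toy data are assumptions, not the paper's actual features.

```python
# Linear SVM on one-hot residue windows; weights reveal position preferences.
import numpy as np
from sklearn.svm import LinearSVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(window):
    """One-hot encode a residue window into a flat feature vector."""
    x = np.zeros((len(window), 20))
    for i, a in enumerate(window):
        x[i, AA.index(a)] = 1.0
    return x.ravel()

# toy 4-residue training windows (beta-turns span 4 residues) and labels
windows = ["NPGD", "GPDG", "ALVI", "LIVA", "DPNG", "VILA"]
labels = [1, 1, 0, 0, 1, 0]
X = np.array([encode(w) for w in windows])
clf = LinearSVC(C=1.0).fit(X, labels)

# positive weights mark residue/position pairs favoring a turn
w = clf.coef_.reshape(4, 20)
for pos in range(4):
    best = AA[int(np.argmax(w[pos]))]
    print(f"position {pos + 1}: residue most supporting a turn = {best}")
```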
Peleato, Nicolás M; Andrews, Robert C
2015-01-01
This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error on cross-folded test sets (THMs: 43.7 (μg/L)², HAAs: 233.3 (μg/L)²). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
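A minimal sketch of the modeling chain, assuming flattened EEMs and toy concentrations: PCA scores computed from fluorescence spectra feed a multi-linear model of THM formation, evaluated by cross-folded mean squared prediction error as in the abstract.

```python
# PCA scores from flattened EEMs feeding a multi-linear THM model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
eems = rng.random((60, 30 * 40))   # 60 samples, 30 ex x 40 em wavelengths
thms = rng.random(60) * 80         # THM concentrations (ug/L), toy values

scores = PCA(n_components=5).fit_transform(eems)  # PC scores as NOM indicators
mse = -cross_val_score(LinearRegression(), scores, thms, cv=5,
                       scoring="neg_mean_squared_error").mean()
print(f"cross-folded mean squared prediction error: {mse:.1f} (ug/L)^2")
```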
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
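The sketch below shows the generic Wald SPRT mechanics: a log likelihood ratio is accumulated over incoming observations until it crosses Wald's thresholds. Simple Gaussian hypotheses stand in for the paper's collision-probability form, which the abstract does not fully specify.

```python
# Wald sequential probability ratio test with Gaussian hypotheses.
import math
import random

alpha, beta = 0.01, 0.01                 # allowed error probabilities
A = math.log((1 - beta) / alpha)         # accept-H1 threshold
B = math.log(beta / (1 - alpha))         # accept-H0 threshold
mu0, mu1, sigma = 0.0, 1.0, 1.0          # H0 and H1 means, shared std dev

random.seed(2)
llr, n = 0.0, 0
while B < llr < A:
    x = random.gauss(mu1, sigma)         # incoming observation (truth: H1)
    # log f1(x)/f0(x) for two Gaussians with equal variance
    llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    n += 1
print("decide", "H1" if llr >= A else "H0", "after", n, "samples")
```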
Tang, Zhongwen
2015-01-01
An analytical way to compute the predictive probability of success (PPOS) together with its credible interval at interim analysis (IA) is developed for large clinical trials with time-to-event endpoints. The method takes into account the data fixed up to the IA, the amount of uncertainty in future data, and uncertainty about parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding the optimal combination of analysis time and futility cutoff based on PPOS criteria.
Impact of Cloud Analysis on Numerical Weather Prediction in the Galician Region of Spain.
NASA Astrophysics Data System (ADS)
Souto, M. J.; Balseiro, C. F.; Pérez-Muñuzuri, V.; Xue, M.; Brewster, K.
2003-01-01
The Advanced Regional Prediction System (ARPS) is applied to operational numerical weather prediction in Galicia, northwest Spain. The model is run daily for 72-h forecasts at a 10-km horizontal spacing. Located on the northwest coast of Spain and influenced by Atlantic weather systems, Galicia has a high percentage (nearly 50%) of rainy days per year. For these reasons, the precipitation processes and the initialization of moisture and cloud fields are very important. Even though the ARPS model has a sophisticated data analysis system (ADAS) that includes a 3D cloud analysis package, because of operational constraints the current forecast starts from the 12-h forecast of the National Centers for Environmental Prediction Aviation Model (AVN). Still, procedures from the ADAS cloud analysis are used to construct the cloud fields based on AVN data and then applied to initialize the microphysical variables in ARPS. Comparisons of the ARPS predictions with local observations show that ARPS can predict both the daily total precipitation and its spatial distribution very well. ARPS also shows skill in predicting heavy rains and high winds, as observed during November 2000, and especially in the prediction of the 5 November 2000 storm that caused widespread wind and rain damage in Galicia. It is demonstrated that the cloud analysis contributes to the success of the precipitation forecasts.
47 CFR 74.793 - Digital low power TV and TV translator station protection of broadcast stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
Under this section, interference prediction analysis is based on interference thresholds (D/U signal strength ratios); predictions cover interference to co-channel DTV broadcast, digital Class A TV, digital LPTV, and digital TV translator stations, and to co-channel analog TV broadcast, Class A TV, LPTV, and TV translator stations.
Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A
2017-10-01
Purpose: To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods: This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT was evaluated in 280 subjects enrolled in the IPF Network. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ2 tests. Results: At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ2 tests showed that inclusion of the DTA score significantly improved the fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion: When compared with semiquantitative visual assessment and CT histogram-based measurements, the DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF. © RSNA, 2017.
Rotor Broadband Noise Prediction with Comparison to Model Data
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Burley, Casey L.
2001-01-01
This paper reports an analysis and prediction development for rotor broadband noise. The two primary components of this noise are Blade-Wake Interaction (BWI) noise, due to the blades' interaction with the turbulent wakes of the preceding blades, and "Self" noise, due to the development and shedding of turbulence within the blades' boundary layers. Emphasized in this report is the new code development for Self noise. The analysis and validation employ data from the HART program, a model BO-105 rotor wind tunnel test conducted in the German-Dutch Wind Tunnel (DNW). The BWI noise predictions are based on measured pressure response coherence functions using cross-spectral methods. The Self noise predictions are based on previously reported semiempirical modeling of Self noise obtained from isolated airfoil sections and the use of CAMRAD.Mod1 to define rotor performance and local blade segment flow conditions. Both BWI and Self noise from individual blade segments are Doppler shifted and summed at the observer positions. Prediction comparisons with measurements show good agreement for a range of rotor operating conditions from climb to steep descent. The broadband noise predictions, along with those of harmonic and impulsive Blade-Vortex Interaction (BVI) noise, demonstrate a significant advance in predictive capability for main rotor noise.
Analysis of Free Modeling Predictions by RBO Aleph in CASP11
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2015-01-01
The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: improvements in components of a method do not necessarily lead to improvements of the entire method, which suggests that these components interact in ways that are poorly understood. This problem, if real, represents a significant obstacle to community-wide progress. PMID:26492194
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, the areas of Sichuan Province at high risk of debris flow, Panzhihua and Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and based on different prior probability combinations of debris flows, the prediction of debris flows in these areas was compared with two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with mid-range prior probabilities, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional prediction models of debris flows using rainfall factors alone perform worse than those that also introduce environmental factors, and the predicting accuracies for occurrence and nonoccurrence of debris flows change in opposite directions when this information is supplemented.
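A minimal sketch of the comparison described above, with illustrative features and data: logistic regression against a Bayes/linear discriminant classifier whose prior probabilities are varied from equal to extreme.

```python
# LR versus Bayes/linear discriminant analysis under varying class priors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),   # rainfall factor (e.g. event rainfall, mm)
    rng.random(n),             # environmental factor (e.g. slope index)
])
y = (0.02 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.5, n) > 2.2).astype(int)

acc_lr = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"LR accuracy: {acc_lr:.3f}")
for priors in [(0.5, 0.5), (0.7, 0.3), (0.9, 0.1)]:  # equal to extreme priors
    bda = LinearDiscriminantAnalysis(priors=list(priors))
    acc = cross_val_score(bda, X, y, cv=5).mean()
    print(f"BDA accuracy with priors {priors}: {acc:.3f}")
```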
Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine
NASA Astrophysics Data System (ADS)
Santoso, Noviyanti; Wibowo, Wahyu
2018-03-01
Financial distress is the early stage before bankruptcy; bankruptcies caused by financial distress can be seen in a company's financial statements. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds financial distress prediction models for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM) combined with a variable selection technique. The results show that the prediction model based on the hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability, and model stability than the other models.
The Importance of Bacterial Culture to Food Microbiology in the Age of Genomics.
Gill, Alexander
2017-01-01
Culture-based and genomics methods provide different insights into the nature and behavior of bacteria. Maximizing the usefulness of both approaches requires recognizing their limitations and employing them appropriately. Genomic analysis excels at identifying bacteria and establishing the relatedness of isolates. Culture-based methods remain necessary for detection and enumeration, to determine viability, and to validate phenotype predictions made on the basis of genomic analysis. The purpose of this short paper is to discuss the application of culture-based analysis and genomics to the questions food microbiologists routinely need to ask regarding bacteria to ensure the safety of food and its economic production and distribution. Addressing these issues requires appropriate tools for the detection and enumeration of specific bacterial populations and the characterization of isolates for identification, phylogenetics, and phenotype prediction.
An integrated weather and sea-state forecasting system for the Arabian Peninsula (WASSF)
NASA Astrophysics Data System (ADS)
Kallos, George; Galanis, George; Spyrou, Christos; Mitsakou, Christina; Solomos, Stavros; Bartsotas, Nikolaos; Kalogrei, Christina; Athanaselis, Ioannis; Sofianos, Sarantis; Vervatis, Vassios; Axaopoulos, Panagiotis; Papapostolou, Alexandros; Qahtani, Jumaan Al; Alaa, Elyas; Alexiou, Ioannis; Beard, Daniel
2013-04-01
Nowadays, large industrial conglomerates such as Saudi ARAMCO require a series of weather and sea-state forecasting products that cannot be obtained from state meteorological offices or even commercial data providers. The two major objectives of the system are the prevention and mitigation of environmental problems and early warning of local conditions associated with extreme weather events. The management and operations part is related to early warning of weather and sea-state events that affect the operations of various facilities. The environmental part is related to air quality and especially desert dust levels in the atmosphere. The components of the integrated system include: (i) a weather and desert dust prediction system with a forecasting horizon of 5 days, (ii) a wave analysis and prediction component for the Red Sea and Arabian Gulf, (iii) an ocean circulation and tidal analysis and prediction component for both the Red Sea and Arabian Gulf, and (iv) an aviation part specializing in the vertical structure of the atmosphere and extreme events that affect air transport and other operations. Specialized data sets required for on/offshore operations are provided on a regular basis. State-of-the-art modeling components are integrated into a unique system that distributes the produced analyses and forecasts to each department. The weather and dust prediction system is SKIRON/Dust, the wave analysis and prediction system is based on the WAM cycle 4 model from ECMWF, and the ocean circulation model is MICOM, while the tidal analysis and prediction is a development of the Ocean Physics and Modeling Group of the University of Athens, incorporating the Tidal Model Driver. A nowcasting subsystem is included. An interactive system based on Google Maps provides the capability to extract and display the necessary information for any location in the Arabian Peninsula, the Red Sea, and the Arabian Gulf.
A Rational Analysis of Rule-Based Concept Learning
ERIC Educational Resources Information Center
Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.
2008-01-01
This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…
EUGENE'HOM: A generic similarity-based gene finder using multiple homologous sequences.
Foissac, Sylvain; Bardou, Philippe; Moisan, Annick; Cros, Marie-Josée; Schiex, Thomas
2003-07-01
EUGENE'HOM is gene prediction software for eukaryotic organisms based on comparative analysis. EUGENE'HOM is able to take into account multiple homologous sequences from more or less closely related organisms. It integrates the results of TBLASTX analysis, splice site and start codon prediction, and a robust coding/non-coding probabilistic model, which allows EUGENE'HOM to handle sequences from a variety of organisms. The current target of EUGENE'HOM is plant sequences. The EUGENE'HOM web site is available at http://genopole.toulouse.inra.fr/bioinfo/eugene/EuGeneHom/cgi-bin/EuGeneHom.pl.
Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker
2015-01-01
Sensitivity analysis (determination of how prediction variables affect response variables) of individual-based models (IBMs) is rarely performed but is important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina), in Washington...
Short-term PV/T module temperature prediction based on PCA-RBF neural network
NASA Astrophysics Data System (ADS)
Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng
2018-02-01
To address the non-linearity and large inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed, so that the PV/T system controller can act ahead of time according to the short-term forecast and optimize the control effect. Based on an analysis of the correlation between PV/T module temperature and meteorological factors, and the temperatures at adjacent points of the time series, the principal component analysis (PCA) method is used to pre-process the original input sample data. Combined with RBF neural network theory, the simulation results show that the PCA method gives the network model higher prediction accuracy and stronger generalization performance than an RBF neural network without principal component extraction.
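A minimal sketch of a PCA-RBF pipeline of the kind described: PCA compresses correlated inputs, then an RBF network (Gaussian units at k-means centers with a linear readout) predicts module temperature. Feature choices, network size, and data are illustrative assumptions.

```python
# PCA preprocessing followed by a simple RBF network regressor.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
X = rng.random((200, 6))   # irradiance, ambient temp, wind, lagged temps...
y = 25 + 30 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.5, 200)  # module temp

Z = PCA(n_components=3).fit_transform(X)  # PCA compresses correlated inputs
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Z).cluster_centers_
width = 1.0

def rbf_features(Z):
    """Gaussian RBF activations of each sample at every center."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

readout = Ridge(alpha=1e-3).fit(rbf_features(Z), y)  # linear output layer
print("train R^2:", round(readout.score(rbf_features(Z), y), 3))
```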
Link prediction based on nonequilibrium cooperation effect
NASA Astrophysics Data System (ADS)
Li, Lanxi; Zhu, Xuzhen; Tian, Hui
2018-04-01
Link prediction in complex networks has become a common focus of many researchers, but most existing methods concentrate on neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large-degree heterogeneity as nonequilibrium between nodes. This nonequilibrium facilitates stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably. We name this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. Theoretical analysis is presented first; experiments on 12 real-world networks then compare mainstream methods with our indices through numerical analysis.
NASA Astrophysics Data System (ADS)
Wang, Jun; Wang, Yang; Zeng, Hui
2016-01-01
A key issue in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions on a target support and their prediction uncertainties, based on one or more measurements, while honoring those measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can lead to computational challenges due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. The proposed local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support and variable-support data fusion problems in spatial analysis and modeling.
An Operational Model for the Prediction of Jet Blast
DOT National Transportation Integrated Search
2012-01-09
This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model, and an aircraft motion model. The final analysis was compared with d...
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Seasonal forecasts in the Sahel region: the use of rainfall-based predictive variables
NASA Astrophysics Data System (ADS)
Lodoun, Tiganadaba; Sanon, Moussa; Giannini, Alessandra; Traoré, Pierre Sibiry; Somé, Léopold; Rasolodimby, Jeanne Millogo
2014-08-01
In the Sahel region, seasonal predictions are crucial to alleviate the impacts of climate variability on populations' livelihoods. Agricultural planning (e.g., decisions about sowing date, fertilizer application date, and choice of crop or cultivar) is based on empirical predictive indices whose accuracy to date has not been scientifically proven. This paper attempts to statistically test whether the pattern of rainfall distribution over the May-July period contributes to predicting the real onset date and the nature (wet or dry) of the rainy season, as farmers believe. To that end, we considered historical records of daily rainfall from 51 stations spanning the period 1920-2008 and the different agro-climatic zones in Burkina Faso. We performed (1) principal component analysis to identify climatic zones based on the patterns of intra-seasonal rainfall, and (2) linear discriminant analysis to find the best rainfall-based variables to distinguish between real and false onset dates of the rainy season, and between wet and dry seasons in each climatic zone. A total of nine climatic zones were identified, in each of which, based on rainfall records from May to July, we derived linear discriminant functions that correctly predict the nature of a potential onset date of the rainy season (real or false) and that of the rainy season (dry or wet) in at least three cases out of five. These functions should contribute to alleviating the negative impacts of climate variability in the different climatic zones of Burkina Faso.
Exhaled Breath Markers for Nonimaging and Noninvasive Measures for Detection of Multiple Sclerosis.
Broza, Yoav Y; Har-Shai, Lior; Jeries, Raneen; Cancilla, John C; Glass-Marmor, Lea; Lejbkowicz, Izabella; Torrecilla, José S; Yao, Xuelin; Feng, Xinliang; Narita, Akimitsu; Müllen, Klaus; Miller, Ariel; Haick, Hossam
2017-11-15
Multiple sclerosis (MS) is the most common chronic neurological disease affecting young adults. MS diagnosis is based on clinical characteristics and confirmed by examination of the cerebrospinal fluid (CSF) or by magnetic resonance imaging (MRI) of the brain or spinal cord, or both. However, neither of the current diagnostic procedures is adequate as a routine tool to determine disease state. Thus, diagnostic biomarkers are needed. In the current study, a novel approach that could meet these expectations is presented. The approach is based on noninvasive analysis of volatile organic compounds (VOCs) in breath. Exhaled breath was collected from 204 participants, 146 MS patients and 58 healthy controls. Analysis was performed by gas chromatography-mass spectrometry (GC-MS) and a nanomaterial-based sensor array. Predictive models were derived from the sensors using artificial neural networks (ANNs). GC-MS analysis revealed significant differences in VOC abundance between MS patients and controls. Sensor data analysis on training sets was able to discriminate in binary comparisons between MS patients and controls with accuracies up to 90%. Blinded sets showed a 95% positive predictive value (PPV) between MS in remission and controls, 100% sensitivity with a 100% negative predictive value (NPV) between MS not-treated (NT) and controls, and an 86% NPV between relapse and controls. Possible links between VOC biomarkers and MS pathogenesis were established. These preliminary results suggest the applicability of a new nanotechnology-based method for MS diagnostics.
Sirius PSB: a generic system for analysis of biological sequences.
Koh, Chuan Hock; Lin, Sharene; Jedd, Gregory; Wong, Limsoon
2009-12-01
Computational tools are essential components of modern biological research. For example, BLAST searches can be used to identify related proteins based on sequence homology, or when a new genome is sequenced, prediction models can be used to annotate functional sites such as transcription start sites, translation initiation sites, and polyadenylation sites, and to predict protein localization. Here we present Sirius Prediction Systems Builder (PSB), a new computational tool for sequence analysis, classification, and searching. Sirius PSB has four main operations: (1) building a classifier, (2) deploying a classifier, (3) searching for proteins similar to query proteins, and (4) preliminary and post-prediction analysis. Sirius PSB supports all these operations via a simple and interactive graphical user interface. Besides being a convenient tool, Sirius PSB introduces two novelties in sequence analysis. First, a genetic algorithm is used to identify interesting features in the feature space. Second, instead of the conventional method of searching for similar proteins via sequence similarity, we introduce searching via feature similarity. To demonstrate the capabilities of Sirius PSB, we have built two prediction models: one for the recognition of Arabidopsis polyadenylation sites and another for the subcellular localization of proteins. Both systems are competitive against current state-of-the-art models based on evaluation of public datasets. More notably, the time and effort required to build each model is greatly reduced with the assistance of Sirius PSB. Furthermore, we show that under certain conditions, when BLAST is unable to find related proteins, Sirius PSB can identify functionally related proteins based on their biophysical similarities. Sirius PSB and its related supplements are available at: http://compbio.ddns.comp.nus.edu.sg/~sirius.
NASA Astrophysics Data System (ADS)
Aissaoui, Tayeb; Benguerba, Yacine; AlNashef, Inas M.
2017-08-01
The in-silico combination mechanism of triethylene glycol based deep eutectic solvents (DESs) has been studied. COSMO-RS and the graphical user interface TmoleX software were used to predict the interaction mechanism of hydrogen bond donors (HBDs) with hydrogen bond acceptors (HBAs) to form DESs. The predicted IR results were compared with the previously reported experimental FT-IR analysis for the same DESs. The sigma profiles of the HBD, HBAs, and formed DESs were interpreted to qualitatively identify molecular properties such as polarity and hydrogen bond donor and acceptor abilities. The predicted physicochemical properties reported in this study were in good agreement with the experimental ones.
Kraft, Reuben H.; Mckee, Phillip Justin; Dagro, Amy M.; Grafton, Scott T.
2012-01-01
This article presents the integration of brain injury biomechanics and graph theoretical analysis of neuronal connections, or connectomics, to form a neurocomputational model that captures the spatiotemporal characteristics of trauma. We relate localized mechanical brain damage, predicted from biofidelic finite element simulations of the human head subjected to impact, with degradation in the structural connectome for a single individual. The finite element model incorporates various length scales into the full head simulations by including anisotropic constitutive laws informed by diffusion tensor imaging. Coupling between the finite element analysis and network-based tools is established through experimentally based cellular injury thresholds for white matter regions. Once edges are degraded, graph theoretical measures are computed on the "damaged" network. For a frontal impact, the simulations predict that the temporal and occipital regions undergo the most axonal strain and strain rate at short times (less than 24 hrs), leading to the initiation of cellular death and resulting in damage that depends on the angle of impact and the underlying microstructure of brain tissue. The monotonic cellular death relationships predict a spatiotemporal change of structural damage. Interestingly, at 96 hrs post-impact, the computations predict that no network nodes were completely disconnected from the network, despite significant damage to network edges. At early times, network measures of global and local efficiency were degraded little; however, as time increased to 96 hrs the network properties were significantly reduced. In the future, this computational framework could help inform functional networks from physics-based structural brain biomechanics to obtain not only a biomechanics-based understanding of injury, but also neurophysiological insight. PMID:22915997
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.
Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, experimental data were acquired from 34 MDD patients and 30 healthy controls. A feature matrix was constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed using a rank-based feature selection method according to a receiver operating characteristic (ROC) criterion. The most significant features were identified and further utilized during the training and testing of a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with the other time-frequency approaches, STFT and EMD, the WT analysis showed the highest classification accuracy (accuracy = 87.5%, sensitivity = 95%, specificity = 80%). In conclusion, significant wavelet coefficients extracted from frontal and temporal pretreatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
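A minimal sketch of the proposed pipeline, with toy EEG epochs: wavelet-transform features, ROC-based feature ranking, and a logistic regression classifier evaluated with 10-fold cross-validation. Channel count, wavelet choice, and band mapping are assumptions.

```python
# Wavelet features + ROC-ranked selection + logistic regression (10-fold CV).
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
epochs = rng.normal(size=(64, 512))   # 64 one-channel EEG epochs (toy)
labels = rng.integers(0, 2, 64)       # responder / non-responder

def wavelet_features(sig):
    """Energy of each sub-band from a 4-level discrete wavelet transform."""
    coeffs = pywt.wavedec(sig, "db4", level=4)
    return np.array([np.sum(c ** 2) for c in coeffs])

X = np.array([wavelet_features(e) for e in epochs])

def auc_scores(X, y):
    """Rank features by how far their single-feature ROC AUC is from chance."""
    return np.array([abs(roc_auc_score(y, X[:, j]) - 0.5)
                     for j in range(X.shape[1])])

clf = make_pipeline(SelectKBest(auc_scores, k=3), LogisticRegression())
acc = cross_val_score(clf, X, labels, cv=10).mean()
print(f"10-fold CV accuracy: {acc:.2f}")
```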
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
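The core of GRIP can be sketched as a precision-weighted average of the population-level biomarker prediction and the individual's cross-validated prediction, with weights set by the relative error variances. The exact weighting used in the paper may differ; the variances and values below are illustrative.

```python
# Precision-weighted combination of population and individual predictions.
import numpy as np

def grip_combine(pred_pop, var_pop, pred_ind, var_ind):
    """Inverse-variance weighting of population and individual predictions."""
    w_pop = (1.0 / var_pop) / (1.0 / var_pop + 1.0 / var_ind)
    return w_pop * pred_pop + (1.0 - w_pop) * pred_ind

# e.g. trial-by-trial pain predictions for one subject
pop = np.array([3.1, 4.0, 5.2])   # biomarker-based (NPS-like) predictions
ind = np.array([2.5, 4.6, 6.0])   # idiographic cross-validated predictions
print(grip_combine(pop, var_pop=1.5, pred_ind=ind, var_ind=0.8))
```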
NASA Astrophysics Data System (ADS)
Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha
2014-03-01
Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method1 with a novel MR-based multivariate morphometric surface map of the hippocampus2 to predict future cognitive scores of patients. Previous work by Zhou et al.1 has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness, and can predict cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information, and ApoE status. While volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s2 recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance of ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
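A minimal sketch of the ANOVA-based variant, assuming toy genotype-by-measurement data: estimate r from the between- and within-genotype mean squares, then predict the number of measurements needed for a target coefficient of determination with the standard formula m = R2(1 - r) / (r(1 - R2)).

```python
# Repeatability from one-way ANOVA and the measurement-number formula.
import numpy as np

rng = np.random.default_rng(6)
geno = rng.normal(20.0, 5.0, size=(71, 1))            # true genotype effects
yields = geno + rng.normal(0.0, 4.0, size=(71, 16))   # 16 yearly measurements

m = yields.shape[1]
ms_g = m * yields.mean(axis=1).var(ddof=1)    # between-genotype mean square
ms_e = yields.var(axis=1, ddof=1).mean()      # within-genotype mean square
var_g = (ms_g - ms_e) / m
r = var_g / (var_g + ms_e)                    # repeatability coefficient

R2 = 0.90                                     # target determination
m_needed = R2 * (1.0 - r) / (r * (1.0 - R2))
print(f"r = {r:.3f}; measurements needed for R2 = 0.90: {int(np.ceil(m_needed))}")
```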
NASA Astrophysics Data System (ADS)
Maurya, S. P.; Singh, K. H.; Singh, N. P.
2018-05-01
In the present study, three recently developed geostatistical methods, single attribute analysis, multi-attribute analysis, and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region for the Blackfoot field, an oil field in Alberta, Canada. These techniques make use of seismic attributes generated by model based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher resolution porosity sections. A low impedance (6000-8000 m/s g/cc) and high porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections between the 1060 and 1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model based inversion is the most efficient method for predicting porosity in the inter-well region.
Fatima, Iram; Fahim, Muhammad; Lee, Young-Koo; Lee, Sungyoung
2013-01-01
In recent years, activity recognition in smart homes has been an active research area due to its applicability in many applications, such as assistive living and healthcare. Besides activity recognition, the information collected from smart homes has great potential for other application domains like lifestyle analysis, security and surveillance, and interaction monitoring. Therefore, discovery of users' common behaviors and prediction of future actions from past behaviors become an important step towards allowing an environment to provide personalized services. In this paper, we develop a unified framework for activity recognition-based behavior analysis and action prediction. For this purpose, we first propose a kernel fusion method for accurate activity recognition and then identify the significant sequential behaviors of inhabitants from the recognized activities of their daily routines. Behavior patterns are further utilized to predict future actions from past activities. To evaluate the proposed framework, we performed experiments on two real datasets. The results show a remarkable average improvement of 13.82% in the accuracy of recognized activities, along with the extraction of significant behavioral patterns and precise activity predictions with a 6.76% increase in F-measure. All this collectively helps in understanding users' actions to gain knowledge about their habits and preferences. PMID:23435057
Sensor image prediction techniques
NASA Astrophysics Data System (ADS)
Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.
1981-02-01
The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems that produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks performed during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of performance when using a particular sensor can be extended to the analysis of the same mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.
Zhang, H-X; Xu, X-Q; Fu, J-F; Lai, C; Chen, X-F
2015-04-01
Predictors for the quantitative evaluation of hepatic steatosis and liver fat content (LFC) in obese children, using clinical and laboratory variables available in general practice, are poorly identified. The aim was to build predictive models of hepatic steatosis and LFC in obese children based on biochemical parameters and anthropometry. Hepatic steatosis and LFC were determined using proton magnetic resonance spectroscopy in 171 obese children aged 5.5-18.0 years. Routine clinical and laboratory parameters were also measured in all subjects. Group analysis, univariable correlation analysis, and multivariate logistic and linear regression analyses were used to develop a liver fat score to identify hepatic steatosis and a liver fat equation to predict LFC in each subject. The predictive model of hepatic steatosis based on waist circumference and alanine aminotransferase had an area under the receiver operating characteristic curve of 0.959 (95% confidence interval: 0.927-0.990). The optimal cut-off value of 0.525 for determining hepatic steatosis had a sensitivity of 93% and a specificity of 90%. A liver fat equation was also developed based on the same parameters as the liver fat score, which can be used to calculate the LFC in each individual. The liver fat score and liver fat equation, consisting of routinely available variables, may help paediatricians to accurately determine hepatic steatosis and LFC in clinical practice, but external validation is needed before they can be employed for this purpose. © 2014 The Authors. Pediatric Obesity © 2014 World Obesity.
Adeniyi, D A; Wei, Z; Yang, Y
2018-01-30
A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction for some life-threatening ailments using a Chi-square case-based reasoning (χ2CBR) model. The proposed predictive engine is capable of reducing runtime and speeding up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method, which is used for selecting the best features for the proposed χ2CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through an in-house developed PHP program hosted on a XAMPP/Apache HTTP server. The process of data acquisition and case-base development is implemented using MySQL. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest, and traditional CBR techniques shows that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%, and that the proposed system executes faster than the baseline methods studied. The proposed risk calculator is therefore capable of providing useful, consistent, fast, accurate, and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
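A minimal sketch of the chi-square case-based reasoning step, under assumed case representations: retrieve the stored case nearest to a query by chi-square distance over normalized feature vectors and predict its risk level.

```python
# Chi-square distance retrieval for case-based risk prediction.
import numpy as np

def chi2_distance(a, b, eps=1e-10):
    """Chi-square distance between two normalized feature vectors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

case_base = np.array([[0.2, 0.5, 0.3],   # stored cases (normalized features)
                      [0.6, 0.3, 0.1],
                      [0.1, 0.2, 0.7]])
risk_levels = ["low", "high", "medium"]  # label of each stored case

query = np.array([0.15, 0.45, 0.40])     # new patient's feature vector
d = np.array([chi2_distance(query, c) for c in case_base])
print("predicted risk:", risk_levels[int(np.argmin(d))])
```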
NASA Astrophysics Data System (ADS)
Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
A novel method to compute the current-voltage characteristics (CVCs) of direct-current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem and have successfully predicted the parameters that determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of the ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, over a range of plausible threshold values, or over whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
NASA Astrophysics Data System (ADS)
Wang, Weijie; Lu, Yanmin
2018-03-01
Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is usually a decimal fraction, whereas the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding influences different accuracy metrics differently, and we show that rounding is a necessary post-processing step for predicted ratings, eliminating model prediction bias and improving prediction accuracy. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which round the predicted rating to an optimal integer rating and achieve better prediction accuracy than the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of the proposed rounding approaches.
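A minimal sketch of what distribution-based rounding could look like, assuming the model outputs a probability for each integer rating; the expected-absolute-error criterion and the example distribution are illustrative, not the paper's exact formulation.

```python
import numpy as np

def basic_rounding(pred):
    """Baseline: round the decimal prediction to the nearest integer."""
    return int(round(pred))

def distribution_rounding(probs, ratings=(1, 2, 3, 4, 5)):
    """Pick the integer rating minimizing expected absolute error under a
    predicted rating distribution (one plausible reading of the abstract)."""
    ratings = np.array(ratings)
    expected_err = [np.sum(probs * np.abs(ratings - r)) for r in ratings]
    return int(ratings[int(np.argmin(expected_err))])

probs = np.array([0.05, 0.10, 0.20, 0.40, 0.25])  # hypothetical P(rating = k)
print(basic_rounding(3.7), distribution_rounding(probs))
```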
Scaling analysis of gas-liquid two-phase flow pattern in microgravity
NASA Technical Reports Server (NTRS)
Lee, Jinho
1993-01-01
A scaling analysis of gas-liquid two-phase flow pattern in microgravity, based on the dominant physical mechanism, was carried out with the goal of predicting the gas-liquid two-phase flow regime in a pipe under conditions of microgravity. The results demonstrated the effect of inlet geometry on the flow regime transition. A comparison of the predictions with existing experimental data showed good agreement.
Improved nonlinear prediction method
NASA Astrophysics Data System (ADS)
Adenan, Nur Hamiza; Md Noorani, Mohd Salmi
2014-06-01
The analysis and prediction of time series data have been widely studied, and many techniques have been developed for areas such as weather forecasting, financial markets and hydrological phenomena, where the data are contaminated by noise. Various refinements have therefore been introduced to analyze and predict such time series. Given the importance of analysis and prediction accuracy, this study tests the effectiveness of an improved nonlinear prediction method on noisy data. The improved method forms a composite series from the successive differences of the time series; phase space reconstruction is then performed on this one-dimensional composite series to reconstruct a number of space dimensions, and finally the local linear approximation method is employed to make predictions based on the phase space. The improved method was tested with logistic map series containing 0%, 5%, 10%, 20% and 30% noise. The results show that predictions made with the improved method agree closely with the observations, and the correlation coefficient was close to one for data with up to 10% noise. Thus, the method can predict noisy time series without requiring any separate noise-reduction step.
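A minimal sketch of the pipeline as described, assuming unit delay and a zeroth-order (nearest-neighbour average) local approximation in place of the paper's local linear fit; the embedding dimension and neighbour count are illustrative choices.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Phase-space reconstruction: each row is a delay vector."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_predict(x, dim=3, k=5):
    """Predict the next value from the k nearest delay vectors (zeroth-order
    local approximation; the paper uses a local linear variant)."""
    emb = delay_embed(x, dim)
    query, hist = emb[-1], emb[:-1]
    d = np.linalg.norm(hist - query, axis=1)
    idx = np.argsort(d)[:k]
    return np.mean(x[idx + dim])   # average of each neighbour's successor

# Logistic map series (standing in for the paper's 'Logistic' data), then the
# composite series of successive differences described in the abstract.
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])
series = np.diff(x)
print(local_predict(series))
```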
Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis
NASA Astrophysics Data System (ADS)
Rzepecka, Zofia; Kalita, Jakub
2016-04-01
It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of this research is stochastic modeling and prediction of ZTW using time series analysis tools. Time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the study were obtained from the GGOS service hosted by the Vienna University of Technology, with a resolution of six hours; ZTW values for the years 2010-2013 were adopted. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were selected for the investigation. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis: the prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes, and for both stations optimal ARMA processes were obtained based on several criteria. On this basis, ZTW values were predicted one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
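A minimal sketch of the two-stage scheme on synthetic data: least-squares removal of annual and semi-annual harmonics followed by a low-order ARMA forecast of the residuals. The series, the ARMA order and the statsmodels choice are assumptions; the paper selects orders by formal criteria.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical 6-hourly ZTW series over four years; real values come from
# the GGOS files used in the study.
rng = np.random.default_rng(0)
t = np.arange(4 * 365 * 4, dtype=float)          # 4 samples/day
w = 2 * np.pi / (365 * 4)                        # annual angular frequency
ztw = (15 + 2.0 * np.sin(w * t) + 0.8 * np.cos(2 * w * t)
       + rng.normal(0, 0.5, t.size))

# Stage 1: least-squares fit of annual and semi-annual sine/cosine terms.
def harmonics(tt):
    return np.column_stack([np.ones_like(tt), np.sin(w * tt), np.cos(w * tt),
                            np.sin(2 * w * tt), np.cos(2 * w * tt)])

X = harmonics(t)
coef, *_ = np.linalg.lstsq(X, ztw, rcond=None)
resid = ztw - X @ coef

# Stage 2: low-order ARMA on the residuals, forecast one day (four 6-h steps)
# ahead and add the seasonal part back; the (2, 0, 1) order is illustrative.
fit = ARIMA(resid, order=(2, 0, 1)).fit()
t_future = t[-1] + 1 + np.arange(4)
print(fit.forecast(steps=4) + harmonics(t_future) @ coef)
```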
Factors leading to different viability predictions for a grizzly bear data set
Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.
1996-01-01
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. But the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
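To make the density-dependence point concrete, here is a minimal Monte Carlo sketch, not any of the four programs: identical vital rates with and without a simple population ceiling can yield very different extinction estimates. All numbers are illustrative.

```python
import numpy as np

def extinction_prob(r_mean, r_sd, K=None, n0=50, years=100, runs=2000, seed=1):
    """Monte Carlo extinction probability for a scalar growth model with
    environmental stochasticity and optional ceiling density dependence."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(runs):
        n = float(n0)
        for _ in range(years):
            n *= np.exp(rng.normal(r_mean, r_sd))
            if K is not None:
                n = min(n, K)        # simple ceiling-type density dependence
            if n < 1:
                extinct += 1
                break
    return extinct / runs

# Same demography, with and without a ceiling: the gap illustrates how the
# density-dependence assumption alone can shift viability predictions.
print(extinction_prob(0.02, 0.15), extinction_prob(0.02, 0.15, K=80))
```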
Crysalis: an integrated server for computational analysis and design of protein crystallization.
Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I; Lin, Donghai; Song, Jiangning
2016-02-24
The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/.
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
NASA Technical Reports Server (NTRS)
Andrews, E. H., Jr.; Mackley, E. A.
1976-01-01
An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation on the AIM of the mixed-compression inlet design represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results both based on the one-dimensional force-momentum theorem yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart that resulted from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.
Predicting PDZ domain mediated protein interactions from structure
2013-01-01
Background PDZ domains are structural protein domains that recognize simple linear amino acid motifs, often at protein C-termini, and mediate protein-protein interactions (PPIs) in important biological processes, such as ion channel regulation, cell polarity and neural development. PDZ domain-peptide interaction predictors have been developed based on domain and peptide sequence information. Since domain structure is known to influence binding specificity, we hypothesized that structural information could be used to predict new interactions compared to sequence-based predictors. Results We developed a novel computational predictor of PDZ domain and C-terminal peptide interactions using a support vector machine trained with PDZ domain structure and peptide sequence information. Performance was estimated using extensive cross validation testing. We used the structure-based predictor to scan the human proteome for ligands of 218 PDZ domains and show that the predictions correspond to known PDZ domain-peptide interactions and PPIs in curated databases. The structure-based predictor is complementary to the sequence-based predictor, finding unique known and novel PPIs, and is less dependent on training–testing domain sequence similarity. We used a functional enrichment analysis of our hits to create a predicted map of PDZ domain biology. This map highlights PDZ domain involvement in diverse biological processes, some only found by the structure-based predictor. Based on this analysis, we predict novel PDZ domain involvement in xenobiotic metabolism and suggest new interactions for other processes including wound healing and Wnt signalling. Conclusions We built a structure-based predictor of PDZ domain-peptide interactions, which can be used to scan C-terminal proteomes for PDZ interactions. We also show that the structure-based predictor finds many known PDZ mediated PPIs in human that were not found by our previous sequence-based predictor and is less dependent on training–testing domain sequence similarity. Using both predictors, we defined a functional map of human PDZ domain biology and predict novel PDZ domain function. Users may access our structure-based and previous sequence-based predictors at http://webservice.baderlab.org/domains/POW. PMID:23336252
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.
1990-01-01
The feasibility of predicting interior noise due to random acoustic or turbulent boundary layer excitation was investigated in experiments in which a statistical energy analysis model (VAPEPS) was used to analyze measurements of the acceleration response and sound transmission of flat aluminum, lucite, and graphite/epoxy plates exposed to random acoustic or turbulent boundary layer excitation. The noise reduction of the plate, when backed by a shallow cavity and excited by a turbulent boundary layer, was predicted using a simplified theory based on the assumption of adiabatic compression of the fluid in the cavity. The predicted plate acceleration response was used as input in the noise reduction prediction. Reasonable agreement was found between the predictions and the measured noise reduction in the frequency range 315-1000 Hz.
Improved Bond Equations for Fiber-Reinforced Polymer Bars in Concrete.
Pour, Sadaf Moallemi; Alam, M Shahria; Milani, Abbas S
2016-08-30
This paper explores a set of new equations to predict the bond strength between fiber-reinforced polymer (FRP) rebar and concrete. The proposed equations are based on a comprehensive statistical analysis of existing experimental results in the literature. The parameters with the strongest effect on the bond behavior of FRP-reinforced concrete were first identified by applying a factorial analysis to part of the available database. The database, which contains 250 pullout tests, was then divided into four groups based on concrete compressive strength and rebar surface. Afterward, nonlinear regression analysis was performed for each group in order to determine the bond equations. The results show that the proposed equations predict bond strengths more accurately than previously reported models.
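A minimal sketch of the per-group regression step, assuming a hypothetical power-law bond model in concrete strength and cover-to-diameter ratio; the functional form, data points and parameters are illustrative, not the paper's fitted equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def bond_model(X, a, b, c):
    """Illustrative form: bond strength from sqrt(fc) and cover/diameter.
    The actual functional forms come from the paper's regression analysis."""
    fc, cover_db = X
    return a * np.sqrt(fc) * cover_db ** b + c

# Hypothetical pullout observations: (fc in MPa, cover/db) -> bond stress (MPa)
fc = np.array([30, 30, 45, 45, 60, 60], float)
cdb = np.array([1.5, 3.0, 1.5, 3.0, 1.5, 3.0], float)
tau = np.array([8.1, 11.0, 9.9, 13.4, 11.3, 15.6], float)

popt, _ = curve_fit(bond_model, (fc, cdb), tau, p0=(1.0, 0.5, 0.0))
print(popt, bond_model((np.array([40.0]), np.array([2.0])), *popt))
```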
Classification and disease prediction via mathematical programming
NASA Astrophysics Data System (ADS)
Lee, Eva K.; Wu, Tsung-Lin
2007-11-01
In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview of various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our development of novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
Brown Connolly, Nancy E
2014-12-01
This foundational study applies receiver operating characteristic (ROC) analysis to evaluate the utility and predictive value of a disease management (DM) model that uses remote monitoring (RM) devices for chronic obstructive pulmonary disease (COPD). The literature identifies a need for a more rigorous method to validate and quantify evidence-based value for RM systems used to monitor persons with a chronic disease. ROC analysis is an engineering approach widely applied in medical testing, but its utility in RM has not been evaluated. Classifiers (saturated peripheral oxygen [SPO2], blood pressure [BP], and pulse), optimum threshold, and predictive accuracy are evaluated based on patient outcomes. Parametric and nonparametric methods were used. Event-based patient outcomes included inpatient hospitalization, accident and emergency, and home health visits. Statistical analysis tools included Microsoft (Redmond, WA) Excel(®) and MedCalc(®) (MedCalc Software, Ostend, Belgium) version 12 to generate ROC curves and statistics. Persons with COPD were monitored a minimum of 183 days, with at least one inpatient hospitalization within 12 months prior to monitoring. Retrospective, de-identified patient data from a United Kingdom National Health Service COPD program were used. Datasets included biometric readings, alerts, and resource utilization. SPO2 was identified as a predictive classifier, with an optimal average threshold setting of 85-86%. BP and pulse were failed classifiers, and areas of design were identified that may improve their utility and predictive capacity. A cost avoidance methodology was developed. Results can be applied to health services planning decisions, and the methods can be applied to system design and evaluation based on patient outcomes. This study validated the use of ROC analysis in RM program evaluation.
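A minimal sketch of how such a threshold could be derived with scikit-learn, assuming event labels and SPO2 readings are available; the data and the Youden-index criterion are illustrative, since the abstract does not specify the study's threshold rule.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: SPO2 readings and whether an adverse event followed.
spo2 = np.array([97, 95, 93, 90, 88, 86, 85, 84, 82, 80], float)
event = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

# Lower SPO2 indicates higher risk, so the risk score is -spo2.
fpr, tpr, thr = roc_curve(event, -spo2)
best = np.argmax(tpr - fpr)          # Youden index J = sensitivity + specificity - 1
print("AUC:", roc_auc_score(event, -spo2))
print("Alert when SPO2 <=", -thr[best])   # convert score threshold back to SPO2
```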
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R
2018-06-03
Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect the results of sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop machine learning (ML) models to predict the real HR and BP based on raw video analysis (RVA), with an example application in chocolate consumption. The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels over three face regions (forehead and both cheeks). To validate the proposed method and the ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, which predicts HR and BP from the RVA outputs with R = 0.85, and (ii) Model 2, a time-series model that uses HR, magnitude and luminosity from the RVA as inputs to predict HR values every second with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations between changes in HR and BP and chocolate hardness and purchase intention.
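A minimal sketch of the core PPG idea, assuming the mean green-channel intensity of a face region has already been extracted per frame; the band limits and the synthetic signal are illustrative, and the paper's ML models are not reproduced here.

```python
import numpy as np

def estimate_hr(green_means, fps):
    """Estimate heart rate (bpm) from per-frame mean green-channel intensity
    of a face region, via the dominant spectral peak in the 0.7-3 Hz band."""
    x = np.asarray(green_means, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)      # 42-180 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 30-fps signal with a 1.2 Hz (72 bpm) pulse component plus noise.
fps, n = 30, 30 * 20
t = np.arange(n) / fps
sig = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(0).normal(0, 0.3, n)
print(estimate_hr(sig, fps))
```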
Johannesdottir, Fjola; Allaire, Brett; Bouxsein, Mary L
2018-05-30
This review critiques the ability of CT-based methods to predict incident hip and vertebral fractures. CT-based techniques with concurrent calibration all show strong associations with incident hip and vertebral fracture, predicting hip and vertebral fractures as well as, and sometimes better than, dual-energy X-ray absorptiometry areal bone mineral density (DXA aBMD). There is growing evidence for the use of routine CT scans for bone health assessment. CT-based techniques provide a robust approach for osteoporosis diagnosis and fracture prediction. It remains to be seen if further technical advances will improve fracture prediction compared to DXA aBMD. Future work should include more standardization in CT analyses, establishment of treatment intervention thresholds, and more studies to determine whether routine CT scans can be efficiently used to expand the number of individuals who undergo evaluation for fracture risk.
Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L
2017-05-07
In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.
Mechatronics technology in predictive maintenance method
NASA Astrophysics Data System (ADS)
Majid, Nurul Afiqah A.; Muthalif, Asan G. A.
2017-11-01
This paper presents recent mechatronics technology that can help implement predictive maintenance by combining intelligent instrumentation with predictive maintenance methods. The Vibration Fault Simulation System (VFSS) is an example of such a mechatronic system. The focus of this study is vibration-based condition monitoring of critical machines, since vibration measurement is often the key indicator of the state of the machine. The paper shows how to choose an appropriate strategy for the vibration diagnostics of mechanical systems, especially rotating machines, so as to recognize failures during operation. Vibration signature analysis is implemented to detect faults in rotating machinery, including imbalance, mechanical looseness, bent shaft, misalignment, missing blade, bearing fault, balancing mass and critical speed. To perform vibration signature analysis for rotating machinery faults, the study examines how mechatronics technology can serve as a predictive maintenance method. A Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand fault signatures. These techniques are based on processing vibration data in the frequency domain, and a LabVIEW-based spectrum analyzer software package is developed to acquire and extract the frequency content of fault signals. The system is successfully tested against the characteristic vibration fault signatures that occur in rotating machinery.
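A minimal sketch of frequency-domain signature checking on a synthetic accelerometer signal; the sampling rate, shaft speed, and the 1x/2x fault conventions are generic illustrations, not the VFSR's actual configuration.

```python
import numpy as np

fs, rpm = 5000, 1800                      # sample rate (Hz), shaft speed
f_rot = rpm / 60.0                        # 30 Hz rotational frequency
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic accelerometer signal: imbalance typically shows up at 1x the
# rotational frequency; misalignment commonly adds a 2x component.
sig = (1.0 * np.sin(2 * np.pi * f_rot * t)
       + 0.4 * np.sin(2 * np.pi * 2 * f_rot * t)
       + np.random.default_rng(0).normal(0, 0.2, t.size))

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
amp = 2.0 / t.size * np.abs(np.fft.rfft(sig))
for k, name in [(1, "1x (imbalance)"), (2, "2x (misalignment)")]:
    i = int(np.argmin(np.abs(freqs - k * f_rot)))
    print(name, round(amp[i - 2:i + 3].max(), 2))   # peak amplitude near k*f_rot
```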
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper, a combined free replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in a Monte Carlo simulation to predict the times to failure needed for the warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions, and in this way to lower costs and increase profit.
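A minimal sketch of the Monte Carlo costing step for a combined free-replacement/pro-rata policy, assuming Weibull-distributed times to failure; all parameters are illustrative, and replacement chains within the free-replacement window are ignored for brevity.

```python
import numpy as np

def expected_warranty_cost(shape, scale, w1, w, price, runs=100_000, seed=0):
    """Expected per-unit cost of a combined FRW/PRW policy: free replacement
    before w1, pro-rata refund between w1 and w, nothing after w."""
    rng = np.random.default_rng(seed)
    ttf = scale * rng.weibull(shape, runs)           # times to failure (years)
    cost = np.where(ttf < w1, price,                       # free replacement
           np.where(ttf < w, price * (w - ttf) / (w - w1), # pro-rata refund
                    0.0))
    return cost.mean()

# Hypothetical bulb: Weibull(shape=2, scale=1.6 yr), FRW to 0.5 yr, PRW to 1 yr.
print(expected_warranty_cost(2.0, 1.6, w1=0.5, w=1.0, price=4.0))
```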
Feng, Sheng; Shi, Jun; Parrott, Neil; Hu, Pei; Weber, Cornelia; Martin-Facklam, Meret; Saito, Tomohisa; Peck, Richard
2016-07-01
We propose a strategy for studying ethnopharmacology by conducting sequential physiologically based pharmacokinetic (PBPK) prediction (a 'bottom-up' approach) and population pharmacokinetic (popPK) confirmation (a 'top-down' approach), or in reverse order, depending on whether the purpose is ethnic effect assessment for a new molecular entity under development or a tool for ethnic sensitivity prediction for a given pathway. The strategy is exemplified with bitopertin. A PBPK model was built using Simcyp(®) to simulate the pharmacokinetics of bitopertin and to predict the ethnic sensitivity in clearance, given pharmacokinetic data in just one ethnicity. Subsequently, a popPK model was built using NONMEM(®) to assess the effect of ethnicity on clearance, using human data from multiple ethnic groups. A comparison was made to confirm the PBPK-based ethnic sensitivity prediction, using the results of the popPK analysis. PBPK modelling predicted that the bitopertin geometric mean clearance values after 20 mg oral administration in Caucasians would be 1.32-fold and 1.27-fold higher than the values in Chinese and Japanese, respectively. The ratios of typical clearance in Caucasians to the values in Chinese and Japanese estimated by popPK analysis were 1.20 and 1.17, respectively. The popPK analysis results were similar to the PBPK modelling results. As a general framework, we propose that PBPK modelling should be considered to predict ethnic sensitivity of pharmacokinetics prior to any human data and/or with data in only one ethnicity. In some cases, this will be sufficient to guide initial dose selection in different ethnicities. After clinical trials in different ethnicities, popPK analysis can be used to confirm ethnic differences and to support dose justification and labelling. PBPK modelling prediction and popPK analysis confirmation can complement each other to assess ethnic differences in pharmacokinetics at different drug development stages.
Analytical method for predicting the pressure distribution about a nacelle at transonic speeds
NASA Technical Reports Server (NTRS)
Keith, J. S.; Ferguson, D. R.; Merkle, C. L.; Heck, P. H.; Lahti, D. J.
1973-01-01
The formulation and development of a computer analysis for the calculation of streamlines and pressure distributions around two-dimensional (planar and axisymmetric) isolated nacelles at transonic speeds are described. The computerized flow field analysis is designed to predict the transonic flow around long and short high-bypass-ratio fan duct nacelles with inlet flows and with exhaust flows having appropriate aerothermodynamic properties. The flow field boundaries are located as far upstream and downstream as necessary to obtain minimum disturbances at the boundary. The far-field lateral flow field boundary is analytically defined to exactly represent free-flight conditions or solid wind tunnel wall effects. The inviscid solution technique is based on a Streamtube Curvature Analysis. The computer program utilizes an automatic grid refinement procedure and solves the flow field equations with a matrix relaxation technique. The boundary layer displacement effects and the onset of turbulent separation are included, based on the compressible turbulent boundary layer solution method of Stratford and Beavers and on the turbulent separation prediction method of Stratford.
Predicting miRNA targets for head and neck squamous cell carcinoma using an ensemble method.
Gao, Hong; Jin, Hui; Li, Guijun
2018-01-01
This study aimed to uncover potential microRNA (miRNA) targets in head and neck squamous cell carcinoma (HNSCC) using an ensemble method which combined 3 different methods: Pearson's correlation coefficient (PCC), Lasso and a causal inference method (i.e., intervention calculus when the directed acyclic graph (DAG) is absent [IDA]), based on Borda count election. The Borda count election method was used to integrate the top 100 predicted targets of each miRNA generated by the individual methods. Afterwards, to validate the performance of our method, we checked the TarBase v6.0, miRecords v2013, miRWalk v2.0 and miRTarBase v4.5 databases to validate predictions for miRNAs. Pathway enrichment analysis of target genes in the top 1,000 miRNA-messenger RNA (mRNA) interactions was conducted to identify significant KEGG pathways. Finally, we extracted target genes with an occurrence frequency ≥3. Based on an absolute value of PCC >0.7, we found 33 miRNAs and 288 mRNAs for further analysis. We extracted 10 target genes with predicted frequencies of not less than 3. The target gene MYO5C possessed the highest frequency, being predicted by 7 different miRNAs. A total of 8 pathways were identified; the cytokine-cytokine receptor interaction and chemokine signaling pathways were the most significant. We successfully predicted target genes and pathways for HNSCC relying on miRNA expression data, mRNA expression profiles, an ensemble method and pathway information. Our results may offer new information for the diagnosis and prognosis estimation of HNSCC.
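A minimal sketch of Borda-count integration across the three per-miRNA target rankings; the gene names and list length are placeholders (the paper integrates each method's top 100).

```python
from collections import defaultdict

def borda_merge(rankings, top=100):
    """Combine per-method target rankings with a Borda count: rank r in a
    top-`top` list earns (top - r) points; the highest total wins."""
    scores = defaultdict(int)
    for ranking in rankings:
        for r, gene in enumerate(ranking[:top]):
            scores[gene] += top - r
    return sorted(scores, key=scores.get, reverse=True)

pcc   = ["MYO5C", "GENE2", "GENE3"]   # hypothetical top lists per method
lasso = ["GENE2", "MYO5C", "GENE4"]
ida   = ["MYO5C", "GENE4", "GENE2"]
print(borda_merge([pcc, lasso, ida], top=3))   # MYO5C ranks first
```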
Analysis of Physicochemical and Structural Properties Determining HIV-1 Coreceptor Usage
Bozek, Katarzyna; Lengauer, Thomas; Sierra, Saleta; Kaiser, Rolf; Domingues, Francisco S.
2013-01-01
The relationship of HIV tropism with disease progression and the recent development of CCR5-blocking drugs underscore the importance of monitoring virus coreceptor usage. As an alternative to costly phenotypic assays, computational methods aim at predicting virus tropism based on the sequence and structure of the V3 loop of the virus gp120 protein. Here we present a numerical descriptor of the V3 loop encoding its physicochemical and structural properties. The descriptor allows for structure-based prediction of HIV tropism and identification of properties of the V3 loop that are crucial for coreceptor usage. Use of the proposed descriptor for prediction results in a statistically significant improvement over the prediction based solely on V3 sequence with 3 percentage points improvement in AUC and 7 percentage points in sensitivity at the specificity of the 11/25 rule (95%). We additionally assessed the predictive power of the new method on clinically derived ‘bulk’ sequence data and obtained a statistically significant improvement in AUC of 3 percentage points over sequence-based prediction. Furthermore, we demonstrated the capacity of our method to predict therapy outcome by applying it to 53 samples from patients undergoing Maraviroc therapy. The analysis of structural features of the loop informative of tropism indicates the importance of two loop regions and their physicochemical properties. The regions are located on opposite strands of the loop stem and the respective features are predominantly charge-, hydrophobicity- and structure-related. These regions are in close proximity in the bound conformation of the loop potentially forming a site determinant for the coreceptor binding. The method is available via server under http://structure.bioinf.mpi-inf.mpg.de/. PMID:23555214
Mohebbi, Maryam; Ghassemian, Hassan
2011-08-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF), based on noninvasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor which is based on the analysis of the RR-interval signal. This method consists of three steps: preprocessing, feature extraction and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, length of the longest diagonal segments (Lmax), average length of the diagonal lines (Lmean), entropy, and trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns which are caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB) which consists of both 30 min ECG recordings that end just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology presents better results than other existing approaches.
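A minimal sketch of two of the five recurrence-plot features on a synthetic RR series; for brevity the recurrence plot is built in embedding dimension one with an absolute-difference threshold, which simplifies the usual embedded-RQA construction.

```python
import numpy as np

def rqa_features(rr, threshold=0.05):
    """Recurrence rate and longest diagonal line length (Lmax) from an
    RR-interval series; a small subset of the paper's five features."""
    rr = np.asarray(rr, float)
    rp = (np.abs(rr[:, None] - rr[None, :]) < threshold).astype(int)
    recurrence_rate = rp.mean()
    l_max, n = 0, len(rr)
    for k in range(1, n):                 # scan diagonals above the main one
        run, best = 0, 0
        for v in np.diagonal(rp, offset=k):
            run = run + 1 if v else 0
            best = max(best, run)
        l_max = max(l_max, best)
    return recurrence_rate, l_max

rr = 0.8 + 0.05 * np.sin(np.arange(200) / 5.0)   # synthetic RR intervals (s)
print(rqa_features(rr))
```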
NASA Technical Reports Server (NTRS)
Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.
1998-01-01
High-temperature, long-duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life-fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of the methodology and the CARES/Creep program.
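A minimal sketch of the damage-evolution idea in uniaxial form, integrating a Kachanov-Rabotnov-type damage rate to rupture; the constants are illustrative placeholders, not fitted ceramic parameters, and the temperature dependence is omitted.

```python
import numpy as np

def rupture_time(stress, A=1e-14, chi=5.0, phi=3.0, dt=10.0):
    """Integrate d(omega)/dt = A * stress**chi / (1 - omega)**phi from an
    undamaged state (omega = 0) until omega approaches 1 (rupture)."""
    omega, t = 0.0, 0.0
    while omega < 0.999:
        omega += A * stress ** chi / (1.0 - omega) ** phi * dt
        t += dt
        if t > 1e9:            # safety cap for very low stresses
            break
    return t

for s in (100.0, 150.0, 200.0):   # stress levels in MPa (illustrative)
    print(s, rupture_time(s))     # rupture time drops sharply with stress
```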
Estimation of the Driving Style Based on the Users’ Activity and Environment Influence
Sysoev, Mikhail; Kos, Andrej; Guna, Jože; Pogačnik, Matevž
2017-01-01
New models and methods have been designed to predict the influence of the user's environment and activity information on driving style in standard automotive environments. For these purposes, an experiment was conducted providing two types of analysis: (i) the evaluation of a self-assessment of the driving style; (ii) the prediction of aggressive driving style based on drivers' activity and environment parameters. Sixty-seven hours of driving data from 10 drivers were collected for analysis in this study. New parameters used in the experiment, the manner of car door opening and closing, were applied to improve the prediction accuracy. An Android application called Sensoric was developed to collect low-level smartphone data about the users' activity. The driving style was predicted from the user's environment and activity data collected before driving, and the prediction was tested against the actual driving style, calculated from objective driving data. The prediction has shown encouraging results, with precision values ranging from 0.727 up to 0.909 for the aggressive driving recognition rate. The obtained results lend support to the hypothesis that users' environment and activity data can be used to predict aggressive driving style in advance, before the driving starts. PMID:29065476
Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility.
Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei; Olesen, Niels Erik; Jones, David S; Kozyra, Agnieszka; Löbmann, Korbinian; Paluch, Krzysztof; Brennan, Claire Marie; Holm, René; Healy, Anne Marie; Andrews, Gavin P; Rades, Thomas
2015-09-08
In this study, a comparison of different methods to predict drug-polymer solubility was carried out on binary systems consisting of five model drugs (paracetamol, chloramphenicol, celecoxib, indomethacin, and felodipine) and polyvinylpyrrolidone/vinyl acetate copolymers (PVP/VA) of different monomer weight ratios. The drug-polymer solubility at 25 °C was predicted using the Flory-Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point depression method. These predictions were compared with the solubility in the low molecular weight liquid analogues of the PVP/VA copolymer (N-vinylpyrrolidone and vinyl acetate). The predicted solubilities at 25 °C varied considerably depending on the method used. However, the three thermal analysis methods ranked the predicted solubilities in the same order, except for the felodipine-PVP system. Furthermore, the magnitude of the predicted solubilities from the recrystallization method and the melting point depression method correlated well with the estimates based on the solubility in the liquid analogues, which suggests that the liquid-analogue method can be used as an initial screening tool if a liquid analogue is available. The findings of this comparative study provide general guidance for selecting the most suitable method(s) for screening drug-polymer solubility.
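A minimal sketch of how a Flory-Huggins interaction parameter can be backed out of a single melting-point-depression measurement; the expression is the standard Flory-Huggins depression form, and every numerical value is a hypothetical placeholder rather than data from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def chi_from_melting_depression(Tm, Tm0, dHfus, phi_drug, m):
    """Solve the Flory-Huggins melting-point depression equation
    (1/Tm - 1/Tm0) = -(R/dHfus) * [ln(phi_d) + (1 - 1/m) phi_p + chi phi_p**2]
    for the interaction parameter chi."""
    phi_p = 1.0 - phi_drug
    lhs = (1.0 / Tm - 1.0 / Tm0) * dHfus / (-R)
    return (lhs - np.log(phi_drug) - (1.0 - 1.0 / m) * phi_p) / phi_p ** 2

# Hypothetical inputs: pure-drug Tm0 = 433 K, heat of fusion 39 kJ/mol,
# an 80% (volume) drug mixture melting at 425 K, polymer/drug size ratio 100.
chi = chi_from_melting_depression(Tm=425.0, Tm0=433.0, dHfus=39e3,
                                  phi_drug=0.8, m=100.0)
print(round(chi, 3))   # negative chi would indicate favorable mixing
```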
Matsumoto, Hiroshi; Saito, Fumiyo; Takeyoshi, Masahiro
2015-12-01
Recently, the development of several gene expression-based prediction methods has been attempted in the field of toxicology. CARCINOscreen® is a gene expression-based screening method that predicts the carcinogenicity of chemicals targeting the liver with high accuracy. In this study, we investigated the applicability of this screening method, originally developed with F344 rats, to SD and Wistar rats, using two carcinogens, 2,4-diaminotoluene and thioacetamide, and two non-carcinogens, 2,6-diaminotoluene and sodium benzoate. After a 28-day repeated-dose test was conducted with each chemical in SD and Wistar rats, microarray analysis was performed using total RNA extracted from each liver, and the resulting gene expression data were applied to CARCINOscreen®. Predictive scores obtained for the known carcinogens were > 2 in all rat strains, while the non-carcinogens gave prediction scores below 0.5. These results suggest that the gene expression-based screening method CARCINOscreen® can be applied to SD and Wistar rats, strains widely used in toxicological studies, by setting an appropriate prediction-score boundary to classify chemicals into carcinogens and non-carcinogens.
NASA Technical Reports Server (NTRS)
Song, Kyonchan; Li, Yingyong; Rose, Cheryl A.
2011-01-01
The performance of a state-of-the-art continuum damage mechanics model for intralaminar damage, coupled with a cohesive zone model for delamination, is examined for failure prediction of quasi-isotropic open-hole tension laminates. Limitations of continuum representations of intra-ply damage and the effect of mesh orientation on the analysis predictions are discussed. It is shown that accurate prediction of matrix crack paths and stress redistribution after cracking requires a mesh aligned with the fiber orientation. Based on these results, an aligned mesh is proposed for analysis of the open-hole tension specimens, consisting of different meshes within the individual plies such that the element edges are aligned with the ply fiber direction. The modeling approach is assessed by comparison of analysis predictions to experimental data for specimen configurations in which failure is dominated by complex interactions between matrix cracks and delaminations. It is shown that the different failure mechanisms observed in the tests are well predicted. In addition, the modeling approach is demonstrated to predict proper trends in the effect of scaling on strength and failure mechanisms of quasi-isotropic open-hole tension laminates.
NASA Astrophysics Data System (ADS)
Park, Jeong-Gyun; Jee, Joon-Bum
2017-04-01
Severe weather such as heavy rain, heavy snow, drought and heat waves caused by climate change inflicts greater damage in urban areas, which are densely populated and industrialized. Unlike rural areas, urban areas concentrate population, transportation, buildings and fuel consumption, and anthropogenic factors such as the road energy balance and urban air flow create unique meteorological phenomena. Several research efforts on the prediction of urban meteorology are in progress. ASAPS (Advanced Storm-scale Analysis and Prediction System) predicts severe weather at very short range (6-hour forecasts) and high resolution (hourly in time and 1 km in space) over the Seoul metropolitan area, based on KLAPS (Korea Local Analysis and Prediction System) from the KMA (Korea Meteorological Administration). The system is configured in three parts: a background field (SUF5), an analysis field (SU01) incorporating observations, and a high-resolution forecast field (SUF1). In this study, we improve the high-resolution ASAPS model and perform a sensitivity test for a rainfall case. The improvements to ASAPS include the model domain configuration, high-resolution topographic data, and data assimilation of WISE observation data.
[Prediction of ETA oligopeptide antagonists from Glycine max based on in silico proteolysis].
Qiao, Lian-Sheng; Jiang, Lu-di; Luo, Gang-Gang; Lu, Fang; Chen, Yan-Kun; Wang, Ling-Zhi; Li, Gong-Yu; Zhang, Yan-Ling
2017-02-01
Oligopeptides are one of the key pharmacologically effective constituents of traditional Chinese medicine (TCM). Systematic study of the composition and efficacy of TCM oligopeptides is essential for analyzing the material basis and mechanism of TCM. In this study, potential anti-hypertensive oligopeptides from Glycine max and their endothelin receptor A (ETA) antagonistic activity were discovered and predicted using in silico technologies. Main protein sequences of G. max were collected and oligopeptides were obtained by in silico gastrointestinal proteolysis. A pharmacophore model of ETA-antagonistic peptides was then constructed, comprising one hydrophobic feature, one ionizable negative feature, one ring aromatic feature and five excluded volumes. Meanwhile, a three-dimensional structure of ETA was developed by homology modeling for further docking studies. According to the docking analysis and consensus score, GLN165 was identified as the key amino acid for ETA antagonistic activity, and 27 oligopeptides from G. max were predicted as potential ETA antagonists by the pharmacophore and docking studies. In silico proteolysis can thus be used to analyze protein sequences from TCM; by combining in silico proteolysis with molecular simulation, the biological activities of oligopeptides can be predicted rapidly from known TCM protein sequences. This may provide a methodological basis for rapid and efficient mechanism analysis of TCM oligopeptides. Copyright© by the Chinese Pharmaceutical Association.
Yu, Kun-Hsing; Fitzpatrick, Michael R; Pappas, Luke; Chan, Warren; Kung, Jessica; Snyder, Michael
2017-09-12
Precision oncology is an approach that accounts for individual differences to guide cancer management. Omics signatures have been shown to predict clinical traits for cancer patients. However, the vast amount of omics information poses an informatics challenge in systematically identifying patterns associated with health outcomes, and no general-purpose data-mining tool exists for physicians, medical researchers, and citizen scientists without significant training in programming and bioinformatics. To bridge this gap, we built the Omics AnalySIs System for PRecision Oncology (OASISPRO), a web-based system to mine the quantitative omics information from The Cancer Genome Atlas (TCGA). This system effectively visualizes patients' clinical profiles, executes machine-learning algorithms of choice on the omics data, and evaluates the prediction performance using held-out test sets. With this tool, we successfully identified genes strongly associated with tumor stage, and accurately predicted patients' survival outcomes in many cancer types, including mesothelioma and adrenocortical carcinoma. By identifying the links between omics and clinical phenotypes, this system will facilitate omics studies on precision cancer medicine and contribute to establishing personalized cancer treatment plans. This web-based tool is available at http://tinyurl.com/oasispro; source code is available at http://tinyurl.com/oasisproSourceCode. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
An Analysis of Effects of Background Variables on Student Achievement Based on the CTD Survey Data.
ERIC Educational Resources Information Center
Tolgyesi, Miklos
1985-01-01
Factors contributing to achievement were investigated as part of Hungary's Curriculum Theory Department (CTD-80) national assessment. Parent's education and occupation and student's sex predicted achievement results. School characteristics did not predict achievement or attitudes. (Author/GDC)
Clustering gene expression data based on predicted differential effects of GV interaction.
Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu
2005-02-01
Microarrays have become a popular biotechnology in biological and medical research. However, systematic and stochastic variability in microarray data is expected and unavoidable, so the raw measurements carry inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed directly by various clustering methods, which may introduce biased interpretation when identifying groups of genes or samples. In this paper, a statistical method based on mixed model approaches is proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs of cluster analysis. We illustrate the application of our method with a gene expression dataset and elucidate the utility of our approach using external validation.
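A minimal sketch of the partition-then-cluster idea on synthetic data, using plain two-way ANOVA interaction residuals as a stand-in for the paper's AUP-predicted GV effects; the matrix size and cluster count are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical log-expression matrix: genes x varieties, one observation per
# cell for brevity (the real analysis handles replicates within the ANOVA).
rng = np.random.default_rng(0)
y = rng.normal(0, 1, (50, 4))

# Two-way decomposition: grand mean + gene effect + variety effect + GV
# interaction; the leftover term approximates the GV interaction effects.
grand = y.mean()
gene_eff = y.mean(axis=1, keepdims=True) - grand
var_eff = y.mean(axis=0, keepdims=True) - grand
gv = y - grand - gene_eff - var_eff

# Cluster genes on the interaction effects rather than on raw log-ratios.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gv)
print(np.bincount(labels))
```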
NASA Technical Reports Server (NTRS)
Miles, J. H.
1981-01-01
A predicted standing-wave pressure and phase angle profile for a hard-wall rectangular duct with a region of converging-diverging area variation is compared with published experimental measurements in a study of sound propagation without flow. The factor-of-1/2 area variation used is of sufficient magnitude to produce large reflections. The prediction is based on a transmission matrix approach developed for the analysis of sound propagation in a variable-area duct with and without flow. The agreement between the measured and predicted results is shown to be excellent.
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2015-01-01
Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for prediction of protein structural class, and it mainly uses the protein primary sequence, predicted secondary structure sequence, and position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is constructed, and the dimension is reduced to 224D by principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H
2017-09-01
Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.
Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction
Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine M.; ...
2017-11-21
Inverse prediction is important in a wide variety of scientific and engineering contexts. One might use inverse prediction to predict fundamental properties/characteristics of an object using measurements obtained from it. This can be accomplished by "inverting" parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are science based; but often, forward models are empirically based, using the results of experimentation. For empirically-based forward models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). While nature dictates the causal relationship between factors and responses, experimenters can influence the type, accuracy, and precision of forward models that can be constructed via selection of factors, factor levels, and the set of trials that are performed. Whether the forward models are based on science, experiments or both, researchers can influence the ability to perform inverse prediction by selecting informative response variables. By using an errors-in-variables framework for inverse prediction, this paper shows via simple analysis and examples how the capability of a multivariate response (with respect to being informative and discriminating) can vary depending on how well the various responses complement one another over the range of the factor-space of interest. Insights derived from this analysis could be useful for selecting a set of response variables among candidates in cases where the number of response variables that can be acquired is limited by difficulty, expense, and/or availability of material.
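A toy sketch of inverse prediction with a linear empirical forward model: fit y = Bx from calibration trials, then "invert" for the factor x that best explains a new multivariate response, weighting each response by its error variance (generalized least squares). All numbers are illustrative, and the full errors-in-variables treatment of the paper is simplified away.

```python
import numpy as np

rng = np.random.default_rng(2)
x_cal = np.linspace(0.0, 10.0, 25)            # known factor levels (calibration trials)
B_true = np.array([1.0, 0.4, -0.7])           # true sensitivities of 3 responses
sigma = np.array([0.05, 0.20, 0.10])          # per-response measurement error sd
Y_cal = np.outer(x_cal, B_true) + rng.normal(0, sigma, size=(25, 3))

# Empirical forward model: least-squares slope per response (no intercept for brevity)
B_hat = (x_cal @ Y_cal) / (x_cal @ x_cal)

# Inverse prediction for a new object via variance-weighted (GLS) inversion
y_new = B_true * 4.2 + rng.normal(0, sigma)
w = 1.0 / sigma**2
x_hat = (B_hat * w) @ y_new / ((B_hat * w) @ B_hat)
print("estimated factor:", round(x_hat, 3))   # should be near 4.2
```

Note how the low-noise response dominates the estimate; that is the sense in which responses "complement one another" over the factor space.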
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
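The hybrid decompose-then-predict scheme can be sketched with a wavelet decomposition and one small feedforward network per component. This simplified version uses PyWavelets and scikit-learn on a synthetic series, and trains with the default optimizer rather than particle swarm optimization; wavelet choice, lag depth, and network size are arbitrary.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(600)
rate = 0.02 * np.sin(2 * np.pi * t / 50) + 0.005 * rng.standard_normal(600)

# Decompose the daily-variation series into multiresolution components
coeffs = pywt.wavedec(rate, "db4", level=3)
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[: len(rate)])

def lagged(x, p=5):
    """Build a lagged design matrix for one-step-ahead prediction."""
    X = np.column_stack([x[i : len(x) - p + i] for i in range(p)])
    return X, x[p:]

# One small feedforward net per component; the final forecast is the sum
forecast = 0.0
for comp in components:
    X, y = lagged(comp)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X[:-1], y[:-1])
    forecast += net.predict(X[-1:])[0]
print("next-day variation forecast:", forecast)
```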
Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild
2013-08-01
This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, all of whom had recordings with absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.
Hu, Min; Nohara, Yasunobu; Nakamura, Masafumi; Nakashima, Naoki
2017-01-01
The World Health Organization has declared Bangladesh one of 58 countries facing an acute Human Resources for Health (HRH) crisis. Artificial intelligence in healthcare has been shown to be successful for diagnostics. Using machine learning to predict pharmaceutical prescriptions may help address HRH crises. In this study, we investigate a predictive model by analyzing prescription data of 4,543 subjects in Bangladesh. We predict the function of prescribed drugs, comparing three machine-learning approaches. The approaches predict whether a subject should be prescribed medicine from among the 21 most frequently prescribed drug functions. Receiver operating characteristic (ROC) analysis was selected to evaluate and assess the prediction models. The results show that the drug function with the best prediction performance was oral hypoglycemic drugs, with an average AUC of 0.962. To understand how the variables affect prediction, we conducted factor analysis based on tree-based algorithms and natural language processing techniques.
CFD Analysis of the Aerodynamics of a Business-Jet Airfoil with Leading-Edge Ice Accretion
NASA Technical Reports Server (NTRS)
Chi, X.; Zhu, B.; Shih, T. I.-P.; Addy, H. E.; Choo, Y. K.
2004-01-01
For rime ice - where the ice buildup has only rough and jagged surfaces but no protruding horns - this study shows that two-dimensional CFD analysis based on the one-equation Spalart-Allmaras (S-A) turbulence model predicts the lift, drag, and pressure coefficients accurately up to near the stall angle. For glaze ice - where the ice buildup has two or more protruding horns near the airfoil's leading edge - CFD predictions were much less satisfactory because of the large separated region produced by the horns even at zero angle of attack. This CFD study, based on the WIND and Fluent codes, assesses the following turbulence models by comparing predictions with available experimental data: S-A, standard k-epsilon, shear-stress transport, v(exp 2)-f, and differential Reynolds stress.
Winter Precipitation Forecast in the European and Mediterranean Regions Using Cluster Analysis
NASA Astrophysics Data System (ADS)
Totz, Sonja; Tziperman, Eli; Coumou, Dim; Pfeiffer, Karl; Cohen, Judah
2017-12-01
The European climate is changing under global warming, and especially the Mediterranean region has been identified as a hot spot for climate change with climate models projecting a reduction in winter rainfall and a very pronounced increase in summertime heat waves. These trends are already detectable over the historic period. Hence, it is beneficial to forecast seasonal droughts well in advance so that water managers and stakeholders can prepare to mitigate deleterious impacts. We developed a new cluster-based empirical forecast method to predict precipitation anomalies in winter. This algorithm considers not only the strength but also the pattern of the precursors. We compare our algorithm with dynamic forecast models and a canonical correlation analysis-based prediction method demonstrating that our prediction method performs better in terms of time and pattern correlation in the Mediterranean and European regions.
O'Reilly, Christian; Plamondon, Réjean; Landou, Mohamed K; Stemmer, Brigitte
2013-01-01
This article presents an exploratory study investigating the possibility of predicting the time occurrence of a motor event related potential (ERP) from a kinematic analysis of human movements. Although the response-locked motor potential may link the ERP components to the recorded response, to our knowledge no previous attempt has been made to predict a priori (i.e. before any contact with the electroencephalographic data) the time occurrence of an ERP component based only on the modeling of an overt response. The proposed analysis relies on the delta-lognormal modeling of velocity, as proposed by the kinematic theory of rapid human movement used in several studies of motor control. Although some methodological aspects of this technique still need to be fine-tuned, the initial results showed that the model-based kinematic analysis allowed the prediction of the time occurrence of a motor command ERP in most participants in the experiment. The average map of the motor command ERPs showed that this signal was stronger in electrodes close to the contra-lateral motor area (Fz, FCz, FC1, and FC3). These results seem to support the claims made by the kinematic theory that a motor command is emitted at time t(0), the time reference parameter of the model. This article proposes a new time marker directly associated with a cerebral event (i.e. the emission of a motor command) that can be used for the development of new data analysis methodologies and for the elaboration of new experimental protocols based on ERP. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Fingeret, Abbey L; Martinez, Rebecca H; Hsieh, Christine; Downey, Peter; Nowygrod, Roman
2016-02-01
We aim to determine whether observed operations or internet-based video review predicts improved performance in the surgery clerkship. A retrospective review of students' usage of surgical videos, observed operations, evaluations, and examination scores was used to construct an exploratory principal component analysis. Multivariate regression was used to determine factors predictive of clerkship performance. Case log data for 231 students revealed a median of 25 observed cases. Students accessed the web-based video platform a median of 15 times. Principal component analysis yielded 4 factors contributing 74% of the variability, with a Kaiser-Meyer-Olkin coefficient of .83. Multivariate regression found shelf score (P < .0001), internal clinical skills examination score (P < .0001), subjective evaluations (P < .001), and video website utilization (P < .001), but not observed cases, to be significantly associated with overall performance. Utilization of a web-based operative video platform during a surgical clerkship is independently associated with improved clinical reasoning, fund of knowledge, and overall evaluation. Thus, this modality can serve as a useful adjunct to live observation. Copyright © 2016 Elsevier Inc. All rights reserved.
Enhancing Flood Prediction Reliability Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Liu, Z.; Merwade, V.
2017-12-01
Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
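The core of BMA combination: each ensemble member receives a posterior weight from its likelihood on training observations, and the BMA prediction is the weighted mixture. A minimal sketch with Gaussian member likelihoods and invented stage data follows; the EM-based weight estimation typically used in hydrologic BMA is replaced by direct likelihood weighting for brevity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 0.5, size=30)            # observed stages, training flood event
# Three model configurations of varying quality (rows = members)
members = obs[None, :] + rng.normal(0, [[0.3], [0.8], [1.5]], size=(3, 30))

# Posterior model weights from Gaussian likelihoods (equal prior on each member)
sig = members.std(axis=1, keepdims=True)
loglik = norm.logpdf(obs, loc=members, scale=sig).sum(axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()
print("BMA weights:", np.round(w, 3))

# Deterministic BMA prediction for a new triple of member simulations
new_sims = np.array([10.4, 9.1, 11.8])
print("BMA stage prediction:", float(w @ new_sims))
```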
NASA Astrophysics Data System (ADS)
Feller, Jens; Feller, Sebastian; Mauersberg, Bernhard; Mergenthaler, Wolfgang
2009-09-01
Many applications in plant management require close monitoring of equipment performance, in particular with the objective of preventing certain critical events. At each point in time, the information available to classify the criticality of the process is represented by the historic signal database as well as the actual measurement. This paper presents an approach to detect and predict critical events based on pattern recognition and discriminant analysis.
Customer Churn Prediction for Broadband Internet Services
NASA Astrophysics Data System (ADS)
Huang, B. Q.; Kechadi, M.-T.; Buckley, B.
Although churn prediction has been an area of research in the voice branch of telecommunications services, focused studies on the huge growth area of broadband Internet services are limited. Therefore, this paper presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, bill and payment information, and account information. Four prediction techniques (Logistic Regression, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn prediction based on the new features. Finally, an evaluation of the new features and a comparative analysis of the predictors are made for broadband customer churn prediction. The experimental results show that the new features with these four modelling techniques are efficient for customer churn prediction in the broadband service field.
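Benchmarking the four modelling techniques on a feature table is routine in scikit-learn; the synthetic, class-imbalanced data below stands in for the Henley-segment and usage features, which are not public.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced two-class problem: churners are the minority class
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85],
                           random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree":     DecisionTreeClassifier(max_depth=6),
    "mlp":      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "svm":      SVC(),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>8s}  AUC = {auc:.3f}")
```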
PredictProtein—an open resource for online prediction of protein structural and functional features
Yachdav, Guy; Kloppmann, Edda; Kajan, Laszlo; Hecht, Maximilian; Goldberg, Tatyana; Hamp, Tobias; Hönigschmid, Peter; Schafferhans, Andrea; Roos, Manfred; Bernhofer, Michael; Richter, Lothar; Ashkenazy, Haim; Punta, Marco; Schlessinger, Avner; Bromberg, Yana; Schneider, Reinhard; Vriend, Gerrit; Sander, Chris; Ben-Tal, Nir; Rost, Burkhard
2014-01-01
PredictProtein is a meta-service for sequence analysis that has been predicting structural and functional features of proteins since 1992. Queried with a protein sequence it returns: multiple sequence alignments, predicted aspects of structure (secondary structure, solvent accessibility, transmembrane helices (TMSEG) and strands, coiled-coil regions, disulfide bonds and disordered regions) and function. The service incorporates analysis methods for the identification of functional regions (ConSurf), homology-based inference of Gene Ontology terms (metastudent), comprehensive subcellular localization prediction (LocTree3), protein–protein binding sites (ISIS2), protein–polynucleotide binding sites (SomeNA) and predictions of the effect of point mutations (non-synonymous SNPs) on protein function (SNAP2). Our goal has always been to develop a system optimized to meet the demands of experimentalists not highly experienced in bioinformatics. To this end, the PredictProtein results are presented as both text and a series of intuitive, interactive and visually appealing figures. The web server and sources are available at http://ppopen.rostlab.org. PMID:24799431
Power quality analysis based on spatial correlation
NASA Astrophysics Data System (ADS)
Li, Jiangtao; Zhao, Gang; Liu, Haibo; Li, Fenghou; Liu, Xiaoli
2018-03-01
With ongoing industrialization and urbanization, electricity plays an increasingly important role in production and daily life, so power quality prediction is of growing significance. Traditional power quality analysis methods include power quality data compression, disturbance event pattern classification, and disturbance parameter calculation. Under certain conditions, these methods can predict power quality. This paper analyzes the temporal variation of power quality of one provincial power grid in China. The spatial distribution of power quality was then analyzed based on spatial autocorrelation. The paper demonstrates that this geographic research approach is effective for mining the latent information in power quality data.
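Spatial autocorrelation of a power-quality indicator across substations is typically quantified with a statistic such as Moran's I; a self-contained sketch with invented coordinates and values follows (the abstract does not state which statistic was used).

```python
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, size=(40, 2))   # substation coordinates (invented)
z = rng.normal(size=40)                   # power-quality indicator, e.g. THD

# Inverse-distance spatial weights, zero diagonal, row-standardized
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
W = np.where(d > 0, 1.0 / d, 0.0)
W /= W.sum(axis=1, keepdims=True)

# Moran's I = (n / sum(W)) * (z_c' W z_c) / (z_c' z_c), with z centered
zc = z - z.mean()
I = (len(z) / W.sum()) * (zc @ W @ zc) / (zc @ zc)
print("Moran's I:", round(I, 4))   # near 0 for spatially random data
```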
EUGÈNE'HOM: a generic similarity-based gene finder using multiple homologous sequences
Foissac, Sylvain; Bardou, Philippe; Moisan, Annick; Cros, Marie-Josée; Schiex, Thomas
2003-01-01
EUGÈNE'HOM is a gene prediction software for eukaryotic organisms based on comparative analysis. EUGÈNE'HOM is able to take into account multiple homologous sequences from more or less closely related organisms. It integrates the results of TBLASTX analysis, splice site and start codon prediction and a robust coding/non-coding probabilistic model which allows EUGÈNE'HOM to handle sequences from a variety of organisms. The current target of EUGÈNE'HOM is plant sequences. The EUGÈNE'HOM web site is available at http://genopole.toulouse.inra.fr/bioinfo/eugene/EuGeneHom/cgi-bin/EuGeneHom.pl. PMID:12824408
In Silico Analysis of Epitope-Based Vaccine Candidates against Hepatitis B Virus Polymerase Protein
Zheng, Juzeng; Lin, Xianfan; Wang, Xiuyan; Zheng, Liyu; Lan, Songsong; Jin, Sisi; Ou, Zhanfan; Wu, Jinming
2017-01-01
Hepatitis B virus (HBV) infection has persisted as a major public health problem due to the lack of an effective treatment for those chronically infected. Therapeutic vaccination holds promise, and targeting HBV polymerase is pivotal for viral eradication. In this research, a computational approach was employed to predict suitable HBV polymerase targeting multi-peptides for vaccine candidate selection. We then performed in-depth computational analysis to evaluate the predicted epitopes’ immunogenicity, conservation, population coverage, and toxicity. Lastly, molecular docking and MHC-peptide complex stabilization assay were utilized to determine the binding energy and affinity of epitopes to the HLA-A0201 molecule. Criteria-based analysis provided four predicted epitopes, RVTGGVFLV, VSIPWTHKV, YMDDVVLGA and HLYSHPIIL. Assay results indicated the lowest binding energy and high affinity to the HLA-A0201 molecule for epitopes VSIPWTHKV and YMDDVVLGA and epitopes RVTGGVFLV and VSIPWTHKV, respectively. Regions 307 to 320 and 377 to 387 were considered to have the highest probability to be involved in B cell epitopes. The T cell and B cell epitopes identified in this study are promising targets for an epitope-focused, peptide-based HBV vaccine, and provide insight into HBV-induced immune response. PMID:28509875
Model-driven development of covariances for spatiotemporal environmental health assessment.
Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George
2013-01-01
Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.
NASA Astrophysics Data System (ADS)
Cherumadanakadan Thelliyil, S.; Ravindran, A. M.; Giannakis, D.; Majda, A.
2016-12-01
An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISO) is introduced using the recently developed Nonlinear Laplacian Spectral Analysis (NLSA) algorithm. Previous studies have demonstrated the proficiency of NLSA in capturing the low-frequency variability and intermittency of a time series. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from the unfiltered daily GPCP rainfall data over the south Asian monsoon region. Two modes representing the full life cycle of the complex northeastward-propagating boreal summer MISO are identified from the hierarchy of Laplace-Beltrami eigenfunctions. These two MISO modes have a number of advantages over the conventionally used Extended Empirical Orthogonal Function (EEOF) MISO modes, including higher memory and better predictability, higher fractional variance over the western Pacific, Western Ghats and adjoining Arabian Sea regions, and a more realistic representation of the regional heat sources associated with the MISO. The skill of NLSA-based MISO indices in real-time prediction of MISO is demonstrated using hindcasts of CFSv2 extended-range prediction runs. It is shown that these indices yield a higher prediction skill than the other conventional indices, supporting the use of NLSA in real-time prediction of MISO. Real-time monitoring and prediction of MISO find application in the agriculture, construction and hydroelectric power sectors, and are hence an important component of monsoon prediction.
Yang, Shuai; Wang, Yu; Ao, Wengang; Bai, Yun; Li, Chuan
2018-01-01
Based on the consumption of fossil energy, the CO2 emissions of Chongqing from 1997 to 2015 are calculated and analyzed in this paper. Based on the calculation results, the consumption of fossil fuels and the corresponding CO2 emissions of Chongqing in 2020 are predicted, and supporting data and corresponding policies are provided for the government of Chongqing to reach its low-carbon emission goal in the '13th Five-Year Plan'. The results of the analysis show a rapidly decreasing trend of CO2 emissions in Chongqing during the '12th Five-Year Plan', caused by the adjustment policy of the energy structure in Chongqing. Therefore, the analysis and prediction in this paper are primarily based on the adjustment of Chongqing's coal energy consumption. At the initial stage, a support vector regression (SVR) method is applied to predict the other fossil energy consumption and the corresponding CO2 emissions of Chongqing in 2020. Then, with the energy intensity of 2015 and the official target of CO2 intensity in 2020, the total fossil energy consumption and CO2 emissions of Chongqing in 2020 are predicted, respectively. From the above calculation results, the coal consumption and its corresponding CO2 emissions of Chongqing in 2020 are determined. To achieve the CO2 emissions goal of Chongqing in 2020, the coal consumption level and energy intensity of Chongqing are calculated, and adjustment strategies for the energy consumption structure in Chongqing are proposed. PMID:29547505
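The SVR extrapolation step, fitting a consumption history against year and then predicting 2020, takes a few lines with scikit-learn; the series below is an invented stand-in for Chongqing's fossil-energy data, and the kernel settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

years = np.arange(1997, 2016).reshape(-1, 1)
# Invented consumption series (million tonnes of coal equivalent)
cons = 8.0 + 0.45 * (years.ravel() - 1997) \
    + np.random.default_rng(6).normal(0, 0.3, len(years))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(years, cons)
print("predicted 2020 consumption:", round(model.predict([[2020]])[0], 2))
```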
NASA Technical Reports Server (NTRS)
Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga
2005-01-01
Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred
2015-03-01
To predict the outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating the accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included in this analysis, with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12 months, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival based on nomogram predictions for high-risk patients, although calibration plots showed a reasonable calibration for probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides a good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was proven.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs
2014-11-15
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.
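The described network structure (traffic-flow composition and average speed in, equivalent level Leq out) maps onto a small regressor. Below is a hedged sketch on synthetic data, with a linear model as the classical baseline for the comparative analysis; the Leq formula used to generate the data is an invented stand-in for measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
flow = rng.uniform(100, 2000, n)     # vehicles per hour
heavy = rng.uniform(0.0, 0.3, n)     # share of heavy vehicles in the flow
speed = rng.uniform(30, 90, n)       # average speed, km/h
# Invented ground truth: Leq roughly logarithmic in flow, in dB(A)
leq = 40 + 10 * np.log10(flow) + 8 * heavy + 0.05 * speed + rng.normal(0, 1, n)

X = np.column_stack([flow, heavy, speed])
Xtr, Xte, ytr, yte = train_test_split(X, leq, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0)).fit(Xtr, ytr)
lin = LinearRegression().fit(Xtr, ytr)
print("ANN    R^2:", round(ann.score(Xte, yte), 3))
print("linear R^2:", round(lin.score(Xte, yte), 3))
```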
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, J; Gong, G; Cui, Y
Purpose: To predict early pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multi-region analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods: In this institutional review board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with a high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. Results: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast wash-out were statistically significant (p < 0.05) after correcting for multiple testing, with areas under the ROC curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (p = 0.002) in leave-one-out cross validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). Conclusion: The heterogeneity of the tumor subregion associated with fast wash-out on DCE-MRI predicted early pathological response to neoadjuvant chemotherapy in breast cancer.
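The texture step, GLCM-based Haralick features computed inside each tumor subregion, is available in scikit-image. The sketch below uses a random patch, and the four properties chosen are common Haralick-type choices rather than necessarily the four used in the abstract.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
patch = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # stand-in for a subregion

# Co-occurrence matrix at distance 1 for two orientations, 64 gray levels
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).mean())
```

In the study's setting, these features would be computed per subregion at the pre- and during-NAC time points, with their change feeding the response classifier.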
Wright, Julie A.; Velicer, Wayne F.; Prochaska, James O.
2009-01-01
This study evaluated how well predictions from the transtheoretical model (TTM) generalized from smoking to diet. Longitudinal data were used from a randomized controlled trial on reducing dietary fat consumption in adults (n = 1207) recruited from primary care practices. Predictive power was evaluated by making a priori predictions of the magnitude of change expected in the TTM constructs of temptation, pros and cons, and 10 processes of change when an individual transitions between the stages of change. Generalizability was evaluated by testing predictions based on smoking data. Three sets of predictions were made for each stage: Precontemplation (PC), Contemplation (C) and Preparation (PR), based on stage transition categories of no progress, progress and regression determined by stage at baseline versus stage at the 12-month follow-up. Univariate analysis of variance between stage transition groups was used to calculate the effect size [omega squared (ω²)]. For diet predictions based on diet data, there was a high degree of confirmation: 92%, 95% and 92% for PC, C and PR, respectively. For diet predictions based on smoking data, 77%, 79% and 85% were confirmed, respectively, suggesting a moderate degree of generalizability. This study revised effect size estimates for future theory testing on the TTM applied to dietary fat. PMID:18400785
Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.
1999-01-01
A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with existing test data, except in structural applications where interlaminar stresses are important, since these may cause failure mechanisms such as debonding or delamination.
NASA Astrophysics Data System (ADS)
Zhao, Xiang-Feng; Shang, De-Guang; Sun, Yu-Juan; Song, Ming-Liang; Wang, Xiao-Wei
2018-01-01
The maximum shear strain and the normal strain excursion on the critical plane are regarded as the primary parameters of the crack driving force to establish a new short crack model in this paper. An equivalent strain-based intensity factor is proposed to correlate the short crack growth rate under multiaxial loading. According to the short crack model, a new method is proposed for multiaxial fatigue life prediction based on crack growth analysis. It is demonstrated that the method can be used under proportional and non-proportional loadings. The predicted results showed a good agreement with experimental lives in both high-cycle and low-cycle regions.
Riometer based Neural Network Prediction of Kp
NASA Astrophysics Data System (ADS)
Arnason, K. M.; Spanswick, E.; Chaddock, D.; Tabrizi, A. F.; Behjat, L.
2017-12-01
The Canadian Geospace Observatory Riometer Array is a network of 11 wide-beam riometers deployed across Central and Northern Canada. The geographic coverage of the network affords a near-continent-scale view of high-energy (>30 keV) electron precipitation at a very coarse spatial resolution. In this paper we present the first results from a neural-network-based analysis of riometer data. Trained on decades of riometer data, the neural network is tuned to predict a simple index of global geomagnetic activity (Kp) based solely on the information provided by the high-energy electron precipitation over Canada. We present results from various configurations of training and discuss the applicability of this technique for short-term prediction of geomagnetic activity.
Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery
NASA Astrophysics Data System (ADS)
Monteiro, Sildomar Takahashi; Minekawa, Yohei; Kosugi, Yukio; Akazawa, Tsuneya; Oda, Kunio
Hyperspectral image data provide a powerful tool for non-destructive crop analysis. This paper investigates a hyperspectral image data-processing method to predict the sweetness and amino acid content of soybean crops. Regression models based on artificial neural networks were developed in order to calculate the levels of sucrose, glucose, fructose, and nitrogen concentrations, which can be related to the sweetness and amino acid content of vegetables. A performance analysis was conducted comparing regression models obtained using different preprocessing methods, namely raw reflectance, the second derivative, and principal component analysis. The method is demonstrated using high-resolution hyperspectral data at wavelengths ranging from the visible to the near infrared, acquired from an experimental field of green vegetable soybeans. The best predictions were achieved using a nonlinear regression model of the second-derivative-transformed dataset. Glucose could be predicted with the greatest accuracy, followed by sucrose, fructose and nitrogen. The proposed method makes it possible to produce relatively accurate maps predicting the chemical content of soybean crop fields.
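The best-performing preprocessing here, the second derivative of the reflectance spectrum, is commonly computed with a Savitzky-Golay filter before the regression model (the abstract does not name the derivative estimator, so this is one standard choice). A sketch on a synthetic spectrum:

```python
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(400, 1000, 301)   # nm, visible to near-infrared
reflectance = np.exp(-((wavelengths - 680) / 60) ** 2) \
    + 0.02 * np.random.default_rng(9).standard_normal(301)

# Second-derivative spectrum; window length and polynomial order are tuning choices
d2 = savgol_filter(reflectance, window_length=15, polyorder=3, deriv=2)
print(d2.shape)   # same length as the input; fed to the neural-network regressor
```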
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
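Of the informal methods listed, GLUE is the simplest to sketch: sample parameter sets, keep the "behavioral" ones whose objective exceeds a threshold, and form prediction bounds from the behavioral simulations. The toy model below replaces HYMOD with a one-parameter function purely for illustration, and the 0.9 NSE threshold is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 50)
obs = np.exp(-3.0 * t) + rng.normal(0, 0.02, 50)   # synthetic "observed" runoff

def model(k):
    """Toy one-parameter stand-in for a rainfall-runoff model such as HYMOD."""
    return np.exp(-k * t)

# Monte Carlo sampling; Nash-Sutcliffe efficiency as the informal likelihood
ks = rng.uniform(0.5, 6.0, 2000)
sims = np.array([model(k) for k in ks])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

# Keep behavioral parameter sets and form 5-95% prediction bounds
# (likelihood weighting of the quantiles is omitted for brevity)
behavioral = sims[nse > 0.9]
lo, hi = np.quantile(behavioral, [0.05, 0.95], axis=0)
print("behavioral sets:", len(behavioral),
      "| bounds at t=0:", lo[0].round(3), hi[0].round(3))
```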
Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.
Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L
2017-10-01
The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: Lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
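The "state space velocity" marker can be approximated as the step-to-step distance of the EEG's spectral state vector. The hedged sketch below computes log band powers per epoch and their mean rate of change; this is our reading of the abstract, not the authors' exact state space algorithm, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(11)
eeg = rng.standard_normal(fs * 60)            # 1 min of one channel (synthetic)

bands = [(1, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta
epochs = eeg.reshape(-1, fs * 5)               # non-overlapping 5-s epochs

# Spectral state vector per epoch: log mean power in each band
states = []
for ep in epochs:
    f, p = welch(ep, fs=fs, nperseg=fs * 2)
    states.append([p[(f >= lo) & (f < hi)].mean() for lo, hi in bands])
states = np.log(np.array(states))

# "Velocity": mean Euclidean step between consecutive spectral states
velocity = np.linalg.norm(np.diff(states, axis=0), axis=1).mean()
print("mean state-space velocity:", round(float(velocity), 3))
```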
On Burst Detection and Prediction in Retweeting Sequence
2015-05-22
We conduct a comprehensive empirical analysis of a large microblogging dataset collected from Sina Weibo and report our observations of burst... whether and how accurately we can predict bursts using classifiers based on the extracted features. Our empirical study of the Sina Weibo data shows the... feasibility of burst prediction using appropriately extracted features and classic classifiers.
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p < 0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
MEMS based shock pulse detection sensor for improved rotary Stirling cooler end of life prediction
NASA Astrophysics Data System (ADS)
Hübner, M.; Münzberg, M.
2018-05-01
The widespread use of rotary Stirling coolers in high-performance thermal imagers used for critical 24/7 surveillance tasks justifies any effort to significantly enhance the reliability and predictable uptime of those coolers. Typically the lifetime of the whole imaging device is limited by the continuous wear and eventual failure of the rotary compressor of the Stirling cooler, especially failure of the bearings it comprises. MTTF-based lifetime predictions, even those based on refined MTTF models taking operational-scenario-dependent scaling factors into account, still lack the precision to forecast accurately the end of life (EOL) of individual coolers. Consequently, preventive maintenance of individual coolers to avoid failures of the main sensor in critical operational scenarios is very costly or even useless. We have developed an integrated test method based on 'Micro Electromechanical Systems', so-called MEMS sensors, which significantly improves the cooler EOL prediction. Recently available commercial MEMS acceleration sensors have mechanical resonance frequencies up to 50 kHz. They are able to detect solid-borne shock pulses in the cooler structure, originating from, e.g., metal-on-metal impacts driven by periodic forces acting on moving inner parts of the rotary compressor within wear-dependent slack and play. The impact-driven transient shock pulse analysis uses only the high-frequency signal (>10 kHz) and therefore differs from the commonly used broadband low-frequency vibrational analysis of reciprocating machines. It offers a direct indicator of the individual state of wear. The predictive cooler lifetime model based on the shock pulse analysis is presented and results are discussed.
Novel Method of Production Decline Analysis
NASA Astrophysics Data System (ADS)
Xie, Shan; Lan, Yifei; He, Lei; Jiao, Yang; Wu, Yong
2018-02-01
Arps decline-curve analysis is the most commonly used method in oil and gas fields due to its minimal data requirements and ease of application. Production-decline prediction based on Arps analysis relies on knowing the decline type. However, when the decline coefficients are very close under different decline types, it is difficult to directly recognize the decline trend of the matched curves. To address these difficulties, a new dynamic decline prediction model based on the simulation results of multi-factor response experiments is introduced, using multiple linear regression of the influencing factors. First, interaction experimental schemes are designed according to a study of the factors affecting production decline. Based on the simulated results, the annual decline rate is predicted by the decline model. The new method is then applied to the A gas field of the Ordos Basin as an example to illustrate its reliability. The results confirm that the new model can directly predict the decline tendency without needing to recognize the decline type, and it also achieves high accuracy. Finally, the new method improves the evaluation of gas-well production decline in low-permeability gas reservoirs and provides technical support for further understanding of tight gas field development laws.
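For reference, the Arps decline forms differ only in the exponent b (b→0 exponential, 0<b<1 hyperbolic, b=1 harmonic), which is why nearly equal fitted coefficients make the decline type hard to tell apart. A short sketch fitting qi, Di and b to a synthetic rate history follows, using a scipy curve fit rather than the paper's regression-based model.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps(t, qi, di, b):
    """Arps hyperbolic rate decline; b near 0 is exponential, b = 1 is harmonic."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(1, 61)                    # months on production
rng = np.random.default_rng(12)
q = arps(t, 1000.0, 0.08, 0.5) * rng.normal(1, 0.02, t.size)   # noisy rates

(qi, di, b), _ = curve_fit(arps, t, q, p0=(900.0, 0.05, 0.6),
                           bounds=([0, 0, 0.01], [5000, 1, 1]))
print(f"qi={qi:.0f}, Di={di:.3f}, b={b:.2f}")
annual_decline = 1 - arps(12, qi, di, b) / arps(0, qi, di, b)
print("first-year decline rate:", round(float(annual_decline), 3))
```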
Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying
2011-08-01
The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
ten Haaf, Twan; Weijs, Peter J. M.
2014-01-01
Introduction: Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat-free mass (FFM). Therefore, an REE predictive equation specific to recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational-athlete-specific equation. Methods: 90 (53M, 37F) adult athletes, exercising on average 9.1±5.0 hours a week and 5.0±1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29); FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop new FFM-based and weight-based REE predictive equations. The percentage of accurate predictions (within 10% of measured REE), percentage bias, root mean square error and limits of agreement were calculated. Results: The Cunningham equation, the new weight-based equation and the new FFM-based equation performed equally well. De Lorenzo's equation predicted REE less accurately, but better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. Conclusion: For a population of (Dutch) recreational athletes, REE can accurately be predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and other generally used equations fail, the new weight-based equation is advised for use in sports nutrition. PMID:25275434
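As a concrete anchor, the Cunningham equation referenced here is usually written REE(kcal/d) = 500 + 22 x FFM(kg), and the abstract's accuracy metric (share of predictions within 10% of measured REE) is equally simple. A small sketch with invented measurements (the 7% error level is an assumption, not a study value):

```python
import numpy as np

def cunningham(ffm_kg):
    """Cunningham (1980) REE prediction in kcal/day from fat-free mass (kg)."""
    return 500.0 + 22.0 * ffm_kg

rng = np.random.default_rng(13)
ffm = rng.normal(62, 8, 90)                             # FFM of 90 athletes (invented)
ree_measured = cunningham(ffm) * rng.normal(1.0, 0.07, 90)

pred = cunningham(ffm)
accurate = np.abs(pred - ree_measured) / ree_measured <= 0.10
print("accurate predictions: %.0f%%" % (100 * accurate.mean()))
print("bias: %.1f%%" % (100 * ((pred - ree_measured) / ree_measured).mean()))
```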
Analysis and modelling of septic shock microarray data using Singular Value Decomposition.
Allanki, Srinivas; Dixit, Madhulika; Thangaraj, Paul; Sinha, Nandan Kumar
2017-06-01
As microarray is a high-throughput technique, enormous amounts of microarray data have been generated, and there arises a need for more efficient analysis techniques, in terms of speed and accuracy. Finding the differentially expressed genes based on just fold change and p-value might not extract all the vital biological signals that occur at a lower gene expression level. Besides this, numerous mathematical models have been generated to predict the clinical outcome from microarray data, while very few, if any, aim at predicting the vital genes that are important in a disease progression. Such models help a basic researcher narrow down and concentrate on a promising set of genes, which leads to the discovery of gene-based therapies. In this article, as a first objective, we have used the less widely known Singular Value Decomposition (SVD) technique to build a microarray data analysis tool that works with gene expression patterns and the intrinsic structure of the data in an unsupervised manner. We have re-analysed microarray data over the clinical course of septic shock from Cazalis et al. (2014) and have shown that our proposed analysis provides additional information compared to the conventional method. As a second objective, we developed a novel mathematical model that predicts a set of vital genes in the disease progression; it works by generating samples in the continuum between health and disease, using a simple normal-distribution-based random number generator. We also verify that most of the predicted genes are indeed related to septic shock. Copyright © 2017 Elsevier Inc. All rights reserved.
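The SVD backbone of such an analysis is compact in NumPy: decompose the gene-by-sample expression matrix, inspect the variance captured by each "eigengene", and project samples onto the leading modes. Shapes and names below are illustrative, not those of the septic shock dataset.

```python
import numpy as np

rng = np.random.default_rng(14)
X = rng.normal(size=(500, 24))        # 500 genes x 24 samples (log-expression)
X -= X.mean(axis=1, keepdims=True)    # center each gene across samples

U, s, Vt = np.linalg.svd(X, full_matrices=False)
frac = s**2 / (s**2).sum()            # fraction of variance captured by each mode
print("top 3 modes capture:", np.round(frac[:3], 3))

# Project samples onto the two leading eigengenes for downstream modeling
sample_coords = Vt[:2].T * s[:2]
print(sample_coords.shape)            # (24, 2)
```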
Spek, J W; Dijkstra, J; van Duinkerken, G; Hendriks, W H; Bannink, A
2013-07-01
A meta-analysis was conducted on the effect of dietary and animal factors on the excretion of total urinary nitrogen (UN) and urinary urea nitrogen (UUN) in lactating dairy cattle in North America (NA) and northwestern Europe (EU). Mean treatment data were used from 47 trials carried out in NA and EU. Mixed model analysis was used with experiment included as a random effect and all other factors, consisting of dietary and animal characteristics, included as fixed effects. Fixed factors were nested within continent (EU or NA). A distinction was made between urinary excretions based on either urine spot samples or calculated assuming a zero N balance, and excretions that were determined by total collection of urine only. Moreover, with the subset of data based on total collection of urine, a new data set was created by calculating urinary N excretion assuming a zero N balance. Comparison with the original subset of data allowed for examining the effect of such an assumption on the relationship established between milk urea N (MUN) concentration and UN. Of all single dietary and animal factors evaluated to predict N excretion in urine, MUN and dietary crude protein (CP) concentration were by far the best predictors. Urinary N excretion was best predicted by the combination of MUN, CP, and dry matter intake, whereas UUN was best predicted by the combination of MUN and CP. All other factors did not improve or only marginally improved the prediction of UN or UUN. The relationship between UN and MUN differed between NA and EU, with higher estimated regression coefficients for MUN for the NA data set. Precision of UN and UUN prediction improved substantially when only UN or UUN data based on total collection of urine were used. The relationship between UN and MUN for the NA data set, but not for the EU data set, was substantially altered when UN was calculated assuming a zero N balance instead of being based on the total collection of urine. According to results of the present meta-analysis, UN and UUN are best predicted by the combination of MUN and CP and that, in regard to precision and accuracy, prediction equations for UN and UUN should be derived from the total collection of urine. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Song, Yang; Zhang, Yu-Dong; Yan, Xu; Liu, Hui; Zhou, Minxiong; Hu, Bingwen; Yang, Guang
2018-04-16
Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI). To develop an automatic approach based on a deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI. Retrospective. In all, 195 patients with localized PCa were collected from the PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively. T2-weighted, diffusion-weighted, and apparent diffusion coefficient images. A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis. Two-sided Wilcoxon signed-rank test with statistical significance set at 0.05. The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for testing datasets with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme. The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
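A compact VGG-style patch classifier along these lines (a sketch, not the authors' exact architecture; patch size, the 3-channel stacking of T2w/DWI/ADC, and layer widths are assumptions):

```python
import torch
import torch.nn as nn

class PatchDCNN(nn.Module):
    """VGG-inspired classifier for 3-channel 32x32 mp-MRI patches."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(256, 2))   # PCa vs. NC

    def forward(self, x):
        return self.classifier(self.features(x))

logits = PatchDCNN()(torch.randn(8, 3, 32, 32))   # a batch of 8 patches
print(logits.shape)   # torch.Size([8, 2])
```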
Analysis of free modeling predictions by RBO aleph in CASP11.
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2016-09-01
The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: improvements in components of a method do not necessarily lead to improvements of the entire method, which indicates that these components interact in ways that are poorly understood. This problem, if real, represents a significant obstacle to community-wide progress. Proteins 2016; 84(Suppl 1):87-104. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements and surface and satellite observations of contrails. Two groups of logistic models were created. The first group (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
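A toy logistic model of this kind (scikit-learn; the predictor set [RHi, T, w] and the synthetic data are stand-ins for the ARPS/RUC-derived fields, not the study's inputs):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # placeholder [RHi, T, w] predictor rows
# Synthetic rule: contrail persistence favored by high RHi and low T.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X[:400], y[:400])
print("holdout accuracy:", accuracy_score(y[400:], clf.predict(X[400:])))
```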
Buvé, Carolien; Van Bedts, Tine; Haenen, Annelien; Kebede, Biniam; Braekers, Roel; Hendrickx, Marc; Van Loey, Ann; Grauwet, Tara
2018-07-01
Accurate shelf-life dating of food products is crucial for consumers and industries. Therefore, in this study we applied a science-based approach for shelf-life assessment, including accelerated shelf-life testing (ASLT), acceptability testing, and the screening of analytical attributes for fast shelf-life predictions. Shelf-stable strawberry juice was selected as a case study. Ambient storage (20 °C) had no effect on the aroma-based acceptance of strawberry juice. The colour-based acceptability decreased during storage under ambient and accelerated (28-42 °C) conditions. The application of survival analysis showed that the colour-based shelf-life was reached in the early stages of storage (≤11 weeks) and that the shelf-life was shortened at higher temperatures. None of the selected attributes (a* and ΔE* values, anthocyanin and ascorbic acid content) is an ideal analytical marker for shelf-life predictions in the investigated temperature range (20-42 °C). Nevertheless, an overall analytical cut-off value for the whole temperature range can be selected. Colour changes of strawberry juice during storage are shelf-life limiting. Combining ASLT with acceptability testing made it possible to gain faster insight into the change in colour-based acceptability and to perform shelf-life predictions relying on scientific data. An analytical marker is a convenient tool for shelf-life predictions in the context of ASLT. © 2017 Society of Chemical Industry.
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder
Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, experimental data were acquired from 34 MDD patients and 30 healthy controls. A feature matrix was then constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method using the receiver operating characteristic (ROC) as the criterion. The most significant features were identified and then utilized in training and testing a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with the STFT and EMD, the WT analysis showed the highest classification accuracy: accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients. PMID:28152063
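A sketch of this pipeline on placeholder data (wavelet sub-band energies as features, single-feature ROC ranking, and an LR classifier with 10-fold CV; the array shapes and the db4 wavelet are assumptions):

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
eeg = rng.normal(size=(64, 19, 512))   # 64 subjects, 19 channels, 512 samples
y = rng.integers(0, 2, size=64)        # responder / non-responder labels

def wavelet_energies(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [np.sum(c**2) for c in coeffs]   # energy per sub-band

X = np.array([[e for ch in subj for e in wavelet_energies(ch)] for subj in eeg])

# Rank features by single-feature ROC AUC (distance from 0.5).
auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
top = np.argsort(np.abs(auc - 0.5))[::-1][:10]

# Note: for an unbiased estimate, feature selection should be nested in the CV.
scores = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))
print("mean 10-fold accuracy:", scores.mean())
```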
Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B
2001-02-01
A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)-(v), all of which prevent 'averaging' individual data points directly, first, by adjusting in the model for interindividual data differences due to design differences, secondly, by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly, by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.
NASA Astrophysics Data System (ADS)
Zhang, Nannan; Zhou, Kefa; Du, Xishihui
2017-04-01
Mineral prospectivity mapping (MPM) is a multi-step process that ranks promising target areas for further exploration. Fuzzy logic and fuzzy analytical hierarchy process (AHP) are knowledge-driven MPM approaches. In this study, both approaches were used for data processing, based on which MPM was performed for porphyry and hydrothermal vein copper deposits in the Dananhu-Tousuquan island arc, Xinjiang. The results of the two methods were then compared. The two methods combined expert experience and the Studentized contrast (S(C)) values of the weights-of-evidence approach to calculate the weights of 15 layers, and these layers were then integrated by the gamma operator (γ). Through prediction-area (P-A) plot analysis, the optimal γ for fuzzy logic and fuzzy AHP was determined as 0.95 and 0.93, respectively. The thresholds corresponding to different levels of metallogenic probability were defined via concentration-area (C-A) fractal analysis. The prediction performances of the two methods were compared on this basis. The results showed that in MPM based on fuzzy logic, the area under the receiver operating characteristic (ROC) curve was 0.806 and 81.48% of the known deposits were predicted, whereas in MPM based on fuzzy AHP, the area under the ROC curve was 0.862 and 92.59% of the known deposits were predicted. Therefore, prediction based on fuzzy AHP is more accurate and can provide directions for future prospecting.
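For reference, the fuzzy gamma operator used to integrate evidence layers has a standard form (textbook formulation, not code from the study): it interpolates between the fuzzy algebraic product (γ = 0) and the fuzzy algebraic sum (γ = 1).

```python
import numpy as np

def fuzzy_gamma(memberships, gamma):
    """Blend the fuzzy algebraic sum and product of layer memberships."""
    m = np.asarray(memberships, dtype=float)
    algebraic_sum = 1.0 - np.prod(1.0 - m)
    algebraic_product = np.prod(m)
    return (algebraic_sum ** gamma) * (algebraic_product ** (1.0 - gamma))

# Example: integrating three evidence-layer memberships with gamma = 0.95.
print(fuzzy_gamma([0.7, 0.4, 0.9], 0.95))
```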
Effects of urban microcellular environments on ray-tracing-based coverage predictions.
Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing
2016-09-01
The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach for studying wave-propagation characteristics. Under urban microcellular environments, the RT method depends strongly on detailed environmental information. The aim of this paper is to help select the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After outlining the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometric information (building layout) on RT-based coverage prediction, two different artificially erroneous maps are generated from the original digital map, and a systematic analysis is performed by comparing predictions based on the erroneous maps with measurements and with predictions based on the original digital map. To strengthen the conclusions, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the effect of the electrical parameters on the accuracy of the RT model's predicted results, the dielectric constant and conductivity of building materials are set to different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
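The textbook regression prediction interval conveys the idea (a generic sketch, not the NASA implementation): ŷ(x0) ± t·s·sqrt(1 + x0ᵀ(XᵀX)⁻¹x0).

```python
import numpy as np
from scipy import stats

def prediction_interval(X, y, x0, alpha=0.05):
    """Two-sided (1 - alpha) prediction interval for a new observation at x0."""
    X = np.column_stack([np.ones(len(X)), X])        # add intercept column
    x0 = np.concatenate([[1.0], np.atleast_1d(x0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof                         # residual variance
    se = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
    t = stats.t.ppf(1.0 - alpha / 2.0, dof)
    yhat = x0 @ beta
    return yhat - t * se, yhat + t * se
```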
Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo
NASA Astrophysics Data System (ADS)
Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng
2016-01-01
The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks, in which link prediction is a key problem. Although numerous attempts have been made at link prediction based on network structure, node attributes, and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper addresses Sina Weibo, the largest Chinese microblog platform, proposes the hypothesis that information diffusion influences link creation, and verifies the hypothesis through real data analysis. We also extract an important feature from the information diffusion process, which is used to improve link prediction performance. Finally, experimental results on the Sina Weibo dataset demonstrate the effectiveness of our methods.
Assessment of traffic noise levels in urban areas using different soft computing techniques.
Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D
2016-10-01
Available traffic noise prediction models are usually based on regression analysis of experimental data; this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed, and their predictions are compared with data collected by traffic noise monitoring in urban areas, as well as with predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.
Potentiality Prediction of Electric Power Replacement Based on Power Market Development Strategy
NASA Astrophysics Data System (ADS)
Miao, Bo; Yang, Shuo; Liu, Qiang; Lin, Jingyi; Zhao, Le; Liu, Chang; Li, Bin
2017-05-01
The application of electric power replacement plays an important role in promoting energy conservation and emission reduction in our country. To exploit the potentiality of regional electric power replacement, regional GDP (gross domestic product) and energy consumption are taken as potentiality evaluation indicators. Principal component factors are extracted with PCA (principal component analysis), and an integrated analysis is made of the potentiality of electric power replacement across the country's regions; a single region is then taken as the research object, and its potentiality of electric power replacement is defined and quantified. An analytical model for the potentiality of multi-scenario electric power replacement is developed, and energy consumption is predicted with the grey prediction model. The relevant theoretical research is utilized to realize prediction analysis of the potentiality of multi-scenario electric power replacement.
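The grey prediction step typically uses the GM(1,1) model; a standard textbook implementation is sketched below (the paper's exact variant may differ, and the series values are made up):

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Forecast a short, positive series with the GM(1,1) grey model."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # de-accumulate
    return x0_hat[n:]                             # forecast horizon only

# Illustrative energy-consumption series (arbitrary units).
print(gm11_forecast([120.0, 131.2, 142.9, 155.1, 168.4]))
```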
ERIC Educational Resources Information Center
Yeo, Seungsoo
2010-01-01
The purpose of this synthesis was to examine the relationship between Curriculum-Based Measurement (CBM) and statewide achievement tests in reading. A multilevel meta-analysis was used to calculate the correlation coefficient of the population for 27 studies that met the inclusion criteria. Results showed an overall large correlation coefficient…
Improved Bond Equations for Fiber-Reinforced Polymer Bars in Concrete
Pour, Sadaf Moallemi; Alam, M. Shahria; Milani, Abbas S.
2016-01-01
This paper explores a set of new equations to predict the bond strength between fiber-reinforced polymer (FRP) rebar and concrete. The proposed equations are based on a comprehensive statistical analysis of existing experimental results in the literature. The parameters most affecting the bond behavior of FRP-reinforced concrete were first identified by applying a factorial analysis to part of the available database. Then the database, which contains 250 pullout tests, was divided into four groups based on concrete compressive strength and rebar surface. Afterward, nonlinear regression analysis was performed for each study group in order to determine the bond equations. The results show that the proposed equations predict bond strengths more accurately than previously reported models. PMID:28773859
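An illustrative nonlinear fit of this kind with SciPy (the power-law form and the data below are assumptions for demonstration, not the authors' fitted equation):

```python
import numpy as np
from scipy.optimize import curve_fit

def bond_model(X, c1, c2, c3):
    """Assumed power-law bond-strength form in f'c and embedment ratio."""
    fc, embed_ratio = X
    return c1 * fc**c2 * embed_ratio**c3

fc = np.array([30, 40, 50, 60, 35, 45], dtype=float)    # concrete f'c, MPa
ld_db = np.array([5, 5, 10, 10, 15, 15], dtype=float)   # embedment l_d/d_b
tau = np.array([12.1, 14.0, 11.2, 12.5, 9.8, 10.9])     # bond strength, MPa

popt, _ = curve_fit(bond_model, (fc, ld_db), tau, p0=[1.0, 0.5, -0.2])
print("fitted coefficients:", popt)
```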
Coupled rotor/airframe vibration analysis
NASA Technical Reports Server (NTRS)
Sopher, R.; Studwell, R. E.; Cassarino, S.; Kottapalli, S. B. R.
1982-01-01
A coupled rotor/airframe vibration analysis, developed as a design tool for predicting helicopter vibrations and as a research tool to quantify the effects of structural properties, aerodynamic interactions, and vibration reduction devices on vehicle vibration levels, is described. The analysis consists of a base program that uses an impedance-matching technique to represent the coupled rotor/airframe dynamics of the system, supported by inputs from several external programs that supply sophisticated rotor and airframe aerodynamic and structural dynamic representations. The theoretical background, computer program capabilities, and limited correlation results are presented in this report. Correlation with scale-model wind-tunnel results shows that the analysis can adequately predict trends of vibration variation with airspeed and higher harmonic control effects. Predictions of absolute vibration levels were found to be very sensitive to modal characteristics, and results were not representative of measured values.
Mocellin, Simone; Ambrosi, Alessandro; Montesco, Maria Cristina; Foletto, Mirto; Zavagno, Giorgio; Nitti, Donato; Lise, Mario; Rossi, Carlo Riccardo
2006-08-01
Currently, approximately 80% of melanoma patients undergoing sentinel node biopsy (SNB) have negative sentinel lymph nodes (SLNs), and no prediction system is reliable enough to be implemented in the clinical setting to reduce the number of SNB procedures. In this study, the predictive power of support vector machine (SVM)-based statistical analysis was tested. The clinical records of 246 patients who underwent SNB at our institution were used for this analysis. The following clinicopathologic variables were considered: the patient's age and sex and the tumor's histological subtype, Breslow thickness, Clark level, ulceration, mitotic index, lymphocyte infiltration, regression, angiolymphatic invasion, microsatellitosis, and growth phase. The results of SVM-based prediction of SLN status were compared with those achieved with logistic regression. The SLN positivity rate was 22% (52 of 234). When the accuracy was ≥80%, the negative predictive value, positive predictive value, specificity, and sensitivity were 98%, 54%, 94%, and 77% using SVM and 82%, 41%, 69%, and 93% using logistic regression, respectively. Moreover, SVM was associated with a diagnostic error of 1% and an SNB reduction of 60%, whereas logistic regression was associated with a diagnostic error of 15% and an SNB reduction of 73%. The results from this pilot study suggest that SVM-based prediction of SLN status might be evaluated as a prognostic method to avoid the SNB procedure in 60% of patients currently eligible, with a very low error rate. If validated in larger series, this strategy would lead to obvious advantages in terms of both patient quality of life and costs for the health care system.
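A sketch of such a comparison on placeholder data (scikit-learn; the synthetic features stand in for the clinicopathologic variables listed above):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(234, 12))                                    # predictors
y = (X[:, 0] + rng.normal(scale=1.0, size=234) > 0).astype(int)   # SLN status

for name, clf in [("SVM", SVC()), ("LR", LogisticRegression(max_iter=500))]:
    pred = cross_val_predict(clf, X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"{name}: NPV={tn / (tn + fn):.2f}  PPV={tp / (tp + fp):.2f}")
```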
Gray, Benjamin J; Bracken, Richard M; Turner, Daniel; Morgan, Kerry; Thomas, Michael; Williams, Sally P; Williams, Meurig; Rice, Sam; Stephens, Jeffrey W
2015-01-01
Background: Use of a validated risk-assessment tool to identify individuals at high risk of developing type 2 diabetes is currently recommended. It is under-reported, however, whether a different risk tool alters the predicted risk of an individual. Aim: This study explored any differences between commonly used validated risk-assessment tools for type 2 diabetes. Design and setting: Cross-sectional analysis of individuals who participated in a workplace-based risk assessment in Carmarthenshire, South Wales. Method: Retrospective analysis of 676 individuals (389 females and 287 males) who participated in a workplace-based diabetes risk-assessment initiative. Ten-year risk of type 2 diabetes was predicted using the validated QDiabetes®, Leicester Risk Assessment (LRA), FINDRISC, and Cambridge Risk Score (CRS) algorithms. Results: Differences between the risk-assessment tools were apparent following retrospective analysis of individuals. CRS categorised the highest proportion (13.6%) of individuals at ‘high risk’ followed by FINDRISC (6.6%), QDiabetes (6.1%), and, finally, the LRA was the most conservative risk tool (3.1%). Following further analysis by sex, over one-quarter of males were categorised at high risk using CRS (25.4%), whereas a greater percentage of females were categorised as high risk using FINDRISC (7.8%). Conclusion: The adoption of a different valid risk-assessment tool can alter the predicted risk of an individual and caution should be used to identify those individuals who really are at high risk of type 2 diabetes. PMID:26541180
NASA Astrophysics Data System (ADS)
Castedo, Ricardo; de la Vega-Panizo, Rogelio; Fernández-Hernández, Marta; Paredes, Carlos
2015-02-01
A key requirement for effective coastal zone management is good knowledge of historical rates of change and the ability to predict future shoreline evolution, especially for rapidly eroding areas. Historical shoreline recession analysis was used for the prediction of future cliff shoreline positions along a 9 km section between Bridlington and Hornsea, on the northern part of the Holderness Coast, UK. The analysis was based on historical maps and aerial photographs dating from 1852 to 2011, using the Digital Shoreline Analysis System (DSAS) 4.3, an extension of ESRI's ArcInfo 10.x. The prediction of future shorelines was performed for the next 40 years using a variety of techniques, ranging from extrapolation of historical data and geometric approaches such as historical trend analysis to a process-response numerical model that incorporates physically based equations and geotechnical stability analysis. With climate change and sea-level rise implying that historical rates of change may not be a reliable guide to the future, enhanced visualization of the evolving coastline has the potential to improve awareness of these changing conditions. Following the IPCC (2013) report, two sea-level rise rates, 2 mm/yr and 6 mm/yr, were used to estimate future shoreline conditions. This study illustrated that good predictive models, once their limitations are estimated or at least defined, are available for use by managers, planners, engineers, scientists, and the public to make better decisions regarding coastal management, development, and erosion-control strategies.
NASA Astrophysics Data System (ADS)
Gu, Rongbao; Shao, Yanmin
2016-07-01
In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed, and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for longer periods. This establishes the horizon over which singular value decomposition entropy predicts the stock market, extending the result of Caraiani (2014). The result also reflects an essential characteristic of the stock market as a chaotic dynamic system.
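The core quantity is the entropy of the normalized singular spectrum of a trajectory (embedding) matrix; a common single-scale definition is sketched below (the paper's multi-scale, DCCA-based construction adds further steps):

```python
import numpy as np

def svd_entropy(x, m=10):
    """Entropy of normalized singular values of the m-lag trajectory matrix."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    traj = np.array([x[i:i + m] for i in range(n)])   # delay-embedding matrix
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                                      # guard against log(0)
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
print(svd_entropy(np.cumsum(rng.normal(size=500))))   # random-walk example
```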
Symplectic geometry spectrum regression for prediction of noisy time series
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie
2016-05-01
We present the symplectic geometry spectrum regression (SGSR) technique, as well as a regularized method based on SGSR, for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher-order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested on three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body).
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Yu, Shao-Nan; Liu, Gui-Feng; Li, Xue-Feng; Fu, Bao-Hong; Dong, Li-Xin; Zhang, Shu-Hua
2017-12-01
This network meta-analysis (NMA) was conducted to compare the predictive value of 14 SNPs in eight DNA repair genes on the efficacy of platinum-based chemotherapy in patients with non-small cell lung cancer (NSCLC). These included ERCC1 (rs11615, rs3212986, rs3212948), XRCC1 (rs25487, rs25489, rs1799782), XPD (rs13181, rs1799793), XPG (rs1047768, rs17655), XPA (rs1800975), XRCC3 (rs861539), APE1 (rs3136820), and RRM1 (rs1042858). The PubMed and Cochrane Library databases were reviewed from their inception to February 2017, and studies which met our inclusion criteria were included in our investigation. This network meta-analysis combines direct and indirect evidence to assess the predictive value of the 14 SNPs in eight DNA repair genes on the efficacy of platinum-based chemotherapy in NSCLC. We evaluated the predictive value through the use of odds ratios (OR) and by drawing surface under the cumulative ranking curves (SUCRA). A total of 26 eligible cohort studies were enrolled in this NMA. The pairwise meta-analysis indicated that in terms of overall response ratio (ORR), ERCC1 (rs11615), XRCC1 (rs25487, rs1799782), and XPD (rs13181) polymorphisms are associated with the efficacy of platinum-based chemotherapy in NSCLC. The result of this NMA suggests that there is no significant difference in the predictive value of the eight DNA repair genes on the efficacy of platinum-based chemotherapy in NSCLC patients. The rank of SUCRA values of the 14 SNPs in the eight DNA repair genes was: XPD (rs1799793) → ERCC1 (rs3212986) → XPA (rs1800975) → ERCC1 (rs3212948) → XRCC1 (rs25487) → XRCC3 (rs861539) → APE1 (rs3136820) → ERCC1 (rs11615) → XRCC1 (rs1799782) → RRM1 (rs1042858) → XPD (rs13181) → XPG (rs1047768) → XPG (rs17655) → XRCC1 (rs25489). ERCC1 (rs11615), XRCC1 (rs25487, rs1799782), and XPD (rs13181) polymorphisms were better predictors in evaluating the efficacy of platinum-based chemotherapy in NSCLC patients. J. Cell. Biochem. 118: 4782-4791, 2017. © 2017 Wiley Periodicals, Inc.
Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.
Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias
2015-06-25
Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44K microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.
A statistical software tool, Stream Fish Community Predictor (SFCP), based on EMAP stream sampling in the mid-Atlantic Highlands, was developed to predict stream fish communities using stream and watershed characteristics. Step one in the tool development was a cluster analysis t...
PREDICTING ER BINDING AFFINITY FOR EDC RANKING AND PRIORITIZATION: A COMPARISON OF THREE MODELS
A comparative analysis of how three COREPA models for ER binding affinity performed when used to predict potential estrogen receptor (ER) ligands is presented. Models I and II were developed based on training sets of 232 and 279 rat ER binding affinity measurements, respectively....
Health Literacy Predicts Cardiac Knowledge Gains in Cardiac Rehabilitation Participants
ERIC Educational Resources Information Center
Mattson, Colleen C.; Rawson, Katherine; Hughes, Joel W.; Waechter, Donna; Rosneck, James
2015-01-01
Objective: Health literacy is increasingly recognised as a potentially important patient characteristic related to patient education efforts. We evaluated whether health literacy would predict gains in knowledge after completion of patient education in cardiac rehabilitation. Method: This was a re-post observational analysis study design based on…
Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling
USDA-ARS?s Scientific Manuscript database
We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...
Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers
Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.
2018-01-01
Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are studied. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders, due to the tight linkage a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles the one of additive effects. PMID:29549092
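A minimal HGBLUP-flavoured sketch (illustrative only: fixed-length SNP blocks, one 0/1 feature per distinct haplotype, and ridge shrinkage as a stand-in for the GBLUP mixed model; data are placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
geno = rng.integers(0, 2, size=(200, 120))   # 200 inbred lines, 120 SNPs
y = rng.normal(size=200)                     # placeholder phenotypes

def haplotype_design(geno, block=4):
    """One-hot encode the distinct haplotypes in each fixed-length SNP block."""
    cols = []
    for start in range(0, geno.shape[1], block):
        blk = geno[:, start:start + block]
        _, codes = np.unique(blk, axis=0, return_inverse=True)
        cols.append(np.eye(codes.max() + 1)[codes])
    return np.hstack(cols)

H = haplotype_design(geno)
model = Ridge(alpha=float(len(y))).fit(H, y)   # ridge ~ GBLUP-type shrinkage
print(H.shape, model.score(H, y))
```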
Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong
2004-03-01
Some issues related to nondestructive evaluation of valid acidity in intact apples by the Fourier transform near infrared (FTNIR, 800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra recorded with a bifurcated optic fiber and the valid acidity. The data were analyzed by multivariate calibration methods such as partial least squares (PLS) analysis and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained when the segment length was 5 and the gap size was 10. Depending on data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a high correlation coefficient (0.871), a low RMSEP (0.0677), a low RMSEC (0.056), and a small difference between RMSEP and RMSEC. The results point to the feasibility of FTNIR spectral analysis for predicting fruit valid acidity nondestructively. The ratio of the data standard deviation to the root mean square error of prediction (SDR) should preferably exceed 3 for calibration models; however, the results here cannot yet meet the demands of practical application. Therefore, further study is required for better calibration and prediction.
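A sketch of the PLS calibration/prediction workflow with scikit-learn (placeholder spectra and acidity values; real inputs would be preprocessed FTNIR diffuse reflectance spectra):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
spectra = rng.normal(size=(120, 400))                # 120 apples x 400 wavelengths
acidity = rng.normal(loc=4.0, scale=0.1, size=120)   # placeholder valid acidity

train, test = np.arange(80), np.arange(80, 120)      # 80/40 split as in the study
pls = PLSRegression(n_components=8).fit(spectra[train], acidity[train])

rmsec = mean_squared_error(acidity[train], pls.predict(spectra[train])) ** 0.5
rmsep = mean_squared_error(acidity[test], pls.predict(spectra[test])) ** 0.5
print(f"RMSEC={rmsec:.4f}  RMSEP={rmsep:.4f}")
```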
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of urban traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probabilities of the roads upstream of the target road and then predicts the traffic flow at the next time step using the traffic flow equation. The Newton Interior-Point Method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction. PMID:27872637
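A toy illustration of the transfer-probability idea (all numbers made up): the next-interval flow on the target road is the sum of upstream flows weighted by their estimated transfer probabilities.

```python
import numpy as np

transfer_prob = np.array([0.35, 0.20, 0.10])      # upstream road -> target road
upstream_flow = np.array([420.0, 610.0, 150.0])   # vehicles in current interval

predicted_flow = transfer_prob @ upstream_flow
print(f"predicted target-road flow next interval: {predicted_flow:.0f} vehicles")
```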
A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Piascik, Robert S.; Newman, James C., Jr.
1999-01-01
An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.
NASA Astrophysics Data System (ADS)
Yao, Yao
2012-05-01
Hydraulic fracturing technology is widely used within the oil and gas industry for both waste injection and unconventional gas production wells. It is essential to predict the behavior of hydraulic fractures accurately based on an understanding of the fundamental mechanisms. The prevailing approach for hydraulic fracture modeling continues to rely on computational methods based on Linear Elastic Fracture Mechanics (LEFM). Generally, these methods give reasonable predictions for hard-rock hydraulic fracture processes, but they have inherent limitations, especially when fluid injection is performed in soft rock/sand or other non-conventional formations. These methods typically give very conservative predictions of fracture geometry and inaccurate estimates of the required fracture pressure. One of the reasons the LEFM-based methods fail to give accurate predictions for these materials is that the fracture process zone ahead of the crack tip and the softening effect should not be neglected in ductile rock fracture analysis. A 3D pore pressure cohesive zone model has been developed and applied to predict hydraulic fracturing under fluid injection. The cohesive zone method is a numerical tool developed to model crack initiation and growth in quasi-brittle materials while accounting for the material softening effect. The pore pressure cohesive zone model has been applied to investigate hydraulic fractures for different rock properties. The hydraulic fracture predictions for a three-layer water injection case have been compared using the pore pressure cohesive zone model with revised parameters, an LEFM-based pseudo-3D model, a Perkins-Kern-Nordgren (PKN) model, and an analytical solution. Based on the size of the fracture process zone and its effect on crack extension in ductile rock, the fundamental mechanical difference between LEFM- and cohesive-fracture-mechanics-based methods is discussed. An effective fracture toughness method has been proposed to account for the fracture process zone effect on ductile rock fracture.
Prediction of microcracking in composite laminates under thermomechanical loading
NASA Technical Reports Server (NTRS)
Maddocks, Jason R.; Mcmanus, Hugh L.
1995-01-01
Composite laminates used in space structures are exposed to both thermal and mechanical loads. Cracks form in the matrix, changing the laminate's thermoelastic properties. An analytical methodology is developed to predict microcrack density in a general laminate exposed to an arbitrary thermomechanical load history. The analysis uses a shear lag stress solution in conjunction with an energy-based cracking criterion. Experimental investigation was used to verify the analysis. Correlation between analysis and experiment is generally excellent. The analysis does not capture machining-induced cracking, or the observed delayed crack initiation in a few ply groups, but these errors do not prevent the model from being a useful preliminary design tool.
Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P
2018-02-01
The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
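A sketch of a five-predictor logistic model with an apparent and bootstrap-checked AUC (synthetic data; the predictor names follow the abstract, and the coefficients are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

rng = np.random.default_rng(6)
n = 2000
# Columns: n_reports, disproportionality, HCP reports, MAH reports, Naranjo score.
X = rng.normal(size=(n, 5))
y = (X @ np.array([0.8, 0.6, 0.3, 0.2, 0.4]) + rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])

# Bootstrap resampling of the AUC (apparent performance, not optimism-corrected).
boot = [roc_auc_score(y[idx], clf.predict_proba(X[idx])[:, 1])
        for idx in (resample(np.arange(n), random_state=b) for b in range(200))]
print(f"apparent AUC={auc:.3f}, bootstrap 2.5-97.5%: "
      f"{np.percentile(boot, 2.5):.3f}-{np.percentile(boot, 97.5):.3f}")
```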
Prediction of Composite Laminate Strength Properties Using a Refined Zigzag Plate Element
NASA Technical Reports Server (NTRS)
Barut, Atila; Madenci, Erdogan; Tessler, Alexander
2013-01-01
This study presents an approach that uses the refined zigzag element RZE(2,2) in conjunction with progressive failure criteria to predict the ultimate strength of composite laminates based on only ply-level strength properties. The methodology involves four major steps: (1) determination of accurate stress and strain fields under complex loading conditions using RZE(2,2)-based finite element analysis, (2) determination of failure locations and failure modes using the commonly accepted Hashin failure criteria, (3) recursive degradation of the material stiffness, and (4) non-linear incremental finite element analysis to obtain stress redistribution until global failure. The validity of this approach is established by considering published test data and predictions for (1) strength of laminates under various off-axis loadings, (2) strength of laminates with a hole under compression, and (3) strength of laminates with a hole under tension.
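For orientation, plane-stress Hashin-type checks take the form below (standard textbook expressions with a simplified matrix-compression term; the study applies such criteria within the RZE-based finite element analysis, and the stresses and allowables here are illustrative):

```python
def hashin_plane_stress(s11, s22, t12, XT, XC, YT, YC, S12):
    """Return which Hashin-type failure indices reach 1 under plane stress."""
    modes = {}
    if s11 >= 0:   # fiber tension
        modes["fiber_tension"] = (s11 / XT) ** 2 + (t12 / S12) ** 2
    else:          # fiber compression
        modes["fiber_compression"] = (s11 / XC) ** 2
    if s22 >= 0:   # matrix tension
        modes["matrix_tension"] = (s22 / YT) ** 2 + (t12 / S12) ** 2
    else:          # matrix compression (simplified variant)
        modes["matrix_compression"] = (s22 / YC) ** 2 + (t12 / S12) ** 2
    return {k: v >= 1.0 for k, v in modes.items()}

# Illustrative ply stresses (MPa) and carbon/epoxy-like allowables.
print(hashin_plane_stress(1600.0, 40.0, 60.0,
                          XT=2280.0, XC=1440.0, YT=57.0, YC=228.0, S12=71.0))
```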
Predicting Positive Education Outcomes for Emerging Adults in Mental Health Systems of Care.
Brennan, Eileen M; Nygren, Peggy; Stephens, Robert L; Croskey, Adrienne
2016-10-01
Emerging adults who receive services based on positive youth development models have shown an ability to shape their own life course to achieve positive goals. This paper reports a secondary data analysis from the Longitudinal Child and Family Outcome Study, including 248 culturally diverse youth ages 17 through 22 receiving mental health services in systems of care. After 12 months of services, school performance was positively related to youth ratings of school functioning and to service participation and satisfaction. Regression analysis revealed that young people's perceptions of their school functioning and their experience in services added significantly to the prediction of satisfactory school performance, even when controlling for sex and attendance. Finally, in addition to the expected predictors, participation in planning their own services significantly predicted enrollment in higher education for those who finished high school. Findings suggest that programs and practices based on positive youth development approaches can improve educational outcomes for emerging adults.
Gan, Heng Hui; Yan, Bingnan; Linforth, Robert S.T.; Fisk, Ian D.
2016-01-01
Headspace techniques have been extensively employed in food analysis to measure volatile compounds, which play a central role in the perceived quality of food. In this study atmospheric pressure chemical ionisation-mass spectrometry (APCI-MS), coupled with gas chromatography–mass spectrometry (GC–MS), was used to investigate the complex mix of volatile compounds present in Cheddar cheeses of different maturity, processing and recipes to enable characterisation of the cheeses based on their ripening stages. Partial least squares-linear discriminant analysis (PLS-DA) provided a 70% success rate in correct prediction of the age of the cheeses based on their key headspace volatile profiles. In addition to predicting maturity, the analytical results coupled with chemometrics offered a rapid and detailed profiling of the volatile component of Cheddar cheeses, which could offer a new tool for quality assessment and accelerate product development. PMID:26212994
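As a rough illustration of the chemometric step, PLS-DA can be implemented as PLS regression onto one-hot class labels with classification by argmax. The sketch below uses synthetic stand-ins for the volatile-compound intensities, not the authors' data or preprocessing.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 40))                 # 90 cheeses x 40 volatile signals
age = rng.integers(0, 3, size=90)             # 3 maturity classes
X[age == 1] += 0.8                            # inject class structure
X[age == 2] += 1.6

Y = np.eye(3)[age]                            # one-hot dummy matrix for PLS-DA
X_tr, X_te, Y_tr, Y_te, a_tr, a_te = train_test_split(X, Y, age, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)       # class = largest predicted dummy
print("correct prediction rate:", (pred == a_te).mean())
```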
NASA Astrophysics Data System (ADS)
Kodera, Yuki
2018-01-01
Large earthquakes with long rupture durations emit P wave energy throughout the rupture period. Incorporating late-onset P waves into earthquake early warning (EEW) algorithms could contribute to robust predictions of strong ground motion. Here I describe a technique to detect in real time P waves from growing ruptures to improve the timeliness of an EEW algorithm based on seismic wavefield estimation. The proposed P wave detector, which employs a simple polarization analysis, successfully detected P waves from strong motion generation areas of the 2011 Mw 9.0 Tohoku-oki earthquake rupture. An analysis using 23 large (M ≥ 7) events from Japan confirmed that seismic intensity predictions based on the P wave detector significantly increased lead times without appreciably decreasing the prediction accuracy. P waves from growing ruptures, being one of the fastest carriers of information on ongoing rupture development, have the potential to improve the performance of EEW systems.
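A simple polarization analysis of the kind mentioned can be sketched as an eigen-decomposition of the covariance of a short three-component window: P waves are rectilinear and arrive steeply, so high rectilinearity plus near-vertical incidence flags a P-like onset. The thresholds below are illustrative, not those of the cited detector.

```python
import numpy as np

def polarization(window):
    """window: (n_samples, 3) array of Z, N, E ground motion."""
    C = np.cov(window.T)
    w, v = np.linalg.eigh(C)                 # eigenvalues in ascending order
    l3, l2, l1 = w
    rectilinearity = 1.0 - (l2 + l3) / (2.0 * l1)
    incidence = np.degrees(np.arccos(abs(v[0, 2])))  # angle from vertical (Z)
    return rectilinearity, incidence

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
p_like = np.column_stack([np.sin(40 * t),                 # mostly vertical motion
                          0.1 * rng.normal(size=200),
                          0.1 * rng.normal(size=200)])
rect, inc = polarization(p_like)
print(f"rectilinearity={rect:.2f}, incidence={inc:.1f} deg -> P-like:",
      rect > 0.7 and inc < 35.0)
```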
Formability prediction for AHSS materials using damage models
NASA Astrophysics Data System (ADS)
Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara
2017-05-01
Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, high stiffness, and high work hardening at early stages of plastic deformation, pose many challenges in the sheet metal industry, mainly their low formability and their behaviour, which differs from that of traditional steels. This can be a challenging task, both for obtaining a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based, and phenomenological models. In the present paper, a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is performed using these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.
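For concreteness, the Johnson-Cook model referenced above predicts a fracture strain from stress triaxiality, strain rate, and temperature, with damage accumulated incrementally along the strain path until it reaches unity. The sketch below uses illustrative constants D1-D5 and a toy loading history, not the paper's calibration.

```python
import numpy as np

def jc_failure_strain(triax, rate_ratio=1.0, T_star=0.0,
                      D=(0.05, 3.44, -2.12, 0.002, 0.61)):
    """Johnson-Cook fracture strain; D1..D5 are illustrative constants."""
    D1, D2, D3, D4, D5 = D
    return ((D1 + D2 * np.exp(D3 * triax))
            * (1.0 + D4 * np.log(max(rate_ratio, 1e-12)))
            * (1.0 + D5 * T_star))

# Accumulate damage over a strain path; fracture is predicted when D >= 1.
damage, d_eps = 0.0, 1e-3
for step in range(2000):
    triax = 0.33 + 0.1 * np.sin(step / 200.0)   # toy triaxiality history
    damage += d_eps / jc_failure_strain(triax)
    if damage >= 1.0:
        print(f"fracture predicted at eq. plastic strain {step * d_eps:.3f}")
        break
```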
Analysis of temporal dynamics in imagery during acute limb ischemia and reperfusion
NASA Astrophysics Data System (ADS)
Irvine, John M.; Regan, John; Spain, Tammy A.; Caruso, Joseph D.; Rodriquez, Maricela; Luthra, Rajiv; Forsberg, Jonathon; Crane, Nicole J.; Elster, Eric
2014-03-01
Ischemia and reperfusion injuries present major challenges for both military and civilian medicine. Improved methods for assessing the effects and predicting outcome could guide treatment decisions. Specific issues related to ischemia and reperfusion injury can include complications arising from tourniquet use, such as microvascular leakage in the limb, loss of muscle strength, and systemic failures leading to hypotension and cardiac failure. Better methods for assessing the viability of limbs/tissues during ischemia and reducing complications arising from reperfusion are critical to improving clinical outcomes for at-risk patients. The purpose of this research is to develop and assess possible prediction models of outcome for acute limb ischemia using a pre-clinical model. Our model relies only on non-invasive imaging data acquired from an animal study. Outcome is measured by pathology and functional scores. We explore color, texture, and temporal features derived from both color and thermal motion imagery acquired during ischemia and reperfusion. The imagery features form the explanatory variables in a model for predicting outcome. Comparing model performance to outcome prediction based on direct observation of blood chemistry, blood gas, urinalysis, and physiological measurements provides a reference standard. Initial results show excellent performance for the imagery-based model compared to predictions based on direct measurements. This paper will present the models and supporting analysis, followed by recommendations for future investigations.
Shahlaei, M.; Saghaie, L.
2014-01-01
A quantitative structure–activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structure was established by means of principal component analysis (PCA) and least squares support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied to the prediction of the biological activities of pyrimidone derivatives that were not included in the modeling procedure. The resulting model showed high prediction ability, with a root mean square error of prediction of 0.460 for LS-SVM. The study provides a novel and effective approach for predicting biological activities of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors and discloses that LS-SVM can be used as a powerful chemometrics tool for QSAR studies. PMID:26339262
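LS-SVM regression has a convenient closed form: the dual variables come from a single linear system rather than a quadratic program. The following sketch (synthetic descriptors and activities, not the paper's dataset) shows the core computation.

```python
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))                    # 60 compounds x 8 descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=60)   # toy pIC50 values
predict = lssvm_fit(X, y)
print(f"training RMSE: {np.sqrt(((predict(X) - y) ** 2).mean()):.3f}")
```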
Simulation analysis of adaptive cruise prediction control
NASA Astrophysics Data System (ADS)
Zhang, Li; Cui, Sheng Min
2017-09-01
Predictive control is suitable for multi-variable and multi-constraint system control. In order to discuss the effect of predictive control on vehicle longitudinal motion, this paper establishes an expected spacing model by combining a variable-pitch spacing policy with a safety distance strategy. Model predictive control theory and an optimization method based on quadratic programming are designed to obtain and track the best expected acceleration trajectory quickly. Simulation models are established, including predictive and adaptive fuzzy control. Simulation results show that predictive control can realize the basic function of the system while ensuring safety. The application of the predictive and fuzzy adaptive algorithms in cruise conditions indicates that the predictive control effect is better.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
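The simplest procedure, MAP-1F-P, amounts to regressing locally observed storm loads on the regional prediction P and reusing the fitted relation at unmonitored local sites. A minimal sketch with synthetic data (the log-linear form is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.lognormal(mean=1.0, sigma=0.5, size=40)         # regional model predictions
observed = 0.7 * P * rng.lognormal(0.0, 0.2, size=40)   # local calibration data

# Fit a single-factor log-linear model: log(obs) = a + b * log(P)
b, a = np.polyfit(np.log(P), np.log(observed), deg=1)
adjusted = lambda P_new: np.exp(a) * P_new ** b         # 'adjusted' local model

print("adjusted prediction for P = 5.0:", round(adjusted(5.0), 2))
```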
Qi, Haishan; Lv, Mengmeng; Song, Kejing; Wen, Jianping
2017-05-01
Herein, a hyper-producing strain for ascomycin was engineered based on 13C-labeling experiments and elementary flux mode analysis (EFMA). First, the metabolism of the non-model organism Streptomyces hygroscopicus var. ascomyceticus SA68 was investigated and an updated network model was reconstructed using 13C-metabolic flux analysis. Based on the refined model, EFMA was further employed to predict genetic targets for higher ascomycin production. Chorismatase (FkbO) and pyruvate carboxylase (Pyc) were predicted as promising overexpression and deletion targets, respectively. The corresponding mutants TD-FkbO and TD-ΔPyc exhibited consistency between model prediction and experimental results. Finally, combined genetic manipulations were performed, yielding a high-yield ascomycin engineering strain, TD-ΔPyc-FkbO, with production up to 610 mg/L, an 84.8% improvement over the parent strain SA68. These results demonstrate that the integration of 13C-labeling experiments and in silico pathway analysis can serve as a promising strategy to enhance production of ascomycin, as well as other valuable products. Biotechnol. Bioeng. 2017;114: 1036-1044. © 2016 Wiley Periodicals, Inc.
Chen, Xuewen; Alonso, Ana P; Allen, Doug K; Reed, Jennifer L; Shachar-Hill, Yair
2011-01-01
Genome-based Flux Balance Analysis (FBA) and steady-state isotopic-labeling-based Metabolic Flux Analysis (MFA) are complementary approaches to predicting and measuring the operation and regulation of metabolic networks. Here, genome-derived models of Escherichia coli (E. coli) metabolism were used for FBA and ¹³C-MFA analyses of aerobic and anaerobic growth of wild-type E. coli (K-12 MG1655) cells. Validated MFA flux maps reveal that the fraction of maintenance ATP consumption in total ATP production is about 14% higher under anaerobic (51.1%) than aerobic conditions (37.2%). FBA revealed that the increased ATP utilization is consumed by ATP synthase to secrete protons generated by fermentation. The TCA cycle is shown to be incomplete in aerobically growing cells, and submaximal growth is due to limited oxidative phosphorylation. FBA was successful in predicting product secretion rates in aerobic culture if both glucose and oxygen uptake measurements were constrained, but the most-frequently predicted values of internal fluxes yielded from sampling the feasible space differ substantially from MFA-derived fluxes. © 2010 Elsevier Inc. All rights reserved.
Applications of a damage tolerance analysis methodology in aircraft design and production
NASA Technical Reports Server (NTRS)
Woodward, M. R.; Owens, S. D.; Law, G. E.; Mignery, L. A.
1992-01-01
Objectives of customer mandated aircraft structural integrity initiatives in design are to guide material selection, to incorporate fracture resistant concepts in the design, to utilize damage tolerance based allowables and planned inspection procedures necessary to enhance the safety and reliability of manned flight vehicles. However, validated fracture analysis tools for composite structures are needed to accomplish these objectives in a timely and economical manner. This paper briefly describes the development, validation, and application of a damage tolerance methodology for composite airframe structures. A closed-form analysis code, entitled SUBLAM was developed to predict the critical biaxial strain state necessary to cause sublaminate buckling-induced delamination extension in an impact damaged composite laminate. An embedded elliptical delamination separating a thin sublaminate from a thick parent laminate is modelled. Predicted failure strains were correlated against a variety of experimental data that included results from compression after impact coupon and element tests. An integrated analysis package was developed to predict damage tolerance based margin-of-safety (MS) using NASTRAN generated loads and element information. Damage tolerance aspects of new concepts are quickly and cost-effectively determined without the need for excessive testing.
Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods
NASA Astrophysics Data System (ADS)
Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric
2018-03-01
Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure, and temperature, were collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize the relationship between soil properties and pipe deformation, this paper presents a super learning based approach, combined with feature selection algorithms, to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e., the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error, and mean absolute error. Based on the prediction performance evaluation, the superiority of super learning was validated and demonstrated by accurately predicting three types of pipe deformations. In addition, a comprehensive understanding of the water mains' working environment becomes possible.
MEM spectral analysis for predicting influenza epidemics in Japan.
Sumi, Ayako; Kamo, Ken-ichi
2012-03-01
The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.
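The time-domain half of the method can be sketched as follows: once the dominant periods have been identified (here they are simply assumed, standing in for the MEM spectrum), the LSF curve is an ordinary linear least squares fit of sinusoidal modes, and prediction is its extrapolation. Data and periods below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(240)                                   # months of incidence data
y = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

periods = [12.0, 6.0]                                # modes "found" by MEM analysis

def design(times):
    cols = [np.ones_like(times, dtype=float)]
    for T in periods:
        cols += [np.sin(2 * np.pi * times / T), np.cos(2 * np.pi * times / T)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)  # optimum LSF curve
forecast = design(np.arange(240, 252)) @ coef          # 12-month extension
print(forecast.round(1))
```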
In silico platform for predicting and initiating β-turns in a protein at desired locations.
Singh, Harinder; Singh, Sandeep; Raghava, Gajendra P S
2015-05-01
Numerous studies have been performed for analysis and prediction of β-turns in a protein. This study focuses on analyzing, predicting, and designing β-turns to understand the preference of amino acids in β-turn formation. We analyzed around 20,000 PDB chains to understand the preference of residues or pairs of residues at different positions in β-turns. Based on the results, a propensity-based method has been developed for predicting β-turns with an accuracy of 82%. We introduced a new approach entitled "Turn level prediction method," which predicts the complete β-turn rather than focusing on the residues in a β-turn. Finally, we developed BetaTPred3, a Random Forest based method for predicting β-turns by utilizing various features of the four residues present in β-turns. BetaTPred3 achieved an accuracy of 79% with an MCC of 0.51, which is comparable to or better than existing methods on the BT426 dataset. Additionally, models were developed to predict β-turn types with better performance than other methods available in the literature. In order to improve the quality of prediction of turns, we developed prediction models on a large and recent dataset of 6376 nonredundant protein chains. Based on this study, a web server has been developed for prediction of β-turns and their types in proteins. This web server also predicts the minimum number of mutations required to initiate or break a β-turn at a specified location of a protein. © 2015 Wiley Periodicals, Inc.
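The final Random Forest step can be sketched generically: encode a four-residue window as features and classify it as turn or non-turn. The one-hot encoding and synthetic data below are illustrative, not the BetaTPred3 feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(6)

def encode(window):
    """4-residue window -> 80-dim one-hot feature vector."""
    x = np.zeros(80)
    for i, aa in enumerate(window):
        x[20 * i + AA.index(aa)] = 1.0
    return x

def sample(is_turn):
    # Synthetic windows; turn-formers are biased toward P/G/N/D as in real turns
    pool = "PGND" if (is_turn and rng.random() < 0.6) else AA
    return "".join(rng.choice(list(pool), size=4))

X = np.array([encode(sample(i % 2 == 0)) for i in range(2000)])
y = np.array([i % 2 == 0 for i in range(2000)])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```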
Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation
Dayan, Peter; Berridge, Kent C.
2014-01-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
On some methods for assessing earthquake predictions
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.; Peresan, A.
2017-09-01
A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.
Prediction of physical workload in reduced gravity environments
NASA Technical Reports Server (NTRS)
Goldberg, Joseph H.
1987-01-01
The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.
Malec, James F; Parrot, Devan; Altman, Irwin M; Swick, Shannon
2015-01-01
The objective of the study was to develop statistical formulas to predict levels of community participation on discharge from post-hospital brain injury rehabilitation using retrospective data analysis. Data were collected from seven geographically distinct programmes in a home- and community-based brain injury rehabilitation provider network. Participants were 642 individuals with post-traumatic brain injury. Interventions consisted of home- and community-based brain injury rehabilitation. The main outcome measure was the Mayo-Portland Adaptability Inventory (MPAI-4) Participation Index. Linear discriminant models using admission MPAI-4 Participation Index score and log chronicity correctly predicted excellent (no to minimal participation limitations), very good (very mild participation limitations), good (mild participation limitations), and limited (significant participation limitations) outcome levels at discharge. Predicting broad outcome categories for post-hospital rehabilitation programmes based on admission assessment data appears feasible and valid. Equations to provide patients and families with probability statements on admission about expected levels of outcome are provided. It is unknown to what degree these prediction equations can be reliably applied and valid in other settings.
Making predictions skill level analysis
NASA Astrophysics Data System (ADS)
Katarína, Krišková; Marián, Kireš
2017-01-01
The current trend in education focuses on skills that are cross-subject and of great importance for the pupil's future life. Pupils should acquire different types of skills during their education to be prepared for future careers and life in the 21st century. Physics as a subject offers many opportunities for developing pupils' skills. One of the skills expected to be developed in physics, and also in other sciences, is making predictions. A prediction, in the sense of an argument about what may happen in the future, is an integral part of empirical cognition, in which students confront existing knowledge and experience with new, hitherto unknown and surprising phenomena. An extension of this skill is the formulation of hypotheses, which is required in upper secondary physics education. In this contribution, the prediction skill is specified and its possible levels are classified. The authors focus on tools for skill level determination based on the analysis of pupils' worksheets. The worksheets are part of the educational activities conducted within the Inquiry Science Laboratory Steelpark. From the formulation of pupils' predictions, their thinking can be seen, as well as their understanding of the topic and their preconceptions and misconceptions.
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
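The SSA denoising step can be sketched with plain linear algebra: embed the series into a trajectory matrix, truncate its SVD, and reconstruct by diagonal (Hankel) averaging. The window length and rank below are illustrative choices, not the paper's settings.

```python
import numpy as np

def ssa_denoise(y, window=24, rank=4):
    n = len(y)
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # rank-r approximation
    out, counts = np.zeros(n), np.zeros(n)                    # diagonal averaging
    for j in range(k):
        out[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(7)
t = np.arange(288)                        # 5-min traffic counts over one day
flow = 300 + 120 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 30, t.size)
smooth = ssa_denoise(flow)                # input for a downstream KELM model
print("residual std after SSA filtering:", (flow - smooth).std().round(1))
```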
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component level reliability or risk of failure probability. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Logistic model analysis of neurological findings in Minamata disease and the predicting index.
Nakagawa, Masanori; Kodama, Tomoko; Akiba, Suminori; Arimura, Kimiyoshi; Wakamiya, Junji; Futatsuka, Makoto; Kitano, Takao; Osame, Mitsuhiro
2002-01-01
To establish a statistical diagnostic method to identify patients with Minamata disease (MD) considering factors of aging and sex, we analyzed the neurological findings in MD patients, inhabitants of a methylmercury polluted (MP) area, and inhabitants of a non-MP area. We compared the neurological findings in MD patients and inhabitants aged more than 40 years in the non-MP area. Based on the different frequencies of the neurological signs in the two groups, we devised the following formula to calculate the predicting index for MD: predicting index = 1/(1 + e^(-x)) × 100 (the value of x was calculated using the regression coefficients of each neurological finding obtained from logistic analysis; an index of 100 indicated MD, and 0, non-MD). Using this method, we found that 100% of male and 98% of female patients with MD (95 cases) gave predicting indices higher than 95. Five percent of the aged inhabitants in the MP area (598 inhabitants) and 0.2% of those in the non-MP area (558 inhabitants) gave predicting indices of 50 or higher. Our statistical diagnostic method for MD was useful in distinguishing MD patients from healthy elders based on their neurological findings.
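The scoring formula is easy to operationalize; in the sketch below, the regression coefficients and the findings vector are placeholders, not the published values.

```python
import numpy as np

def predicting_index(findings, coefs, intercept):
    """Index = 100 * logistic(x), with x a weighted sum of coded findings."""
    x = intercept + float(np.dot(coefs, findings))
    return 100.0 / (1.0 + np.exp(-x))

coefs = np.array([2.1, 1.7, 1.3, 0.9])   # placeholder weights per neurological sign
findings = np.array([1, 1, 0, 1])        # presence/absence of each sign
print(f"predicting index: {predicting_index(findings, coefs, -3.0):.1f}")
```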
A comprehensive prediction and evaluation method of pilot workload
Feng, Chuanyan; Wanyan, Xiaoru; Yang, Kun; Zhuang, Damin; Wu, Xu
2018-01-01
BACKGROUND: The prediction and evaluation of pilot workload is a key problem in human factor airworthiness of cockpit. OBJECTIVE: A pilot traffic pattern task was designed in a flight simulation environment in order to carry out the pilot workload prediction and improve the evaluation method. METHODS: The prediction of typical flight subtasks and dynamic workloads (cruise, approach, and landing) were built up based on multiple resource theory, and a favorable validity was achieved by the correlation analysis verification between sensitive physiological data and the predicted value. RESULTS: Statistical analysis indicated that eye movement indices (fixation frequency, mean fixation time, saccade frequency, mean saccade time, and mean pupil diameter), Electrocardiogram indices (mean normal-to-normal interval and the ratio between low frequency and sum of low frequency and high frequency), and Electrodermal Activity indices (mean tonic and mean phasic) were all sensitive to typical workloads of subjects. CONCLUSION: A multinomial logistic regression model based on a combination of physiological indices (fixation frequency, mean normal-to-normal interval, the ratio between low frequency and sum of low frequency and high frequency, and mean tonic) was constructed, and the discriminant accuracy was comparatively ideal with a rate of 84.85%. PMID:29710742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert
Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock, also known as the Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above-zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site-specific information available for analysis. The scientific results from this study provide substantial insight that site-specific data are needed for efficient prediction of the risks associated with storage activities. The presented approach can provide a basis for optimized pressure-based monitoring network design at carbon storage sites.
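The Sobol analysis step can be illustrated with the standard pick-freeze Monte Carlo estimator applied to a toy pressure-response surrogate (standing in for the aPC expansion); the parameter names and the response function are hypothetical.

```python
import numpy as np

def model(x):                       # toy surrogate for above-zone pressure change
    perm, thick, leak = x[:, 0], x[:, 1], x[:, 2]
    return 5.0 * perm + 2.0 * thick ** 2 + 0.5 * perm * leak

rng = np.random.default_rng(8)
n = 50_000
A, B = rng.random((n, 3)), rng.random((n, 3))   # two independent sample blocks
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

for i, name in enumerate(["permeability", "thickness", "leak_rate"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # re-draw only parameter i ("pick-freeze")
    S1 = np.mean(yB * (model(ABi) - yA)) / var_y   # Saltelli-style estimator
    print(f"first-order Sobol index, {name}: {S1:.2f}")
```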
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small strain triaxial test measurement is considered to be significantly more accurate than external strain measurement using the conventional method, owing to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from each other. The device setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory tests.
Predictive models of safety based on audit findings: Part 2: Measurement of model validity.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-07-01
Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and proved its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus results of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for validity in predicting the safety data, and only the Neural Network method resulted in substantially significant predictive ability for each airline. Alternative predictions from counting of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Formation enthalpies for transition metal alloys using machine learning
NASA Astrophysics Data System (ADS)
Ubaru, Shashanka; Miedlar, Agnieszka; Saad, Yousef; Chelikowsky, James R.
2017-06-01
The enthalpy of formation is an important thermodynamic property. Developing fast and accurate methods for its prediction is of practical interest in a variety of applications. Material informatics techniques based on machine learning have recently been introduced in the literature as an inexpensive means of exploiting materials data, and can be used to examine a variety of thermodynamics properties. We investigate the use of such machine learning tools for predicting the formation enthalpies of binary intermetallic compounds that contain at least one transition metal. We consider certain easily available properties of the constituting elements complemented by some basic properties of the compounds, to predict the formation enthalpies. We show how choosing these properties (input features) based on a literature study (using prior physics knowledge) seems to outperform machine learning based feature selection methods such as sensitivity analysis and LASSO (least absolute shrinkage and selection operator) based methods. A nonlinear kernel based support vector regression method is employed to perform the predictions. The predictive ability of our model is illustrated via several experiments on a dataset containing 648 binary alloys. We train and validate the model using the formation enthalpies calculated using a model by Miedema, which is a popular semiempirical model used for the prediction of formation enthalpies of metal alloys.
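The regression step can be sketched with a standard RBF-kernel SVR over elemental-property features; the data below are synthetic stand-ins for the 648-alloy, Miedema-based training set, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(9)
X = rng.normal(size=(648, 10))          # 648 alloys x 10 elemental features
y = X[:, 0] * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=648)  # toy enthalpy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
svr.fit(X_tr, y_tr)
print("R^2 on held-out alloys:", round(svr.score(X_te, y_te), 3))
```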
NASA Astrophysics Data System (ADS)
Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian
2018-01-01
Reliable drought prediction is fundamental for water resource managers to develop and implement drought mitigation measures. Considering that drought development is closely related to the spatial-temporal evolution of large-scale circulation patterns, we developed a conceptual prediction model of seasonal drought processes based on atmospheric and oceanic standardized anomalies (SAs). Empirical orthogonal function (EOF) analysis is first applied to drought-related SAs at 200 and 500 hPa geopotential height (HGT) and sea surface temperature (SST). Subsequently, SA-based predictors are built based on the spatial pattern of the first EOF modes. This drought prediction model is essentially the synchronous statistical relationship between 90-day-accumulated atmospheric-oceanic SA-based predictors and SPI3 (3-month standardized precipitation index), calibrated using a simple stepwise regression method. Predictor computation is based on forecast atmospheric-oceanic products retrieved from the NCEP Climate Forecast System Version 2 (CFSv2), indicating the lead time of the model depends on that of CFSv2. The model can make seamless drought predictions for operational use after a year-to-year calibration. Model application to four recent severe regional drought processes in China indicates its good performance in predicting seasonal drought development, despite its weakness in predicting drought severity. Overall, the model can be a worthy reference for seasonal water resource management in China.
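The EOF step is equivalent to PCA of the standardized anomaly field, with the leading principal-component series then entering the regression against SPI3. A minimal sketch on synthetic grids (not CFSv2 products, and plain least squares in place of the stepwise calibration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n_time, n_grid = 120, 500                  # months x grid points
field = rng.normal(size=(n_time, n_grid))  # standardized HGT/SST anomalies (toy)
spi3 = field[:, :50].mean(axis=1) + 0.2 * rng.normal(size=n_time)

pca = PCA(n_components=3)                  # EOF analysis = PCA of the anomaly field
pcs = pca.fit_transform(field)             # EOF amplitude (PC) time series
reg = LinearRegression().fit(pcs, spi3)
print("explained variance of EOF1-3:", pca.explained_variance_ratio_.round(2))
print("regression R^2:", round(reg.score(pcs, spi3), 2))
```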
Lalonde, Michel; Wells, R Glenn; Birnie, David; Ruddy, Terrence D; Wassenaar, Richard
2014-07-01
Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. A total of 49 patients (N = 27 ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal chance ROC AUC = 0.50) with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
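At its core, the proposed method clusters the wall-sampled time-activity curves directly. A minimal K-means sketch on synthetic TACs (568 curves, matching the sampling count described, but toy waveforms rather than clinical data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
t = np.linspace(0.0, 1.0, 32)                        # one cardiac cycle, 32 frames
normal = -np.sin(np.pi * t)                          # synchronous contraction
delayed = -np.sin(np.pi * np.clip(t - 0.15, 0, 1))   # late-contracting wall segment
tacs = np.vstack([normal + 0.05 * rng.normal(size=(400, 32)),
                  delayed + 0.05 * rng.normal(size=(168, 32))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tacs)
sizes = np.bincount(labels)                          # cluster sizes feed the score
print("cluster sizes (input to the dyssynchrony measure):", sizes)
```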
ProTox: a web server for the in silico prediction of rodent oral toxicity
Drwal, Malgorzata N.; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R.; Preissner, Robert
2014-01-01
Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein–ligand-based pharmacophore models (‘toxicophores’) for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. PMID:24838562
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M
2018-04-17
Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation generally is lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide an added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
NASA Astrophysics Data System (ADS)
Yu, H.-L.; Yang, S.-J.; Lin, Y.-C.
2012-04-01
Dengue Fever (DF) has been identified by the World Health Organization (WHO) as one of the most serious vector-borne infectious diseases in tropical and sub-tropical areas. DF has been one of the most important epidemics in Taiwan, occurring annually, especially in southern Taiwan during summer and autumn. Most DF studies have focused mainly on temporal DF patterns and their close association with climatic covariates, whereas few studies have investigated the spatial DF patterns (spatial dependence and clustering) and composite space-time effects of the DF epidemics. The present study proposes a spatio-temporal DF prediction approach based on stochastic Bayesian Maximum Entropy (BME) analysis. Core and site-specific knowledge bases are considered, including climate and health datasets under conditions of uncertainty, space-time dependence functions, and a Poisson regression model of climatic variables contributing to DF occurrences in southern Taiwan during 2007, when the highest number of DF cases (over 2000) in the history of Taiwan epidemics was recorded. The obtained results show that the DF outbreaks in the study area are highly influenced by climatic conditions. Furthermore, the analysis can provide the required "one-week-ahead" outbreak warnings based on spatio-temporal predictions of DF distributions. Therefore, the proposed analysis can provide the Taiwan Disease Control Agency with a valuable tool to identify, control, and even efficiently prevent DF spreading across space-time in a timely manner.
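The site-specific Poisson regression component can be sketched with a standard GLM of weekly case counts on climate covariates; the data, covariates, and coefficients below are illustrative, not the Taiwan surveillance data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
weeks = 200
temp = 25 + 5 * np.sin(2 * np.pi * np.arange(weeks) / 52)  # weekly mean temperature
rain = rng.gamma(2.0, 20.0, weeks)                         # weekly rainfall (mm)
lam = np.exp(-8 + 0.35 * temp + 0.004 * rain)              # toy true risk model
cases = rng.poisson(lam)                                   # weekly DF counts

X = sm.add_constant(np.column_stack([temp, rain]))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(np.asarray(fit.params).round(3))                     # recovered coefficients
```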
Park, In-Hee; Venable, John D; Steckler, Caitlin; Cellitti, Susan E; Lesley, Scott A; Spraggon, Glen; Brock, Ansgar
2015-09-28
Hydrogen exchange (HX) studies have provided critical insight into our understanding of protein folding, structure, and dynamics. More recently, hydrogen exchange mass spectrometry (HX-MS) has become a widely applicable tool for HX studies. The interpretation of the wealth of data generated by HX-MS experiments as well as other HX methods would greatly benefit from the availability of exchange predictions derived from structures or models for comparison with experiment. Most reported computational HX modeling studies have employed solvent-accessible-surface-area based metrics in attempts to interpret HX data on the basis of structures or models. In this study, a computational HX-MS prediction method based on classification of the amide hydrogen bonding modes mimicking the local unfolding model is demonstrated. Analysis of the NH bonding configurations from molecular dynamics (MD) simulation snapshots is used to determine partitioning over bonded and nonbonded NH states and is directly mapped into a protection factor (PF) using a logistics growth function. Predicted PFs are then used for calculating deuteration values of peptides and compared with experimental data. Hydrogen exchange MS data for fatty acid synthase thioesterase (FAS-TE) collected for a range of pHs and temperatures was used for detailed evaluation of the approach. High correlation between prediction and experiment for observable fragment peptides is observed in the FAS-TE and additional benchmarking systems that included various apo/holo proteins for which literature data were available. In addition, it is shown that HX modeling can improve experimental resolution through decomposition of in-exchange curves into rate classes, which correlate with prediction from MD. Successful rate class decompositions provide further evidence that the presented approach captures the underlying physical processes correctly at the single residue level. This assessment is further strengthened in a comparison of residue resolved protection factor predictions for staphylococcal nuclease with NMR data, which was also used to compare prediction performance with other algorithms described in the literature. The demonstrated transferable and scalable MD based HX prediction approach adds significantly to the available tools for HX-MS data interpretation based on available structures and models.
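The mapping described, from NH hydrogen-bond occupancy in MD snapshots to protection factors and then to peptide deuteration, can be sketched as follows; the logistic parameters and the rates are assumed values, not those fitted in the paper.

```python
import numpy as np

def protection_factor(h_bond_fraction, pf_max=1e6, k=20.0, x0=0.7):
    """Logistic growth map from NH bonded fraction to PF (assumed shape)."""
    return 1.0 + (pf_max - 1.0) / (1.0 + np.exp(-k * (h_bond_fraction - x0)))

def peptide_deuteration(times, k_int, h_bond_frac):
    """Average deuterium uptake of a peptide at each exchange time."""
    pf = protection_factor(h_bond_frac)
    k_obs = k_int / pf                      # local-unfolding (EX2-like) limit
    return (1.0 - np.exp(-np.outer(times, k_obs))).mean(axis=1)

times = np.array([10.0, 100.0, 1000.0, 10000.0])   # exchange times, seconds
k_int = np.array([1.0, 0.5, 2.0, 0.8])             # intrinsic rates, 1/s (toy)
h_frac = np.array([0.2, 0.9, 0.95, 0.5])           # bonded fractions from MD
print(peptide_deuteration(times, k_int, h_frac).round(2))
```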
Analysis of view synthesis prediction architectures in modern coding standards
NASA Astrophysics Data System (ADS)
Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang
2013-09-01
Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction (VSP) techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP for multiview video signals provides attractive coding gains with complexity comparable to that of traditional motion/disparity compensation.
Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marre, O.; El Boustani, S.; Fregnac, Y.
We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models based only on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.
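To convey the flavor of the model, here is a toy pairwise maximum entropy sketch with a Markovian (lag-1) term, evaluated by exact enumeration for a handful of neurons; the couplings are random placeholders rather than fitted values.

```python
import numpy as np
from itertools import product

N = 4  # neurons; exact enumeration is feasible only for small N
patterns = np.array(list(product([0, 1], repeat=N)))

rng = np.random.default_rng(1)
h = rng.normal(0, 0.5, N)                          # biases (placeholders)
J = np.triu(rng.normal(0, 0.3, (N, N)), 1)         # spatial couplings
K = rng.normal(0, 0.2, (N, N))                     # temporal (lag-1) couplings

def energy(s_prev, s):
    return h @ s + s @ J @ s + s_prev @ K @ s

def transition_probs(s_prev):
    """P(s_t | s_{t-1}) under the Markovian pairwise max-ent model."""
    logp = np.array([energy(s_prev, s) for s in patterns])
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Probability of a spatiotemporal word (s1 -> s2), given a start state:
s0, s1 = patterns[0], patterns[5]
print(transition_probs(s0)[5] * transition_probs(s1)[9])
```

Dropping the `K` term recovers a purely spatial Ising-style model, which is exactly the comparison the abstract describes.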
Laine, Elodie; Carbone, Alessandra
2015-01-01
Protein-protein interactions (PPIs) are essential to all biological processes, and they represent increasingly important therapeutic targets. Here, we present a new method for accurately predicting protein-protein interfaces and understanding their properties, origins, and binding to multiple partners. In contrast to machine learning approaches, our method combines, in a rational and very straightforward way, three sequence- and structure-based descriptors of protein residues: evolutionary conservation, physico-chemical properties, and local geometry. The implemented strategy yields very precise predictions for a wide range of protein-protein interfaces and discriminates them from small-molecule binding sites. Beyond its predictive power, the approach permits dissection of interaction surfaces and unraveling of their complexity. We show how the analysis of the predicted patches can foster new strategies for PPI modulation and interaction surface redesign. The approach is implemented in JET2, an automated tool based on the Joint Evolutionary Trees (JET) method for sequence-based protein interface prediction. JET2 is freely available at www.lcqb.upmc.fr/JET2. PMID:26690684
NASA Technical Reports Server (NTRS)
Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.
1991-01-01
A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
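The 2-D cubic spline ingredient can be illustrated with SciPy: small-signal device data tabulated on a bias grid are interpolated so that a harmonic balance solver can query values (and derivatives) at arbitrary bias points. The grid and values below are invented placeholders.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical bias grid and a measured quantity (e.g., transconductance in S):
vgs = np.linspace(-1.0, 0.0, 11)
vds = np.linspace(0.0, 2.0, 21)
gm = np.outer(np.cos(vgs), np.sqrt(vds + 0.1))  # placeholder surface

# Bicubic spline (kx=ky=3), queried at arbitrary bias points by the HB solver:
spline = RectBivariateSpline(vgs, vds, gm, kx=3, ky=3)
print(spline(-0.37, 1.23))          # interpolated gm at an off-grid bias
print(spline(-0.37, 1.23, dx=1))    # partial derivative w.r.t. Vgs
```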
NASA Astrophysics Data System (ADS)
Pirozzoli, Sergio
2018-07-01
We develop predictive formulas for friction resistance in ducts with complex cross-sectional shape, based on the use of the log law and the neglect of wall shear stress nonuniformities. The traditional hydraulic diameter naturally emerges from the analysis as the controlling length scale for common duct shapes such as triangles and regular polygons. The analysis also suggests that a new effective diameter should be used in more general cases, yielding corrections of a few percent to friction estimates based on the traditional hydraulic diameter. A fair but consistent predictive improvement is shown for duct geometries of practical relevance, including rectangular and annular ducts, and circular rod bundles.
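For a concrete sense of the quantities involved, the sketch below computes the traditional hydraulic diameter, D_h = 4A/P, and a log-law-based friction factor from Prandtl's classical smooth-duct relation, which the paper's formulas refine; this is a generic illustration, not the authors' correlation.

```python
import numpy as np
from scipy.optimize import brentq

def hydraulic_diameter(area, perimeter):
    """Traditional hydraulic diameter D_h = 4 A / P."""
    return 4.0 * area / perimeter

def friction_factor(re):
    """Darcy friction factor from Prandtl's smooth-duct log law:
    1/sqrt(f) = 2 log10(Re sqrt(f)) - 0.8, solved numerically."""
    g = lambda f: 1.0 / np.sqrt(f) - 2.0 * np.log10(re * np.sqrt(f)) + 0.8
    return brentq(g, 1e-4, 0.1)

# Square duct, side 0.05 m, bulk velocity 2 m/s, water (nu = 1e-6 m^2/s):
dh = hydraulic_diameter(0.05**2, 4 * 0.05)
re = 2.0 * dh / 1e-6
print(dh, friction_factor(re))
```

Swapping D_h for an effective diameter in `re` is precisely the few-percent correction the abstract refers to.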
ERIC Educational Resources Information Center
Laracy, Seth D.; Hojnoski, Robin L.; Dever, Bridget V.
2016-01-01
Receiver operating characteristic curve (ROC) analysis was used to investigate the ability of early numeracy curriculum-based measures (EN-CBM) administered in preschool to predict performance below the 25th and 40th percentiles on a quantity discrimination measure in kindergarten. Areas under the curve derived from a sample of 279 students ranged…
Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir
2014-01-01
Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among the acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in Khorasan Province, in the east of Iran. The main acoustic and embroidery process features that influence noise were used to develop prediction models in MATLAB. The multiple regression technique was also employed, and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). The neuro-fuzzy model (RMSE = 0.53 dB, R² = 0.88), however, slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, the fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make optimal decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
Human Factors Vehicle Displacement Analysis: Engineering In Motion
NASA Technical Reports Server (NTRS)
Atencio, Laura Ashley; Reynolds, David; Robertson, Clay
2010-01-01
While positioned on the launch pad at the Kennedy Space Center, tall stacked launch vehicles are exposed to the natural environment. Varying directional winds and vortex shedding cause the vehicle to sway in an oscillating motion. The Human Factors team recognizes that vehicle sway may hinder ground crew operations, impact ground system designs, and ultimately affect launch availability. The objective of this study is to physically simulate the predicted oscillation envelopes identified by analysis and to conduct a Human Factors Analysis to assess the ability to carry out essential Upper Stage (US) ground operator tasks based on predicted vehicle motion.
Finite element analysis of the stiffness of fabric reinforced composites
NASA Technical Reports Server (NTRS)
Foye, R. L.
1992-01-01
The objective of this work is the prediction of all three-dimensional elastic moduli of textile fabric reinforced composites. The analysis is general enough for use with complex reinforcing geometries and is capable of subsequent improvement. It places no restrictions on fabric microgeometry except that the unit cell be determinate and rectangular. The unit cell is divided into rectangular subcells in which the reinforcing geometries are easier to define and analyze. The analysis, based on inhomogeneous finite elements, is applied to a variety of weave, braid, and knit reinforced composites. Some of these predictions are correlated with test data.
NASA Astrophysics Data System (ADS)
Champeimont, Raphaël; Laine, Elodie; Hu, Shuang-Wei; Penin, Francois; Carbone, Alessandra
2016-05-01
A novel computational approach of coevolution analysis allowed us to reconstruct the protein-protein interaction network of the Hepatitis C Virus (HCV) at the residue resolution. For the first time, coevolution analysis of an entire viral genome was realized, based on a limited set of protein sequences with high sequence identity within genotypes. The identified coevolving residues constitute highly relevant predictions of protein-protein interactions for further experimental identification of HCV protein complexes. The method can be used to analyse other viral genomes and to predict the associated protein interaction networks.
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed, used the calibration software PEST to estimate HSPF hydrologic parameters, and then performed predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. To reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for the predictive uncertainty analysis: we set aside flows recorded on days when bacteria samples were collected and did not use them in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on the 1,000 model runs. We also used several methods to visualize the results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
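The sampling step can be sketched as follows: parameter sets are drawn from a multivariate normal built on the calibrated values and the estimated parameter covariance, the model is run for each set, and a percent uncertainty is computed per observation. The model function and numbers below are placeholders for the HSPF/PEST workflow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Calibrated parameter values and estimated covariance (placeholders):
theta_hat = np.array([0.3, 1.5, 0.05])
cov = np.array([[0.01,  0.002, 0.0],
                [0.002, 0.04,  0.0],
                [0.0,   0.0,   1e-4]])

def run_model(theta, t):
    """Stand-in for an HSPF run: a toy recession curve q(t)."""
    return theta[1] * np.exp(-theta[0] * t) + theta[2]

t = np.linspace(0.0, 10.0, 25)  # days with held-out flow observations
samples = rng.multivariate_normal(theta_hat, cov, size=1000)
flows = np.array([run_model(s, t) for s in samples])  # 1000 x 25

# Percent uncertainty per observation: 95% interval width over the median.
lo, med, hi = np.percentile(flows, [2.5, 50.0, 97.5], axis=0)
print(100.0 * (hi - lo) / med)
```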
Improved eye- and skin-color prediction based on 8 SNPs.
Hart, Katie L; Kimura, Shey L; Mushailov, Vladimir; Budimlija, Zoran M; Prinz, Mechthild; Wurmbach, Elisa
2013-06-01
Our objective was to improve the 7-plex system for predicting eye and skin color by increasing precision and providing more detailed phenotypic descriptions. Analysis of an eighth single nucleotide polymorphism (SNP), rs12896399 (SLC24A4), showed a statistically significant association with human eye color (P = 0.007) but a rather poor strength of agreement (κ = 0.063). This SNP was added to the 7-plex system (rs12913832 at HERC2, rs1545397 at OCA2, rs16891982 at SLC45A2, rs1426654 at SLC24A5, rs885479 at MC1R, rs6119471 at ASIP, and rs12203592 at IRF4). Further, the instruction guidelines on the interpretation of genotypes were changed to create a new 8-plex system. This was based on the analysis of an 803-sample training set from various populations. The newly developed 8-plex system can predict the eye colors brown, green, and blue, and the skin colors light, not dark, and not light. It is superior to the 7-plex system owing to its additional ability to predict blue eye color and light skin color. The 8-plex system was tested on an additional 212 samples, the test set. Analysis showed that the number of positive descriptions of eye color as brown, green, or blue increased significantly (P = 6.98e-15, z-score: -7.786). The error rate for eye-color prediction was low, at approximately 5%, while skin-color prediction showed no error in the test set (1% in the training set). We conclude that the new 8-plex system for the prediction of eye and skin color substantially enhances its former version.
A mechanics framework for a progressive failure methodology for laminated composites
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Lo, David C.
1989-01-01
A laminate strength and life prediction methodology has been postulated for laminated composites which accounts for the progressive development of microstructural damage up to structural failure. A damage-dependent constitutive model predicts, in an average sense, the stress redistribution that accompanies damage development in laminates. Each mode of microstructural damage is represented by a second-order tensor-valued internal state variable, which is a strain-like quantity. The mechanics framework, together with the global-local strategy for predicting laminate strength and life, is presented in this paper. The kinematic effects of damage are represented by effective engineering moduli in the global analysis, and the results of the global analysis provide the boundary conditions for the local ply-level stress analysis. Damage evolution laws are based on experimental results.
Rotor/Wing Interactions in Hover
NASA Technical Reports Server (NTRS)
Young, Larry A.; Derby, Michael R.
2002-01-01
Hover predictions of tiltrotor aircraft are hampered by the lack of accurate and computationally efficient models for rotor/wing interactional aerodynamics. This paper summarizes the development of an approximate, potential flow solution for the rotor-on-rotor and wing-on-rotor interactions. This analysis is based on actuator disk and vortex theory and the method of images. The analysis is applicable for out-of-ground-effect predictions. The analysis is particularly suited for aircraft preliminary design studies. Flow field predictions from this simple analytical model are validated against experimental data from previous studies. The paper concludes with an analytical assessment of the influence of rotor-on-rotor and wing-on-rotor interactions. This assessment examines the effect of rotor-to-wing offset distance, wing sweep, wing span, and flaperon incidence angle on tiltrotor inflow and performance.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
Boulet, Jean-Claude; Trarieux, Corinne; Souquet, Jean-Marc; Ducasse, Maris-Agnés; Caillé, Soline; Samson, Alain; Williams, Pascale; Doco, Thierry; Cheynier, Véronique
2016-01-01
Astringency elicited by tannins is usually assessed by tasting. Alternative methods involving tannin precipitation have been proposed, but they remain time-consuming. Our goal was to propose a faster method and investigate the links between wine composition and astringency. Red wines covering a wide range of astringency intensities, assessed by sensory analysis, were selected. Prediction models based on multiple linear regression (MLR) were built using UV spectrophotometry (190-400 nm) and chemical analyses (enological analysis, polyphenols, oligosaccharides, and polysaccharides). Astringency intensity was strongly correlated (R² = 0.825) with tannin precipitation by bovine serum albumin (BSA). Wine absorbance at 230 nm (A230) proved more suitable for astringency prediction (R² = 0.705) than A280 (R² = 0.56) or tannin concentration estimated by phloroglucinolysis (R² = 0.59). Three-variable models built with A230, oligosaccharides, and polysaccharides presented high R² and low cross-validation errors. These models confirmed that polysaccharides decrease astringency perception and indicated a positive relationship between oligosaccharides and astringency. Copyright © 2015 Elsevier Ltd. All rights reserved.
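A minimal version of the three-variable model: ordinary least squares regressing astringency on A230 and the two carbohydrate fractions, with leave-one-out cross-validation. The data are synthetic placeholders built with the signs reported above (positive for A230 and oligosaccharides, negative for polysaccharides).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(7)
n = 30  # wines
a230 = rng.uniform(10, 60, n)    # absorbance at 230 nm
oligo = rng.uniform(0.1, 1.5, n) # g/L, oligosaccharides
poly = rng.uniform(0.2, 2.0, n)  # g/L, polysaccharides
astr = 0.1 * a230 + 2.0 * oligo - 1.5 * poly + rng.normal(0, 0.5, n)

X = np.column_stack([a230, oligo, poly])
mlr = LinearRegression().fit(X, astr)
print(mlr.coef_)  # expected signs: +A230, +oligosaccharides, -polysaccharides

# Leave-one-out cross-validation error (an RMSECV analogue):
mse = -cross_val_score(LinearRegression(), X, astr,
                       cv=LeaveOneOut(), scoring="neg_mean_squared_error")
print(np.sqrt(mse.mean()))
```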
NASA Astrophysics Data System (ADS)
Hu, Meng-Han; Chen, Xiao-Jing; Ye, Peng-Chao; Chen, Xi; Shi, Yi-Jian; Zhai, Guang-Tao; Yang, Xiao-Kang
2016-11-01
The aim of this study was to use mid-infrared spectroscopy coupled with multiple model population analysis based on Monte Carlo-uninformative variable elimination (MC-UVE) for rapidly estimating the copper content of Tegillarca granosa. Copper-specific wavelengths were first extracted from the whole spectra, and subsequently a least square-support vector machine was used to develop the prediction models. Compared with the prediction model based on full wavelengths, models that used 100 MC-UVE-selected wavelengths without and with the bin operation showed comparable performance, with Rp of 0.97 (root mean square error of prediction 14.60 mg/kg) and 0.94 (20.85 mg/kg) versus 0.96 (17.27 mg/kg), as well as ratios of percent deviation (number of wavelengths) of 2.77 (407) and 1.84 (45) versus 2.32 (1762). The obtained results demonstrated that the mid-infrared technique can be used for estimating copper content in T. granosa. In addition, the proposed multiple model population analysis can effectively eliminate uninformative, weakly informative, and interfering wavelengths, which substantially reduces model complexity and computation time.
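The MC-UVE step can be sketched generically: many models are fit on random Monte Carlo subsets of the samples, each wavelength is scored by the stability of its regression coefficient (mean over standard deviation across runs), and only the most stable wavelengths are kept. Ridge regression stands in here for the least square-support vector machine used in the study; data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n, p = 80, 200  # samples x wavelengths (synthetic spectra)
X = rng.normal(size=(n, p))
y = X[:, 10] - 0.8 * X[:, 50] + rng.normal(0, 0.3, n)  # 2 informative bands

runs, keep = 500, 100
coefs = np.empty((runs, p))
for i in range(runs):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)  # MC subset
    coefs[i] = Ridge(alpha=1.0).fit(X[idx], y[idx]).coef_

# Stability score per wavelength: |mean| / std of its coefficient.
stability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)
selected = np.argsort(stability)[-keep:]  # the 100 most stable wavelengths
print(10 in selected, 50 in selected)     # informative bands should survive
```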
NASA Technical Reports Server (NTRS)
Gabel, R.; Lang, P.; Reed, D.
1993-01-01
Mathematical models based on the finite element method of structural analysis, as embodied in the NASTRAN computer code, are routinely used by the helicopter industry to calculate airframe static internal loads used for sizing structural members. Historically, less reliance has been placed on the vibration predictions based on these models. Beginning in the early 1980s, NASA's Langley Research Center initiated an industry-wide program with the objective of engendering the needed trust in vibration predictions using these models and establishing a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process. Emphasis in this paper is placed on the successful modeling of the Army/Boeing CH-47D, which showed reasonable correlation with test data. A principal finding indicates that improved dynamic analysis requires greater attention to detail, and perhaps a finer mesh, especially in the mass distribution, than the usual stress model. Post-program modeling efforts show improved correlation, placing key modal frequencies in the b/rev range within 4 percent of the test frequencies.
Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts
NASA Astrophysics Data System (ADS)
Balasubramanian, A.; Shamsuddin, R.; Prabhakaran, B.; Sawant, A.
2017-03-01
Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline and online, along with data sufficiency and skew compensation for class imbalances. The performance of the different training strategies was compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant, and ensemble Adaboost. The prediction performance was evaluated using metrics such as accuracy, precision, recall, and the area under the curve (AUC) for the receiver operating characteristic (ROC) curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve the highest prediction accuracies (90.5-91.4%); (ii) the predictive modeling yields the lowest accuracies (50-60%) when the training data do not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive to real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable prediction performance. The ability to predict a baseline shift with a sufficient look-ahead window will enable clinical systems or even human users to hold the treatment beam in such situations, thereby reducing the probability of serious geometric and dosimetric errors.
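A minimal sketch of the classification setup, with synthetic respiratory traces in place of clinical data: features of a 1 min window (mean, spread, drift) are used to predict whether a baseline shift occurs in the 5 s lookahead, here with AdaBoost as one of the classifiers named above. The sampling rate, feature set, and trace model are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
fs = 25                       # Hz, assumed sampling rate
win, look = 60 * fs, 5 * fs   # 1 min feature window, 5 s lookahead

def make_trace(shift):
    """Toy trace: shifts are preceded by a slow drift inside the window."""
    t = np.arange(win + look) / fs
    x = np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.1, t.size)
    if shift:
        x += 0.4 * t / t[-1]             # precursor drift (assumed)
        x[win:] += rng.uniform(0.5, 1.5)  # the baseline shift itself
    return x

def features(x):
    w = x[:win]
    return [w.mean(), w.std(), np.polyfit(np.arange(win), w, 1)[0]]

y = rng.integers(0, 2, 400)
X = np.array([features(make_trace(s)) for s in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100).fit(Xtr, ytr)
print(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```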
NASA Astrophysics Data System (ADS)
Srirengan, Kanthikannan
The overall objective of this research was to develop the finite element code required to efficiently predict the strength of plain weave composite structures. To that end, a three-dimensional conventional progressive damage analysis was implemented to predict the strength of plain weave composites subjected to periodic boundary conditions, and a modal technique for three-dimensional global/local stress analysis was developed to predict failure initiation in plain weave composite structures. The progressive damage analysis was used to study the effect of quadrature order, mesh refinement, and degradation models on the predicted damage and strength of plain weave composites subjected to uniaxial tension in the warp tow direction. A 1/32nd part of the representative volume element of a symmetrically stacked configuration was analyzed. The tow geometry was assumed to be sinusoidal, and a graphite/epoxy system was used. Maximum stress criteria and combined stress criteria were used to predict failure in the tows, and the maximum principal stress criterion was used to predict failure in the matrix. Degradation models based on logical reasoning, micromechanics idealization, and experimental comparisons were used to calculate the effective material properties in the presence of damage. The modified Newton-Raphson method was used to determine the incremental solution for each applied strain level. Using a refined mesh and the discount method based on experimental comparisons, the progressive damage and strength of plain weave composites with waviness ratios of 1/3 and 1/6 subjected to uniaxial tension in the warp direction were characterized. Plain weave composites exhibit a brittle response in uniaxial tension, and strength decreases significantly with increasing waviness ratio. Damage initiation and collapse were dominated by intra-tow cracking and inter-tow debonding, respectively. The predicted strength of plain weave composites with racetrack geometry and a waviness ratio of 1/25.7 was found to match analytical predictions and experimental findings well. To evaluate the performance of the modal technique, failure initiation in a short woven composite cantilevered plate subjected to an end moment and a transverse end load was predicted. The global/local predictions were found to match the conventional finite element predictions reasonably well.
Analysis of NASA JP-4 fire tests data and development of a simple fire model
NASA Technical Reports Server (NTRS)
Raj, P.
1980-01-01
The temperature, velocity, and species concentration data obtained during the NASA fire tests (3 m, 7.5 m, and 15 m diameter JP-4 fires) were analyzed. Utilizing this data analysis, a simple theoretical model was formulated to predict the temperature and velocity profiles in JP-4 fires. The theoretical model, which does not take into account the detailed chemistry of combustion, is capable of predicting the extent of necking of the fire near its base.
NASA Astrophysics Data System (ADS)
Sippl, Wolfgang
2000-08-01
One of the major challenges in computational approaches to drug design is the accurate prediction of the binding affinity of biomolecules. In the present study several prediction methods for a published set of estrogen receptor ligands are investigated and compared. The binding modes of 30 ligands were determined using the docking program AutoDock and were compared with available X-ray structures of estrogen receptor-ligand complexes. On the basis of the docking results, an interaction energy-based model, which uses the information of the whole ligand-receptor complex, was generated. Several parameters were modified in order to analyze their influence on the correlation between binding affinities and calculated ligand-receptor interaction energies. The highest correlation coefficient (r² = 0.617, q²LOO = 0.570) was obtained when protein flexibility was considered during the interaction energy evaluation. The second prediction method uses a combination of receptor-based and 3D quantitative structure-activity relationship (3D QSAR) methods. The ligand alignment obtained from the docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying smart region definition (SRD) variable selection, a significant and robust model was obtained (r² = 0.991, q²LOO = 0.921). The predictive ability of the established model was further evaluated using a test set of six additional compounds. The comparison with the generated interaction energy-based model and with a traditional CoMFA model obtained using a ligand-based alignment (r² = 0.951, q²LOO = 0.796) indicates that the combination of receptor-based and 3D QSAR methods is able to improve the quality of the underlying model.
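For reference, the cross-validated q²LOO quoted above is computed as q² = 1 - PRESS/SS, where PRESS accumulates squared leave-one-out prediction errors and SS is the total sum of squares about the mean. A generic sketch on synthetic descriptors (not the GRID/GOLPE implementation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def q2_loo(X, y, model=LinearRegression()):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS."""
    press = 0.0
    for tr, te in LeaveOneOut().split(X):
        pred = model.fit(X[tr], y[tr]).predict(X[te])
        press += ((y[te] - pred) ** 2).sum()
    ss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss

# Synthetic descriptor matrix (e.g., interaction energies) vs. affinities:
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(0, 0.3, 30)
print(q2_loo(X, y))
```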
Sumiyoshi, Chika; Uetsuki, Miki; Suga, Motomu; Kasai, Kiyoto; Sumiyoshi, Tomiki
2013-12-30
Short forms (SFs) of the Wechsler Intelligence Scale have been developed to enhance its practicality. However, only a few studies have addressed Wechsler Adult Intelligence Scale-Revised (WAIS-R) SFs based on data from patients with schizophrenia. The current study was conducted to develop WAIS-R SFs for these patients based on the intelligence structure and the predictability of the Full IQ (FIQ). Relations to demographic and clinical variables were also examined in selecting plausible subtests. The WAIS-R was administered to 90 Japanese patients with schizophrenia. Exploratory factor analysis (EFA) and multiple regression analysis were conducted to find potential subtests. EFA extracted two dominant factors corresponding to the Verbal IQ and Performance IQ measures. Subtests with higher factor loadings on those factors were initially nominated. Regression analysis was carried out to reach the model containing all the nominated subtests. The optimality of the potential subtests included in that model was evaluated from the perspectives of representativeness of the intelligence structure, FIQ predictability, and the relation with demographic and clinical variables. Taken together, the dyad of Vocabulary and Block Design was considered to be the most optimal WAIS-R SF for patients with schizophrenia, reflecting both intelligence structure and FIQ predictability. © 2013 Elsevier Ireland Ltd. All rights reserved.
Handa, Koichi; Nakagome, Izumi; Yamaotsu, Noriyuki; Gouda, Hiroaki; Hirono, Shuichi
2015-01-01
The pregnane X receptor [PXR (NR1I2)] induces the expression of xenobiotic metabolism genes and transporter genes. In this study, we aimed to establish a computational method for quantifying the enzyme-inducing potencies of different compounds via their ability to activate PXR, for application in drug discovery and development. To this end, we developed a three-dimensional quantitative structure-activity relationship (3D-QSAR) model using comparative molecular field analysis (CoMFA) for predicting enzyme-inducing potencies, based on computational docking of ligands to multiple PXR protein structures sampled from the trajectory of a molecular dynamics simulation. Molecular mechanics-generalized Born/surface area scores representing the ligand-protein binding free energies were calculated for each ligand. As a result, the enzyme-inducing potencies predicted by the CoMFA model were in good agreement with the experimental values. Finally, we concluded that this 3D-QSAR model has the potential to predict the enzyme-inducing potencies of novel compounds with high precision and therefore has valuable applications in the early stages of the drug discovery process. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Reusable Solid Rocket Motor Nozzle Joint-4 Thermal Analysis
NASA Technical Reports Server (NTRS)
Clayton, J. Louie
2001-01-01
This study provides for the development and test verification of a thermal model used for prediction of joint heating environments, structural temperatures, and seal erosion in the Space Shuttle Reusable Solid Rocket Motor (RSRM) Nozzle Joint-4. The heating environments result from rapid pressurization of the joint free volume, assuming a leak path has occurred in the filler material used for assembly gap close-out. Combustion gases flow along the leak path from the nozzle environment to the joint O-ring gland, resulting in local heating of the metal housing and erosion of seal materials. Analysis of this condition was based on the NASA Joint Pressurization Routine (JPR) for environment determination and the Systems Improved Numerical Differencing Analyzer (SINDA) for structural temperature prediction. Model-generated temperatures, pressures, and seal erosion are compared with hot-fire test data for several different leak path situations. The hot-fire test program investigated Nozzle Joint-4 O-ring erosion sensitivity to leak path width in both open and confined joint geometries. Model predictions were generally in good agreement with the test data for the confined leak path cases. Worst-case flight predictions are provided using the test-calibrated model. Analysis issues are discussed based on the model calibration procedures.
A microRNA-based prediction model for lymph node metastasis in hepatocellular carcinoma.
Zhang, Li; Xiang, Zuo-Lin; Zeng, Zhao-Chong; Fan, Jia; Tang, Zhao-You; Zhao, Xiao-Mei
2016-01-19
We developed an efficient microRNA (miRNA) model that could predict the risk of lymph node metastasis (LNM) in hepatocellular carcinoma (HCC). We first evaluated a training cohort of 192 HCC patients after hepatectomy and found five predictive factors associated with LNM: vascular invasion, Barcelona Clinic Liver Cancer stage, miR-145, miR-31, and miR-92a. The five statistically independent factors were used to develop a predictive model, whose value was confirmed in a validation cohort of 209 consecutive HCC patients. The prediction model scores LNM risk from 0 to 8, and a cutoff value of 4 was used to distinguish high-risk and low-risk groups. The model sensitivity and specificity were 69.6% and 80.2%, respectively, over 5 years in the validation cohort, and the area under the curve (AUC) for the miRNA-based prognostic model was 0.860. The 5-year positive and negative predictive values of the model in the validation cohort were 30.3% and 95.5%, respectively. Cox regression analysis revealed that the LNM hazard ratio of the high-risk versus low-risk groups was 11.751 (95% CI, 5.110-27.021; P < 0.001) in the validation cohort. In conclusion, the miRNA-based model is reliable and accurate for the early prediction of LNM in patients with HCC.
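The scoring logic can be illustrated with a toy function; the abstract reports only the 0-8 score range and the cutoff of 4, so the per-factor point weights below are hypothetical.

```python
def lnm_risk_score(vascular_invasion, bclc_stage_advanced,
                   mir145_low, mir31_high, mir92a_high):
    """Toy 0-8 risk score built from the five predictors.
    The individual point weights here are hypothetical illustrations."""
    score = 0
    score += 2 if vascular_invasion else 0
    score += 2 if bclc_stage_advanced else 0
    score += 2 if mir145_low else 0
    score += 1 if mir31_high else 0
    score += 1 if mir92a_high else 0
    return score

def high_risk(score, cutoff=4):
    """Cutoff of 4 separates high- from low-risk groups (per the study)."""
    return score >= cutoff

s = lnm_risk_score(True, False, True, True, False)
print(s, high_risk(s))  # 5, True
```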
Prediction of Marital Satisfaction Based on Emotional Intelligence in Postmenopausal Women.
Heidari, Mohammad; Shahbazi, Sara; Ghafourifard, Mansour; Ali Sheikhi, Rahim
2017-12-01
This study was conducted with the aim of predicting marital satisfaction based on emotional intelligence in postmenopausal women. This descriptive-correlational, cross-sectional study was conducted in the city of Borujen with a sample of 134 postmenopausal women selected by convenience sampling. Data collection tools included the Bar-On emotional intelligence questionnaire and the Enrich marital satisfaction questionnaire. The results of this study showed a significant positive relationship between marital satisfaction and emotional intelligence (P < 0.05, r = 0.25). Regression analysis also showed that emotional intelligence (β = 0.31) positively and significantly predicts marital satisfaction. Given this positive relationship, adequate emotional intelligence appears to be an important component of marital satisfaction. Measuring emotional intelligence during menopause may therefore help guide appropriate actions to reinforce marital satisfaction.
Business intelligence from social media: a study from the VAST Box Office Challenge.
Lu, Yafeng; Wang, Feng; Maciejewski, Ross
2014-01-01
With over 16 million tweets per hour, 600 new blog posts per minute, and 400 million active users on Facebook, businesses have begun searching for ways to turn real-time consumer-based posts into actionable intelligence. The goal is to extract information from this noisy, unstructured data and use it for trend analysis and prediction. Current practices support the idea that visual analytics (VA) can help enable the effective analysis of such data. However, empirical evidence demonstrating the effectiveness of a VA solution is still lacking. A proposed VA toolkit extracts data from Bitly and Twitter to predict movie revenue and ratings. Results from the 2013 VAST Box Office Challenge demonstrate the benefit of an interactive environment for predictive analysis, compared to a purely statistical modeling approach. The VA approach used by the toolkit is generalizable to other domains involving social media data, such as sales forecasting and advertisement analysis.
The use of copula functions for predictive analysis of correlations between extreme storm tides
NASA Astrophysics Data System (ADS)
Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy
2014-11-01
In this paper we present a method for the quantitative description of weakly predictable extreme hydrological events at an inland sea. We investigated correlations between variations at individual measuring points by combining several statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD) was used for the prediction of negative and positive auto-correlations and the associated optimal choice of copula functions. As a practical example we analysed maximum storm tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model is associated with modelling the risk of high storm tides and possible storm flooding.
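A minimal sketch of fitting a Clayton copula to paired storm-tide maxima by inversion of Kendall's tau: for the Clayton family tau = theta/(theta + 2), hence theta = 2*tau/(1 - tau). The station data below are synthetic placeholders, not the Baltic records.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return np.maximum(u**-theta + v**-theta - 1.0, 0.0) ** (-1.0 / theta)

# Paired annual storm-tide maxima at two stations (synthetic, correlated):
rng = np.random.default_rng(11)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
x, y = z[:, 0], z[:, 1]

# Pseudo-observations: empirical ranks scaled into (0, 1).
u = (np.argsort(np.argsort(x)) + 1) / (len(x) + 1)
v = (np.argsort(np.argsort(y)) + 1) / (len(y) + 1)

tau, _ = kendalltau(x, y)
theta = 2.0 * tau / (1.0 - tau)  # Clayton parameter by inversion of tau

# Empirical vs. fitted joint probability of both series below their 90th pct:
print(np.mean((u < 0.9) & (v < 0.9)), clayton_cdf(0.9, 0.9, theta))
```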
Family-Based Benchmarking of Copy Number Variation Detection Software.
Nutsua, Marcel Elie; Fischer, Annegret; Nebel, Almut; Hofmann, Sylvia; Schreiber, Stefan; Krawczak, Michael; Nothnagel, Michael
2015-01-01
The analysis of structural variants, in particular of copy-number variations (CNVs), has proven valuable in unraveling the genetic basis of human diseases. Hence, a large number of algorithms have been developed for the detection of CNVs in SNP array signal intensity data. Using the European and African HapMap trio data, we undertook a comparative evaluation of six commonly used CNV detection software tools, namely Affymetrix Power Tools (APT), QuantiSNP, PennCNV, GLAD, R-gada and VEGA, and assessed their level of pair-wise prediction concordance. The tool-specific CNV prediction accuracy was assessed in silico by way of intra-familial validation. Software tools differed greatly in terms of the number and length of the CNVs predicted as well as the number of markers included in a CNV. All software tools predicted substantially more deletions than duplications. Intra-familial validation revealed consistently low levels of prediction accuracy as measured by the proportion of validated CNVs (34-60%). Moreover, up to 20% of apparent family-based validations were found to be due to chance alone. Software using Hidden Markov models (HMM) showed a trend to predict fewer CNVs than segmentation-based algorithms albeit with greater validity. PennCNV yielded the highest prediction accuracy (60.9%). Finally, the pairwise concordance of CNV prediction was found to vary widely with the software tools involved. We recommend HMM-based software, in particular PennCNV, rather than segmentation-based algorithms when validity is the primary concern of CNV detection. QuantiSNP may be used as an additional tool to detect sets of CNVs not detectable by the other tools. Our study also reemphasizes the need for laboratory-based validation, such as qPCR, of CNVs predicted in silico.
Predicting plant biomass accumulation from image-derived parameters
Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian
2018-01-01
Background: Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results: In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance remains relatively high across experiments under similar conditions. The relative contribution of individual features to predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, the methods can also be used to determine the most important image-based features related to plant biomass accumulation, which is promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions: We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
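A minimal version of the random forest model named above, mapping a few image-derived features to biomass, with feature importances indicating the phenotypic determinants. The feature names and data are assumptions, not the study's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 300  # plants
area = rng.uniform(50, 500, n)        # projected area (hypothetical units)
height = rng.uniform(5, 60, n)        # plant height
greenness = rng.uniform(0.2, 0.9, n)  # color index
biomass = 0.02 * area * height**0.5 + 5 * greenness + rng.normal(0, 2, n)

X = np.column_stack([area, height, greenness])
Xtr, Xte, ytr, yte = train_test_split(X, biomass, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
print(r2_score(yte, rf.predict(Xte)))  # prediction accuracy on held-out data
print(rf.feature_importances_)         # relative feature contributions
```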
Performance of the dipstick screening test as a predictor of negative urine culture
Marques, Alexandre Gimenes; Doi, André Mario; Pasternak, Jacyr; Damascena, Márcio dos Santos; França, Carolina Nunes; Martino, Marinês Dalla Valle
2017-01-01
Objective: To investigate whether the urine dipstick screening test can be used to predict urine culture results. Methods: A retrospective study conducted between January and December 2014 based on data from 8,587 patients with a medical order for urine dipstick test, urine sediment analysis, and urine culture. Sensitivity, specificity, and positive and negative predictive values were determined, and ROC curve analysis was performed. Results: The percentage of positive cultures was 17.5%. Nitrite had 28% sensitivity and 99% specificity, with positive and negative predictive values of 89% and 87%, respectively. Leukocyte esterase had 79% sensitivity and 84% specificity, with positive and negative predictive values of 51% and 95%, respectively. The combination of positive nitrite or positive leukocyte esterase tests had 85% sensitivity and 84% specificity, with positive and negative predictive values of 53% and 96%, respectively. Positive urinary sediment (more than ten leukocytes per microliter) had 92% sensitivity and 71% specificity, with positive and negative predictive values of 40% and 98%, respectively. The combination of positive nitrite test and positive urinary sediment had 82% sensitivity and 99% specificity, with positive and negative predictive values of 91% and 98%, respectively. The combination of positive nitrite or leukocyte esterase tests and positive urinary sediment had the highest sensitivity (94%) and specificity (84%), with positive and negative predictive values of 58% and 99%, respectively. Based on ROC curve analysis, the best indicator of a positive urine culture was the combination of positive leukocyte esterase or nitrite tests and positive urinary sediment, followed by positive leukocyte esterase and nitrite tests, positive urinary sediment alone, positive leukocyte esterase test alone, positive nitrite test alone, and finally the combination of positive nitrite test and positive urinary sediment (AUC: 0.845, 0.844, 0.817, 0.814, 0.635 and 0.626, respectively). Conclusion: A negative urine culture can be predicted by negative dipstick test results. Therefore, this test may be a reliable predictor of negative urine culture. PMID:28444086
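The four reported statistics follow directly from a 2x2 table of test result versus culture result; a small helper makes the definitions explicit. The counts below are hypothetical, not taken from the study.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly flagged
        "specificity": tn / (tn + fp),  # negatives correctly cleared
        "ppv": tp / (tp + fp),          # P(positive culture | positive test)
        "npv": tn / (tn + fn),          # P(negative culture | negative test)
    }

# Hypothetical counts for illustration only:
print(diagnostic_stats(tp=400, fp=350, fn=100, tn=7000))
```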
Pejchinovski, Martin; Siwy, Justyna; Metzger, Jochen; Dakna, Mohammed; Mischak, Harald; Klein, Julie; Jankowski, Vera; Bae, Kyongtae T; Chapman, Arlene B; Kistler, Andreas D
2017-03-01
Autosomal dominant polycystic kidney disease (ADPKD) is characterized by slowly progressive bilateral renal cyst growth ultimately resulting in loss of kidney function and end-stage renal disease (ESRD). Disease progression rate and age at ESRD are highly variable. Therapeutic interventions therefore require early risk stratification of patients and monitoring of disease progression in response to treatment. We used a urine peptidomic approach based on capillary electrophoresis-mass spectrometry (CE-MS) to identify potential biomarkers reflecting the risk for early progression to ESRD in the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) cohort. A biomarker-based classifier consisting of 20 urinary peptides allowed the prediction of ESRD within 10-13 years of follow-up in patients 24-46 years of age at baseline. The performance of the biomarker score approached that of height-adjusted total kidney volume (htTKV), and the combination of the biomarker panel with htTKV improved prediction over either one alone. In young patients (<24 years at baseline), the same biomarker model predicted a 30 mL/min/1.73 m² glomerular filtration rate decline over 8 years. Sequence analysis of the altered urinary peptides and in silico prediction of the involved proteases revealed alterations in distinct proteolytic pathways, in particular matrix metalloproteinases and cathepsins. We developed a urinary test that accurately predicts relevant clinical outcomes in ADPKD patients and suggests altered proteolytic pathways involved in disease progression. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Fundamental Algorithms of the Goddard Battery Model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1985-01-01
The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) battery performance in a Low Earth Orbit (LEO) cycling regime. The model proper is still in development, but its inherent, fundamental algorithms (or methodologies) are defined. At present, the model is closely dependent on empirical data, and the database currently used is of questionable accuracy. Even so, very good correlations have been determined between model predictions and actual cycling data. A more accurate and encompassing database has been generated to serve two functions: to show the limitations of the current database, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model and the present database and its limitations are described, and a brief preliminary analysis of the new database and its verification of the model's methodology is presented.
Perceived Teaching Practice and Its Prediction of Student Engagement in Singapore
ERIC Educational Resources Information Center
Luo, Wenshu
2017-01-01
This study examined teaching practice in Singapore mathematics classrooms and its prediction of student engagement. A large sample of Singapore Secondary 2 students first reported perceived teaching practice in their mathematics classrooms in Term 1 and their engagement in mathematics study in Term 2. Based on Rasch analysis of teaching practice,…
Examination of Factors Predicting Secondary Students' Interest in Tertiary STEM Education
ERIC Educational Resources Information Center
Chachashvili-Bolotin, Svetlana; Milner-Bolotin, Marina; Lissitsa, Sabina
2016-01-01
Based on the Social Cognitive Career Theory (SCCT), the study aims to investigate factors that predict students' interest in pursuing science, technology, engineering, and mathematics (STEM) fields in tertiary education both in general and in relation to their gender and socio-economic background. The results of the analysis of survey responses of…
ERIC Educational Resources Information Center
Montoya, Isaac D.
2008-01-01
Three classification techniques (Chi-square Automatic Interaction Detection [CHAID], Classification and Regression Tree [CART], and discriminant analysis) were tested to determine their accuracy in predicting Temporary Assistance for Needy Families program recipients' future employment. Technique evaluation was based on proportion of correctly…
NASA Astrophysics Data System (ADS)
Guo, Kun; Sun, Yi; Qian, Xin
2017-03-01
With the development of social networks, interactions between investors in the stock market have become faster and more convenient. Investor sentiment, which can influence investment decisions, may thus be quickly spread and magnified through the network, and to a certain extent the stock market can be affected. This paper collected user comment data from Xueqiu, a popular professional social networking site for the Chinese stock market, and obtained investor sentiment data through semantic analysis. A dynamic analysis of the relationship between investor sentiment and the stock market is proposed based on the Thermal Optimal Path (TOP) method. The results show that the sentiment data did not always lead the stock market price, and can be used to predict the stock price only when the stock has high investor attention.
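TOP traces an optimal, time-varying lead-lag path between the two series; as a simpler stand-in that already shows whether sentiment leads price, the sketch below scans fixed lags and reports the cross-correlation at each. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
sentiment = rng.normal(size=n).cumsum()
# Synthetic price that follows sentiment with a 3-step delay plus noise:
price = np.roll(sentiment, 3) + rng.normal(0, 0.5, n)

def lagged_corr(a, b, lag):
    """corr(a[t], b[t + lag]); a positive lag means `a` leads `b`."""
    if lag > 0:
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(a[-lag:], b[:lag])[0, 1]
    return np.corrcoef(a, b)[0, 1]

lags = range(-10, 11)
corrs = [lagged_corr(sentiment, price, k) for k in lags]
print(max(zip(corrs, lags)))  # best correlation and the lag achieving it
```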
NASA Technical Reports Server (NTRS)
Zheng, Yihua; Kuznetsova, Maria M.; Pulkkinen, Antti A.; Maddox, Marlo M.; Mays, Mona Leila
2015-01-01
The Space Weather Research Center (http://swrc.gsfc.nasa.gov) at NASA Goddard, part of the Community Coordinated Modeling Center (http://ccmc.gsfc.nasa.gov), is committed to providing research-based forecasts and notifications to address NASA's space weather needs, in addition to its critical role in space weather education. It provides a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, tailored space weather alerts and products, and weekly summaries and reports. In this paper, we focus on how (near) real-time data (both in space and on ground), in combination with modeling capabilities and an innovative dissemination system called the integrated Space Weather Analysis system (http://iswa.gsfc.nasa.gov), enable monitoring, analyzing, and predicting the spacecraft charging environment for spacecraft users. Relevant tools and resources are discussed.
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their metrics, and labels the pairs with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, deterministic graphs may not be appropriate for modeling and analyzing social networks. In the time-series link prediction problem, time series of link occurrences are used to predict future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm, there is one learning automaton for each link that must be predicted, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
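A minimal sketch of the core idea, assuming a standard two-action linear reward-penalty automaton updated against each past snapshot of a single link; the update constants and the occurrence series are illustrative, not the paper's exact scheme.

    # Sketch: a two-action linear reward-penalty learning automaton that walks
    # through past snapshots of one link and predicts its next occurrence.
    import random

    def predict_link(occurrences, a=0.2, b=0.05):
        p = [0.5, 0.5]  # probability of choosing action 0 (absent) or 1 (present)
        for observed in occurrences:           # stages 1 .. T-1
            action = 0 if random.random() < p[0] else 1
            if action == observed:             # reward: reinforce the chosen action
                p[action] = p[action] + a * (1 - p[action])
                p[1 - action] = (1 - a) * p[1 - action]
            else:                              # penalty: weaken the chosen action
                p[action] = (1 - b) * p[action]
                p[1 - action] = b + (1 - b) * p[1 - action]
        return 1 if p[1] >= p[0] else 0

    history = [1, 0, 1, 1, 1, 0, 1, 1]  # hypothetical link occurrence time series
    print("predicted occurrence at time T:", predict_link(history))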
Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models
NASA Astrophysics Data System (ADS)
Liu, Haiying
This dissertation describes three aspects of comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element-based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with the CSD program DYMORE to compute the airloads at different flight conditions for the Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. In addition, a tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive for routine use in industrial settings.
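A NURBS curve is a weighted (rational) B-spline; as a minimal illustration of how compactly such curves encode shape, the sketch below evaluates one from four control points using scipy's BSpline, dividing a weighted numerator spline by the weight spline. The control points, weights, and knot vector are illustrative.

    # Sketch: evaluating a NURBS (rational B-spline) curve from a handful of
    # control points and weights. All geometry below is a placeholder.
    import numpy as np
    from scipy.interpolate import BSpline

    degree = 2
    ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # control points
    w = np.array([1.0, 2.0, 2.0, 1.0])                                 # weights
    knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)             # clamped knot vector

    num = BSpline(knots, ctrl * w[:, None], degree)  # weighted numerator spline
    den = BSpline(knots, w, degree)                  # weight (denominator) spline

    u = np.linspace(0, 1, 5)
    curve = num(u) / den(u)[:, None]                 # rational curve points
    print(curve)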
Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting and may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled by changing the blast design to minimize the potential risk of blasting. The scope of this study is to predict blasting-induced flyrock through a novel approach based on the combination of the imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. Sensitivity analysis identified maximum charge per delay and powder factor as the most influential parameters on flyrock. In light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison, a backpropagation (BP) ANN was also developed, and its results were compared with those of the proposed ICA-ANN model and the empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model over the BP-ANN model and the empirical approaches. PMID:25147856
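A minimal sketch of the ANN component, assuming scikit-learn's MLPRegressor trained on the two most influential inputs; the data are synthetic and the ICA-based weight optimization is not reproduced.

    # Sketch: an ANN mapping (max charge per delay, powder factor) to flyrock
    # distance. Data are synthetic placeholders; ICA training is omitted.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform([50, 0.3], [500, 1.2], size=(113, 2))      # charge (kg), powder factor
    y = 0.4 * X[:, 0] + 80 * X[:, 1] + rng.normal(0, 5, 113)   # hypothetical flyrock (m)

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)
    print("predicted flyrock for 300 kg, PF 0.8:", model.predict([[300, 0.8]])[0])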
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model predictions and the observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
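A minimal sketch of the comparison, assuming scikit-learn's ShuffleSplit as the Monte Carlo splitter and ordinary least squares standing in for GLSR; the catchment data are synthetic.

    # Sketch: Monte Carlo cross-validation (repeated random splits) versus
    # leave-one-out for a regression model. Data and model are placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 3))                                 # catchment descriptors
    y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 0.3, 60)   # flood quantiles

    model = LinearRegression()
    mccv = ShuffleSplit(n_splits=200, test_size=0.2, random_state=0)  # MCCV
    loo = LeaveOneOut()
    for name, cv in [("MCCV", mccv), ("LOO", loo)]:
        mse = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error").mean()
        print(f"{name} prediction MSE: {mse:.3f}")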
NASA Technical Reports Server (NTRS)
Koster, R.; Mahanama, S.; Livneh, B.; Lettenmaier, D.; Reichle, R.
2011-01-01
In this study we examine how knowledge of mid-winter snow accumulation and soil moisture conditions contributes to our ability to predict streamflow months in advance. A first "synthetic truth" analysis focuses on a series of numerical experiments with multiple sophisticated land surface models driven with a dataset of observations-based meteorological forcing spanning multiple decades and covering the continental United States. Snowpack information by itself obviously contributes to the skill attained in streamflow prediction, particularly in the mountainous west. The isolated contribution of soil moisture information, however, is found to be large and significant in many areas, particularly in the west but also in the region surrounding the Great Lakes. The results are supported by a supplemental, observations-based analysis using (naturalized) March-July streamflow measurements covering much of the western U.S. Additional forecast experiments using start dates that span the year indicate a strong seasonality in the skill contributions; soil moisture information, for example, contributes to skill at much longer leads for forecasts issued in winter than for those issued in summer.
NASA Astrophysics Data System (ADS)
Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi
2018-04-01
Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict the flavonoid content of green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining a clustering concept with partial least squares (PLS) to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were used to group the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS), and full-spectrum PLS models were investigated and the results were compared. The results showed that CL-PLS gave the best flavonoid predictions from synchronous fluorescence spectra.
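A minimal sketch of the CL-PLS idea, assuming scikit-learn's KMeans and PLSRegression on synthetic spectra: cluster the wavelengths, fit a sub-PLS model per cluster, and rank the clusters by cross-validated performance.

    # Sketch: cluster wavelengths, fit a PLS model per cluster, rank clusters.
    # Spectra and the response are synthetic placeholders.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 200))                            # 80 spectra x 200 wavelengths
    y = X[:, 40:60].mean(axis=1) + rng.normal(0, 0.05, 80)    # flavonoid content

    # Cluster wavelengths by the similarity of their profiles across samples.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X.T)

    scores = []
    for k in range(5):
        cols = np.where(labels == k)[0]
        r2 = cross_val_score(PLSRegression(n_components=2), X[:, cols], y, cv=5).mean()
        scores.append((r2, k))
    print("clusters ranked by cross-validated R^2:", sorted(scores, reverse=True))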
Neuroimaging-based biomarkers in psychiatry: clinical opportunities of a paradigm shift.
Fu, Cynthia H Y; Costafreda, Sergi G
2013-09-01
Neuroimaging research has substantiated the functional and structural abnormalities underlying psychiatric disorders but has, thus far, failed to have a significant impact on clinical practice. Recently, neuroimaging-based diagnoses and clinical predictions derived from machine learning analysis have shown significant potential for clinical translation. This review introduces the key concepts of this approach, including how the multivariate integration of patterns of brain abnormalities is a crucial component. We survey recent findings that have potential application for diagnosis, in particular early and differential diagnoses in Alzheimer disease and schizophrenia, and the prediction of clinical response to treatment in depression. We discuss the specific clinical opportunities and the challenges for developing biomarkers for psychiatry in the absence of a diagnostic gold standard. We propose that longitudinal outcomes, such as early diagnosis and prediction of treatment response, offer definite opportunities for progress. We propose that efforts should be directed toward clinically challenging predictions in which neuroimaging may have added value, compared with the existing standard assessment. We conclude that diagnostic and prognostic biomarkers will be developed through the joint application of expert psychiatric knowledge in addition to advanced methods of analysis.
Schneider, Markus; Rosam, Mathias; Glaser, Manuel; Patronov, Atanas; Shah, Harpreet; Back, Katrin Christiane; Daake, Marina Angelika; Buchner, Johannes; Antes, Iris
2016-10-01
Substrate binding to Hsp70 chaperones is involved in many biological processes, and the identification of potential substrates is important for a comprehensive understanding of these events. We present a multi-scale pipeline for an accurate, yet efficient prediction of peptides binding to the Hsp70 chaperone BiP by combining sequence-based prediction with molecular docking and MMPBSA calculations. First, we measured the binding of 15mer peptides from known substrate proteins of BiP by peptide array (PA) experiments and performed an accuracy assessment of the PA data by fluorescence anisotropy studies. Several sequence-based prediction models were fitted using this and other peptide binding data. A structure-based position-specific scoring matrix (SB-PSSM) derived solely from structural modeling data forms the core of all models. The matrix elements are based on a combination of binding energy estimations, molecular dynamics simulations, and analysis of the BiP binding site, which led to new insights into the peptide binding specificities of the chaperone. Using this SB-PSSM, peptide binders could be predicted with high selectivity even without training of the model on experimental data. Additional training further increased the prediction accuracies. Subsequent molecular docking (DynaDock) and MMGBSA/MMPBSA-based binding affinity estimations for predicted binders allowed the identification of the correct binding mode of the peptides as well as the calculation of nearly quantitative binding affinities. The general concept behind the developed multi-scale pipeline can readily be applied to other protein-peptide complexes with linearly bound peptides, for which sufficient experimental binding data for the training of classical sequence-based prediction models is not available. Proteins 2016; 84:1390-1407. © 2016 Wiley Periodicals, Inc.
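A minimal sketch of scoring peptides with a position-specific scoring matrix, the mechanism at the core of the SB-PSSM model; the 7-residue window, random matrix, and substrate sequence are illustrative placeholders (the study itself used 15mer peptides and structure-derived matrix elements).

    # Sketch: slide a window over a protein and score each candidate peptide by
    # summing per-position, per-residue matrix entries. All values are random.
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"
    rng = np.random.default_rng(0)
    pssm = rng.normal(size=(7, 20))              # 7 positions x 20 amino acids

    def score(peptide):
        """Sum the matrix entry for each residue at its position."""
        return sum(pssm[i, AA.index(aa)] for i, aa in enumerate(peptide))

    window = 7
    protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical substrate sequence
    best = max((protein[i:i + window] for i in range(len(protein) - window + 1)), key=score)
    print("top-scoring candidate binder:", best, round(score(best), 2))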
RNAstructure: software for RNA secondary structure prediction and analysis.
Reuter, Jessica S; Mathews, David H
2010-03-15
To understand an RNA sequence's mechanism of action, the structure must be known. Furthermore, target RNA structure is an important consideration in the design of small interfering RNAs and antisense DNA oligonucleotides. RNA secondary structure prediction, using thermodynamics, can be used to develop hypotheses about the structure of an RNA sequence. RNAstructure is a software package for RNA secondary structure prediction and analysis. It uses thermodynamics and utilizes the most recent set of nearest neighbor parameters from the Turner group. It includes methods for secondary structure prediction (using several algorithms), prediction of base pair probabilities, bimolecular structure prediction, and prediction of a structure common to two sequences. This contribution describes new extensions to the package, including a library of C++ classes for incorporation into other programs, a user-friendly graphical user interface written in JAVA, and new Unix-style text interfaces. The original graphical user interface for Microsoft Windows is still maintained. The extensions to RNAstructure serve to make RNA secondary structure prediction user-friendly. The package is available for download from the Mathews lab homepage at http://rna.urmc.rochester.edu/RNAstructure.html.
Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent
2014-01-01
Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (Vsat and Pbase, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting Vsat and Pbase with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based Vsat and Pbase estimates were able to predict heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833
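A minimal sketch of the marker-to-parameter step, assuming a plain multiple linear regression from allele scores to optimized Vsat values; genotypes and effect sizes are synthetic.

    # Sketch: regress an ecophysiological model parameter (e.g., Vsat) on allele
    # scores at associated markers, then predict it for an untested genotype.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    markers = rng.integers(0, 3, size=(210, 11))                      # 210 genotypes x 11 markers
    vsat = markers @ rng.normal(0, 1, 11) + rng.normal(0, 0.5, 210)   # optimized Vsat values

    fit = LinearRegression().fit(markers, vsat)
    print("variance explained (R^2):", round(fit.score(markers, vsat), 2))
    new_genotype = rng.integers(0, 3, size=(1, 11))
    print("predicted Vsat for an untested genotype:", fit.predict(new_genotype)[0])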
Derivation of a target concentration of Pb in soil based on elevation of adult blood pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stern, A.H.
1996-04-01
The increase in systolic blood pressure in males appears to be the most sensitive adult endpoint appropriate for deriving a health risk-based target level of lead (Pb) in soil. Because the response of blood pressure to blood Pb concentration (PbB) has no apparent threshold, traditional approaches based on the application of a Reference Dose (RfD) are not applicable. An alternative approach is presented based on a model which predicts the population shift in systolic blood pressure from ingestion of Pb-contaminated soil as a simultaneous function of exposure to Pb in soil, the baseline distribution of blood Pb concentration in the population, and the baseline distribution of systolic pressure in the population. This model is analyzed using Monte Carlo analysis to predict the population distribution of systolic pressure resulting from Pb exposure. Based on this analysis, it is predicted that for adult males 18-65 years old, exposure to 1000 ppm Pb in soil will result in an increase of approximately 1 mm Hg systolic pressure, an increase in the incidence of systolic hypertension (i.e., systolic pressure >140 mm Hg) of approximately 1%, and an increase in PbB of 1-3 µg/dl. Based on the proposition that these adverse effects can be considered de minimis, 1000 ppm Pb in soil is proposed as a target soil concentration for adult exposure. Available data do not appear to be adequate to predict the newborn PbB level which would result from exposure to this soil level during pregnancy. 36 refs., 6 figs.
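A minimal sketch of the Monte Carlo idea: sample baseline blood Pb and systolic pressure distributions, shift blood Pb by an assumed soil uptake slope, and read off the population shift in pressure and hypertension incidence. All distributions and slopes are illustrative assumptions, not the paper's fitted values.

    # Sketch: Monte Carlo estimate of the population systolic pressure shift from
    # soil Pb exposure. All parameters below are assumed placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    baseline_pbb = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)   # µg/dl
    baseline_sbp = rng.normal(120, 12, size=n)                          # mm Hg

    soil_pb = 1000                   # ppm
    pbb_uptake = soil_pb * 0.002     # assumed soil-to-blood slope, µg/dl per ppm
    bp_slope = 1.4                   # assumed mm Hg per doubling of PbB

    exposed_pbb = baseline_pbb + pbb_uptake
    exposed_sbp = baseline_sbp + bp_slope * np.log2(exposed_pbb / baseline_pbb)

    print("mean SBP shift (mm Hg):", round((exposed_sbp - baseline_sbp).mean(), 2))
    print("added hypertension incidence (%):",
          round(100 * ((exposed_sbp > 140).mean() - (baseline_sbp > 140).mean()), 2))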
Deterministic Multiaxial Creep and Creep Rupture Enhancements for CARES/Creep Integrated Design Code
NASA Technical Reports Server (NTRS)
Jadaan, Osama M.
1998-01-01
High temperature and long duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor program to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of this methodology and the CARES/Creep program.
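As an illustration of the CDM idea, the sketch below integrates a Kachanov-Rabotnov-type damage evolution law, dω/dt = A·σ^χ/(1-ω)^φ, to rupture for a constant uniaxial stress; the material constants are assumed placeholders, not CARES/Creep values.

    # Sketch: integrate a Kachanov-Rabotnov damage law until the damage variable
    # approaches 1 (creep rupture). Constants A, chi, phi are illustrative.
    from scipy.integrate import solve_ivp

    A, chi, phi = 1e-9, 4.0, 3.0   # assumed material constants
    sigma = 100.0                  # applied stress (MPa)

    def damage_rate(t, w):
        return A * sigma**chi / (1.0 - w)**phi

    rupture = lambda t, w: w[0] - 0.99   # treat damage = 0.99 as rupture
    rupture.terminal = True
    sol = solve_ivp(damage_rate, [0, 50], [0.0], events=rupture)
    print("predicted rupture time (arbitrary units):", round(sol.t_events[0][0], 2))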
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
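A minimal sketch of the fit, assuming a Weibull-form saccharification curve y(t) = Ymax(1 - exp(-(t/λ)^n)) and scipy's curve_fit on synthetic time-course data; λ is the characteristic time discussed above.

    # Sketch: fit a Weibull-form saccharification curve to time-course data.
    # The data points below are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull(t, ymax, lam, n):
        return ymax * (1.0 - np.exp(-(t / lam) ** n))

    t = np.array([2, 4, 8, 12, 24, 48, 72.0])     # hours
    y = np.array([5, 11, 21, 28, 41, 52, 56.0])   # glucose yield (%)

    (ymax, lam, n), _ = curve_fit(weibull, t, y, p0=[60, 20, 1])
    print(f"Ymax={ymax:.1f}%, characteristic time lambda={lam:.1f} h, shape n={n:.2f}")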
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
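A minimal sketch of a stack-based reliability computation over a small dependency graph, in the spirit of the CPDG algorithm; the graph, transition probabilities, and component reliabilities are illustrative, and the published algorithm's details may differ.

    # Sketch: enumerate execution paths through a component probabilistic
    # dependency graph with an explicit stack. Each edge (u -> v, p) is taken
    # with probability p; each visited node multiplies in its reliability.
    graph = {"start": [("A", 1.0)],
             "A": [("B", 0.7), ("C", 0.3)],
             "B": [("end", 1.0)],
             "C": [("B", 0.5), ("end", 0.5)]}
    reliability = {"start": 1.0, "A": 0.99, "B": 0.95, "C": 0.90, "end": 1.0}

    total = 0.0
    stack = [("start", reliability["start"])]   # (node, probability-weighted reliability)
    while stack:
        node, acc = stack.pop()
        if node == "end":
            total += acc
            continue
        for nxt, p in graph[node]:
            stack.append((nxt, acc * p * reliability[nxt]))
    print("predicted system reliability:", round(total, 4))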
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results of applying Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for predicting future sunspot activity are presented. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for predicting the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixations in view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
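A minimal PyTorch sketch of a skip-layer design: side outputs from several convolutional stages are upsampled and fused into one saliency map, with each side map available for deeply supervised losses. The layer sizes are illustrative, not the paper's architecture.

    # Sketch: a tiny skip-layer saliency network with multi-level side outputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SkipLayerSaliency(nn.Module):
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.stage3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
            # One 1x1 "side output" per stage predicts a saliency map.
            self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])

        def forward(self, x):
            h, w = x.shape[2:]
            feats = []
            for stage in (self.stage1, self.stage2, self.stage3):
                x = stage(x)
                feats.append(x)
            # Upsample each side prediction to input size; deep supervision would
            # attach a loss to every side map, and the fused map is the output.
            maps = [F.interpolate(s(f), size=(h, w), mode="bilinear", align_corners=False)
                    for s, f in zip(self.side, feats)]
            return torch.sigmoid(torch.stack(maps).mean(0)), maps

    model = SkipLayerSaliency()
    saliency, side_maps = model(torch.randn(1, 3, 64, 64))
    print(saliency.shape)  # torch.Size([1, 1, 64, 64])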
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties are usually assessed in the modeling community against measured values, when available. Of equal importance, however, is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and its implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among the different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE, 0.79 pH units, came from spatial aggregation (SSURGO vs kriging), while measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition follows the transaction distance, i.e., from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into a cost of roughly $111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a $555-$1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.
Koul, Atesh; Becchio, Cristina; Cavallo, Andrea
2017-12-12
Recent years have seen increased interest in machine learning-based predictive methods for analyzing quantitative behavioral data in experimental psychology. While these methods can achieve relatively greater sensitivity compared to conventional univariate techniques, they still lack an established and accessible implementation. The aim of the current work was to build an open-source R toolbox - "PredPsych" - that could make these methods readily available to all psychologists. PredPsych is a user-friendly R toolbox based on machine-learning predictive algorithms. In this paper, we present the framework of PredPsych via the analysis of a recently published multiple-subject motion capture dataset. In addition, we discuss examples of possible research questions that can be addressed with the machine-learning algorithms implemented in PredPsych but cannot easily be addressed with univariate statistical analysis. We anticipate that PredPsych will be of use to researchers with limited programming experience, not only in the field of psychology but also in clinical neuroscience, enabling computational assessment of putative bio-behavioral markers for both prognosis and diagnosis.
Understanding reproducibility of human IVF traits to predict next IVF cycle outcome.
Wu, Bin; Shi, Juanzi; Zhao, Wanqiu; Lu, Suzhen; Silva, Marta; Gelety, Timothy J
2014-10-01
Evaluating the failed IVF cycle often provides useful prognostic information. Before undergoing another attempt, patients experiencing an unsuccessful IVF cycle frequently request information about the probability of future success. Here, we introduced the concept of reproducibility and formulae to predict the next IVF cycle outcome. The experimental design was based on a retrospective review of IVF cycle data from 2006 to 2013 in two different IVF centers and statistical analysis. The reproducibility coefficients (r) of IVF traits, including number of oocytes retrieved, oocyte maturity, fertilization, embryo quality, and pregnancy, were estimated using the intraclass correlation coefficient between repeated IVF cycle measurements for the same patient by variance component analysis. Formulae were designed to predict the next IVF cycle outcome. The number of oocytes retrieved and the fertilization rate had the highest reproducibility coefficients (r = 0.81 ~ 0.84), indicating a very close correlation between the first retrieval cycle and subsequent IVF cycles. Oocyte maturity and number of top-quality embryos had middle-level reproducibility (r = 0.38 ~ 0.76), and pregnancy rate had relatively lower reproducibility (r = 0.23 ~ 0.27). Based on these parameters, the next outcome for these IVF traits might be accurately predicted by the designed formulae. The introduction of the concept of reproducibility to our human IVF program allows us to predict future IVF cycle outcomes. The traits of oocyte numbers retrieved, oocyte maturity, fertilization, and top-quality embryos had higher or middle reproducibility, which provides a basis for accurate prediction of future IVF outcomes. Based on this prediction, physicians may counsel their patients or change stimulation plans, and laboratory embryologists may improve their IVF techniques accordingly.
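A minimal sketch of a reproducibility coefficient computed as a one-way intraclass correlation from ANOVA variance components, with synthetic two-cycle data per patient.

    # Sketch: one-way random-effects ICC from repeated cycles per patient.
    # Data are synthetic placeholders.
    import numpy as np

    def icc_oneway(data):
        """data: patients x repeated cycles."""
        k = data.shape[1]
        grand = data.mean()
        ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (data.shape[0] - 1)
        ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (
            data.shape[0] * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(0)
    patient_effect = rng.normal(10, 4, size=(50, 1))            # stable per-patient level
    oocytes = patient_effect + rng.normal(0, 2, size=(50, 2))   # two cycles per patient
    print("reproducibility (ICC):", round(icc_oneway(oocytes), 2))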
Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning
ERIC Educational Resources Information Center
Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus
2008-01-01
When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Paul R., E-mail: pwest@stemina.co; Weir, April M.; Smith, Alan M.
2010-08-15
Teratogens, substances that may cause fetal abnormalities during development, are responsible for a significant number of birth defects. Animal models used to predict teratogenicity often do not faithfully correlate to human response. Here, we seek to develop a more predictive developmental toxicity model based on an in vitro method that utilizes both human embryonic stem (hES) cells and metabolomics to discover biomarkers of developmental toxicity. We developed a method where hES cells were dosed with several drugs of known teratogenicity, then LC-MS analysis was performed to measure changes in abundance levels of small molecules in response to drug dosing. Statistical analysis was employed to select for specific mass features that can provide a prediction of the developmental toxicity of a substance. These molecules can serve as biomarkers of developmental toxicity, leading to better prediction of teratogenicity. In particular, our work shows a correlation between teratogenicity and changes of greater than 10% in the ratio of arginine to asymmetric dimethylarginine levels. In addition, this study resulted in the establishment of a predictive model based on the most informative mass features. This model was subsequently tested for its predictive accuracy in two blinded studies using eight drugs of known teratogenicity, where it correctly predicted the teratogenicity for seven of the eight drugs. Thus, our initial data shows that this platform is a robust alternative to animal and other in vitro models for the prediction of the developmental toxicity of chemicals that may also provide invaluable information about the underlying biochemical pathways.
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method employing a knowledge base of human expertise comprising a reliability model analysis implemented for diagnostic routines is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.
A simple next-best alternative to seasonal predictions in Europe
NASA Astrophysics Data System (ADS)
Buontempo, Carlo; De Felice, Matteo
2016-04-01
In order to build a climate-proof society, we need to learn how best to use the climate information we have. Having spent time and resources developing complex numerical models, we have often been blinded to the value some of this information really has in the eyes of a decision maker. An effective way to assess this is to compare the quality of the forecast (and its cost) with the quality of the forecast from a prediction system based on simpler assumptions (and thus cheaper to run). Such a practice is common in marketing analysis, where it is often referred to as the next-best alternative. To facilitate such an analysis, climate service providers should always provide a set of skill scores alongside the predictions. These are usually based on climatological means, anomaly persistence, or, more recently, multiple linear regressions. We here present an equally simple benchmark based on a Markov chain process locally trained at a monthly or seasonal time-scale. We demonstrate that in spite of its simplicity the model easily outperforms not only the standard benchmarks but also most of the seasonal prediction systems, at least in Europe. We suggest that a benchmark of this kind could represent a useful next-best alternative for a number of users.
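A minimal sketch of such a benchmark: discretize a monthly anomaly series into terciles, fit a first-order transition matrix, and forecast the next state. The anomaly series is a synthetic placeholder.

    # Sketch: a first-order Markov chain benchmark on terciled monthly anomalies.
    import numpy as np

    rng = np.random.default_rng(0)
    anomalies = rng.normal(size=480)                                       # 40 years of monthly anomalies
    terciles = np.digitize(anomalies, np.quantile(anomalies, [1/3, 2/3]))  # states 0, 1, 2

    counts = np.zeros((3, 3))
    for a, b in zip(terciles[:-1], terciles[1:]):
        counts[a, b] += 1
    transition = counts / counts.sum(axis=1, keepdims=True)

    state = terciles[-1]
    print("P(next month below/near/above normal):", transition[state].round(2))
    print("forecast tercile:", transition[state].argmax())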
Ghanem, Eman; Hopfer, Helene; Navarro, Andrea; Ritzer, Maxwell S; Mahmood, Lina; Fredell, Morgan; Cubley, Ashley; Bolen, Jessica; Fattah, Rabia; Teasdale, Katherine; Lieu, Linh; Chua, Tedmund; Marini, Federico; Heymann, Hildegarde; Anslyn, Eric V
2015-05-20
Differential sensing using synthetic receptors as mimics of the mammalian senses of taste and smell is a powerful approach for the analysis of complex mixtures. Herein, we report on the effectiveness of a cross-reactive, supramolecular, peptide-based sensing array in differentiating and predicting the composition of red wine blends. Fifteen blends of Cabernet Sauvignon, Merlot, and Cabernet Franc, in addition to the mono varietals, were used in this investigation. Linear Discriminant Analysis (LDA) showed a clear differentiation of blends based on tannin concentration and composition, where certain mono varietals like Cabernet Sauvignon seemed to contribute less to the overall characteristics of the blend. Partial Least Squares (PLS) regression and cross-validation were used to build a predictive model for the responses of the receptors to eleven binary blends and the three mono varietals. The optimized model was later used to predict the percentage of each mono varietal in an independent test set composed of four tri-blends, with a 15% average error. A PLS regression model using the mouth-feel and taste descriptive sensory attributes of the wine blends revealed a strong correlation of the receptors to perceived astringency, which is indicative of selective binding to polyphenols in wine.
Kaur, Parminder; Kiselar, Janna; Yang, Sichun; Chance, Mark R.
2015-01-01
Hydroxyl radical footprinting-based MS for protein structure assessment has the goal of understanding ligand-induced conformational changes and macromolecular interactions, for example, protein tertiary and quaternary structure, but the structural resolution provided by typical peptide-level quantification is limiting. In this work, we present experimental strategies using tandem-MS fragmentation to increase the spatial resolution of the technique to the single-residue level to provide a high-precision tool for molecular biophysics research. Overall, in this study we demonstrated an eightfold increase in structural resolution compared with peptide-level assessments. In addition, to provide a quantitative analysis of residue-based solvent accessibility and protein topography as a basis for high-resolution structure prediction, we illustrate strategies of data transformation using the relative reactivity of side chains as a normalization strategy and predict side-chain surface area from the footprinting data. We tested the methods by examination of Ca2+-calmodulin, showing highly significant correlations between surface area and side-chain contact predictions for individual side chains and the crystal structure. Tandem ion-based hydroxyl radical footprinting-MS provides quantitative high-resolution protein topology information in solution that can fill existing gaps in structure determination for large proteins and macromolecular complexes. PMID:25687570
Wagner, Christian; Pan, Yuzhuo; Hsu, Vicky; Grillo, Joseph A; Zhang, Lei; Reynolds, Kellie S; Sinha, Vikram; Zhao, Ping
2015-01-01
The US Food and Drug Administration (FDA) has seen a recent increase in the application of physiologically based pharmacokinetic (PBPK) modeling towards assessing the potential of drug-drug interactions (DDI) in clinically relevant scenarios. To continue our assessment of such approaches, we evaluated the predictive performance of PBPK modeling in predicting cytochrome P450 (CYP)-mediated DDI. This evaluation was based on 15 substrate PBPK models submitted by nine sponsors between 2009 and 2013. For these 15 models, a total of 26 DDI studies (cases) with various CYP inhibitors were available. Sponsors developed the PBPK models, reportedly without considering clinical DDI data. Inhibitor models were either developed by sponsors or provided by PBPK software developers and applied with minimal or no modification. The metric for assessing predictive performance of the sponsors' PBPK approach was the R_predicted/observed value (R_predicted/observed = [predicted mean exposure ratio]/[observed mean exposure ratio], with the exposure ratio defined as [Cmax (maximum plasma concentration) or AUC (area under the plasma concentration-time curve) in the presence of CYP inhibition]/[Cmax or AUC in the absence of CYP inhibition]). In 81% (21/26) and 77% (20/26) of cases, respectively, the R_predicted/observed values for AUC and Cmax ratios were within a pre-defined threshold of 1.25-fold of the observed data. For all cases, the R_predicted/observed values for AUC and Cmax were within a 2-fold range. These results suggest that, based on the submissions to the FDA to date, there is a high degree of concordance between PBPK-predicted and observed effects of CYP inhibition, especially CYP3A-based, on the exposure of drug substrates.
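A minimal sketch of the metric for a single case, with hypothetical AUC values.

    # Sketch: compute R_predicted/observed for one DDI case and check it against
    # the 1.25-fold and 2-fold acceptance ranges. AUC values are hypothetical.
    def exposure_ratio(auc_with_inhibitor, auc_without):
        return auc_with_inhibitor / auc_without

    predicted = exposure_ratio(auc_with_inhibitor=42.0, auc_without=10.0)  # PBPK-simulated
    observed = exposure_ratio(auc_with_inhibitor=38.0, auc_without=10.0)   # clinical study

    r = predicted / observed
    print(f"R_predicted/observed = {r:.2f}")
    print("within 1.25-fold:", 1 / 1.25 <= r <= 1.25)
    print("within 2-fold:   ", 0.5 <= r <= 2.0)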
AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku
2014-05-27
The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multinomial logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.
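A minimal sketch of the final ensemble, assuming scikit-learn; there is no voting feature intervals classifier in scikit-learn, so Gaussian naive Bayes stands in alongside logistic regression, and the data are synthetic.

    # Sketch: a soft-voting ensemble that averages the probabilities of two
    # classifiers, scored by cross-validated c-statistic (ROC AUC).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2787, n_features=42, random_state=0)  # placeholder
    ensemble = VotingClassifier(
        estimators=[("logreg", LogisticRegression(max_iter=1000)), ("nb", GaussianNB())],
        voting="soft")  # average predicted probabilities
    auc = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated c-statistic: {auc:.3f}")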
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2016-03-01
One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk/low-risk classification and survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that small sample sizes and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses can actually be completely different diseases at the genotype and molecular level; thus, the utility of the AFT model for survival time prediction is limited when such biological differences between the diseases have not been previously identified. To overcome these two main dilemmas, we propose a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and survival time of patients. Moreover, we adopt the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in available training samples from censored data; 2) high capability for identifying survival risk classes of patients with the Cox model; 3) high predictive accuracy for patients' survival time with the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is an appropriate tool for survival analysis in clinical cancer research.
Tan, Bruce K; Lu, Guanning; Kwasny, Mary J; Hsueh, Wayne D; Shintani-Smith, Stephanie; Conley, David B; Chandra, Rakesh K; Kern, Robert C; Leung, Randy
2013-11-01
Current symptom criteria poorly predict a diagnosis of chronic rhinosinusitis (CRS), resulting in excessive treatment of patients with presumed CRS. The objective of this study was to analyze the positive predictive value of individual symptoms, or symptoms in combination, in patients with CRS symptoms, and to examine the costs of the subsequent diagnostic algorithm using a decision tree-based cost analysis. We analyzed previously collected patient-reported symptoms from a cross-sectional study of patients who had received a computed tomography (CT) scan of their sinuses at a tertiary care otolaryngology clinic for evaluation of CRS symptoms to calculate the positive predictive value of individual symptoms. Classification and regression tree (CART) analysis then optimized combinations of symptoms and thresholds to identify CRS patients. The calculated positive predictive values were applied to a previously developed decision tree that compared an upfront CT (uCT) algorithm against an empiric medical therapy (EMT) algorithm, with further analysis considering the availability of point-of-care (POC) imaging. The positive predictive value of individual symptoms ranged from 0.21 for patients reporting forehead pain to 0.69 for patients reporting hyposmia. The CART model constructed a dichotomous model based on forehead pain, maxillary pain, hyposmia, nasal discharge, and facial pain (C-statistic 0.83). If POC CT were available, median costs ($64-$415) favored using the upfront CT for all individual symptoms. If POC CT was unavailable, median costs favored uCT for most symptoms except intercanthal pain (-$15), hyposmia (-$100), and discolored nasal discharge (-$24), although these symptoms became equivocal on cost sensitivity analysis. The three-tiered CART model could subcategorize patients into tiers where uCT was always favorable (median costs: $332-$504) and others for which EMT was always favorable (median costs -$121 to -$275). The uCT algorithm was always more costly if the nasal endoscopy was positive. Among patients with classic CRS symptoms, the frequency of individual symptoms varied the likelihood of a CRS diagnosis only marginally. Only hyposmia, the absence of facial pain, and discolored discharge sufficiently increased the likelihood of diagnosis to potentially make EMT less costly. The development of an evidence-based, multisymptom risk stratification model could substantially affect the management costs of the subsequent diagnostic algorithm. © 2013 ARS-AAOA, LLC.
Reka, Ajaya Kumar; Chen, Guoan; Keshamouni, Venkateshwar G.
2014-01-01
In cancer cells, the process of epithelial–mesenchymal transition (EMT) confers migratory and invasive capacity, resistance to apoptosis, drug resistance, evasion of host immune surveillance and tumor stem cell traits. Cells undergoing EMT may represent tumor cells with metastatic potential. Characterizing the EMT secretome may identify biomarkers to monitor EMT in tumor progression and provide a prognostic signature to predict patient survival. Utilizing a transforming growth factor-β-induced cell culture model of EMT, we quantitatively profiled differentially secreted proteins by GeLC-tandem mass spectrometry. Integrating with the corresponding transcriptome, we derived an EMT-associated secretory phenotype (EASP) comprising proteins that were differentially upregulated at both the protein and mRNA levels. Four independent primary tumor-derived gene expression data sets of lung cancers were used for survival analysis by the random survival forests (RSF) method. Analysis of 97-gene EASP expression in human lung adenocarcinoma tumors revealed strong positive correlations with lymph node metastasis, advanced tumor stage and histological grade. RSF analysis built on a training set (n = 442), including age, sex and stage as variables, stratified three independent lung cancer data sets into low-, medium- and high-risk groups with significant differences in overall survival. We further refined EASP to a 20-gene signature (rEASP) based on variable importance scores from the RSF analysis. Similar to EASP, rEASP predicted survival of both adenocarcinoma and squamous carcinoma patients. More importantly, it predicted survival in early-stage cancers. These results demonstrate that integrative analysis of the critical biological process of EMT provides mechanism-based and clinically relevant biomarkers with significant prognostic value. PMID:24510113
Genders, Tessa S S; Steyerberg, Ewout W; Nieman, Koen; Galema, Tjebbe W; Mollet, Nico R; de Feyter, Pim J; Krestin, Gabriel P; Alkadhi, Hatem; Leschka, Sebastian; Desbiolles, Lotus; Meijs, Matthijs F L; Cramer, Maarten J; Knuuti, Juhani; Kajander, Sami; Bogaert, Jan; Goetschalckx, Kaatje; Cademartiri, Filippo; Maffei, Erica; Martini, Chiara; Seitun, Sara; Aldrovandi, Annachiara; Wildermuth, Simon; Stinn, Björn; Fornaro, Jürgen; Feuchtner, Gudrun; De Zordo, Tobias; Auer, Thomas; Plank, Fabian; Friedrich, Guy; Pugliese, Francesca; Petersen, Steffen E; Davies, L Ceri; Schoepf, U Joseph; Rowe, Garrett W; van Mieghem, Carlos A G; van Driessche, Luc; Sinitsyn, Valentin; Gopalan, Deepa; Nikolaou, Konstantin; Bamberg, Fabian; Cury, Ricardo C; Battle, Juan; Maurovich-Horvat, Pál; Bartykowszki, Andrea; Merkely, Bela; Becker, Dávid; Hadamitzky, Martin; Hausleiter, Jörg; Dewey, Marc; Zimmermann, Elke; Laule, Michael
2012-01-01
Objectives To develop prediction models that better estimate the pretest probability of coronary artery disease in low prevalence populations. Design Retrospective pooled analysis of individual patient data. Setting 18 hospitals in Europe and the United States. Participants Patients with stable chest pain without evidence of previous coronary artery disease, if they were referred for computed tomography (CT)-based coronary angiography or catheter-based coronary angiography (designated as low and high prevalence settings, respectively). Main outcome measures Obstructive coronary artery disease (≥50% diameter stenosis in at least one vessel found on catheter-based coronary angiography). Multiple imputation accounted for missing predictors and outcomes, exploiting the strong correlation between the two angiography procedures. Predictive models included a basic model (age, sex, symptoms, and setting), a clinical model (basic model factors and diabetes, hypertension, dyslipidaemia, and smoking), and an extended model (clinical model factors and the CT-based coronary calcium score). We assessed discrimination (c statistic), calibration, and continuous net reclassification improvement by cross validation for the four largest low prevalence datasets separately and the smaller remaining low prevalence datasets combined. Results We included 5677 patients (3283 men, 2394 women), of whom 1634 had obstructive coronary artery disease found on catheter-based coronary angiography. All potential predictors were significantly associated with the presence of disease in univariable and multivariable analyses. The clinical model improved the prediction, compared with the basic model (cross validated c statistic improvement from 0.77 to 0.79, net reclassification improvement 35%); the coronary calcium score in the extended model was a major predictor (0.79 to 0.88, 102%). Calibration for low prevalence datasets was satisfactory. Conclusions Updated prediction models including age, sex, symptoms, and cardiovascular risk factors allow for accurate estimation of the pretest probability of coronary artery disease in low prevalence populations. Addition of coronary calcium scores to the prediction models improves the estimates. PMID:22692650
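A minimal sketch of the basic-versus-clinical model comparison, on an entirely synthetic cohort: predictor names follow the abstract, but the prevalence and coefficients are invented, and the cross-validated c statistic is computed in the same spirit as the study's evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000
age = rng.normal(58, 10, n)
sex = rng.integers(0, 2, n)              # 1 = male
typical = rng.integers(0, 2, n)          # typical chest pain symptoms
diabetes, htn, dyslip, smoking = (rng.integers(0, 2, n) for _ in range(4))
# Synthetic disease labels from an assumed logistic data-generating process.
logit = -6 + 0.06 * age + 0.8 * sex + 1.2 * typical + 0.4 * diabetes
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

basic = np.column_stack([age, sex, typical])
clinical = np.column_stack([age, sex, typical, diabetes, htn, dyslip, smoking])
for name, X in [("basic", basic), ("clinical", clinical)]:
    p = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=10, method="predict_proba")[:, 1]
    print(name, "model c statistic:", round(roc_auc_score(y, p), 2))
```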
Predicting preference-based SF-6D index scores from the SF-8 health survey.
Wang, P; Fu, A Z; Wee, H L; Lee, J; Tai, E S; Thumboo, J; Luo, N
2013-09-01
To develop and test functions for predicting the preference-based SF-6D index scores from the SF-8 health survey. This study was a secondary analysis of data collected in a population health survey in which respondents (n = 7,529) completed both the SF-36 and the SF-8 questionnaires. We examined seven ordinary least-squares estimators for their performance in predicting SF-6D scores from the SF-8 at both the individual and the group levels. In general, all functions performed similarly well in predicting SF-6D scores, and predictions at the group level were better than predictions at the individual level. At the individual level, 42.5-51.5% of prediction errors were smaller than the minimally important difference (MID) of the SF-6D scores, depending on the function specifications, while almost all prediction errors of the tested functions were smaller than the MID of the SF-6D at the group level. At both the individual and group levels, the tested functions predicted lower than actual scores at the higher end of the SF-6D scale. This study is the first to develop functions for generating preference-based SF-6D index scores from the SF-8 health survey. Further research is needed to evaluate the performance and validity of the prediction functions.
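The mapping itself is plain OLS, so a minimal sketch is straightforward. The SF-8 responses and SF-6D utilities below are synthetic, and the MID of 0.041 is a commonly cited value for the SF-6D used here only as an assumed threshold.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 7529
X = rng.integers(1, 6, size=(n, 8)).astype(float)  # 8 SF-8 item responses (1-5, assumed)
# Synthetic "true" utilities on roughly the SF-6D scale (0.30-1.00).
y = np.clip(1.05 - 0.15 * X.mean(axis=1) + rng.normal(0, 0.05, n), 0.30, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
ols = LinearRegression().fit(X_tr, y_tr)
err = np.abs(ols.predict(X_te) - y_te)
MID = 0.041  # assumed minimally important difference for the SF-6D
print("share of individual-level errors within MID:", round(float((err < MID).mean()), 3))
print("group-level mean error:", round(float(abs(ols.predict(X_te).mean() - y_te.mean())), 4))
```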
A Study on Re-entry Predictions of Uncontrolled Space Objects for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Choi, Eun-Jung; Cho, Sungki; Lee, Deok-Jin; Kim, Siwoo; Jo, Jung Hyun
2017-12-01
The key risk analysis technologies for the re-entry of space objects into Earth’s atmosphere are divided into four categories: cataloguing and databases of the re-entry of space objects, lifetime and re-entry trajectory predictions, break-up models after re-entry and multiple debris distribution predictions, and ground impact probability models. In this study, we focused on re-entry prediction, including orbital lifetime assessments, for space situational awareness systems. Re-entry predictions are very difficult and are affected by various sources of uncertainty. In particular, during uncontrolled re-entry, large spacecraft may break into several pieces of debris, and the surviving fragments can be a significant hazard for persons and properties on the ground. In recent years, specific methods and procedures have been developed to provide clear information for predicting and analyzing the re-entry of space objects and for ground-risk assessments. Representative tools include object reentry survival analysis tool (ORSAT) and debris assessment software (DAS) developed by National Aeronautics and Space Administration (NASA), spacecraft atmospheric re-entry and aerothermal break-up (SCARAB) and debris risk assessment and mitigation analysis (DRAMA) developed by European Space Agency (ESA), and semi-analytic tool for end of life analysis (STELA) developed by Centre National d’Etudes Spatiales (CNES). In this study, various surveys of existing re-entry space objects are reviewed, and an efficient re-entry prediction technique is suggested based on STELA, the life-cycle analysis tool for satellites, and DRAMA, a re-entry analysis tool. To verify the proposed method, the re-entry of the Tiangong-1 Space Lab, which is expected to re-enter Earth’s atmosphere shortly, was simulated. Eventually, these results will provide a basis for space situational awareness risk analyses of the re-entry of space objects.
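None of the tools named above are reproduced here, but the physics of lifetime estimation can be hinted at with a deliberately crude sketch: a circular-orbit drag-decay integration with an assumed exponential atmosphere and ballistic parameter. Real re-entry predictions account for atmospheric variability, attitude, and break-up, none of which appear in this toy model.

```python
import numpy as np

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_E = 6371e3               # mean Earth radius, m
CD_A_OVER_M = 0.01         # ballistic parameter Cd*A/m, m^2/kg (assumed)

def rho(h):
    # Crude exponential atmosphere above 200 km (assumed 60 km scale height).
    return 2.5e-11 * np.exp(-(h - 200e3) / 60e3)   # kg/m^3

a = R_E + 300e3            # start from a 300 km circular orbit
t, dt = 0.0, 600.0         # integrate in 10-minute steps
while a - R_E > 120e3:     # treat 120 km altitude as the re-entry interface
    # King-Hele circular-orbit decay rate: da/dt = -sqrt(mu*a) * rho * Cd*A/m
    a -= np.sqrt(MU * a) * rho(a - R_E) * CD_A_OVER_M * dt
    t += dt
print(f"estimated orbital lifetime: {t / 86400:.1f} days")
```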
Analysis and Test Correlation of Proof of Concept Box for Blended Wing Body-Low Speed Vehicle
NASA Technical Reports Server (NTRS)
Spellman, Regina L.
2003-01-01
The Low Speed Vehicle (LSV) is a 14.2% scale remotely piloted vehicle of the revolutionary Blended Wing Body concept. The design of the LSV includes an all-composite airframe. Due to internal manufacturing capability restrictions, room temperature layups were necessary. An extensive materials testing and manufacturing process development effort was undertaken to establish a process that would achieve the high-modulus/low-weight properties required to meet the design requirements. The analysis process involved a loads development effort that incorporated aero loads to determine internal forces that could be applied to a traditional FEM of the vehicle and to conduct detailed component analyses. A new tool, Hypersizer, was added to the design process to address various composite failure modes and to optimize the skin panel thickness of the upper and lower skins for the vehicle. The analysis required an iterative approach as material properties were continually changing. As a part of the material characterization effort, test articles, including a proof of concept wing box and a full-scale wing, were fabricated. The proof of concept box was fabricated based on very preliminary material studies and tested in bending, torsion, and shear. The box was then tested to failure under shear. The proof of concept box was also analyzed using Nastran and Hypersizer. The results of both analyses were scaled to determine the predicted failure load. The test results were compared to both the Nastran and Hypersizer analytical predictions. The actual failure occurred at 899 lbs. The failure was predicted at 1167 lbs based on the Nastran analysis. The Hypersizer analysis predicted a lower failure load of 960 lbs. The Nastran analysis alone was not sufficient to predict the failure load because it does not identify local composite failure modes. This analysis has traditionally been done using closed form solutions. Although Hypersizer is typically used as an optimizer for the design process, the failure prediction was used to help gain acceptance and confidence in this new tool. The correlated models and process were to be used to analyze the full BWB-LSV airframe design. The analysis and correlation with test results of the proof of concept box is presented here, including the comparison of the Nastran and Hypersizer results.
Improved Prediction of Harmful Algal Blooms in Four Major South Korea's Rivers Using Deep Learning Models.
Lee, Sangmok; Lee, Donghyun
2018-06-24
Harmful algal blooms are an annual phenomenon that cause environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking; thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). However, existing physical prediction models have difficulties setting a clear coefficient indicating the relationship between each factor when predicting algal blooms, and many variable data sources are required for the analysis. These limitations are accompanied by high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model exhibits good performance for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one week) predictions by employing regression analysis and deep learning techniques on a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared, based on the root mean square error (RMSE), to those from ordinary least squares (OLS) regression analysis and to actual data. The LSTM model showed the highest prediction rate for harmful algal blooms, and all deep learning models outperformed the OLS regression analysis. Our results reveal the potential for predicting algal blooms using LSTM and deep learning.
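A minimal PyTorch sketch of the LSTM setup described above: a recurrent network regressing next-week chlorophyll-a from a window of past water-quality observations. The feature count, window length, and all data are illustrative assumptions, not the study's dataset.

```python
import torch
import torch.nn as nn

class ChlaLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):               # x: (batch, weeks, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # regress next week's chlorophyll-a

model = ChlaLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 12, 8)              # 12 past weeks of 8 water-quality variables
y = torch.randn(64, 1)                  # next-week chlorophyll-a (placeholder)
for _ in range(100):                    # training loop sketch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("train RMSE:", loss.sqrt().item())
```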
NASA Technical Reports Server (NTRS)
Kradinov, V.; Madenci, E.; Ambur, D. R.
2004-01-01
Although two-dimensional methods provide accurate predictions of contact stresses and bolt load distribution in bolted composite joints with multiple bolts, they fail to capture the effect of thickness on the strength prediction. Typically, the plies close to the interface of laminates are expected to be the most highly loaded, due to bolt deformation, and they are usually the first to fail. This study presents an analysis method to account for the variation of stresses in the thickness direction by augmenting a two-dimensional analysis with a one-dimensional through-the-thickness analysis. The two-dimensional in-plane solution method, based on the combined complex potential and variational formulation, satisfies the equilibrium equations exactly, and satisfies the boundary conditions and constraints by minimizing the total potential. Under general loading conditions, this method addresses multiple bolt configurations without requiring symmetry conditions while accounting for the contact phenomenon and the interaction among the bolts explicitly. The through-the-thickness analysis is based on a beam-on-elastic-foundation model. The bolt, represented as a short beam while accounting for bending and shear deformations, rests on springs, where the spring coefficients represent the resistance of the composite laminate to bolt deformation. The combined in-plane and through-the-thickness analysis produces the bolt/hole displacement in the thickness direction, as well as the stress state in each ply. The initial ply failure predicted by applying the average stress criterion is followed by a simple progressive failure. Application of the model is demonstrated by considering single- and double-lap joints of metal plates bolted to composite laminates.
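To illustrate the beam-on-elastic-foundation idealization, the sketch below evaluates Hetenyi's closed-form deflection of an infinite beam under a point load, standing in for the spring-supported bolt. The study's actual model is a short beam with bending and shear deformation, and every numerical value here is a placeholder.

```python
import numpy as np

E = 200e9               # bolt Young's modulus, Pa (steel, assumed)
d = 6e-3                # bolt diameter, m (assumed)
I = np.pi * d**4 / 64   # second moment of area of the bolt cross-section
k = 5e9                 # foundation modulus of the laminate, N/m^2 (assumed)
P = 2e3                 # transferred bolt load, N (assumed)

# Characteristic wavenumber of a beam on an elastic foundation.
beta = (k / (4 * E * I)) ** 0.25
x = np.linspace(0, 5e-3, 6)   # positions through the laminate thickness, m
# Infinite-beam point-load deflection: w = P*beta/(2k) * e^(-bx) (cos bx + sin bx)
w = P * beta / (2 * k) * np.exp(-beta * x) * (np.cos(beta * x) + np.sin(beta * x))
for xi, wi in zip(x, w):
    # The local foundation reaction k*w approximates the per-ply bearing load.
    print(f"x = {xi*1e3:4.1f} mm   w = {wi*1e6:6.2f} um   reaction = {k*wi/1e3:6.1f} N/mm")
```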
Tu, Huakang; Sun, Liping; Dong, Xiao; Gong, Yuehua; Xu, Qian; Jing, Jingjing; Bostick, Roberd M; Wu, Xifeng; Yuan, Yuan
2017-05-01
We aimed to assess a serological biopsy using five stomach-specific circulating biomarkers, namely pepsinogen I (PGI), PGII, the PGI/II ratio, anti-Helicobacter pylori (H. pylori) antibody, and gastrin-17 (G-17), for identifying high-risk individuals and predicting risk of developing gastric cancer (GC). Among 12,112 participants with prospective follow-up from an ongoing population-based screening program using both serology and gastroscopy in China, we conducted a multi-phase study involving a cross-sectional analysis, a follow-up analysis, and an integrative risk prediction modeling analysis. In the cross-sectional analysis, the five biomarkers (especially PGII, the PGI/II ratio, and H. pylori sero-positivity) were associated with the presence of precancerous gastric lesions or GC at enrollment. In the follow-up analysis, low PGI levels and PGI/II ratios were associated with higher risk of developing GC, and both low (<0.5 pmol/l) and high (>4.7 pmol/l) G-17 levels were associated with higher risk of developing GC, suggesting a J-shaped association. In the risk prediction modeling analysis, the five biomarkers combined yielded a C statistic of 0.803 (95% confidence interval (CI)=0.789-0.816) and improved prediction beyond traditional risk factors (C statistic from 0.580 to 0.811, P<0.001) for identifying precancerous lesions at enrollment, and higher serological biopsy scores based on the five biomarkers at enrollment were associated with higher risk of developing GC during follow-up (P for trend <0.001). A serological biopsy composed of the five stomach-specific circulating biomarkers could be used to identify high-risk individuals for further diagnostic gastroscopy, and to stratify individuals' risk of developing GC and thus to guide targeted screening and precision prevention.
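A minimal sketch of how such a five-biomarker score might be assembled, with the J-shaped G-17 association handled by indicator terms for the low (<0.5 pmol/l) and high (>4.7 pmol/l) ranges. All biomarker values, coefficients, and the prevalence are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 5000
pgi = rng.normal(100, 30, n)                   # pepsinogen I (placeholder units)
pgii = rng.normal(12, 4, n)                    # pepsinogen II
ratio = pgi / np.clip(pgii, 1e-6, None)        # PGI/II ratio
hp = rng.integers(0, 2, n)                     # H. pylori sero-positivity
g17 = rng.lognormal(0.5, 0.8, n)               # gastrin-17, pmol/l
g17_low, g17_high = (g17 < 0.5).astype(int), (g17 > 4.7).astype(int)

# Synthetic outcome from an assumed logistic process (low PGI/ratio = higher risk).
logit = -3 - 0.02 * (pgi - 100) - 0.1 * (ratio - 8) + 0.8 * hp \
        + 0.6 * g17_low + 0.6 * g17_high
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([pgi, pgii, ratio, hp, g17_low, g17_high])
model = LogisticRegression(max_iter=2000).fit(X, y)
print("C statistic:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```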
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K. D.; Phoebus, P. A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum-interpolation data-assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented, and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients, and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer.
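The core of an optimum-interpolation analysis can be sketched in a few lines: the analysis equals the background plus gain-weighted innovations, with the gain built from background- and observation-error covariances. The one-dimensional grid, Gaussian covariance model, and error variances below are illustrative assumptions, not the OTIS configuration.

```python
import numpy as np

ngrid = 50
x = np.linspace(0, 1000e3, ngrid)            # 1-D section of the analysis grid, m
xb = np.full(ngrid, 15.0)                     # background temperature field, degC
obs_pos = np.array([200e3, 450e3, 700e3])     # observation locations
yo = np.array([16.2, 14.1, 15.5])             # observed temperatures, degC

L = 150e3                                     # background-error correlation scale (assumed)
sig_b2, sig_o2 = 1.0, 0.25                    # background/observation error variances (assumed)
B_oo = sig_b2 * np.exp(-(obs_pos[:, None] - obs_pos[None, :])**2 / (2 * L**2))
B_go = sig_b2 * np.exp(-(x[:, None] - obs_pos[None, :])**2 / (2 * L**2))

# OI update: xa = xb + B_go (B_oo + R)^-1 (yo - H xb)
K = B_go @ np.linalg.inv(B_oo + sig_o2 * np.eye(len(yo)))
xa = xb + K @ (yo - np.full(len(yo), 15.0))   # H xb = background at the obs points
print("analysis range:", xa.min().round(2), "to", xa.max().round(2), "degC")
```

The observation-error variance controls how strongly each innovation pulls the analysis toward the data, which is how such a system blends real-time observations with climatology and model predictions.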
Generic Hypersonic Inlet Module Analysis
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.
2004-01-01
A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one-bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and an area under the receiver operating characteristic curve of 87%. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and an area under the receiver operating characteristic curve of 98%. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and were built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
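A minimal sketch of the paper's central contrast, multivariate snapshot features versus snapshot-plus-trend features with a support vector machine. The vital-sign series are synthetic, with an artificial drift injected into cases so the trend feature carries signal.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, hours = 212, 12                       # roughly 103 arrests + 109 controls
hr = rng.normal(120, 15, (n, hours))     # hourly heart rate series (placeholder)
y = rng.integers(0, 2, n)
hr[y == 1] += np.linspace(0, 20, hours)  # toy assumption: cases drift upward

snapshot = hr[:, -1:]                                # latest value (multivariate class)
t = np.arange(hours)
slope = np.polyfit(t, hr.T, 1)[0][:, None]           # fitted trend per patient
X = np.hstack([snapshot, slope])                     # multivariate + trend features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```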
Fatigue of notched fiber composite laminates. Part 1: Analytical model
NASA Technical Reports Server (NTRS)
Mclaughlin, P. V., Jr.; Kulkarni, S. V.; Huang, S. N.; Rosen, B. W.
1975-01-01
A description is given of a semi-empirical, deterministic analysis for prediction and correlation of fatigue crack growth, residual strength, and fatigue lifetime for fiber composite laminates containing notches (holes). The failure model used for the analysis is based upon composite heterogeneous behavior and experimentally observed failure modes under both static and fatigue loading. The analysis is consistent with the wearout philosophy. Axial cracking and transverse cracking failure modes are treated together in the analysis. Cracking off-axis is handled by making a modification to the axial cracking analysis. The analysis predicts notched laminate failure from unidirectional material fatigue properties using constant strain laminate analysis techniques. For multidirectional laminates, it is necessary to know lamina fatigue behavior under axial normal stress, transverse normal stress, and axial shear stress. Examples of the analysis method are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Frincke, Deborah A.
2010-09-01
The purpose of this chapter is to motivate the combination of traditional cyber security audit data with psychosocial data, so as to move from an insider threat detection stance to one that enables prediction of potential insider presence. Two distinctive aspects of the approach are the objective of predicting or anticipating potential risks and the use of organizational data in addition to cyber data to support the analysis. The chapter describes the challenges of this endeavor and progress in defining a usable set of predictive indicators, developing a framework for integrating the analysis of organizational and cyber security data to yield predictions about possible insider exploits, and developing the knowledge base and reasoning capability of the system. We also outline the types of errors that one expects in a predictive system versus a detection system and discuss how those errors can affect the usefulness of the results.
Real-time Tsunami Inundation Prediction Using High Performance Computers
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2014-12-01
Recently, off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the full benefit of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study is Tsushima et al. (2009), which proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow water equations for inundation predictions is large, such simulations have become executable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids with resolutions ranging from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the earthquake occurs took about 2 minutes, which would be fast enough for practical tsunami inundation prediction. In the presentation, the computational performance of our faster-than-real-time tsunami inundation model will be shown, and preferable tsunami wave source analysis for an accurate inundation prediction will also be discussed.
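The solver core can be hinted at with a one-dimensional sketch of the staggered-grid update used in leap-frog tsunami codes, here for the linear shallow-water equations. The operational model described above is non-linear, two-dimensional, and nested, and all parameters below are assumed.

```python
import numpy as np

g, h0 = 9.81, 4000.0                    # gravity, uniform ocean depth (m)
nx, dx = 400, 5000.0                    # grid points, spacing (m)
dt = 0.5 * dx / np.sqrt(g * h0)         # CFL-limited time step

eta = np.exp(-((np.arange(nx) - nx / 2) * dx / 50e3) ** 2)  # initial 1 m hump
u = np.zeros(nx + 1)                    # velocities live on staggered cell faces

for _ in range(500):
    # momentum: u_t = -g * eta_x, evaluated on interior faces
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: eta_t = -h0 * u_x, evaluated at cell centers
    eta -= dt * h0 * (u[1:] - u[:-1]) / dx
print("max surface elevation after 500 steps:", eta.max().round(3), "m")
```

Nesting would couple several such grids at a 1/3 resolution ratio, with the fine grids supplying the shoreline detail needed for inundation.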
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
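The least-squares calibration step can be sketched generically: fit the free parameters of a parametric source-strength spectrum so the predicted spectrum matches a measured one. The spectral shapes and parameters below are synthetic stand-ins, not the actual noise model.

```python
import numpy as np
from scipy.optimize import least_squares

f = np.logspace(2, 4.5, 80)       # frequency band, Hz (assumed)
# Synthetic "measured" spectrum: a log-parabolic hump plus noise.
measured = 110 - 10 * (np.log10(f) - 3.2)**2 \
           + np.random.default_rng(7).normal(0, 1, f.size)

def model_spl(params, f):
    amp, fpeak, width = params    # free parameters of the source-strength spectrum
    return amp - 10 * ((np.log10(f) - np.log10(fpeak)) / width) ** 2

# Fit the model spectrum to the measured spectrum in the least-squares sense.
res = least_squares(lambda p: model_spl(p, f) - measured, x0=[100.0, 2000.0, 1.0])
amp, fpeak, width = res.x
print(f"fitted peak: {fpeak:.0f} Hz at {amp:.1f} dB")
err = model_spl(res.x, f) - measured
print("average error (dB):", np.abs(err).mean().round(2))
```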
Du, Juan; Yang, Fang; Zhang, Zhiqiang; Hu, Jingze; Xu, Qiang; Hu, Jianping; Zeng, Fanyong; Lu, Guangming; Liu, Xinfeng
2018-05-15
An accurate prediction of long-term outcome after stroke is urgently required to provide early individualized neurorehabilitation. This study aimed to examine the added value of early neuroimaging measures and to identify the best approaches for predicting motor outcome after stroke. This prospective study involved 34 first-ever ischemic stroke patients (time since stroke: 1-14 days) with upper limb impairment. All patients underwent baseline multimodal assessments that included clinical (age, motor impairment), neurophysiological (motor-evoked potentials, MEP) and neuroimaging (diffusion tensor imaging and motor task-based fMRI) measures, and underwent reassessment 3 months after stroke. Bivariate analysis and multivariate linear regression models were used to predict the motor scores (Fugl-Meyer assessment, FMA) at 3 months post-stroke. In bivariate analysis, better motor outcome significantly correlated with (1) less initial motor impairment and disability, (2) less corticospinal tract injury, (3) the initial presence of MEPs, and (4) stronger baseline motor fMRI activations. In multivariate analysis, incorporating neuroimaging data improved predictive accuracy relative to clinical and neurophysiological assessments alone. Baseline fMRI activation in the SMA was an independent predictor of motor outcome after stroke. A multimodal model incorporating fMRI and clinical measures best predicted the motor outcome following stroke. fMRI measures obtained early after stroke provided independent prediction of long-term motor outcome.
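A minimal sketch of the clinical-only versus multimodal comparison, using cross-validated linear regression on a synthetic cohort of the same size. Predictor names follow the abstract, but all values and effect sizes are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n = 34
fma_baseline = rng.uniform(5, 60, n)           # initial motor impairment (FMA)
mep_present = rng.integers(0, 2, n)            # motor-evoked potentials present
cst_injury = rng.uniform(0, 1, n)              # corticospinal tract injury (DTI)
sma_activation = rng.normal(0, 1, n)           # baseline fMRI activation in SMA
# Synthetic 3-month FMA scores with assumed effect sizes.
fma_3mo = (0.8 * fma_baseline + 8 * mep_present - 10 * cst_injury
           + 4 * sma_activation + rng.normal(0, 5, n))

clinical = np.column_stack([fma_baseline, mep_present])
multimodal = np.column_stack([fma_baseline, mep_present, cst_injury, sma_activation])
for name, X in [("clinical", clinical), ("multimodal", multimodal)]:
    r2 = cross_val_score(LinearRegression(), X, fma_3mo, cv=5, scoring="r2").mean()
    print(name, "cross-validated R2:", round(r2, 2))
```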