Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
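As an illustration of the idea rather than the author's actual model, the following Python sketch turns a strength-decay assumption into a Monte Carlo stress-rupture reliability estimate, including the effect of proof-load screening; the decay law, Weibull strength parameters and stress levels are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def strength(t, s0, t_end=1.0e6, n=20.0):
    """Hypothetical strength decay; a large exponent n places most of the decay late in life."""
    frac = np.minimum(t, t_end) / t_end
    return s0 * (1.0 - frac ** n)

def reliability(t, applied_stress, proof_stress=None, n_vessels=200_000):
    # Initial strengths drawn from a Weibull distribution (shape and scale assumed)
    s0 = 1.1 * rng.weibull(20.0, size=n_vessels)
    if proof_stress is not None:
        s0 = s0[s0 > proof_stress]          # proof test eliminates weak vessels up front
    return float(np.mean(strength(t, s0) > applied_stress))

# Reliability at the same service time, with and without a proof load
print("no proof test  :", reliability(t=9.0e5, applied_stress=0.6))
print("with proof test:", reliability(t=9.0e5, applied_stress=0.6, proof_stress=0.9))
```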
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert
A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
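The Monte Carlo side of such a framework can be sketched with a deliberately simple runoff relation standing in for HEC-1; all distributions and the exceedance threshold below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Uncertain inputs (distributions assumed for illustration): runoff coefficient C,
# rainfall intensity i [mm/h], drainage area A [km^2]
C = rng.normal(0.45, 0.05, n).clip(0.0, 1.0)
i = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)
A = rng.normal(12.0, 1.0, n)

# Rational-formula stand-in for the watershed model: Q = 0.278 * C * i * A  [m^3/s]
Q = 0.278 * C * i * A

# Prediction reliability expressed through the probability distribution of peak discharge
print("mean peak discharge [m^3/s]:", Q.mean())
print("90% prediction interval:", np.percentile(Q, [5, 95]))
print("P(Q > 200 m^3/s):", (Q > 200).mean())
```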
Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.
2015-08-19
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G
2015-10-01
One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction. © 2015 Blackwell Verlag GmbH.
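A minimal GBLUP sketch (not the authors' exact pipeline, and with toy simulated data and an assumed variance ratio) shows how genomic breeding values of test animals are predicted from a training set through the genomic relationship matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, m = 400, 100, 2000                  # animals and markers (toy sizes)

# Simulated 0/1/2 genotypes and training phenotypes
p = rng.uniform(0.05, 0.5, m)                        # allele frequencies
M = rng.binomial(2, p, size=(n_train + n_test, m)).astype(float)
marker_effects = rng.normal(0, 0.05, m)
tbv = M @ marker_effects                             # "true" breeding values
y = tbv[:n_train] + rng.normal(0, tbv.std(), n_train)

# VanRaden genomic relationship matrix
Z = M - 2 * p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))

# GBLUP: gebv = G[all, train] (G[train, train] + lambda*I)^-1 (y - mean)
lam = 1.0                                            # sigma_e^2 / sigma_g^2, assumed known
lhs = G[:n_train, :n_train] + lam * np.eye(n_train)
gebv = G[:, :n_train] @ np.linalg.solve(lhs, y - y.mean())

# Reliability proxy: squared correlation of GEBV with true values in the test set
r2 = np.corrcoef(gebv[n_train:], tbv[n_train:])[0, 1] ** 2
print("test-set predictive r^2:", round(r2, 3))
```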
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment
NASA Technical Reports Server (NTRS)
Davis, M. R.; Kamins, M.; Mooz, W. E.
1978-01-01
A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
Tracking reliability for space cabin-borne equipment in development by Crow model.
Chen, J D; Jiao, S J; Sun, H L
2001-12-01
Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, composed of the Crow model and the test data conversion (TDC) method, was used. Result. The estimation and prediction values of the reliability growth conformed to expectations. Conclusion. The method could dynamically estimate and predict the reliability of the equipment by making full use of various test information in the course of its development. It offered not only a possibility of tracking the equipment reliability growth, but also a reference for quality control in the design and development process of manned spaceflight cabin-borne equipment.
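For the Crow (AMSAA) part, a minimal sketch of fitting the power-law NHPP to development failure times is shown below; the closed-form maximum-likelihood estimates assume a time-terminated test, the failure times are invented, and the test data conversion (TDC) step is not included.

```python
import numpy as np

def fit_crow_amsaa(failure_times, T):
    """MLEs for the Crow/AMSAA power-law NHPP (time-terminated test at T).
    Expected cumulative failures: E[N(t)] = lam * t**beta."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T ** beta
    return lam, beta

# Hypothetical failure times (hours) observed during equipment development testing
times = [40, 95, 180, 350, 600, 910, 1400]
T = 2000.0
lam, beta = fit_crow_amsaa(times, T)

# Instantaneous failure intensity and MTBF at the end of the test
rho = lam * beta * T ** (beta - 1.0)
print(f"beta = {beta:.3f} (beta < 1 indicates reliability growth)")
print(f"current MTBF ~ {1.0 / rho:.0f} h")
```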
Care 3 model overview and user's guide, first revision
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Petersen, P. L.
1985-01-01
A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the relative error of the predicted number of failures over future finite time intervals with respect to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
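As a concrete (and simplified) instance of the "straightforward use of an individual model" method, the sketch below fits a Goel-Okumoto NHPP, one common reliability growth model, to observed failure times and scores its prediction of failures in a future interval by relative error; all data are invented.

```python
import numpy as np
from scipy.optimize import minimize

def go_mean(t, a, b):
    """Goel-Okumoto expected cumulative number of failures by time t."""
    return a * (1.0 - np.exp(-b * t))

def fit_goel_okumoto(times, T_obs):
    """Maximum-likelihood fit of (a, b) to failure times observed on [0, T_obs]."""
    times = np.asarray(times, dtype=float)
    n = len(times)

    def nll(log_p):
        a, b = np.exp(log_p)                          # log scale keeps a, b positive
        return go_mean(T_obs, a, b) - n * np.log(a * b) + b * times.sum()

    res = minimize(nll, x0=np.log([1.5 * n, 1.0 / T_obs]), method="Nelder-Mead")
    return np.exp(res.x)

# Hypothetical failure times (hours) from an initial period of usage
obs = [12, 30, 55, 90, 150, 230, 340, 480, 700, 950]
T_obs, T_future = 1000.0, 2000.0
a, b = fit_goel_okumoto(obs, T_obs)

predicted = go_mean(T_future, a, b) - go_mean(T_obs, a, b)
observed_later = 4                                    # failures actually seen in (T_obs, T_future]
print(f"predicted {predicted:.1f} failures, relative error {(predicted - observed_later) / observed_later:+.2f}")
```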
Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D
2015-10-01
This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks, with predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distances were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis to effectively calibrate whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy, over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for the practical usefulness of the model results. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005 and 2006, in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
Enhancing Flood Prediction Reliability Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Liu, Z.; Merwade, V.
2017-12-01
Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
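A simplified BMA sketch, assuming Gaussian member errors with a single common variance estimated by EM (the study's implementation may differ), shows how member weights are learned from a training event and combined into a deterministic prediction; the ensemble data here are synthetic.

```python
import numpy as np

def bma_weights(preds, obs, n_iter=200):
    """EM estimation of BMA weights; each member's error is assumed N(0, sigma^2)
    with a common sigma (a simplification of full BMA)."""
    n_models, n_obs = preds.shape
    w = np.full(n_models, 1.0 / n_models)
    sigma = np.std(preds - obs)
    for _ in range(n_iter):
        lik = np.exp(-0.5 * ((preds - obs) / sigma) ** 2) / sigma   # member likelihoods
        z = w[:, None] * lik
        z /= z.sum(axis=0, keepdims=True)                           # responsibilities
        w = z.mean(axis=1)
        sigma = np.sqrt(np.sum(z * (preds - obs) ** 2) / n_obs)
    return w, sigma

# Synthetic stand-in: rows are the 81 model configurations, columns are observed stages
rng = np.random.default_rng(3)
obs = rng.normal(5.0, 1.0, 60)
preds = obs + rng.normal(0, 0.3, (81, 60)) + rng.normal(0, 0.2, (81, 1))

w, sigma = bma_weights(preds, obs)
bma_mean = w @ preds                                                # deterministic BMA stage
print("largest member weight:", w.max().round(3))
print("BMA RMSE:", np.sqrt(np.mean((bma_mean - obs) ** 2)).round(3))
```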
What do we gain with Probabilistic Flood Loss Models?
NASA Astrophysics Data System (ADS)
Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.
2015-12-01
The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability, which is represented by the proportion of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive-moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
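The time-series step of such a framework can be sketched on its own: fit an ARMA model (ARIMA with no differencing) to a component's historical response times and forecast ahead, then take a firing rate for the corresponding Petri-net transition as the reciprocal of the forecast mean. The data are synthetic and the NMSPN mapping itself is not shown.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)

# Hypothetical historical response times (ms) of one service component
noise = rng.normal(0, 5, 200)
resp = 120 + np.convolve(noise, [1.0, 0.4], mode="same") + 10 * np.sin(np.arange(200) / 15)

# Fit an ARMA(1,1) model and forecast the next calls
fit = ARIMA(resp, order=(1, 0, 1)).fit()
forecast = fit.forecast(steps=10)

# Predicted firing rate for the NMSPN transition (1/ms), taken as 1 / mean response time
print("forecast mean response time [ms]:", forecast.mean())
print("implied firing rate [1/ms]:", 1.0 / forecast.mean())
```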
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
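A much reduced sketch of the same idea (pass/fail component test data, Beta posteriors, and Monte Carlo combination for a series system) is shown below; it is not SRFYDO itself, and the component names, priors and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Component test data as (successes, trials); Beta(1, 1) priors are assumed
components = {"igniter": (48, 50), "valve": (95, 100), "controller": (29, 30)}

draws = [rng.beta(1 + s, 1 + (n - s), size=100_000) for s, n in components.values()]

system = np.prod(draws, axis=0)          # series system: every component must function
print("posterior mean system reliability:", system.mean().round(4))
print("90% credible interval:", np.percentile(system, [5, 95]).round(4))
```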
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
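A toy version of the final computation step is sketched below: given a component probabilistic dependency graph with per-node reliabilities and transition probabilities, path reliabilities are accumulated with an explicit stack. The graph and the numbers are invented; the paper's CPDG construction from the synthesized state machine is not reproduced here.

```python
# Each node: (component reliability, {successor: transition probability})
cpdg = {
    "A": (0.999, {"B": 0.7, "C": 0.3}),
    "B": (0.995, {"D": 1.0}),
    "C": (0.990, {"D": 1.0}),
    "D": (0.998, {}),                    # terminal node
}

def system_reliability(graph, start="A"):
    """Sum over execution paths of (path probability x product of node reliabilities)."""
    total = 0.0
    stack = [(start, 1.0)]               # (node, probability-weighted reliability so far)
    while stack:
        node, acc = stack.pop()
        rel, successors = graph[node]
        acc *= rel
        if not successors:
            total += acc                 # end of a path
        for nxt, p in successors.items():
            stack.append((nxt, acc * p))
    return total

print("predicted reliability:", round(system_reliability(cpdg), 5))
```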
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.
2009-01-01
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
Genomic prediction using imputed whole-genome sequence data in Holstein Friesian cattle.
van Binsbergen, Rianne; Calus, Mario P L; Bink, Marco C A M; van Eeuwijk, Fred A; Schrooten, Chris; Veerkamp, Roel F
2015-09-17
In contrast to currently used single nucleotide polymorphism (SNP) panels, the use of whole-genome sequence data is expected to enable the direct estimation of the effects of causal mutations on a given trait. This could lead to higher reliabilities of genomic predictions compared to those based on SNP genotypes. Also, at each generation of selection, recombination events between a SNP and a mutation can cause decay in reliability of genomic predictions based on markers rather than on the causal variants. Our objective was to investigate the use of imputed whole-genome sequence genotypes versus high-density SNP genotypes on (the persistency of) the reliability of genomic predictions using real cattle data. Highly accurate phenotypes based on daughter performance and Illumina BovineHD Beadchip genotypes were available for 5503 Holstein Friesian bulls. The BovineHD genotypes (631,428 SNPs) of each bull were used to impute whole-genome sequence genotypes (12,590,056 SNPs) using the Beagle software. Imputation was done using a multi-breed reference panel of 429 sequenced individuals. Genomic estimated breeding values for three traits were predicted using a Bayesian stochastic search variable selection (BSSVS) model and a genome-enabled best linear unbiased prediction model (GBLUP). Reliabilities of predictions were based on 2087 validation bulls, while the other 3416 bulls were used for training. Prediction reliabilities ranged from 0.37 to 0.52. BSSVS performed better than GBLUP in all cases. Reliabilities of genomic predictions were slightly lower with imputed sequence data than with BovineHD chip data. Also, the reliabilities tended to be lower for both sequence data and BovineHD chip data when relationships between training animals were low. No increase in persistency of prediction reliability using imputed sequence data was observed. Compared to BovineHD genotype data, using imputed sequence data for genomic prediction produced no advantage. To investigate the putative advantage of genomic prediction using (imputed) sequence data, a training set with a larger number of individuals that are distantly related to each other and genomic prediction models that incorporate biological information on the SNPs or that apply stricter SNP pre-selection should be considered.
Opportunities of probabilistic flood loss models
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno
2016-04-01
Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of sharpness of the predictions and reliability, which is represented by the proportion of observations that fall within the 95-quantile and 5-quantile predictive interval. The comparison of the uni-variate stage-damage function and the multi-variate model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variate model reveals an additional source of uncertainty. However, the predictive performance in terms of mean bias, mean absolute error and reliability (hit rate) is clearly improved in comparison to the uni-variate stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
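The reliability (hit-rate) metric used in these comparisons can be illustrated with Bagging Decision Trees on synthetic data, taking the 5%-95% predictive interval from the spread of the individual trees; the variables and data below are placeholders, not the survey data used in the study.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1200

# Synthetic stand-ins for damage-driving variables (e.g. water depth, duration, ...)
X = rng.uniform(0, 1, size=(n, 4))
rel_damage = np.clip(0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, rel_damage, test_size=0.3, random_state=0)

bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200, random_state=0)
bag.fit(X_tr, y_tr)

# Per-tree predictions give an empirical predictive distribution for each building
tree_preds = np.stack([tree.predict(X_te) for tree in bag.estimators_])
lo, hi = np.percentile(tree_preds, [5, 95], axis=0)
mean_pred = tree_preds.mean(axis=0)

print("mean bias          :", np.mean(mean_pred - y_te).round(4))
print("mean absolute error:", np.mean(np.abs(mean_pred - y_te)).round(4))
print("hit rate (reliability of the 5-95% interval):",
      np.mean((y_te >= lo) & (y_te <= hi)).round(3))
```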
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verifying the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which was then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Feasibility of developing LSI microcircuit reliability prediction models
NASA Technical Reports Server (NTRS)
Ryerson, C. M.
1972-01-01
In the proposed modeling approach, when any of the essential key factors are not known initially, they can be approximated in various ways with a known impact on the accuracy of the final predictions. For example, on any program where reliability predictions are started at interim states of project completion, a-priori approximate estimates of the key factors are established for making preliminary predictions. Later these are refined for greater accuracy as subsequent program information of a more definitive nature becomes available. Specific steps to develop, validate and verify these new models are described.
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
Evaluation of ceramics for stator application: Gas turbine engine report
NASA Technical Reports Server (NTRS)
Trela, W.; Havstad, P. H.
1978-01-01
Current ceramic materials, component fabrication processes, and reliability prediction capability for ceramic stators in an automotive gas turbine engine environment are assessed. Simulated engine duty cycle testing of stators conducted at temperatures up to 1093 C is discussed. Materials evaluated are SiC and Si3N4 fabricated by two near-net-shape processes: slip casting and injection molding. Stators for durability cycle evaluation, test specimens for material property characterization, and a reliability prediction model prepared to predict stator performance in the simulated engine environment are considered. The status and description of the work performed for the reliability prediction modeling, stator fabrication, material property characterization, and ceramic stator evaluation efforts are reported.
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free replacement and pro rata warranty policy is analysed as a warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
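The Monte Carlo step alone can be sketched as follows: lifetimes are drawn from a Weibull distribution (whose parameters would, in the paper, come from the neural network model for the operating conditions of interest) and the manufacturer's cost is accumulated under a combined free-replacement / pro-rata policy. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(13)

def expected_warranty_cost(shape, scale, W_free=1000.0, W_prorata=3000.0,
                           unit_cost=4.0, n_sim=200_000):
    """Expected cost per bulb: full replacement if failure before W_free hours,
    linearly declining (pro-rata) compensation up to W_prorata hours."""
    t = scale * rng.weibull(shape, n_sim)            # simulated times to failure [h]
    cost = np.where(
        t < W_free, unit_cost,
        np.where(t < W_prorata,
                 unit_cost * (W_prorata - t) / (W_prorata - W_free), 0.0))
    return cost.mean()

# Weibull parameters per operating condition would come from the reliability model;
# two hypothetical conditions are compared here.
print("nominal conditions:", round(expected_warranty_cost(shape=2.2, scale=8000.0), 3))
print("harsh conditions  :", round(expected_warranty_cost(shape=1.8, scale=4000.0), 3))
```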
NASA Technical Reports Server (NTRS)
Wilson, Larry W.
1989-01-01
The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validations Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they have correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
Sazonovas, A; Japertas, P; Didziapetris, R
2010-01-01
This study presents a new type of acute toxicity (LD(50)) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). Analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces--a combination called Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both compound's similarity to the training set (that accounts for similar trends in LD(50) variations within multiple bootstraps) and consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.
Validation of BEHAVE fire behavior predictions in oak savannas
Grabner, Keith W.; Dwyer, John; Cutter, Bruce E.
1997-01-01
Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (short grass), Fuel Model 2 (timber and grass), Fuel Model 3 (tall grass), and Fuel Model 9 (hardwood litter). Also, a customized oak savanna fuel model (COSFM) was created and validated. Results indicate that standardized fuel model 2 and the COSFM reliably estimate mean rate-of-spread (MROS). The COSFM did not appreciably reduce MROS variation when compared to fuel model 2. Fuel models 1, 3, and 9 did not reliably predict MROS. Neither the standardized fuel models nor the COSFM adequately predicted flame lengths. We concluded that standardized fuel model 2 should be used with BEHAVE when predicting fire rates-of-spread in established oak savannas.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
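The sign-focused active learning idea can be sketched with a Gaussian-process surrogate and the common U learning function (the point whose sign is least certain is evaluated next). This captures the spirit of AK-MCS-type methods under probabilistic sampling only; the evidence-theory (interval) part of the paper and its modified optimization are omitted, and the performance function below is invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(21)

def g(x):                                            # performance function: failure if g < 0
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

pool = rng.normal(0, 1, size=(5000, 2))              # Monte Carlo candidate pool
idx = list(rng.choice(len(pool), 12, replace=False)) # initial design of experiments

for _ in range(30):                                  # active learning loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(pool[idx], g(pool[idx]))
    mu, sd = gp.predict(pool, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)           # small U = sign of g is uncertain
    if U.min() > 2.0:                                # signs predicted reliably everywhere
        break
    idx.append(int(np.argmin(U)))                    # evaluate the most ambiguous point

pf = float((mu < 0).mean())                          # failure probability from surrogate signs
print(f"estimated failure probability {pf:.4f} using {len(idx)} true model evaluations")
```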
Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System
2010-09-13
model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk as the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure of the vehicle's safe functioning. In the reliability method, computing the maximum LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. To improve the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models, using the SVM (Support Vector Machines) algorithm. The preliminary results obtained demonstrate the effectiveness of the proposed approach.
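A minimal sketch of the reliability-index idea: given a prediction of the maximum load transfer ratio over the upcoming bend together with its uncertainty (in the paper this comes from the vehicle dynamics model or its SVM surrogate), the rollover risk is the probability of exceeding a critical threshold. A Gaussian prediction error and all numbers below are assumptions.

```python
from statistics import NormalDist

def rollover_risk(ltr_mean, ltr_std, threshold=0.9):
    """Reliability index and exceedance probability for the max LTR over a bend."""
    beta = (threshold - ltr_mean) / ltr_std          # reliability index
    p_rollover = 1.0 - NormalDist().cdf(beta)        # P(LTR_max > threshold)
    return beta, p_rollover

# Hypothetical prediction for an upcoming bend at the current speed
beta, p = rollover_risk(ltr_mean=0.72, ltr_std=0.08)
print(f"reliability index beta = {beta:.2f}, rollover probability = {p:.3f}")
if beta < 2.0:
    print("warn driver: reduce speed before entering the bend")
```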
Genomic selection in a commercial winter wheat population.
He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong
2016-03-01
Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis in addition to additive genetic effects enhances the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5 % increase in prediction ability when shifting from additive to additive plus epistatic effects models. In addition, only a marginal loss from 0.65 to 0.50 in accuracy was observed using the data collected from 1 year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential to assist commercial wheat breeding programs employing genomic selection approaches.
Kowinsky, Amy M; Shovel, Judith; McLaughlin, Maribeth; Vertacnik, Lisa; Greenhouse, Pamela K; Martin, Susan Christie; Minnier, Tamra E
2012-01-01
Predictable and unpredictable patient care tasks compete for caregiver time and attention, making it difficult for patient care staff to reliably and consistently meet patient needs. We have piloted a redesigned care model that separates the work of patient care technicians based on task predictability and creates role specificity. This care model shows promise in improving the ability of staff to reliably complete tasks in a more consistent and timely manner.
A new lifetime estimation model for a quicker LED reliability prediction
NASA Astrophysics Data System (ADS)
Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.
2014-09-01
LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions. The stress driving current was fixed between 350 mA and 1 A and the ambient temperature between 85 °C and 120 °C. Using integrating sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation, and the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using electrical and optical measurements during the aging tests. The combination of these models allows a new method toward quicker LED lifetime prediction. The two models have been used for lifetime predictions for LEDs.
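A hedged sketch of the lumen-depreciation half of such a lifetime model: fit an exponential decay to normalized flux measured during an accelerated test and extrapolate the time to 70% lumen maintenance (an L70-style projection, similar in spirit to TM-21). The data are invented and the leakage-resistance model is not shown.

```python
import numpy as np

# Hypothetical aging data for one stress condition: hours vs normalized luminous flux
hours = np.array([0, 500, 1000, 2000, 3000, 4000, 5000, 6000], dtype=float)
flux = np.array([1.00, 0.99, 0.985, 0.97, 0.955, 0.945, 0.93, 0.92])

# Fit L(t) = B * exp(-alpha * t) by linear regression on log(flux)
slope, intercept = np.polyfit(hours, np.log(flux), 1)
alpha, B = -slope, np.exp(intercept)

# Projected time until the flux falls to 70% of its initial value
L70 = np.log(B / 0.70) / alpha
print(f"alpha = {alpha:.2e} per hour, projected L70 ~ {L70:,.0f} hours")
```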
Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network
NASA Astrophysics Data System (ADS)
Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser
2018-02-01
The interaction between the variables that affect surface wettability is very complex, which makes it difficult to predict the contact angles and sliding angles of liquid drops. In this paper, in order to address this complexity, artificial neural networks were used to develop reliable models for predicting the angles of liquid drops. Experimental data are divided into training data and testing data. Using the training data, a feed-forward neural network structure, and particle swarm optimization for training the neural-network-based models, the optimum models were developed. The obtained results showed that the regression indices of the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. As these values are close to unity, the models perform reliably. It can also be inferred from the results that the proposed models perform more reliably than multi-layer perceptron and radial basis function based models.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, so attention must be paid to which parameters are considered when estimating reliability: the reliability of a system may increase or decrease depending on the parameters selected, and the factors that most heavily affect system reliability therefore need to be identified. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to obtain accurate results due to uncertainty or randomness, and several soft computing techniques can be applied to medicine-related problems: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently uses combined neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing methodologies in genetics, physiology, radiology, cardiology and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these soft computing techniques and assesses them for reliability prediction. The parameters considered when estimating and predicting reliability are also discussed. The study can be applied to estimating and predicting the reliability of various instruments used in medical systems, software engineering, computer engineering and mechanical engineering, and the concepts can be applied to both software and hardware to predict reliability using CBSE.
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty, but many recent studies suggest that model structure uncertainty is not negligible compared with parameter uncertainty. Using chance-constrained programming along with HBMA therefore provides a rigorous tool for groundwater remediation design under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design used a scavenger well to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, with the single best model, variances that stem from uncertainty in the model structure are ignored. Second, the best model may have a non-dominant model weight, so relying on it may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow a remediation design to be developed with a desired reliability; however, if only the single best model is considered, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. Thus, we concluded that the optimal pumping rate is sensitive to the prediction variance. The prediction variance also changed with the extraction rate: a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.
Chen, J D; Sun, H L
1999-04-01
Objective. To assess and predict the reliability of equipment dynamically by making full use of the various test information generated during product development. Method. A new reliability growth assessment method based on the Army Materiel Systems Analysis Activity (AMSAA) model was developed. The method combines the AMSAA model with test data conversion technology. Result. The assessment and prediction results for a space-borne equipment item conform to expectations. Conclusion. It is suggested that this method be further researched and popularized.
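The AMSAA (Crow) model referenced above treats failures during development testing as a non-homogeneous Poisson process with intensity lambda*beta*t^(beta-1). A minimal sketch of its maximum-likelihood fit for a time-truncated test is shown below; the failure times are invented for illustration, and the paper's test-data conversion step is not included.

```python
# Minimal sketch of the AMSAA (Crow-NHPP) reliability-growth model.
import numpy as np

def amsaa_fit(failure_times, T):
    """MLE of the AMSAA model for a time-truncated test ending at time T."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))                   # shape (growth) parameter
    lam = n / T ** beta                                # scale parameter
    mtbf_inst = 1.0 / (lam * beta * T ** (beta - 1))   # instantaneous MTBF at T
    return beta, lam, mtbf_inst

# Hypothetical failure times (hours) observed during a 500 h development test.
beta, lam, mtbf = amsaa_fit([25, 60, 140, 300, 410], T=500.0)
print(f"beta={beta:.3f}  lambda={lam:.4f}  MTBF(T)={mtbf:.1f} h")  # beta < 1 indicates growth
```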
Validation of urban freeway models. [supporting datasets
DOT National Transportation Integrated Search
2015-01-01
The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...
PREDICTING CLIMATE-INDUCED RANGE SHIFTS: MODEL DIFFERENCES AND MODEL RELIABILITY
Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common ...
A System for Integrated Reliability and Safety Analyses
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles
1999-01-01
We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.
A Compatible Hardware/Software Reliability Prediction Model.
1981-07-22
machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed
Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Hoffman, F. E.
1984-01-01
The following objectives were undertaken: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of various payload classes on the cost of reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability enhancing activities.
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
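The Pareto-optimality idea behind the model-identification algorithm can be illustrated with a small sketch. The criteria names and scores below are hypothetical; the paper's actual algorithm mines richer model metadata for the IGC50 and LogP endpoints.

```python
# Minimal sketch of Pareto-dominance filtering over candidate models: each model
# is scored on two criteria (hypothetical predictive accuracy and similarity of the
# query compound to the model's training data), and only non-dominated models are
# retained as candidates for the prediction.
def pareto_front(models):
    """models: list of (name, accuracy, similarity); higher is better for both."""
    front = []
    for name, acc, sim in models:
        dominated = any(a >= acc and s >= sim and (a > acc or s > sim)
                        for _, a, s in models)
        if not dominated:
            front.append((name, acc, sim))
    return front

candidates = [("QSAR-A", 0.81, 0.40), ("QSAR-B", 0.74, 0.75),
              ("QSAR-C", 0.70, 0.55), ("QSAR-D", 0.83, 0.30)]
print(pareto_front(candidates))   # QSAR-C drops out: it is dominated by QSAR-B
```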
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.
1981-06-01
Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in the high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
The Real World Significance of Performance Prediction
ERIC Educational Resources Information Center
Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu
2012-01-01
In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
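The role of ABC in signature-based calibration can be sketched with a toy example. Everything below is an assumption made for illustration: a one-parameter runoff model, a flow-duration-curve signature, and a fixed rejection tolerance; the study's actual likelihood construction and hydrological model are considerably richer.

```python
# Minimal rejection-ABC sketch: parameters are kept when the simulated FDC is
# close enough to the "observed" FDC.
import numpy as np

rng = np.random.default_rng(1)

def toy_model(k, rain):
    """Linear-reservoir-style toy runoff model with release coefficient k."""
    q, s = np.zeros_like(rain), 0.0
    for i, r in enumerate(rain):
        s += r
        q[i] = k * s
        s -= q[i]
    return q

def fdc(q, probs=np.linspace(0.05, 0.95, 19)):
    """Flow-duration curve: flow quantiles at fixed exceedance probabilities."""
    return np.quantile(q, 1.0 - probs)

rain = rng.gamma(shape=0.8, scale=5.0, size=365)
q_obs = toy_model(0.3, rain) * np.exp(rng.normal(0, 0.1, 365))  # noisy "observations"
sig_obs = fdc(q_obs)

accepted = []
for _ in range(5000):
    k = rng.uniform(0.05, 0.95)                                  # prior draw
    dist = np.sqrt(np.mean((fdc(toy_model(k, rain)) - sig_obs) ** 2))
    if dist < 0.5:                                               # ABC tolerance (assumed)
        accepted.append(k)

print(len(accepted), np.mean(accepted), np.std(accepted))        # approximate posterior of k
```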
Reliability Growth and Its Applications to Dormant Reliability
1981-12-01
ability to make projections about future reliability (Ref 9:41-42). Barlow and Scheuer Model. Richard E. Barlow and Ernest M. Scheuer, of the University...Reliability Growth Prediction Models," Operations Research, 18(1):52-65 (January/February 1970). 7. Bauer, John, William Hadley, and Robert Dietz... Texarkana, Texas, May 1973. (AD 768 119). 10. Bonis, Austin J. "Reliability Growth Curves for One Shot Devices," Proceedings 1977 Annual Reliability and
Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information
Wang, Xiaohong; Wang, Lizhi
2017-01-01
Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930
Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.
Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi
2017-09-15
Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.
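The full Bayesian-network construction described in these two records is too involved for a short example, but the competing-failure idea it builds on (a unit fails at the earlier of a random accidental failure and a degradation-threshold crossing) can be sketched as follows. All rates, drift parameters, and thresholds are invented for illustration and the BN machinery itself is not shown.

```python
# Monte Carlo sketch of a single unit with two competing failure mechanisms.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

accidental = rng.exponential(scale=5000.0, size=n)         # hours, random shocks
drift = rng.normal(1e-3, 2e-4, size=n).clip(min=1e-5)      # degradation rate per hour
degradation = 2.0 / drift                                   # time until drift*t reaches threshold 2.0

lifetime = np.minimum(accidental, degradation)               # whichever failure comes first
for t in (1000, 2000, 4000):
    print(f"R({t} h) = {np.mean(lifetime > t):.3f}")         # survival probability
```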
Towards more accurate and reliable predictions for nuclear applications
NASA Astrophysics Data System (ADS)
Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François
2017-09-01
The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most of the nuclear data evaluation and prediction are still performed on the basis of phenomenological nuclear models. For the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays mean-field models can be tuned at the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements to determine nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
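As a point of reference for the count-data regression component discussed above, a plain (non-Bayesian) Poisson regression baseline is sketched below. It is not the max-margin Bayesian model of the paper, and the software metrics and defect counts are synthetic.

```python
# Baseline Poisson regression for defect-count data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 2))                   # e.g. two standardized software metrics
rate = np.exp(0.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1])
y = rng.poisson(rate)                         # defect counts per module

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(model.params)                           # should roughly recover [0.5, 0.8, -0.3]
print(model.predict(sm.add_constant(X))[:5])  # expected defect counts
```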
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables needed to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables entail comparatively low analytical costs relative to the full set of measurements; this cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability are confirmed in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
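The structure of the regression described above (a few inexpensive predictors used to estimate many response variables at once) can be sketched as follows. The data are synthetic; only the four-predictor, multi-response layout mirrors the study.

```python
# Multivariate (multi-output) linear regression: 4 predictors -> many responses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_samples, n_pred, n_resp = 50, 4, 8
X = rng.normal(size=(n_samples, n_pred))                   # e.g. LOI, EC, pH, Cl (standardized)
B = rng.normal(size=(n_pred, n_resp))                      # true linear mapping (synthetic)
Y = X @ B + rng.normal(0, 0.2, size=(n_samples, n_resp))   # hypothetical response variables

reg = LinearRegression().fit(X, Y)                         # one model, multiple responses
print(reg.score(X, Y))                                     # overall R^2
print(reg.predict(X[:1]))                                  # predicted responses for one sample
```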
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Reliability Prediction Approaches For Domestic Intelligent Electric Energy Meter Based on IEC62380
NASA Astrophysics Data System (ADS)
Li, Ning; Tong, Guanghua; Yang, Jincheng; Sun, Guodong; Han, Dongjun; Wang, Guixian
2018-01-01
The reliability of the intelligent electric energy meter is a crucial issue considering its large-scale deployment and the safety of the national intelligent grid. This paper develops a reliability prediction procedure for domestic intelligent electric energy meters according to IEC 62380, focusing in particular on the determination of model parameters under domestic working conditions. A case study is provided to demonstrate the effectiveness and validity of the approach.
Height prediction equations for even-aged upland oak stands
Donald E. Hilt; Martin E. Dale
1982-01-01
Forest growth models that use predicted tree diameters or diameter distributions require a reliable height-prediction model to obtain volume estimates because future height-diameter relationships will not necessarily be the same as the present height-diameter relationship. A total tree height prediction equation for even-aged upland oak stands is presented. Predicted...
Modeling and simulation of reliability of unmanned intelligent vehicles
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.
2008-04-01
Unmanned ground vehicles have a large number of scientific, military and commercial applications. A convoy of such vehicles can exhibit collaboration and coordination. For the movement of such a convoy, it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining the reliability of a system; graph theoretic approaches are popular for determining terminal reliability and system reliability. In this paper we propose to exploit Fuzzy and Neuro-Fuzzy approaches for predicting the node and branch reliability of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches (Fuzzy and Neuro-Fuzzy) and Boolean approaches is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles, while branch reliabilities determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliabilities of a convoy of vehicles, and simulation of the overall system is proposed. Such simulation should help the commander take appropriate action depending on the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to further techniques for maintaining a reliable convoy of vehicles in a battlefield.
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1989-01-01
Presently there are many opportunities for the application of ceramic materials at elevated temperatures. In the near future ceramic materials are expected to supplant high temperature metal alloys in a number of applications. It thus becomes essential to develop a capability to predict the time-dependent response of these materials. The creep rupture phenomenon is discussed, and a time-dependent reliability model is outlined that integrates continuum damage mechanics principles and Weibull analysis. Several features of the model are presented in a qualitative fashion, including predictions of both reliability and hazard rate. In addition, a comparison of the continuum and the microstructural kinetic equations highlights a strong resemblance in the two approaches.
Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.
2005-01-01
An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/ Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
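The fast-fracture, volume-flaw portion of the reliability calculation described above can be reduced to a short illustration. The sketch below is not CARES/Life: it applies the two-parameter Weibull distribution with the principle of independent action (PIA) to a few hypothetical element stresses and volumes, and it ignores slow crack growth, transient loading, and surface flaws.

```python
# Fast-fracture, volume-flaw reliability under PIA: each element contributes
# exp[-(V_e/V_0) * sum_i (sigma_i/sigma_0)^m] over its tensile principal stresses.
import numpy as np

m, sigma_0, V_0 = 10.0, 300.0, 1.0   # Weibull modulus, characteristic strength (MPa), reference volume (mm^3)

# Hypothetical finite-element output: (element volume, three principal stresses in MPa).
elements = [
    (0.5, (250.0,  40.0, -30.0)),
    (0.8, (180.0,  90.0,  10.0)),
    (0.3, (310.0, 120.0,  60.0)),
]

log_Ps = 0.0
for V_e, principals in elements:
    tensile = [s for s in principals if s > 0.0]   # compressive stresses do not contribute
    log_Ps -= (V_e / V_0) * sum((s / sigma_0) ** m for s in tensile)

print(f"component probability of survival = {np.exp(log_Ps):.4f}")
print(f"probability of failure            = {1 - np.exp(log_Ps):.4f}")
```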
Adaptive vehicle motion estimation and prediction
NASA Astrophysics Data System (ADS)
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
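The core step of weighing members by their likelihood can be sketched with a small EM routine in the spirit of Gaussian BMA. A single shared variance and untransformed flows are simplifying assumptions made here for brevity; the study uses Box-Cox transformed flows and a richer setup.

```python
# Minimal sketch of BMA weight estimation by expectation-maximization.
import numpy as np

def bma_em(F, y, iters=200):
    """F: (n_times, n_models) member forecasts; y: (n_times,) observations."""
    n, K = F.shape
    w = np.full(K, 1.0 / K)
    sigma2 = np.var(y - F.mean(axis=1))
    for _ in range(iters):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * (y[:, None] - F) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and the shared error variance
        w = z.mean(axis=0)
        sigma2 = np.sum(z * (y[:, None] - F) ** 2) / n
    return w, sigma2

rng = np.random.default_rng(5)
y = rng.gamma(2.0, 10.0, size=300)                                    # synthetic "observed" flows
F = np.column_stack([y + rng.normal(0, s, 300) for s in (2.0, 5.0, 10.0)])
w, s2 = bma_em(F, y)
print("BMA weights:", np.round(w, 3))                                 # most weight on the best member
```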
2009-01-01
Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Product component genealogy modeling and field-failure prediction
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Reliability of IGBT in a STATCOM for Harmonic Compensation and Power Factor Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
With smart grid integration, there is a need to characterize the reliability of a power system by including the reliability of power semiconductors in grid-related applications. In this paper, the reliability of IGBTs in a STATCOM is presented for two different applications, power factor correction and harmonic elimination. The STATCOM model is developed in EMTP, and analytical equations for average conduction losses in an IGBT and a diode are derived and compared with experimental data. A commonly used reliability model is used to predict the reliability of the IGBT.
Reliable probabilities through statistical post-processing of ensemble predictions
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2013-04-01
We develop post-processing or calibration approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction is equal to the variability of the observations. Second, we impose ensemble reliability such that the spread of the observations around the ensemble mean coincides with that of the ensemble members. In general the attractors of the model and of reality are inhomogeneous; ensemble spread therefore displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability ECMWF. 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.
Evaluation of 3D-Jury on CASP7 models.
Kaján, László; Rychlewski, Leszek
2007-08-21
3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned such by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
Predicting climate-induced range shifts: model differences and model reliability.
Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein
2006-01-01
Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...
Error associated with model predictions of wildland fire rate of spread
Miguel G. Cruz; Martin E. Alexander
2015-01-01
How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...
Evaluation of 3D-Jury on CASP7 models
Kaján, László; Rychlewski, Leszek
2007-01-01
Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned such by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature available in the Meta Server. PMID:17711571
The Yale-Brown Obsessive Compulsive Scale: A Reliability Generalization Meta-Analysis.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Maria; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-10-01
The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is the most frequently applied test to assess obsessive compulsive symptoms. We conducted a reliability generalization meta-analysis on the Y-BOCS to estimate the average reliability, examine the variability among the reliability estimates, search for moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the Y-BOCS. We included studies in which the Y-BOCS was applied to a sample of adults and a reliability estimate was reported. Out of the 11,490 references located, 144 studies met the selection criteria. For the total scale, the mean reliability was 0.866 for coefficients alpha, 0.848 for test-retest correlations, and 0.922 for intraclass correlations. The moderator analyses led to a predictive model where the standard deviation of the total test and the target population (clinical vs. nonclinical) explained 38.6% of the total variability among coefficients alpha. Finally, clinical implications of the results are discussed. © The Author(s) 2014.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost-competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of predictability of critical failures and the Weibull shape factor of the failure rate distributions.
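The kind of configuration-level calculation described for the 5 × 5 example can be sketched in a few lines. The per-stack reliability values and the assumption that a set of 5 parallel stacks remains functional while at least k of them are functional are illustrative; the actual analysis also distinguishes partially failed states and critical-failure predictability.

```python
# System reliability of 5 series sets, each containing 5 parallel stacks (k-of-n sets).
from math import comb

def set_reliability(r_stack, n=5, k=4):
    """Probability that at least k of n independent stacks are functional."""
    return sum(comb(n, i) * r_stack**i * (1 - r_stack)**(n - i) for i in range(k, n + 1))

def system_reliability(r_stack, n_sets=5, n=5, k=4):
    return set_reliability(r_stack, n, k) ** n_sets      # the sets are in series

for r in (0.90, 0.95, 0.99):
    print(f"stack R={r:.2f}  ->  system R={system_reliability(r):.3f}")
```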
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Commission, adopts a point-to-point predictive model for determining the ability of individual locations to... predictive model for reliably and presumptively determining the ability of individual locations, through the... adopted a point-to-point predictive model for determining the ability of individual locations to receive...
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models in a Bayesian joint probability framework to investigate the seasonal dependence of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. Based on probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant except for the error bias; the seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
NASA Astrophysics Data System (ADS)
Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.
2016-12-01
Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead-times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the needs for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
USDA-ARS?s Scientific Manuscript database
Process-based and distributed watershed models possess a large number of parameters that are not directly measured in the field and need to be calibrated by matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...
Reliability Analysis of Uniaxially Ground Brittle Materials
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.
1995-01-01
The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
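The Weibull size effect demonstrated in the abstract follows from scaling the characteristic strength with effective area (or volume). A minimal sketch is shown below; the modulus, strength, and areas are placeholder values, not the measured alpha-SiC data from the study.

```python
# Weibull size-effect scaling for surface-flaw-controlled strength:
# sigma_2 = sigma_1 * (A_1 / A_2)**(1/m), so larger specimens are weaker on average.
m = 12.0                 # Weibull modulus from flexure specimens (assumed)
sigma_1 = 450.0          # characteristic strength of the flexure specimen (MPa, assumed)
A_1, A_2 = 40.0, 400.0   # effective surface areas: flexure bar vs. biaxial plate (mm^2, assumed)

sigma_2 = sigma_1 * (A_1 / A_2) ** (1.0 / m)
print(f"predicted plate characteristic strength = {sigma_2:.1f} MPa")
```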
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindsay, WD; Oncora Medical, LLC, Philadelphia, PA; Berlind, CG
Purpose: While rates of local control have been well characterized after stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC), less data are available characterizing survival and normal tissue toxicities, and no validated models exist assessing these parameters after SBRT. We evaluate the reliability of various machine learning techniques when applied to radiation oncology datasets to create predictive models of mortality, tumor control, and normal tissue complications. Methods: A dataset of 204 consecutive patients with stage I non-small cell lung cancer (NSCLC) treated with stereotactic body radiotherapy (SBRT) at the University of Pennsylvania between 2009 and 2013 was used to create predictive models of tumor control, normal tissue complications, and mortality in this IRB-approved study. Nearly 200 data fields of detailed patient- and tumor-specific information, radiotherapy dosimetric measurements, and clinical outcomes data were collected. Predictive models were created for local tumor control, 1- and 3-year overall survival, and nodal failure using 60% of the data (leaving the remainder as a test set). After applying feature selection and dimensionality reduction, nonlinear support vector classification was applied to the resulting features. Models were evaluated for accuracy and area under the ROC curve on the 81-patient test set. Results: Models for common events in the dataset (such as mortality at one year) had the highest predictive power (AUC = .67, p < 0.05). For rare occurrences such as radiation pneumonitis and local failure (each occurring in less than 10% of patients), too few events were present to create reliable models. Conclusion: Although this study demonstrates the validity of predictive analytics using information extracted from patient medical records and can most reliably predict survival after SBRT, larger sample sizes are needed to develop predictive models for normal tissue toxicities, and more advanced machine learning methodologies need to be considered in the future.
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to end users. This information can match end-users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality of operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach built on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, we attempt to combine the operational strength of the empirical statistical analysis with simple error modelling. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on operational forecasts issued during recent floods in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973
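The log-sinh transformation of Wang et al. (2012) referenced above is a simple formula and can be sketched directly. The parameters a and b below are arbitrary illustrative values; in practice they are estimated jointly with the error model.

```python
# Log-sinh transformation and its inverse: z = log(sinh(a + b*y)) / b.
import numpy as np

a, b = 0.1, 0.02   # illustrative transformation parameters (assumed)

def log_sinh(y):
    return np.log(np.sinh(a + b * y)) / b

def log_sinh_inv(z):
    return (np.arcsinh(np.exp(b * z)) - a) / b

flows = np.array([5.0, 50.0, 500.0, 5000.0])
z = log_sinh(flows)
print(z)                # behaves like a log transform for small flows, near-linear for large flows
print(log_sinh_inv(z))  # back-transform recovers the original flows
```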
Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model
NASA Astrophysics Data System (ADS)
Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.
2015-12-01
Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
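The two post-processing ensembles proposed above can be sketched as follows; the model forecasts, predicted absolute errors, and predicted biases are invented numbers used only to illustrate the weighting and correction steps, not PRIME output.

```python
# Sketch of (1) an inverse-error-weighted multimodel ensemble and (2) a
# bias-corrected equal-weight ensemble, both driven by error predictions.
import numpy as np

def inverse_error_ensemble(intensity_forecasts, predicted_abs_errors):
    """Weight each model inversely to its predicted absolute error."""
    w = 1.0 / np.asarray(predicted_abs_errors, dtype=float)
    w /= w.sum()
    return float(np.dot(w, intensity_forecasts))

def bias_corrected_ensemble(intensity_forecasts, predicted_biases):
    """Subtract each model's predicted bias, then average the corrected forecasts."""
    corrected = np.asarray(intensity_forecasts) - np.asarray(predicted_biases)
    return float(corrected.mean())

# Four hypothetical intensity models (kt):
fcsts = [95.0, 105.0, 88.0, 100.0]
abs_err = [12.0, 8.0, 15.0, 10.0]
biases = [5.0, -3.0, -8.0, 2.0]
print(inverse_error_ensemble(fcsts, abs_err), bias_corrected_ensemble(fcsts, biases))
```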
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
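The reliability-weighted (maximum-likelihood) combination at the core of the model can be written in a few lines; the variances below are placeholders standing in for the egocentric and allocentric variabilities measured in the control tasks.

```python
# MLE cue combination: weights are inversely proportional to each cue's variance.
def mle_weights(var_ego, var_allo):
    w_ego = (1.0 / var_ego) / (1.0 / var_ego + 1.0 / var_allo)
    return w_ego, 1.0 - w_ego

def combined_estimate(x_ego, x_allo, var_ego, var_allo):
    w_ego, w_allo = mle_weights(var_ego, var_allo)
    return w_ego * x_ego + w_allo * x_allo

# Example: with illustrative variances (deg^2), a 3-deg landmark shift in the
# cue-conflict task predicts a 2-deg shift of the reach endpoint.
print(combined_estimate(0.0, 3.0, var_ego=4.0, var_allo=2.0))  # -> 2.0
```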
An improved grey model for the prediction of real-time GPS satellite clock bias
NASA Astrophysics Data System (ADS)
Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.
2008-07-01
In real-time GPS precise point positioning (PPP), real-time and reliable satellite clock bias (SCB) prediction is a key requirement. The behavior of space-borne GPS atomic clocks is difficult to characterize because of their high frequency and their sensitivity to environmental disturbances, which makes the variation of the SCB well suited to treatment as a grey system in the sense of grey model (GM) theory. Firstly, given the limitations of the quadratic polynomial (QP) and the traditional GM for SCB prediction, a modified GM(1,1) is put forward in this paper; then, taking GPS SCB data as an example, we analyze clock bias prediction with different sample intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with the QP, and derive general rules relating SCB type to GM exponent; finally, to test the reliability and validity of the proposed modified GM, we take the IGS clock bias ephemeris product as a reference and analyze the prediction precision of the modified GM. The results show that the modified GM is reliable and valid for predicting GPS SCB and can provide high-precision SCB predictions for real-time GPS PPP.
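For readers unfamiliar with grey models, the following is a minimal GM(1,1) sketch (accumulated generating operation, least-squares estimation, inverse accumulation), not the modified GM of the paper; the clock-bias series is made up for illustration.

```python
# Classic GM(1,1): fit the whitening equation to the accumulated series and
# predict future values, then difference back to the original scale.
import numpy as np

def gm11_predict(x0, steps=1):
    """Fit GM(1,1) to series x0 and return `steps` predicted future values."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # develop coefficient a, grey input b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat[n:]

clock_bias_ns = [12.1, 12.4, 12.8, 13.1, 13.5, 13.8]  # hypothetical SCB series (ns)
print(gm11_predict(clock_bias_ns, steps=2))
```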
Using Toxicological Evidence from QSAR Models in Practice
The new generation of QSAR models provides supporting documentation in addition to the predicted toxicological value. Such information enables the toxicologist to explore the properties of chemical substances and to review and increase the reliability of toxicity predictions. Thi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippett, Michael K.
2014-04-09
This report is a progress report of the accomplishments of the research grant “Collaborative Research: Separating Forced and Unforced Decadal Predictability in Models and Observations” during the period 1 May 2011 - 31 August 2013. This project is a collaborative one between Columbia University and George Mason University. George Mason University will submit a final technical report at the conclusion of their no-cost extension. The purpose of the proposed research is to identify unforced predictable components on decadal time scales, distinguish these components from forced predictable components, and to assess the reliability of model predictions of these components. Components of unforced decadal predictability will be isolated by maximizing the Average Predictability Time (APT) in long, multimodel control runs from state-of-the-art climate models. Components with decadal predictability have large APT, so maximizing APT ensures that components with decadal predictability will be detected. Optimal fingerprinting techniques, as used in detection and attribution analysis, will be used to separate variations due to natural and anthropogenic forcing from those due to unforced decadal predictability. This methodology will be applied to the decadal hindcasts generated by the CMIP5 project to assess the reliability of model projections. The question of whether anthropogenic forcing changes decadal predictability, or gives rise to new forms of decadal predictability, also will be investigated.
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This consecutive study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework based on the HFACS model (Shappell and Wiegmann, 2001a,b) for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
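The measurement-reliability check described above amounts to computing inter-rater agreement such as Cohen's kappa on paired classifications; the HFACS-style labels below are illustrative, not the study's data.

```python
# Two analysts classify the same audit findings; agreement is summarised by kappa.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["skill_error", "decision_error", "perceptual_error", "skill_error", "violation"]
rater_2 = ["skill_error", "decision_error", "skill_error", "skill_error", "violation"]
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # the study reports values of 0.5-0.8 after ~200 audits
```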
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
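In the spirit of the leak-rate example mentioned above, a minimal conjugate Bayesian update for a constant Poisson failure (or leak) rate looks like this; the prior parameters and observed counts are invented for illustration.

```python
# Gamma prior on a Poisson rate, updated with observed failures over an exposure time.
from scipy import stats

alpha0, beta0 = 2.0, 1000.0           # prior: roughly 2 events per 1000 hours of exposure
failures, exposure_hours = 3, 5200.0  # observed data (invented)

alpha_post = alpha0 + failures
beta_post = beta0 + exposure_hours
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```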
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System acuity (ANS). Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity as for convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to the expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
Testing DRAINMOD-FOREST for predicting evapotranspiration in a mid-rotation pine plantation
Shiying Tian; Mohamed A. Youssef; Ge Sun; George M. Chescheir; Asko Noormets; Devendra M. Amatya; R. Wayne Skaggs; John S. King; Steve McNulty; Michael Gavazzi; Guofang Miao; Jean-Christophe Domec
2015-01-01
Evapotranspiration (ET) is a key component of the hydrologic cycle in terrestrial ecosystems and accurate description of ET processes is essential for developing reliable ecohydrological models. This study investigated the accuracy of ET prediction by the DRAINMOD-FOREST after its calibration/validation for predicting commonly measured hydrological variables. The model...
Coughtrie, A R; Borman, D J; Sleigh, P A
2013-06-01
Flow in a gas-lift digester with a central draft-tube was investigated using computational fluid dynamics (CFD) and different turbulence closure models. The k-ω Shear-Stress-Transport (SST), Renormalization-Group (RNG) k-∊, Linear Reynolds-Stress-Model (RSM) and Transition-SST models were tested for a gas-lift loop reactor under Newtonian flow conditions validated against published experimental work. The results identify that flow predictions within the reactor (where flow is transitional) are particularly sensitive to the turbulence model implemented; the Transition-SST model was found to be the most robust for capturing mixing behaviour and predicting separation reliably. Therefore, Transition-SST is recommended over k-∊ models for use in comparable mixing problems. A comparison of results obtained using multiphase Euler-Lagrange and singlephase approaches are presented. The results support the validity of the singlephase modelling assumptions in obtaining reliable predictions of the reactor flow. Solver independence of results was verified by comparing two independent finite-volume solvers (Fluent-13.0sp2 and OpenFOAM-2.0.1). Copyright © 2013 Elsevier Ltd. All rights reserved.
Flood loss model transfer: on the value of additional data
NASA Astrophysics Data System (ADS)
Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno
2017-04-01
The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage influencing factors is expensive and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013 mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision and hit rate is an indicator for model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
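A hedged sketch of the multi-variable approach: a Random Forest loss model wrapped in a bootstrap loop so that the spread of predictions expresses model uncertainty as additional regional or event data are added. The feature matrix and loss ratios are assumed inputs; none of the names refer to the actual survey variables.

```python
# Bootstrap an ensemble of Random Forest loss models to quantify predictive uncertainty.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

def bootstrap_rf_predictions(X, y, X_new, n_boot=100, seed=0):
    """Return an (n_boot, len(X_new)) array of loss predictions from bootstrap refits."""
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=rng)
        rf = RandomForestRegressor(n_estimators=200, random_state=rng.randint(1 << 30))
        rf.fit(Xb, yb)
        preds.append(rf.predict(X_new))
    return np.vstack(preds)

# The spread of the bootstrap predictions (e.g. the 5th-95th percentile band)
# gives a simple sharpness/reliability summary that can be tracked as the
# training pool is incrementally extended with new regions or events.
```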
Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices
NASA Astrophysics Data System (ADS)
Michaelides, Stylianos
Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face-down devices without the underfill, based on the thorough understanding of the failure modes. Also, practical design guidelines for material, geometry and process parameters for reliable flip-chip devices have been developed.
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g. OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.
Predicting enhancer activity and variant impact using gkm-SVM.
Beer, Michael A
2017-09-01
We participated in the Critical Assessment of Genome Interpretation eQTL challenge to further test computational models of regulatory variant impact and their association with human disease. Our prediction model is based on a discriminative gapped-kmer SVM (gkm-SVM) trained on genome-wide chromatin accessibility data in the cell type of interest. The comparisons with massively parallel reporter assays (MPRA) in lymphoblasts show that gkm-SVM is among the most accurate prediction models even though all other models used the MPRA data for model training, and gkm-SVM did not. In addition, we compare gkm-SVM with other MPRA datasets and show that gkm-SVM is a reliable predictor of expression and that deltaSVM is a reliable predictor of variant impact in K562 cells and mouse retina. We further show that DHS (DNase-I hypersensitive sites) and ATAC-seq (assay for transposase-accessible chromatin using sequencing) data are equally predictive substrates for training gkm-SVM, and that DHS regions flanked by H3K27Ac and H3K4me1 marks are more predictive than DHS regions alone. © 2017 Wiley Periodicals, Inc.
Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X
2016-09-01
The paper highlights the use of the logistic regression (LR) method in the construction of acceptable, statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. The essentials of a reliable model were all considered carefully. The model predictors were selected by stepwise forward discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were examined through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of compounds are also discussed.
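A simplified sketch of the workflow: fit a logistic-regression classifier on molecular descriptors and flag query compounds whose leverage exceeds the common warning threshold h* = 3(p+1)/n, i.e. compounds outside the applicability domain. The descriptor matrices are placeholders for the CODESSA/DRAGON descriptors.

```python
# Logistic regression for toxic mode of action plus a leverage-based domain check.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_leverage(X_train, X_query):
    """Leverage h = x (X'X)^-1 x' of query compounds w.r.t. the training descriptors."""
    Xt = np.column_stack((np.ones(len(X_train)), X_train))
    Xq = np.column_stack((np.ones(len(X_query)), X_query))
    XtX_inv = np.linalg.pinv(Xt.T @ Xt)
    return np.einsum("ij,jk,ik->i", Xq, XtX_inv, Xq)

def classify_with_domain_check(X_train, y_train, X_query):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    h_star = 3.0 * (X_train.shape[1] + 1) / len(X_train)   # warning leverage threshold
    reliable = query_leverage(X_train, X_query) <= h_star  # True = inside the domain
    return clf.predict_proba(X_query)[:, 1], reliable
```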
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour); observational data are thus of little use for runoff predictions, and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions to be carried out, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for different long-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
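A generic architecture-based Markov sketch (in the spirit of Cheung-style models, not the paper's COSMIC-FFP procedure): each component has a reliability, control transfers between components with fixed probabilities, and system reliability is the probability of reaching the terminal component without failure. All numbers are illustrative.

```python
# Absorbing discrete-time Markov chain: solve (I - Q) s = b for the probability
# of eventual success from each component; the system reliability is s[0].
import numpy as np

R = np.array([0.999, 0.995, 0.990])   # component reliabilities (illustrative)
P = np.array([[0.0, 0.7, 0.3],        # control-flow transition probabilities
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])       # component 2 is the terminal component

Q = R[:, None] * P                    # transitions that occur only if the component succeeds
b = np.zeros(len(R))
b[2] = R[2]                           # terminal component delivers the correct output
success_prob = np.linalg.solve(np.eye(len(R)) - Q, b)
print("system reliability starting at component 0:", success_prob[0])
```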
Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A
2015-12-01
The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, there are no data available to date to demonstrate the suitability of such models to make reliable projections on the development time for this species in field conditions. In the present study, 19 models available in the literature were tested regarding their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Only 11 out of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely and therefore should be used with caution for pest management purposes.
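A degree-day model of the kind tested above projects development time by accumulating daily degree-days above a lower temperature threshold until a thermal constant is reached; the threshold, constant, and temperature series below are placeholders rather than parameters of any of the 19 published models.

```python
# Accumulate degree-days and report the day the thermal constant is reached.
import numpy as np

def predicted_development_days(daily_mean_temps, lower_threshold, thermal_constant):
    """Return the day on which accumulated degree-days reach the thermal constant."""
    dd = np.maximum(np.asarray(daily_mean_temps, dtype=float) - lower_threshold, 0.0)
    cum = np.cumsum(dd)
    if cum[-1] < thermal_constant:
        return None                       # development not completed within the series
    return int(np.argmax(cum >= thermal_constant)) + 1

temps = [18, 20, 22, 21, 19, 23, 24, 22, 20, 21, 22, 23]   # daily means (deg C), invented
print(predicted_development_days(temps, lower_threshold=7.0, thermal_constant=140.0))
```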
Calibration plots for risk prediction models in the presence of competing risks.
Gerds, Thomas A; Andersen, Per K; Kattan, Michael W
2014-08-15
A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
A simulation model for risk assessment of turbine wheels
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Hage, Richard T.
1991-01-01
A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
Study of complete interconnect reliability for a GaAs MMIC power amplifier
NASA Astrophysics Data System (ADS)
Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao
2018-05-01
By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions is achieved in this article. As an example, an MMIC PA is modelled to study electromigration failure of the interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operation simultaneously. By training on data from the FEA, a high-accuracy ANN model for PA reliability is constructed. The reliability database obtained from the ANN model then provides important guidance for improving the reliability design of the IC.
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li
2017-04-01
The thickness of tectonically deformed coal (TDC) is positively associated with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to the thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area using the PCA attributes as the inputs. Comparing the predicted results, it is noted that the trained ELM model with two selected PCA attributes yields better prediction results than those from the other combinations of the attributes. Finally, the trained ELM model for real seismic data has a different number of hidden nodes (10) than the trained ELM model for synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function and the number of hidden nodes in the ELM model should be selected and tested carefully based on the individual application.
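An extreme learning machine is simple enough to sketch directly: random input weights, a sigmoid hidden layer, and output weights obtained by least squares. The 20 hidden nodes follow the text; the input PCA attributes and thickness values are assumed to be supplied by the user.

```python
# Minimal ELM regressor: fixed random hidden layer, least-squares output weights.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=20, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.RandomState(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activation

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        self.beta = np.linalg.pinv(self._hidden(X)) @ y             # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# pca_attrs: (n_traces, n_pca_attributes) array of seismic PCA attributes;
# thickness: TDC thickness at near-well traces (both assumed available).
# elm = ELMRegressor(n_hidden=20).fit(pca_attrs, thickness)
```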
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
Protein-Protein Interface Predictions by Data-Driven Methods: A Review
Xue, Li C; Dobbs, Drena; Bonvin, Alexandre M.J.J.; Honavar, Vasant
2015-01-01
Reliably pinpointing which specific amino acid residues form the interface(s) between a protein and its binding partner(s) is critical for understanding the structural and physicochemical determinants of protein recognition and binding affinity, and has wide applications in modeling and validating protein interactions predicted by high-throughput methods, in engineering proteins, and in prioritizing drug targets. Here, we review the basic concepts, principles and recent advances in computational approaches to the analysis and prediction of protein-protein interfaces. We point out caveats for objectively evaluating interface predictors, and discuss various applications of data-driven interface predictors for improving energy model-driven protein-protein docking. Finally, we stress the importance of exploiting binding partner information in reliably predicting interfaces and highlight recent advances in this emerging direction. PMID:26460190
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method employing a knowledge base of human expertise comprising a reliability model analysis implemented for diagnostic routines is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting the system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and which provides a knowledge base that can be used to solve other artificial intelligence problems.
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
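The k-out-of-n construction described above can be evaluated with a binomial survival sum; the component failure rate, subsystem size, and series structure below are illustrative stand-ins, not the STSA values taken from the ESA documents.

```python
# Subsystem survives while fewer than k of its n components have failed; the
# system here is a simple series of identical subsystems.
import numpy as np
from scipy.stats import binom

def k_out_of_n_reliability(n, k, r_component):
    """P(fewer than k failures among n components), i.e. the subsystem survives."""
    p_fail = 1.0 - r_component
    return binom.cdf(k - 1, n, p_fail)

def exp_component_reliability(t_years, failure_rate_per_year):
    """Exponential (no-ageing) component model; a Weibull form could add ageing."""
    return np.exp(-failure_rate_per_year * t_years)

# Example: 20 subsystems in series, each tolerating up to 2 failures among
# 10 components, evaluated at 5 years with an assumed failure rate.
R_c = exp_component_reliability(5.0, 0.01)
R_sub = k_out_of_n_reliability(n=10, k=3, r_component=R_c)
R_system = R_sub ** 20
print(R_sub, R_system)
```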
Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations
Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo
2016-01-01
In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of facial distortion postoperatively. However, it is difficult to simulate the soft tissue behaviors affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types and to improve the evaluation of the elastic parameters included in the RB model. Therefore, the soft facial tissue deformation can be predicted by the biomechanical properties and the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate biomechanical information the model has, the better prediction performance it could achieve. PMID:27717593
Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.
Davidich, Maria; Köster, Gerta
2013-01-01
Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.
Emery, John M.; Field, Richard V.; Foulk, James W.; ...
2015-05-26
Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method is presented for constructing a surrogate model, based on stochastic reduced-order models, and is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking as, because of the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with the traditional Monte Carlo simulation with rather remarkable results.
Short-term prediction of solar energy in Saudi Arabia using automated-design fuzzy logic systems.
Almaraashi, Majid
2017-01-01
Solar energy is considered one of the main sources of renewable energy in the near future. However, solar energy and other renewable energy sources have a drawback related to the difficulty in predicting their availability in the near future. This problem affects the optimal exploitation of solar energy, especially in connection with other resources. Therefore, reliable solar energy prediction models are essential to solar energy management and economics. This paper presents work aimed at designing reliable models to predict the global horizontal irradiance (GHI) for the next day at 8 stations in Saudi Arabia. The designed models are based on computational intelligence methods of automated-design fuzzy logic systems. The fuzzy logic systems are designed and optimized with two models using fuzzy c-means clustering (FCM) and simulated annealing (SA) algorithms. The first model uses FCM based on the subtractive clustering algorithm to automatically design the predictor fuzzy rules from data. The second model uses FCM followed by a simulated annealing algorithm to enhance the prediction accuracy of the fuzzy logic system. The objective of the predictor is to accurately predict next-day global horizontal irradiance (GHI) using previous-day meteorological and solar radiation observations. The proposed models use observations of 10 variables of measured meteorological and solar radiation data to build the model. The experimentation and results of the prediction are detailed, where the root mean square error of the prediction was approximately 88% for the second model tuned by simulated annealing compared to 79.75% accuracy using the first model. These results demonstrate good modeling accuracy of the second model despite the fact that the training and testing of the proposed models were carried out using spatially and temporally independent data.
Jarošík, Vojtěch; Pyšek, Petr; Foxcroft, Llewellyn C.; Richardson, David M.; Rouget, Mathieu; MacFadyen, Sandra
2011-01-01
Background: Overcoming boundaries is crucial for incursion of alien plant species and their successful naturalization and invasion within protected areas. Previous work showed that in Kruger National Park, South Africa, this process can be quantified and that factors determining the incursion of invasive species can be identified and predicted confidently. Here we explore the similarity between determinants of incursions identified by the general model based on a multispecies assemblage, and those identified by species-specific models. We analyzed the presence and absence of six invasive plant species in 1.0×1.5 km segments along the border of the park as a function of environmental characteristics from outside and inside the KNP boundary, using two data-mining techniques: classification trees and random forests. Principal Findings: The occurrence of Ageratum houstonianum, Chromolaena odorata, Xanthium strumarium, Argemone ochroleuca, Opuntia stricta and Lantana camara can be reliably predicted based on landscape characteristics identified by the general multispecies model, namely water runoff from surrounding watersheds and road density in a 10 km radius. The presence of main rivers and species-specific combinations of vegetation types are reliable predictors from inside the park. Conclusions: The predictors from the outside and inside of the park are complementary, and are approximately equally reliable for explaining the presence/absence of current invaders; those from the inside are, however, more reliable for predicting future invasions. Landscape characteristics determined as crucial predictors from outside the KNP serve as guidelines for management to enact proactive interventions to manipulate landscape features near the KNP to prevent further incursions. Predictors from inside the KNP can be used reliably to identify high-risk areas to improve the cost-effectiveness of management, to locate invasive plants and target them for eradication. PMID:22194893
A semi-supervised learning approach for RNA secondary structure prediction.
Yonemoto, Haruka; Asai, Kiyoshi; Hamada, Michiaki
2015-08-01
RNA secondary structure prediction is a key technology in RNA bioinformatics. Most algorithms for RNA secondary structure prediction use probabilistic models, in which the model parameters are trained with reliable RNA secondary structures. Because of the difficulty of determining RNA secondary structures by experimental procedures, such as NMR or X-ray crystal structural analyses, there are still many RNA sequences that could be useful for training whose secondary structures have not been experimentally determined. In this paper, we introduce a novel semi-supervised learning approach for training parameters in a probabilistic model of RNA secondary structures in which we employ not only RNA sequences with annotated secondary structures but also ones with unknown secondary structures. Our model is based on a hybrid of generative (stochastic context-free grammars) and discriminative models (conditional random fields) that has been successfully applied to natural language processing. Computational experiments indicate that the accuracy of secondary structure prediction is improved by incorporating RNA sequences with unknown secondary structures into training. To our knowledge, this is the first study of a semi-supervised learning approach for RNA secondary structure prediction. This technique will be useful when the number of reliable structures is limited. Copyright © 2015 Elsevier Ltd. All rights reserved.
Model for the prediction of subsurface strata movement due to underground mining
NASA Astrophysics Data System (ADS)
Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan
2017-12-01
The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in the strata. Hence, knowing the subsurface strata movement characteristics, and making subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence function method are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship between the stratum of interest and the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers' understanding of the movement behavior of all strata over underground excavations, and assist in making damage mitigation plans. In order to check the reliability of the model, two methods are used and cross-validated. One uses borehole TV monitor recordings to identify the progress of subsurface stratum bedding and caving in a coal mine; the other is physical modelling of the subsidence in underground strata. The results of these two methods are compared with the theoretical results calculated by the proposed mathematical model. The results agree well with each other, and the acceptable accuracy and reliability of the proposed prediction model are thus validated.
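The influence-function idea can be illustrated with a one-dimensional sketch: each extracted element contributes a bell-shaped (Knothe-type) influence to the subsidence of points above it, and the profile is the superposition of those contributions. The particular kernel, influence radius, and subsidence factor below are illustrative assumptions, not the parameters of the proposed step-wise model.
```python
# 1-D influence-function subsidence sketch (Knothe-type kernel).
# s(x) = a*m * integral over the extracted panel of f(x-u) du,
# with f(x) = (1/r) * exp(-pi * x^2 / r^2) and r = H / tan(beta).
import numpy as np

def subsidence_profile(x, panel=(-100.0, 100.0), m=3.0, a=0.8, H=200.0, beta_deg=35.0):
    r = H / np.tan(np.radians(beta_deg))        # radius of major influence
    u = np.linspace(panel[0], panel[1], 2000)   # discretised extracted panel
    du = u[1] - u[0]
    f = (1.0 / r) * np.exp(-np.pi * (x[:, None] - u[None, :]) ** 2 / r ** 2)
    return a * m * f.sum(axis=1) * du           # superposition of element influences

x = np.linspace(-400, 400, 401)
s = subsidence_profile(x)
print("maximum subsidence ~", round(s.max(), 2), "m")
```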
Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.
Soleimani, Hossein; Hensman, James; Saria, Suchi
2017-08-21
Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
Rainfall prediction of Cimanuk watershed regions with canonical correlation analysis (CCA)
NASA Astrophysics Data System (ADS)
Rustiana, Shailla; Nurani Ruchjana, Budi; Setiawan Abdullah, Atje; Hermawan, Eddy; Berliana Sipayung, Sinta; Gede Nyoman Mindra Jaya, I.; Krismianto
2017-10-01
Rainfall prediction in Indonesia strongly influences various development sectors, such as agriculture, fisheries, water resources, and industry, and inaccurate predictions can have negative effects. The Cimanuk watershed is one of the main pillars of water resources in West Java. The watershed is divided into three parts: the headwater, middle, and downstream Cimanuk sub-watersheds. Its flow passes through the Jatigede reservoir and will supply water to the north-coast area in the next few years, so a reliable rainfall prediction model for this watershed is needed. Rainfall prediction was conducted with the Canonical Correlation Analysis (CCA) method using the Climate Predictability Tool (CPT) software. Predictions were made for every 3-month season in 2016 (after January) based on Climate Hazards group Infrared Precipitation with Stations (CHIRPS) data over West Java. The predictors used in CPT were the monthly index data of Nino3.4, the Dipole Mode Index (DMI), and monsoon indices (AUSMI, ISMI, WNPMI, WYMI) with a January initial condition, chosen according to the latest data update. The predictands were monthly CHIRPS rainfall data for the West Java region. The prediction results are presented as skill maps of the Pearson correlation. Skill is high for MAM (Mar-Apr-May), AMJ (Apr-May-Jun), and JJA (Jun-Jul-Aug), which means the model reliably forecasts the rainfall distribution over the Cimanuk watershed region (over West Java) in those seasons; the CCA scores for those seasonal predictions are mostly above 0.7. The accuracy of the CPT model is also indicated by the Relative Operating Characteristic (ROC) curves at three representative sub-watershed points (Sumedang, Majalengka, and Cirebon), which lie mostly above the no-skill line, and by the agreement between observed and forecast rainfall patterns. The CPT model with the CCA method is therefore reliable for use in this region.
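A hedged sketch of the statistical core of this workflow: canonical correlation analysis between a predictor field (monthly climate indices) and a predictand field (gridded rainfall), done here with scikit-learn rather than the CPT software; array shapes, index counts, and the skill metric are illustrative.
```python
# Sketch of a CCA-based seasonal rainfall prediction (CPT-like workflow).
import numpy as np
from sklearn.cross_decomposition import CCA
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_years, n_indices, n_gridpoints = 30, 6, 50
X = rng.normal(size=(n_years, n_indices))       # predictors: Nino3.4, DMI, monsoon indices, ...
Y = X @ rng.normal(size=(n_indices, n_gridpoints)) + rng.normal(size=(n_years, n_gridpoints))

cca = CCA(n_components=3)
cca.fit(X[:-5], Y[:-5])                         # train on earlier years
Y_hat = cca.predict(X[-5:])                     # forecast the last 5 years

# "skill map": Pearson correlation between forecast and observation at each grid point
skill = np.array([pearsonr(Y_hat[:, j], Y[-5:, j])[0] for j in range(n_gridpoints)])
print("median grid-point skill:", round(float(np.median(skill)), 2))
```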
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs, and render a formal description of both the uncertainty in the model's parameters and of the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of model parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
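One simple way to illustrate an interval predictor model, hedged as a sketch rather than the authors' formulation: take the interval bounds to be two functions that are linear in their parameters and minimise the average spread subject to every observation falling inside the interval, which is a linear program.
```python
# Sketch: linear-programming construction of an interval predictor model.
# Bounds: lower(x) = theta_l . phi(x), upper(x) = theta_u . phi(x), phi(x) = [1, x, x^2].
# Objective: minimise mean spread; constraints: lower(x_i) <= y_i <= upper(x_i).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.2, x.size)
Phi = np.column_stack([np.ones_like(x), x, x**2])      # basis functions
k = Phi.shape[1]

# Decision vector z = [theta_u, theta_l]; minimise mean(Phi) . (theta_u - theta_l).
c = np.concatenate([Phi.mean(axis=0), -Phi.mean(axis=0)])
A_ub = np.block([[-Phi, np.zeros_like(Phi)],           # -upper(x_i) <= -y_i
                 [np.zeros_like(Phi), Phi]])           #  lower(x_i) <=  y_i
b_ub = np.concatenate([-y, y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2 * k)

theta_u, theta_l = res.x[:k], res.x[k:]
print("mean interval width:", round(float(Phi.mean(axis=0) @ (theta_u - theta_l)), 3))
```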
Fuel model selection for BEHAVE in midwestern oak savannas
Grabner, K.W.; Dwyer, J.P.; Cutter, B.E.
2001-01-01
BEHAVE, a fire behavior prediction system, can be a useful tool for managing areas with prescribed fire. However, the proper choice of fuel models can be critical in developing management scenarios. BEHAVE predictions were evaluated using four standardized fuel models that partially described oak savanna fuel conditions: Fuel Model 1 (Short Grass), 2 (Timber and Grass), 3 (Tall Grass), and 9 (Hardwood Litter). Although all four models yielded regressions with R2 in excess of 0.8, Fuel Model 2 produced the most reliable fire behavior predictions.
A Bayesian modification to the Jelinski-Moranda software reliability growth model
NASA Technical Reports Server (NTRS)
Littlewood, B.; Sofer, A.
1983-01-01
The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian Jelinski-Moranda model (BJM) is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change to an assumption underlying both models, intended to represent the debugging process more accurately, is also discussed.
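For context, a small sketch of maximum-likelihood fitting of the original Jelinski-Moranda model, the approach whose instability motivates the Bayesian reparameterisation: with N initial faults and per-fault hazard phi, the i-th interfailure time is exponential with rate phi*(N - i + 1). The failure data below are synthetic.
```python
# Sketch: maximum-likelihood estimation of the Jelinski-Moranda parameters (N, phi)
# from interfailure times t_1..t_n. Log-likelihood:
#   sum_i [ log(phi*(N-i+1)) - phi*(N-i+1)*t_i ]
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
N_true, phi_true, n = 40, 0.02, 25
t = np.array([rng.exponential(1.0 / (phi_true * (N_true - i))) for i in range(n)])

def profile_neg_loglik(N):
    i = np.arange(1, n + 1)
    k = N - i + 1                                   # remaining faults before failure i
    phi_hat = n / np.sum(k * t)                     # phi maximising the likelihood for fixed N
    return -(np.sum(np.log(phi_hat * k)) - phi_hat * np.sum(k * t))

res = minimize_scalar(profile_neg_loglik, bounds=(n, 10 * n), method="bounded")
N_hat = res.x
phi_hat = n / np.sum((N_hat - np.arange(1, n + 1) + 1) * t)
print("estimated N:", round(float(N_hat), 1), " estimated phi:", round(float(phi_hat), 4))
```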
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
NASA Astrophysics Data System (ADS)
Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai
2017-09-01
Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
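A minimal sketch of the RR-BLUP idea used in the study: ridge regression of phenotypes on SNP genotypes, with GEBVs obtained from the estimated marker effects and predictive ability assessed by cross-validated correlation. The marker matrix here is simulated, not the SLAF-seq data.
```python
# Sketch: RR-BLUP-style genomic prediction with ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n_ind, n_snp = 205, 6359
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)    # 0/1/2 genotype codes
beta = np.zeros(n_snp)
beta[rng.choice(n_snp, 100, replace=False)] = rng.normal(0, 0.3, 100)
y = M @ beta + rng.normal(0, 1.0, n_ind)                     # simulated phenotype

model = Ridge(alpha=0.5 * n_snp)     # shrinkage plays the role of the marker-effect variance ratio
accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
    model.fit(M[train], y[train])
    gebv = model.predict(M[test])    # GEBV = sum of estimated marker effects
    accs.append(np.corrcoef(gebv, y[test])[0, 1])
print("mean predictive ability (correlation):", round(float(np.mean(accs)), 3))
```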
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, John M.; Coffin, Peter; Robbins, Brian A.
Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from the subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
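As a toy illustration of the kind of Markov reliability calculation discussed, not the thesis's hierarchical decomposition itself, the sketch below evaluates the unreliability of a duplex system with imperfect coverage by propagating a continuous-time Markov model with the matrix exponential; all rates are illustrative.
```python
# Sketch: CTMC reliability of a duplex system with per-unit failure rate lam
# and coverage c (probability a first failure is handled successfully).
# States: 0 = both good, 1 = one good (after a covered failure), 2 = failed (absorbing).
import numpy as np
from scipy.linalg import expm

lam, c = 1e-3, 0.95                     # per-hour failure rate, coverage (illustrative)
Q = np.array([[-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
              [0.0,      -lam,          lam],
              [0.0,       0.0,          0.0]])   # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):
    p_t = p0 @ expm(Q * t)
    print(f"t = {t:6.0f} h  unreliability = {p_t[2]:.3e}")
```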
A lightweight thermal heat switch for redundant cryocooling on satellites
NASA Astrophysics Data System (ADS)
Dietrich, M.; Euler, A.; Thummes, G.
2017-04-01
A previously designed cryogenic thermal heat switch for space applications has been optimized for low mass, high structural stability, and reliability. The heat switch makes use of the large linear thermal expansion coefficient (CTE) of the thermoplastic UHMW-PE for actuation. A structure model, which includes the temperature dependent properties of the actuator, is derived to be able to predict the contact pressure between the switch parts. This pressure was used in a thermal model in order to predict the switch performance under different heat loads and operating temperatures. The two models were used to optimize the mass and stability of the switch. Its reliability was proven by cyclic actuation of the switch and by shaker tests.
Use of Landsat data to predict the trophic state of Minnesota lakes
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Johnson, W. L.; Deuell, R. L.; Lindstrom, O. M.; Meisner, D. E.
1983-01-01
Near-concurrent Landsat Multispectral Scanner (MSS) and ground data were obtained for 60 lakes distributed in two Landsat scene areas. The ground data included measurement of secchi disk depth, chlorophyll-a, total phosphorus, turbidity, color, and total nitrogen, as well as Carlson Trophic State Index (TSI) values derived from the first three parameters. The Landsat data best correlated with the TSI values. Prediction models were developed to classify some 100 'test' lakes appearing in the two analysis scenes on the basis of TSI estimates. Clouds, wind, poor image data, small lake size, and shallow lake depth caused some problems in lake TSI prediction. Overall, however, the Landsat-predicted TSI estimates were judged to be very reliable for the secchi-derived TSI estimation, moderately reliable for prediction of the chlorophyll-a TSI, and unreliable for the phosphorus value. Numerous Landsat data extraction procedures were compared, and the success of the Landsat TSI prediction models was a strong function of the procedure employed.
Richard S. Holthausen; Michael J. Wisdom; John Pierce; Daniel K. Edwards; Mary M. Rowland
1994-01-01
We used expert opinion to evaluate the predictive reliability of a habitat effectiveness model for elk in western Oregon and Washington. Twenty-five experts in elk ecology were asked to rate habitat quality for 16 example landscapes. Rankings and ratings of 21 experts were significantly correlated with model output. Expert opinion and model predictions differed for 4...
Forward modelling requires intention recognition and non-impoverished predictions.
de Ruiter, Jan P; Cummins, Chris
2013-08-01
We encourage Pickering & Garrod (P&G) to implement this promising theory in a computational model. The proposed theory crucially relies on having an efficient and reliable mechanism for early intention recognition. Furthermore, the generation of impoverished predictions is incompatible with a number of key phenomena that motivated P&G's theory. Explaining these phenomena requires fully specified perceptual predictions in both comprehension and production.
Lindsay M. Grayson; Robert A. Progar; Sharon M. Hood
2017-01-01
Fire is a driving force in the North American landscape and predicting post-fire tree mortality is vital to land management. Post-fire tree mortality can have substantial economic and social impacts, and natural resource managers need reliable predictive methods to anticipate potential mortality following fire events. Current fire mortality models are limited to a few...
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
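A hedged sketch of the Weibull strength statistics that CARES/Life-type analyses build on: for a uniaxially stressed component, the probability of failure follows a two-parameter Weibull law in the applied stress, and probabilistic loads can be folded in by Monte Carlo sampling. The parameter values below are illustrative, not from the software.
```python
# Sketch: Weibull failure probability for a ceramic part under a stochastic load.
# P_f(sigma) = 1 - exp(-(sigma / sigma_0)**m), with m the Weibull modulus.
import numpy as np

m, sigma_0 = 10.0, 400.0                 # illustrative Weibull modulus and scale (MPa)

def failure_probability(sigma):
    return 1.0 - np.exp(-(sigma / sigma_0) ** m)

rng = np.random.default_rng(4)
loads = rng.normal(250.0, 20.0, 100_000)             # probabilistic applied stress (MPa)
print("P_f at 250 MPa (deterministic load):", f"{failure_probability(250.0):.3e}")
print("P_f with load scatter (Monte Carlo):", f"{float(failure_probability(loads).mean()):.3e}")
```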
Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai
2015-01-01
Objectives For successful adoption of legislation controlling registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, the methods used to predict toxicities of chemical substances while acquiring the required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, and their associated information, that can make predictions about substances expected to be regulated. We used models that predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368
Carlisle, Daren M.; Wolock, David M.; Howard, Jeannette K.; Grantham, Theodore E.; Fesenmyer, Kurt; Wieczorek, Michael
2016-12-12
Because natural patterns of streamflow are a fundamental property of the health of streams, there is a critical need to quantify the degree to which human activities have modified natural streamflows. A requirement for assessing streamflow modification in a given stream is a reliable estimate of flows expected in the absence of human influences. Although there are many techniques to predict streamflows in specific river basins, there is a lack of approaches for making predictions of natural conditions across large regions and over many decades. In this study conducted by the U.S. Geological Survey, in cooperation with The Nature Conservancy and Trout Unlimited, the primary objective was to develop empirical models that predict natural (that is, unaffected by land use or water management) monthly streamflows from 1950 to 2012 for all stream segments in California. Models were developed using measured streamflow data from the existing network of streams where daily flow monitoring occurs, but where the drainage basins have minimal human influences. Widely available data on monthly weather conditions and the physical attributes of river basins were used as predictor variables. Performance of regional-scale models was comparable to that of published mechanistic models for specific river basins, indicating the models can be reliably used to estimate natural monthly flows in most California streams. A second objective was to develop a model that predicts the likelihood that streams experience modified hydrology. New models were developed to predict modified streamflows at 558 streamflow monitoring sites in California where human activities affect the hydrology, using basin-scale geospatial indicators of land use and water management. Performance of these models was less reliable than that for the natural-flow models, but results indicate the models could be used to provide a simple screening tool for identifying, across the State of California, which streams may be experiencing anthropogenic flow modification.
Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M
2014-06-01
Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models in the tropics and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but, to do that reliably may require further improvements in the discipline. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide, whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate our suggested approach with an application to model selection between different soil-plant models following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty, if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advertise to include our suggested upgrade in the Bayesian model averaging routine.
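A simplified sketch of the suggested brute-force procedure: compute posterior model weights from (Gaussian-likelihood) fits to the calibration data, then repeatedly perturb the observations with random measurement error and record the spread of the recomputed weights as the "weighting variance". The models, data, and noise level below are synthetic and only illustrate the mechanics.
```python
# Sketch: robustness of Bayesian-model-averaging weights against measurement noise.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 30)
y_obs = 1.0 + 2.0 * t + rng.normal(0, 0.1, t.size)        # noisy calibration data

# Two candidate model predictions (already calibrated, for illustration only)
predictions = {"linear": 1.0 + 2.0 * t, "quadratic": 1.1 + 1.6 * t + 0.4 * t**2}
sigma_noise = 0.1

def bma_weights(y):
    logL = np.array([-0.5 * np.sum(((y - p) / sigma_noise) ** 2)
                     for p in predictions.values()])
    w = np.exp(logL - logL.max())                          # equal model priors assumed
    return w / w.sum()

w0 = bma_weights(y_obs)
perturbed = np.array([bma_weights(y_obs + rng.normal(0, sigma_noise, y_obs.size))
                      for _ in range(2000)])
print("weights on original data:", np.round(w0, 3))
print("weighting std under noise:", np.round(perturbed.std(axis=0), 3))
```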
A probabilistic and adaptive approach to modeling performance of pavement infrastructure
DOT National Transportation Integrated Search
2007-08-01
Accurate prediction of pavement performance is critical to pavement management agencies. Reliable and accurate predictions of pavement infrastructure performance can save significant amounts of money for pavement infrastructure management agencies th...
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
Cao, Qi; Leung, K M
2014-09-22
Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
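A hedged sketch of the DE-SVC idea: use differential evolution to search the SVC hyperparameters (here C and gamma) that maximise cross-validated classification accuracy. The feature matrix is random stand-in data, not molecular descriptors or fingerprints.
```python
# Sketch: tuning an RBF support vector classifier with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def neg_cv_accuracy(params):
    log_C, log_gamma = params
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma, kernel="rbf")
    return -cross_val_score(clf, X, y, cv=5).mean()

result = differential_evolution(neg_cv_accuracy,
                                bounds=[(-2, 3), (-4, 1)],   # log10(C), log10(gamma)
                                maxiter=15, popsize=10, seed=0, polish=False)
print("best log10(C), log10(gamma):", np.round(result.x, 2),
      " CV accuracy:", round(-result.fun, 3))
```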
ASME V&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
Development and evaluation of height diameter at breast models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that the allometric equation of tree height that uses diameter at breast height as the independent variable can better reflect the change of tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables such as tree height, main dbh, and altitude can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
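A minimal sketch of fitting a nonlinear height-diameter allometric model of the general kind evaluated in the study; the Chapman-Richards form and the simulated data below are illustrative choices, not the authors' recommended model.
```python
# Sketch: fitting a Chapman-Richards height-dbh model  H = 1.3 + a*(1 - exp(-b*D))**c
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(D, a, b, c):
    return 1.3 + a * (1.0 - np.exp(-b * D)) ** c

rng = np.random.default_rng(6)
D = rng.uniform(5, 120, 400)                              # dbh (cm), simulated
H = chapman_richards(D, 38.0, 0.03, 1.2) + rng.normal(0, 1.5, D.size)

params, _ = curve_fit(chapman_richards, D, H, p0=[30.0, 0.05, 1.0],
                      bounds=([1.0, 0.001, 0.5], [100.0, 1.0, 5.0]))
residuals = H - chapman_richards(D, *params)
rmse = float(np.sqrt(np.mean(residuals ** 2)))
print("fitted (a, b, c):", np.round(params, 3), " RMSE:", round(rmse, 2), "m")
```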
Blum, Meike; Distl, Ottmar
2014-01-01
In the present study, breeding values for canine congenital sensorineural deafness, the presence of blue eyes, and patches were predicted using multivariate animal models to test the reliability of the breeding values for planned matings. The dataset consisted of 6669 German Dalmatian dogs born between 1988 and 2009. Data were provided by the Dalmatian kennel clubs that are members of the German Association for Dog Breeding and Husbandry (VDH). The hearing status of all dogs was evaluated using brainstem auditory evoked potentials. The reliability derived from the prediction error variance of the breeding values, and the realized reliability of the prediction of the phenotype of future progeny born in each year between 2006 and 2009, were used as parameters to evaluate the goodness of prediction through breeding values. All animals from the previous birth years were used for prediction of the breeding values of the progeny in each of the upcoming birth years. The breeding values based on pedigree records achieved an average reliability of 0.19 for the future 1951 progeny. The predictive accuracy (R2) for the hearing status of a single future progeny was 1.3%. Combining breeding values for littermates increased the predictive accuracy to 3.5%. Corresponding values for maternal and paternal half-sib groups were 3.2 and 7.3%. The use of breeding values for planned matings increases the phenotypic selection response over mass selection. The breeding values of sires may be used for planned matings because reliabilities and predictive accuracies for future paternal progeny groups were highest.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W
2017-11-10
Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended to ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity, and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of the retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q²ext(F2) > 0.8 and Mean Absolute Error < 0.1), and hence accurate retention time predictions with an average Mean Absolute Error of 0.2 min. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
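A hedged sketch of the "federation of local models" workflow: compute Morgan-fingerprint Tanimoto similarity between the target compound and the training compounds, keep only those above a threshold (0.6 in the paper), and fit a regression model on that local training set (PLS here, in place of the GA-PLS pipeline). The SMILES strings, descriptors, and retention values are placeholders.
```python
# Sketch: similarity-filtered ("local") QSRR model for one target compound.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.cross_decomposition import PLSRegression

train_smiles = ["CCO", "CCC(=O)O", "c1ccccc1O", "CCN", "CC(=O)N"]   # placeholder training set
train_y = np.array([1.2, 2.3, 3.1, 1.0, 1.8])                       # placeholder a-values
train_X = np.random.default_rng(7).normal(size=(5, 8))              # placeholder descriptors
target_smiles, target_X = "CCCO", np.zeros((1, 8))

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

fp_target = fingerprint(target_smiles)
similarity = np.array([DataStructs.TanimotoSimilarity(fp_target, fingerprint(s))
                       for s in train_smiles])
local = similarity >= 0.6                                # Tanimoto threshold from the paper

if local.sum() >= 3:                                     # need enough local neighbours
    pls = PLSRegression(n_components=2).fit(train_X[local], train_y[local])
    print("predicted retention parameter:", float(pls.predict(target_X)[0, 0]))
else:
    print("too few similar training ions; prediction flagged as unreliable")
```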
NASA Astrophysics Data System (ADS)
Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng
2017-12-01
Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, the performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through the acceleration model (Arrhenius model). The results show good fitting of the degradation model with the experimental data. Finally, we obtained the reliability estimate of the SPTRs by using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year of testing, the reliability of SPTRs designed to operate for more than 10 years.
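A small sketch of the Arrhenius acceleration-factor arithmetic used in this kind of accelerated degradation test, scaling a pseudo-lifetime observed at an elevated ambient temperature back to the normal operating temperature; the activation energy and lifetimes below are placeholders, not the paper's results.
```python
# Sketch: Arrhenius acceleration factor and use-condition lifetime estimate.
# AF = exp( (Ea / k) * (1/T_use - 1/T_stress) ), temperatures in kelvin.
import numpy as np

k_boltzmann = 8.617e-5          # eV/K
Ea = 0.7                        # activation energy (eV), illustrative assumption

def acceleration_factor(T_use_C, T_stress_C):
    T_use, T_stress = T_use_C + 273.15, T_stress_C + 273.15
    return np.exp((Ea / k_boltzmann) * (1.0 / T_use - 1.0 / T_stress))

t_stress_hours = 8000.0         # pseudo-lifetime observed at the stress level (placeholder)
for T_stress in (40.0, 50.0, 60.0):
    af = acceleration_factor(25.0, T_stress)
    print(f"stress {T_stress:.0f} C: AF = {af:6.1f}, "
          f"equivalent use-condition life ~ {t_stress_hours * af / 8760:.1f} years")
```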
Spatiotemporal models for predicting high pollen concentration level of Corylus, Alnus, and Betula.
Nowosad, Jakub
2016-06-01
Corylus, Alnus, and Betula trees are among the most important sources of allergic pollen in the temperate zone of the Northern Hemisphere and have a large impact on the quality of life and productivity of allergy sufferers. Therefore, it is important to predict high pollen concentrations, both in time and space. The aim of this study was to create and evaluate spatiotemporal models for predicting high Corylus, Alnus, and Betula pollen concentration levels, based on gridded meteorological data. Aerobiological monitoring was carried out in 11 cities in Poland and gathered, depending on the site, between 2 and 16 years of measurements. According to the first allergy symptoms during exposure, a high pollen count level was established for each taxon. An optimizing probability threshold technique was used for mitigation of the problem of imbalance in the pollen concentration levels. For each taxon, the model was built using a random forest method. The study revealed the possibility of moderately reliable prediction of Corylus and highly reliable prediction of Alnus and Betula high pollen concentration levels, using preprocessed gridded meteorological data. Cumulative growing degree days and potential evaporation proved to be two of the most important predictor variables in the models. The final models predicted not only for single locations but also for continuous areas. Furthermore, the proposed modeling framework could be used to predict high pollen concentrations of Corylus, Alnus, Betula, and other taxa, and in other countries.
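A minimal sketch of the probability-threshold optimisation step mentioned above for imbalanced classes: fit a random forest, then scan candidate thresholds on the predicted probability of the rare "high concentration" class and keep the one that maximises a chosen skill score (the F1 score here, as an illustrative stand-in). The data are synthetic.
```python
# Sketch: choosing a decision threshold for an imbalanced high-pollen class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]

thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_te, proba >= t) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(f"best threshold = {best:.2f}, F1 = {max(scores):.3f} "
      f"(default 0.5 gives F1 = {f1_score(y_te, proba >= 0.5):.3f})")
```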
NASA Astrophysics Data System (ADS)
Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin
2017-04-01
Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of coupled MME over contributing single-model ensembles (SMEs) and over uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving the reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, it is examined that the seasonal predictions with coupled MME are more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME has better predicted the SST anomaly evolution in three key regions.
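A compact sketch of the Brier skill score and its reliability/resolution decomposition used in the assessment above (a Murphy-type decomposition over probability bins); the forecast-observation pairs are synthetic.
```python
# Sketch: Brier score, Murphy decomposition (reliability, resolution, uncertainty), and BSS.
import numpy as np

rng = np.random.default_rng(8)
obs = rng.integers(0, 2, 500)                                # binary event occurrence
fcst = np.clip(obs * 0.6 + rng.uniform(0, 0.4, 500), 0, 1)   # probabilistic forecasts

def brier_decomposition(p, o, n_bins=10):
    bs = np.mean((p - o) ** 2)
    base = o.mean()
    bins = np.digitize(p, np.linspace(0, 1, n_bins + 1)[1:-1])
    rel = res = 0.0
    for b in np.unique(bins):
        m = bins == b
        rel += m.mean() * (p[m].mean() - o[m].mean()) ** 2    # reliability (lower is better)
        res += m.mean() * (o[m].mean() - base) ** 2           # resolution (higher is better)
    unc = base * (1 - base)
    bss = 1.0 - bs / unc                                      # skill relative to climatology
    return bs, rel, res, unc, bss

bs, rel, res, unc, bss = brier_decomposition(fcst, obs)
print(f"BS={bs:.3f}  REL={rel:.3f}  RES={res:.3f}  UNC={unc:.3f}  BSS={bss:.3f}")
```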
Atmospheric Models for Over-Ocean Propagation Loss
2015-05-15
Bruce McGuffin, MIT Lincoln Laboratory. Air-to-surface radio links differ from ... from radiosonde profiles collected along the Atlantic coast of the United States, in order to accurately estimate high-reliability SHF/EHF air-to-... predict required link performance to achieve high reliability at different locations and times of year. Radiosonde balloons are ...
Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R
2004-11-21
Quantitative structure-activity relationship (QSAR) analysis is a main cornerstone of modern informatic disciplines. Predictive computational models, based on QSAR technology, of peptide-major histocompatibility complex (MHC) binding affinity have now become a vital component of modern day computational immunovaccinology. Historically, such approaches have been built around semi-qualitative, classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online ( http://www.jenner.ac.uk/MHCPred).
Frey, Jennifer K.; Lewis, Jeremy C.; Guy, Rachel K.; Stuart, James N.
2013-01-01
Simple Summary We evaluated the influence of occurrence records with different reliability on predicted distribution of a unique, rare mammal in the American Southwest, the white-nosed coati (Nasua narica). We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. Abstract Species distributions are usually inferred from occurrence records. However, these records are prone to errors in spatial precision and reliability. Although influence of spatial errors has been fairly well studied, there is little information on impacts of poor reliability. Reliability of an occurrence record can be influenced by characteristics of the species, conditions during the observation, and observer’s knowledge. Some studies have advocated use of anecdotal data, while others have advocated more stringent evidentiary standards such as only accepting records verified by physical evidence, at least for rare or elusive species. Our goal was to evaluate the influence of occurrence records with different reliability on species distribution models (SDMs) of a unique mammal, the white-nosed coati (Nasua narica) in the American Southwest. We compared SDMs developed using maximum entropy analysis of combined bioclimatic and biophysical variables and based on seven subsets of occurrence records that varied in reliability and spatial precision. We found that the predicted distribution of the coati based on datasets that included anecdotal occurrence records were similar to those based on datasets that only included physical evidence. Coati distribution in the American Southwest was predicted to occur in southwestern New Mexico and southeastern Arizona and was defined primarily by evenness of climate and Madrean woodland and chaparral land-cover types. Coati distribution patterns in this region suggest a good model for understanding the biogeographic structure of range margins. We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. PMID:26487405
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar 49 (DuPont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several vessels would be necessary.
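A hedged sketch of the Bayesian mechanics described: two competing hypotheses about the fibre stress ratio imply different Weibull lifetime scales, and surviving the test to time t multiplies each hypothesis's prior by its survival probability. All numerical values (priors, Weibull parameters) are placeholders, not the programme's actual figures.
```python
# Sketch: updating the probability of the "optimistic" stress-ratio model
# after a flight COPV survives a stress-rupture test to time t.
import numpy as np

# Hypotheses: name -> (prior probability, Weibull scale eta in hours); common shape beta.
hypotheses = {"optimistic stress ratio": (0.5, 5.0e6),
              "pessimistic stress ratio": (0.5, 2.0e5)}
beta = 1.2                                     # illustrative Weibull shape for lifetime

def survival(t, eta):
    return np.exp(-(t / eta) ** beta)

for t in (1e3, 1e4, 1e5):
    post = {name: prior * survival(t, eta) for name, (prior, eta) in hypotheses.items()}
    z = sum(post.values())
    print(f"survived {t:8.0f} h ->",
          {name: round(p / z, 3) for name, p in post.items()})
```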
Kelly, Natasha B.; Alonzo, Suzanne H.
2009-01-01
Existing theory predicts that male signalling can be an unreliable indicator of paternal care, but assumes that males with high levels of mating success can have high current reproductive success, without providing any parental care. As a result, this theory does not hold for the many species where offspring survival depends on male parental care. We modelled male allocation of resources between advertisement and care for species with male care where males vary in quality, and the effect of care and advertisement on male fitness is multiplicative rather than additive. Our model predicts that males will allocate proportionally more of their resources to whichever trait (advertisement or paternal care) is more fitness limiting. In contrast to previous theory, we find that male advertisement is always a reliable indicator of paternal care and male phenotypic quality (e.g. males with higher levels of advertisement never allocate less to care than males with lower levels of advertisement). Our model shows that the predicted pattern of male allocation and the reliability of male signalling depend very strongly on whether paternal care is assumed to be necessary for offspring survival and how male care affects offspring survival and male fitness. PMID:19520802
New techniques for modeling the reliability of reactor pressure vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.
1985-12-01
In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RTNDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses. 21 refs.
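A minimal sketch of the Monte Carlo flavor of such probabilistic fracture mechanics codes (not the VISA-II algorithms themselves): flaw depth, applied stress, and fracture toughness are sampled from assumed distributions, a linear-elastic stress intensity is computed for each trial, and the fraction of trials in which it exceeds the toughness estimates the failure probability. All numerical values below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                      # number of simulated vessels

# Illustrative input distributions only; real codes use measured flaw,
# stress, and toughness statistics.
flaw_depth = rng.exponential(scale=0.003, size=n)                  # m
stress = rng.normal(loc=300e6, scale=30e6, size=n)                 # Pa
toughness = rng.lognormal(mean=np.log(80e6), sigma=0.2, size=n)    # Pa*sqrt(m)
Y = 1.12                                                           # geometry factor

# Linear-elastic stress intensity for a shallow surface flaw.
K_I = Y * stress * np.sqrt(np.pi * flaw_depth)

p_fail = np.mean(K_I > toughness)
print(f"Estimated failure probability: {p_fail:.2e}")
```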
Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data
Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-01-01
Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has a great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval-censored lifetime data were collected from a thermal cycling test implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted using a modified Engelmaier fatigue life model and a type-I censored data processing method. The Pan model is then employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and those calculated directly with Matlab and Minitab demonstrates the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for the regular maintenance, replacement of substructures, and subsequent processing of electronic products. PMID:25121138
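As a rough illustration of how Weibull parameters can be estimated from type-I interval-censored thermal-cycling data (a generic maximum-likelihood sketch, not the specific processing method of the paper), the failure intervals and survivor count below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval-censored data: each failed joint is known only to have
# failed between two inspection points (cycles); survivors are right-censored.
failed_intervals = np.array([(1000, 1500), (1500, 2000), (2000, 2500),
                             (2000, 2500), (2500, 3000)], dtype=float)
n_survivors, test_end = 5, 3000.0

def weibull_cdf(t, eta, beta):
    return 1.0 - np.exp(-(t / eta) ** beta)

def neg_log_lik(params):
    log_eta, log_beta = params
    eta, beta = np.exp(log_eta), np.exp(log_beta)
    lo, hi = failed_intervals[:, 0], failed_intervals[:, 1]
    # Interval-censored failures contribute F(hi) - F(lo); survivors contribute S(T).
    ll = np.sum(np.log(weibull_cdf(hi, eta, beta) - weibull_cdf(lo, eta, beta)))
    ll += n_survivors * np.log(1.0 - weibull_cdf(test_end, eta, beta))
    return -ll

res = minimize(neg_log_lik, x0=[np.log(2500.0), np.log(1.5)], method="Nelder-Mead")
eta_hat, beta_hat = np.exp(res.x)
print(f"Weibull scale = {eta_hat:.0f} cycles, shape = {beta_hat:.2f}")
```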
Comparisons Between Experimental and Semi-theoretical Cutting Forces of CCS Disc Cutters
NASA Astrophysics Data System (ADS)
Xia, Yimin; Guo, Ben; Tan, Qing; Zhang, Xuhui; Lan, Hao; Ji, Zhiyong
2018-05-01
This paper focuses on comparisons between the experimental and semi-theoretical forces of CCS disc cutters acting on different rocks. The experimental forces obtained from LCM tests were used to evaluate the prediction accuracy of a semi-theoretical CSM model. The results show that the CSM model reliably predicts the normal forces acting on red sandstone and granite, but underestimates the normal forces acting on marble. Some additional LCM test data from the literature were collected to further explore the ability of the CSM model to predict the normal forces acting on rocks of different strengths. The CSM model underestimates the normal forces acting on soft rocks, semi-hard rocks and hard rocks by approximately 38, 38 and 10%, respectively, but very accurately predicts those acting on very hard and extremely hard rocks. A calibration factor is introduced to modify the normal forces estimated by the CSM model. The overall trend of the calibration factor is characterized by an exponential decrease with increasing rock uniaxial compressive strength. The mean fitting ratios between the normal forces estimated by the modified CSM model and the experimental normal forces acting on soft rocks, semi-hard rocks and hard rocks are 1.076, 0.879 and 1.013, respectively. The results indicate that the prediction accuracy and the reliability of the CSM model have been improved.
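The abstract's calibration idea, a factor that decays exponentially with uniaxial compressive strength and scales the CSM force estimate, can be sketched with a simple curve fit. The (UCS, factor) pairs and the example force are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (UCS, calibration factor) pairs: ratio of measured to
# CSM-estimated normal force, decreasing with rock strength.
ucs = np.array([30., 50., 80., 110., 150., 200.])          # MPa
calib = np.array([1.62, 1.48, 1.30, 1.15, 1.05, 1.01])

def calib_model(ucs, a, b, c):
    """Exponential decay of the calibration factor with rock strength."""
    return a * np.exp(-b * ucs) + c

(a, b, c), _ = curve_fit(calib_model, ucs, calib, p0=(1.0, 0.01, 1.0))
print(f"k(UCS) = {a:.2f} * exp(-{b:.4f} * UCS) + {c:.2f}")

# Corrected normal force = calibration factor * CSM estimate (illustrative values).
f_csm, rock_ucs = 120.0, 60.0   # kN, MPa
print(f"Corrected force: {calib_model(rock_ucs, a, b, c) * f_csm:.1f} kN")
```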
Independent data validation of an in vitro method for ...
In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide
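The reported model is an ordinary least-squares line of the form RBA = slope * IVBA + intercept. A minimal sketch of fitting and checking such a line is below; the paired IVBA/RBA values are hypothetical stand-ins for the 36 soils:

```python
import numpy as np

# Hypothetical paired measurements: in vitro bioaccessibility (IVBA, %) and
# in vivo relative bioavailability (RBA, %) for As-contaminated soils.
ivba = np.array([10., 22., 35., 48., 60., 72., 85.])
rba = np.array([15., 24., 31., 38., 44., 52., 60.])

# Ordinary least squares fit of RBA = slope * IVBA + intercept,
# the same linear form reported in the abstract (RBA = 0.59 * IVBA + 9.8).
slope, intercept = np.polyfit(ivba, rba, deg=1)
pred = slope * ivba + intercept
ss_res = np.sum((rba - pred) ** 2)
ss_tot = np.sum((rba - rba.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"RBA = {slope:.2f} * IVBA + {intercept:.1f}, R^2 = {r2:.2f}")
print(f"Mean absolute prediction error: {np.mean(np.abs(rba - pred)):.1f}%")
```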
Wilker, Sarah; Pfeiffer, Anett; Kolassa, Stephan; Koslowski, Daniela; Elbert, Thomas; Kolassa, Iris-Tatjana
2015-01-01
While studies with survivors of single traumatic experiences highlight individual response variation following trauma, research from conflict regions shows that almost everyone develops posttraumatic stress disorder (PTSD) if trauma exposure reaches extreme levels. Therefore, evaluating the effects of cumulative trauma exposure is of utmost importance in studies investigating risk factors for PTSD. Yet, little research has been devoted to evaluate how this important environmental risk factor can be best quantified. We investigated the retest reliability and predictive validity of different trauma measures in a sample of 227 Ugandan rebel war survivors. Trauma exposure was modeled as the number of traumatic event types experienced or as a score considering traumatic event frequencies. In addition, we investigated whether age at trauma exposure can be reliably measured and improves PTSD risk prediction. All trauma measures showed good reliability. While prediction of lifetime PTSD was most accurate from the number of different traumatic event types experienced, inclusion of event frequencies slightly improved the prediction of current PTSD. As assessing the number of traumatic events experienced is the least stressful and time-consuming assessment and leads to the best prediction of lifetime PTSD, we recommend this measure for research on PTSD etiology.
NASA Astrophysics Data System (ADS)
Tan, C. H.; Matjafri, M. Z.; Lim, H. S.
2015-10-01
This paper presents prediction models that analyze and compute CO2 emissions in Malaysia. A prediction model for CO2 emission is developed for each of three main groups: transportation, electricity and heat production, and residential buildings together with commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best subset method was used to remove irrelevant predictors, followed by multiple linear regression to produce the prediction models. High R-squared (prediction) values were obtained, which implies that the models can reliably predict CO2 emissions from the given data. In addition, the CO2 emissions from these three groups are forecast using trend analysis plots for observation purposes.
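A small sketch of the two-stage procedure described, exhaustive best-subset screening of candidate predictors followed by a multiple linear regression on the winning subset, using synthetic data in place of the World Bank series (the predictor names are assumptions for illustration):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictor matrix and CO2-emission response; real inputs would be
# the World Bank Open Data series mentioned in the abstract.
n = 40
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)
names = ["fuel_use", "gdp", "population", "electricity"]

def adjusted_r2(X_sub, y):
    """OLS fit on a candidate subset, scored by adjusted R-squared."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    p = X_sub.shape[1]
    return 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - p - 1)

best = max(((combo, adjusted_r2(X[:, combo], y))
            for k in range(1, 5)
            for combo in itertools.combinations(range(4), k)),
           key=lambda item: item[1])
print("Best subset:", [names[i] for i in best[0]], f"adj. R^2 = {best[1]:.3f}")
```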
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for the statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
Susan J. Prichard; Eva C. Karau; Roger D. Ottmar; Maureen C. Kennedy; James B. Cronan; Clinton S. Wright; Robert E. Keane
2014-01-01
Reliable predictions of fuel consumption are critical in the eastern United States (US), where prescribed burning is frequently applied to forests and air quality is of increasing concern. CONSUME and the First Order Fire Effects Model (FOFEM), predictive models developed to estimate fuel consumption and emissions from wildland fires, have not been systematically...
Cross-organism learning method to discover new gene functionalities.
Domeniconi, Giacomo; Masseroli, Marco; Moro, Gianluca; Pinoli, Pietro
2016-04-01
Knowledge of gene and protein functions is paramount for the understanding of physiological and pathological biological processes, as well as in the development of new drugs and therapies. Analyses for biomedical knowledge discovery greatly benefit from the availability of gene and protein functional feature descriptions expressed through controlled terminologies and ontologies, i.e., of gene and protein biomedical controlled annotations. In recent years, several databases of such annotations have become available; yet, these valuable annotations are incomplete, include errors and only some of them represent highly reliable, human-curated information. Computational techniques able to reliably predict new gene or protein annotations with an associated likelihood value are thus paramount. Here, we propose a novel cross-organisms learning approach to reliably predict new functionalities for the genes of an organism based on the known controlled annotations of the genes of another, evolutionarily related and better studied, organism. We leverage a new representation of the annotation discovery problem and a random perturbation of the available controlled annotations to allow the application of supervised algorithms to predict with good accuracy unknown gene annotations. Taking advantage of the numerous gene annotations available for a well-studied organism, our cross-organisms learning method creates and trains better prediction models, which can then be applied to predict new gene annotations of a target organism. We tested and compared our method with the equivalent single organism approach on different gene annotation datasets of five evolutionarily related organisms (Homo sapiens, Mus musculus, Bos taurus, Gallus gallus and Dictyostelium discoideum). Results show both the usefulness of the perturbation method of available annotations for better prediction model training and a great improvement of the cross-organism models with respect to the single-organism ones, without influence of the evolutionary distance between the considered organisms. The generated ranked lists of reliably predicted annotations, which describe novel gene functionalities and have an associated likelihood value, are very valuable both to complement available annotations, for better coverage in biomedical knowledge discovery analyses, and to quicken the annotation curation process, by focusing it on the prioritized novel annotations predicted. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.
2002-01-01
A generalized reliability model was developed for use in the design of structural components made from brittle, homogeneous anisotropic materials such as single crystals. The model is based on the Weibull distribution and incorporates a variable strength distribution and any equivalent stress failure criteria. In addition to the reliability model, an energy based failure criterion for elastically anisotropic materials was formulated. The model is different from typical Weibull-based models in that it accounts for strength anisotropy arising from fracture toughness anisotropy and thereby allows for strength and reliability predictions of brittle, anisotropic single crystals subjected to multiaxial stresses. The model is also applicable to elastically isotropic materials exhibiting strength anisotropy due to an anisotropic distribution of flaws. In order to develop and experimentally verify the model, the uniaxial and biaxial strengths of a single crystal nickel aluminide were measured. The uniaxial strengths of the <100> and <110> crystal directions were measured in three and four-point flexure. The biaxial strength was measured by subjecting <100> plates to a uniform pressure in a test apparatus that was developed and experimentally verified. The biaxial strengths of the single crystal plates were estimated by extending and verifying the displacement solution for a circular, anisotropic plate to the case of a variable radius and thickness. The best correlation between the experimental strength data and the model predictions occurred when an anisotropic stress analysis was combined with the normal stress criterion and the strength parameters associated with the <110> crystal direction.
Marshall, Andrew J; Evanovich, Emma K; David, Sarah Jo; Mumma, Gregory H
2018-01-17
High comorbidity rates among emotional disorders have led researchers to examine transdiagnostic factors that may contribute to shared psychopathology. Bifactor models provide a unique method for examining transdiagnostic variables by modelling the common and unique factors within measures. Previous findings suggest that the bifactor model of the Depression Anxiety and Stress Scale (DASS) may provide a method for examining transdiagnostic factors within emotional disorders. This study aimed to replicate the bifactor model of the DASS, a multidimensional measure of psychological distress, within a US adult sample and provide initial estimates of the reliability of the general and domain-specific factors. Furthermore, this study hypothesized that Worry, a theorized transdiagnostic variable, would show stronger relations to general emotional distress than domain-specific subscales. Confirmatory factor analysis was used to evaluate the bifactor model structure of the DASS in 456 US adult participants (279 females and 177 males, mean age 35.9 years) recruited online. The DASS bifactor model fitted well (CFI = 0.98; RMSEA = 0.05). The General Emotional Distress factor accounted for most of the reliable variance in item scores. Domain-specific subscales accounted for modest portions of reliable variance in items after accounting for the general scale. Finally, structural equation modelling indicated that Worry was strongly predicted by the General Emotional Distress factor. The DASS bifactor model is generalizable to a US community sample and General Emotional Distress, but not domain-specific factors, strongly predict the transdiagnostic variable Worry.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated to the standard deviation of total scores and sample size.
Reliability formulation for the strength and fire endurance of glued-laminated beams
D. A. Bender
A model was developed for predicting the statistical distribution of glued-laminated beam strength and stiffness under normal temperature conditions using available long span modulus of elasticity data, end joint tension test data, and tensile strength data for laminating-grade lumber. The beam strength model predictions compared favorably with test data for glued-...
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, the results obtained in modeling the cumulative number of failures versus execution time showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied to achieve a balance between the computational load and reliability of estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation of the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase in uncertainty as the extrapolation time of the estimation is extended. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
Signal verification can promote reliable signalling.
Broom, Mark; Ruxton, Graeme D; Schaefer, H Martin
2013-11-22
The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer-resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism.
Multi-Model Ensemble Wake Vortex Prediction
NASA Technical Reports Server (NTRS)
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
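One of the listed methods, Bayesian Model Averaging, can be sketched in a few lines: each wake model is weighted by its likelihood on past verification cases, and a new forecast is the weighted combination of the model outputs. The observations, model predictions, and error scale below are hypothetical:

```python
import numpy as np

# Hypothetical past verification data: observed vortex descent (m) and the
# corresponding predictions from two wake models.
obs = np.array([110., 95., 130., 120., 105.])
pred_model_a = np.array([115., 90., 128., 118., 100.])
pred_model_b = np.array([100., 105., 140., 110., 120.])

def gaussian_likelihood(obs, pred, sigma):
    """Joint Gaussian likelihood of the observations given one model."""
    resid = obs - pred
    return np.prod(np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)))

sigma = 8.0                      # assumed observation/model error (m)
prior = np.array([0.5, 0.5])     # equal prior weight on each model
lik = np.array([gaussian_likelihood(obs, pred_model_a, sigma),
                gaussian_likelihood(obs, pred_model_b, sigma)])
weights = prior * lik / np.sum(prior * lik)

# Ensemble forecast for a new case is the weight-averaged model output.
new_preds = np.array([125., 135.])
print("BMA weights:", np.round(weights, 3))
print("Ensemble prediction:", np.round(weights @ new_preds, 1), "m")
```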
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
Gearbox Reliability Collaborative Investigation of High-Speed-Shaft Bearing Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Jonathan; Guo, Yi
2016-06-01
The loads and contact stresses in the bearings of the high-speed shaft section of the Gearbox Reliability Collaborative gearbox are examined in this paper. The loads were measured through strain gauges installed on the bearing outer races during dynamometer testing of the gearbox. Loads and stresses were also predicted with a simple analytical model and higher-fidelity commercial models. The experimental data compared favorably to each model, and bearing stresses were below thresholds for contact fatigue and axial cracking.
Probabilistic fatigue life prediction of metallic and composite materials
NASA Astrophysics Data System (ADS)
Xiang, Yibing
Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models of metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for the uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
Can history and exam alone reliably predict pneumonia?
Graffelman, A W; le Cessie, S; Knuistingh Neven, A; Wilemssen, F E J A; Zonderland, H M; van den Broek, P J
2007-06-01
Prediction rules based on clinical information have been developed to support the diagnosis of pneumonia and help limit the use of expensive diagnostic tests. However, these prediction rules need to be validated in the primary care setting. Adults who met our definition of lower respiratory tract infection (LRTI) were recruited for a prospective study on the causes of LRTI, between November 15, 1998 and June 1, 2001 in the Leiden region of The Netherlands. Clinical information was collected and chest radiography was performed. A literature search was also done to find prediction rules for pneumonia. 129 patients--26 with pneumonia and 103 without--were included, and 6 prediction rules were applied. Only the model with the addition of a test for C-reactive protein had a significant area under the curve of 0.69 (95% confidence interval [CI], 0.58-0.80), with a positive predictive value of 47% (95% CI, 23-71) and a negative predictive value of 84% (95% CI, 77-91). The pretest probabilities for the presence and absence of pneumonia were 20% and 80%, respectively. Models based only on clinical information do not reliably predict the presence of pneumonia. The addition of an elevated C-reactive protein level seems of little value.
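A hedged sketch of the kind of validation exercise described: fitting a logistic prediction rule from clinical signs plus C-reactive protein and summarizing it with AUC, positive predictive value, and negative predictive value on a synthetic cohort. The predictors, coefficients, and decision threshold are illustrative assumptions, not the published rules.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300

# Synthetic cohort: clinical signs plus C-reactive protein (mg/L);
# pneumonia prevalence set near the 20% pretest probability in the abstract.
fever = rng.integers(0, 2, n)
crackles = rng.integers(0, 2, n)
crp = rng.gamma(shape=2.0, scale=20.0, size=n)
logit = -3.5 + 0.6 * fever + 0.8 * crackles + 0.03 * crp
pneumonia = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([fever, crackles, crp])
model = LogisticRegression().fit(X, pneumonia)
prob = model.predict_proba(X)[:, 1]
pred = prob >= 0.25                      # illustrative decision threshold

tp = np.sum(pred & pneumonia); fp = np.sum(pred & ~pneumonia)
fn = np.sum(~pred & pneumonia); tn = np.sum(~pred & ~pneumonia)
print(f"AUC = {roc_auc_score(pneumonia, prob):.2f}")
print(f"PPV = {tp / (tp + fp):.2f}, NPV = {tn / (tn + fn):.2f}")
```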
Rainfall Induced Landslides in Puerto Rico (Invited)
NASA Astrophysics Data System (ADS)
Lepore, C.; Kamal, S.; Arnone, E.; Noto, V.; Shanahan, P.; Bras, R. L.
2009-12-01
Landslides are a major geologic hazard in the United States, typically triggered by rainfall, earthquakes, volcanoes and human activity. Rainfall-induced landslides are the most common type in the island of Puerto Rico, with one or two large events per year. We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model capable of simulating landslides, tRIBS-VEGGIE, has been implemented for the first time in a humid tropical environment like Puerto Rico. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides, which is used to predict the dynamic susceptibility of the basin to landslides.
Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia
2018-04-24
A reliable Monte Carlo prediction of proton-induced brain tissue activation used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements is crucial for in vivo treatment verification. Major limitations of current approaches to overcome include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields a generally higher similarity for most of the patients, except in highly pathological areas leading to tissue misclassification. Using washout model parameters deduced from clinical patient data could considerably improve the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can be improved with MR-based tissue classification and data-driven washout parameters, thus providing a more reliable basis for PT-PET verification. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
Predicting geomagnetic reversals via data assimilation: a feasibility study
NASA Astrophysics Data System (ADS)
Morzfeld, Matthias; Fournier, Alexandre; Hulot, Gauthier
2014-05-01
The system of three ordinary differential equations (ODE) presented by Gissinger in [1] was shown to exhibit chaotic reversals whose statistics compared well with those from the paleomagnetic record. We explore the geophysical relevance of this low-dimensional model via data assimilation, i.e. we update the solution of the ODE with information from data of the dipole variable. The data set we use is 'SINT' (Valet et al. [2]), and it provides the signed virtual axial dipole moment over the past 2 million years. We can obtain an accurate reconstruction of these dipole data using implicit sampling (a fully nonlinear Monte Carlo sampling strategy) and assimilating 5 kyr of data per sweep. We confirm our calibration of the model using the PADM2M dipole data set of Ziegler et al. [3]. The Monte Carlo sampling strategy provides us with quantitative information about the uncertainty of our estimates, and, in principle, we can use this information for making (robust) predictions under uncertainty. We perform synthetic data experiments to explore the predictive capability of the ODE model updated by data assimilation. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to the ones found in the SINT data), calibrate the model to this record, and then check if this calibrated model can reliably predict a reversal within the next 5 kyr. By performing a large number of such experiments, we can estimate the statistics that describe how reliably our calibrated model can predict a reversal of the geomagnetic field. It is found that the 1 kyr-ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed. References: Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137. Valet, J.P., Maynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802-805. Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun
2017-06-01
Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is created in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood-season and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
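A minimal sketch of the SVR forecasting step, scored with one of the accuracy indices mentioned (Nash-Sutcliffe efficiency). The predictors, synthetic runoff series, and fixed hyperparameters are assumptions standing in for the correlation-selected predictors and genetic-algorithm tuning, and the HUP post-processing stage is omitted:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Hypothetical predictors (antecedent precipitation and a climate index)
# and monthly runoff.
n = 120
precip = rng.gamma(2.0, 50.0, n)
index = rng.normal(0.0, 1.0, n)
runoff = 0.6 * precip + 15.0 * index + rng.normal(0.0, 10.0, n) + 40.0

X, y = np.column_stack([precip, index]), runoff
train, test = slice(0, 96), slice(96, None)

svr = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
svr.fit(X[train], y[train])
y_hat = svr.predict(X[test])

# Nash-Sutcliffe efficiency on the held-out months.
nse = 1.0 - np.sum((y[test] - y_hat) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(f"Test NSE = {nse:.2f}")
```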
Stochastic estimation of plant-available soil water under fluctuating water table depths
NASA Astrophysics Data System (ADS)
Or, Dani; Groeneveld, David P.
1994-12-01
Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
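The scheme couples a water-balance model with Kalman filtering; a one-dimensional sketch of that predict/update cycle is given below. The variances, forcing, and observations are hypothetical values chosen only to show the mechanics:

```python
import numpy as np

# One-dimensional Kalman filter for soil water storage S (mm):
# model step   S_k = S_{k-1} + recharge_k - ET_k   (water balance),
# measurement  z_k = S_k + noise.
q_var, r_var = 4.0, 9.0          # model and measurement error variances (mm^2)
S_est, P = 150.0, 25.0           # initial estimate and its variance

# Hypothetical forcing and observations for a few update cycles.
recharge = [12.0, 0.0, 5.0, 0.0]
et = [8.0, 9.0, 7.0, 10.0]
z_obs = [152.0, 141.0, 143.0, 130.0]

for rech, e, z in zip(recharge, et, z_obs):
    # Predict: propagate the water balance and grow the uncertainty.
    S_pred = S_est + rech - e
    P_pred = P + q_var
    # Update: blend prediction and measurement by the Kalman gain.
    K = P_pred / (P_pred + r_var)
    S_est = S_pred + K * (z - S_pred)
    P = (1.0 - K) * P_pred
    print(f"estimate = {S_est:6.1f} mm, std = {np.sqrt(P):4.1f} mm")
```

The filter's error variance P is what provides the prediction error bounds mentioned in the abstract as a basis for decisions such as curtailing well field operation.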
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but currently it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
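A compact sketch of the recommended core step, Monte Carlo sampling on a load-resistance model to estimate a failure probability, with illustrative distributions that are not calibrated to any product:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical load-resistance model: failure when the stress demand exceeds
# the component resistance; both are random to reflect scatter.
load = rng.gumbel(loc=300.0, scale=25.0, size=n)                     # MPa
resistance = rng.lognormal(mean=np.log(450.0), sigma=0.08, size=n)   # MPa

p_fail = np.mean(load > resistance)
std_err = np.sqrt(p_fail * (1.0 - p_fail) / n)
print(f"P(failure) = {p_fail:.2e} +/- {std_err:.1e}")

# The estimate can then feed a life cycle cost comparison, e.g.
# expected cost = acquisition cost + p_fail * consequence cost.
```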
Reliability and Maintainability model (RAM) user and maintenance manual. Part 2
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1995-01-01
This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
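For orientation, a sketch of fitting a classical NHPP software reliability model (the Goel-Okumoto mean value function, not the equilibrium-distribution variant proposed in the paper) to fault-detection times by maximum likelihood; the detection times are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fault-detection times (days) from a testing phase of length T.
times = np.array([2., 5., 9., 14., 20., 27., 35., 44., 55., 70.])
T = 90.0

def neg_log_lik(params):
    log_a, log_b = params
    a, b = np.exp(log_a), np.exp(log_b)
    intensity = a * b * np.exp(-b * times)          # lambda(t) for Goel-Okumoto
    mean_value = a * (1.0 - np.exp(-b * T))         # m(T), expected faults by T
    return -(np.sum(np.log(intensity)) - mean_value)

res = minimize(neg_log_lik, x0=[np.log(15.0), np.log(0.02)], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
remaining = a_hat * np.exp(-b_hat * T)              # expected faults still undetected
print(f"Expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}/day")
print(f"Estimated faults remaining after {T:.0f} days: {remaining:.1f}")
```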
Ko, Junsu; Park, Hahnbeom; Seok, Chaok
2012-08-10
Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.
Statistical prediction of space motion sickness
NASA Technical Reports Server (NTRS)
Reschke, Millard F.
1990-01-01
Studies designed to empirically examine the etiology of motion sickness to develop a foundation for enhancing its prediction are discussed. Topics addressed include early attempts to predict space motion sickness; a multiple-test data base that uses provocative and vestibular function tests, and its subjects; reliability of provocative tests of motion sickness susceptibility; prediction of space motion sickness using linear discriminant analysis; and prediction of space motion sickness susceptibility using the logistic model.
Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel
2008-01-01
Background Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
ERIC Educational Resources Information Center
Ardoin, Scott P.; Williams, Jessica C.; Christ, Theodore J.; Klubnik, Cynthia; Wellborn, Claire
2010-01-01
Beyond reliability and validity, measures used to model student growth must consist of multiple probes that are equivalent in level of difficulty to establish consistent measurement conditions across time. Although existing evidence supports the reliability of curriculum-based measurement in reading (CBMR), few studies have empirically evaluated…
ERIC Educational Resources Information Center
Schweig, Jonathan
2013-01-01
Measuring school and classroom environments has become central in a nation-wide effort to develop comprehensive programs that measure teacher quality and teacher effectiveness. Formulating successful programs necessitates accurate and reliable methods for measuring these environmental variables. This paper uses a generalizability theory framework…
Assessment of concrete damage and strength degradation caused by reinforcement corrosion
NASA Astrophysics Data System (ADS)
Nepal, Jaya; Chen, Hua-Peng
2015-07-01
Structural performance deterioration of reinforced concrete structures has been extensively investigated, but very few studies have examined the effect of reinforcement corrosion on time-dependent reliability while considering the influence of corrosion on the mechanical characteristics of the bond interface. This paper deals with how corrosion in reinforcement creates different types of defects in concrete structures and how they are responsible for the structural capacity deterioration of corrosion-affected reinforced concrete structures during their service life. Cracking in cover concrete due to reinforcement corrosion is investigated by using a rebar-concrete model and realistic concrete properties. The flexural strength deterioration is analytically predicted on the basis of bond strength evolution due to reinforcement corrosion, which is examined against the available experimental data. A time-dependent reliability analysis is undertaken to calculate the lifetime structural reliability of corrosion-damaged concrete structures by stochastic deterioration modelling of reinforced concrete. The results from the numerical example show that the proposed approach is capable of evaluating the damage caused by reinforcement corrosion and also predicting the structural reliability of concrete structures during their lifecycle.
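A rough sketch of a time-dependent reliability calculation of this general kind: flexural capacity is sampled, degraded by a corrosion-driven factor that grows with time, and compared against a random load effect to track the failure probability over the service life. The distributions and degradation rate are illustrative assumptions, not the paper's stochastic deterioration model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Hypothetical flexural capacity and load effect for a corroding RC beam.
years = np.arange(0, 51, 10)
capacity0 = rng.lognormal(mean=np.log(220.0), sigma=0.10, size=n)   # kN*m
demand = rng.gumbel(loc=120.0, scale=12.0, size=n)                  # kN*m

for t in years:
    degradation = 1.0 - 0.004 * t            # assumed ~0.4% capacity loss per year
    p_fail = np.mean(capacity0 * degradation < demand)
    print(f"t = {t:2d} yr: P(failure) ~ {p_fail:.2e}")
```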
A Critique of a Phenomenological Fiber Breakage Model for Stress Rupture of Composite Materials
NASA Technical Reports Server (NTRS)
Reeder, James R.
2010-01-01
Stress rupture is not a critical failure mode for most composite structures, but there are a few applications where it can be critical. One application where stress rupture can be a critical design issue is in Composite Overwrapped Pressure Vessels (COPVs), where the composite material is highly and uniformly loaded for long periods of time and where very high reliability is required. COPVs are normally required to be proof loaded before being put into service to ensure strength, but it is feared that the proof load may cause damage that reduces the stress rupture reliability. Recently, a fiber breakage model was proposed specifically to estimate a reduced reliability due to proof loading. The fiber breakage model attempts to model physics believed to occur at the microscopic scale, but validation of the model has not occurred. In this paper, the fiber breakage model is re-derived while highlighting assumptions that were made during the derivation. Some of the assumptions are examined to assess their effect on the final predicted reliability.
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes, we compared the predictive accuracy of dispersal-constrained models with that of SDMs assuming no dispersal limitation; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences on range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; and (2) as geographical areas subject to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. Doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
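The two accuracy measures used for the temporal transfer test are straightforward to compute once binary predictions are contrasted with the independent presence/absence data; a sketch with synthetic suitability scores and an assumed 0.5 threshold follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(5)

# Hypothetical temporal-transfer validation: model suitability scores projected
# forward in time, checked against independent current presences/absences.
observed = rng.integers(0, 2, 200)
suitability = np.clip(0.55 * observed + rng.normal(0.3, 0.2, 200), 0, 1)

auc = roc_auc_score(observed, suitability)
binary = (suitability >= 0.5).astype(int)           # threshold for binary maps
tn, fp, fn, tp = confusion_matrix(observed, binary).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
tss = sensitivity + specificity - 1.0               # True Skill Statistic

print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")
```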
Jensen, Christian Gaden; Niclasen, Janni; Vangkilde, Signe Allerup; Petersen, Anders; Hasselbalch, Steen Gregers
2016-05-01
The Mindful Attention Awareness Scale (MAAS) measures perceived degree of inattentiveness in different contexts and is often used as a reversed indicator of mindfulness. MAAS is hypothesized to reflect a psychological trait or disposition when used outside attentional training contexts, but the long-term test-retest reliability of MAAS scores is virtually untested. It is unknown whether MAAS predicts psychological health after controlling for standardized socioeconomic status classifications. First, MAAS translated to Danish was validated psychometrically within a randomly invited healthy adult community sample (N = 490). Factor analysis confirmed that MAAS scores quantified a unifactorial construct of excellent composite reliability and consistent convergent validity. Structural equation modeling revealed that MAAS scores contributed independently to predicting psychological distress and mental health, after controlling for age, gender, income, socioeconomic occupational class, stressful life events, and social desirability (β = 0.32-.42, ps < .001). Second, MAAS scores showed satisfactory short-term test-retest reliability in 100 retested healthy university students. Finally, MAAS sample mean scores as well as individuals' scores demonstrated satisfactory test-retest reliability across a 6 months interval in the adult community (retested N = 407), intraclass correlations ≥ .74. MAAS scores displayed significantly stronger long-term test-retest reliability than scores measuring psychological distress (z = 2.78, p = .005). Test-retest reliability estimates did not differ within demographic and socioeconomic strata. Scores on the Danish MAAS were psychometrically validated in healthy adults. MAAS's inattentiveness scores reflected a unidimensional construct, long-term reliable disposition, and a factor of independent significance for predicting psychological health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
New techniques for modeling the reliability of reactor pressure vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.
1986-01-01
In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes several new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithms for simulating flaws have been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RT_NDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses.
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik
2018-05-01
Due to the progressive expansion of public mobile networks and the dramatic growth in the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban path loss prediction, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. The simulation results are provided to demonstrate the validity of the machine learning based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.
Edwards, G P
1997-10-01
Seasonal diet selection in the yellow-bellied marmot (Marmota flaviventris) was studied at two sites in Montana during 1991 and 1992. A linear programming model of optimal diet selection successfully predicted the composition of observed diets (monocot versus dicot) in eight out of ten cases early in the active season (April-June). During this period, adult, yearling and juvenile marmots selected diets consistent with the predicted goal of energy maximisation. However, late in the active season (July-August), the model predicted the diet composition in only one out of six cases. In all six late-season determinations, the model underestimated the amount of monocot in the diet. Possible reasons why the model failed to reliably predict diet composition late in the active season are discussed.
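The diet model above is a linear program; the sketch below shows the general shape of such a formulation (maximize energy intake subject to foraging-time and gut-capacity constraints). All coefficient values are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of a linear-programming diet model: choose grams of
# monocot and dicot forage to maximize energy intake subject to daily
# foraging-time and digestive-capacity constraints (all coefficients invented).
from scipy.optimize import linprog

energy = [2.1, 3.0]          # kJ per g (monocot, dicot) -- assumed values
time_cost = [0.8, 1.2]       # min per g cropping/handling time -- assumed
bulk = [1.0, 0.6]            # gut load per g dry mass -- assumed

# linprog minimizes, so negate energy to maximize it.
res = linprog(c=[-e for e in energy],
              A_ub=[time_cost, bulk],
              b_ub=[300.0, 250.0],   # daily time (min) and gut capacity limits
              bounds=[(0, None), (0, None)])
mono, dicot = res.x
print(f"optimal diet: {mono:.0f} g monocot, {dicot:.0f} g dicot")
```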
Prediction of seasonal climate-induced variations in global food production
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Sakuma, Hirofumi; Yokozawa, Masayuki; Luo, Jing-Jia; Challinor, Andrew J.; Brown, Molly E.; Sakurai, Gen; Yamagata, Toshio
2013-10-01
Consumers, including the poor in many countries, are increasingly dependent on food imports and are thus exposed to variations in yields, production and export prices in the major food-producing regions of the world. National governments and commercial entities are therefore paying increased attention to the cropping forecasts of important food-exporting countries as well as to their own domestic food production. Given the increased volatility of food markets and the rising incidence of climatic extremes affecting food production, food price spikes may increase in prevalence in future years. Here we present a global assessment of the reliability of crop failure hindcasts for major crops at two lead times derived by linking ensemble seasonal climatic forecasts with statistical crop models. We found that moderate-to-marked yield loss over a substantial percentage (26-33%) of the harvested area of these crops is reliably predictable if climatic forecasts are near perfect. However, only rice and wheat production are reliably predictable at three months before the harvest using within-season hindcasts. The reliabilities of estimates varied substantially by crop--rice and wheat yields were the most predictable, followed by soybean and maize. The reasons for variation in the reliability of the estimates included the differences in crop sensitivity to the climate and the technology used by the crop-producing regions. Our findings reveal that the use of seasonal climatic forecasts to predict crop failures will be useful for monitoring global food production and will encourage the adaptation of food systems to climatic extremes.
NASA Astrophysics Data System (ADS)
Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang
2018-03-01
A reliability mathematical model for a high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of key high-risk components and the fatigue sensitivity curve of each component are calculated and analyzed by combining the fatigue life analysis of the control valve with reliability theory. The contribution of each component to fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; stress concentration in key parts of the control valve can be reduced during the design process by improving the structure.
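For readers unfamiliar with stress-strength interference, the sketch below shows the textbook calculation for independent normal stress and strength, with a temperature correction coefficient scaling the fatigue limit. All numerical values are assumptions for illustration, not parameters from the paper.

```python
# Hypothetical sketch of stress-strength interference with a temperature
# correction coefficient applied to the material fatigue limit.
from math import sqrt
from scipy.stats import norm

k_T = 0.85                       # assumed temperature correction coefficient
mu_S, sd_S = k_T * 480.0, 35.0   # corrected strength (fatigue limit), MPa
mu_L, sd_L = 310.0, 35.0         # operating stress in the valve component, MPa

# For independent normal stress and strength, R = P(strength > stress).
z = (mu_S - mu_L) / sqrt(sd_S**2 + sd_L**2)
reliability = norm.cdf(z)
print(f"component reliability ~ {reliability:.4f}")
```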
Predicting space telerobotic operator training performance from human spatial ability assessment
NASA Astrophysics Data System (ADS)
Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan
2013-11-01
Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are reliable enough only to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
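The threshold dependence noted above can be made concrete with a small sketch: fit a logistic model of pass/fail outcome on a spatial-ability score and sweep the decision threshold to trade sensitivity against specificity. The data and threshold values are synthetic and purely illustrative.

```python
# Hypothetical sketch: logistic model of robotics qualification outcome from a
# spatial-ability (mental rotation) score, sweeping the decision threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
score = rng.normal(20, 5, 200)                       # synthetic test scores
passed = (score + rng.normal(0, 4, 200)) > 19        # synthetic outcomes

model = LogisticRegression().fit(score.reshape(-1, 1), passed)
prob = model.predict_proba(score.reshape(-1, 1))[:, 1]

for thr in (0.4, 0.5, 0.6):
    pred = prob >= thr
    sens = (pred & passed).sum() / passed.sum()
    spec = (~pred & ~passed).sum() / (~passed).sum()
    print(f"threshold {thr}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```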
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Gaining an understanding of degradation mechanisms and characterizing them are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
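The paper does not state the functional forms of its acceleration factor models. As a loosely related illustration only, the sketch below shows two forms commonly used elsewhere for these stress types: a Peck-style humidity/temperature factor for damp heat and a Coffin-Manson-style factor for thermal cycling; the exponents and activation energy are illustrative assumptions.

```python
# Hypothetical sketch of acceleration-factor forms often used for damp-heat
# (Peck humidity/temperature model) and thermal-cycling (Coffin-Manson) tests.
from math import exp

K_B = 8.617e-5  # Boltzmann constant, eV/K

def peck_af(rh_test, rh_use, t_test_c, t_use_c, n=2.7, ea=0.8):
    """Acceleration factor for damp heat (illustrative n, Ea)."""
    t_test, t_use = t_test_c + 273.15, t_use_c + 273.15
    return (rh_test / rh_use) ** n * exp(ea / K_B * (1 / t_use - 1 / t_test))

def coffin_manson_af(dt_test, dt_use, m=2.0):
    """Acceleration factor for thermal cycling (illustrative exponent m)."""
    return (dt_test / dt_use) ** m

print(peck_af(85, 50, 85, 35))      # 85C/85%RH test vs. a 35C/50%RH field climate
print(coffin_manson_af(165, 40))    # -40/+125C cycles vs. a 40C daily swing
```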
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--i.e., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E
2017-02-13
Transmembrane proteins play a crucial role in signaling, ion transport, nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta
2013-01-01
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
Modelling of Rainfall Induced Landslides in Puerto Rico
NASA Astrophysics Data System (ADS)
Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.
2010-12-01
We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to tRIBS-VEGGIE's framework, after analyzing several failure criteria to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Comparing the reliability of related populations with the probability of agreement
Stevens, Nathaniel T.; Anderson-Cook, Christine M.
2016-07-26
Combining information from different populations to improve precision, simplify future predictions, or improve underlying understanding of relationships can be advantageous when considering the reliability of several related sets of systems. Using the probability of agreement to help quantify the similarities of populations can give a realistic assessment of whether the systems have reliabilities that are sufficiently similar, for practical purposes, to be treated as a homogeneous population. The new method is described and illustrated with an example involving two generations of a complex system where the reliability is modeled using either a logistic or probit regression model. Supplementary materials including code, datasets, and added discussion are available online.
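One illustrative reading of the probability-of-agreement idea, not the authors' exact formulation, is sketched below: fit separate logistic reliability curves to two generations, draw coefficient samples from the asymptotic normal approximation, and estimate how often the two curves agree within a practical tolerance delta over a covariate grid. Data, tolerance, and model choices are assumptions.

```python
# Hypothetical sketch of a probability-of-agreement style comparison between
# logistic reliability curves fitted to two generations of a system.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def fit_logistic(x, y):
    X = sm.add_constant(x)
    return sm.Logit(y, X).fit(disp=0)

# Synthetic pass/fail data versus an age covariate for two generations.
x1, x2 = rng.uniform(0, 10, 150), rng.uniform(0, 10, 150)
y1 = rng.binomial(1, 1 / (1 + np.exp(-(3.0 - 0.40 * x1))))
y2 = rng.binomial(1, 1 / (1 + np.exp(-(2.8 - 0.45 * x2))))
m1, m2 = fit_logistic(x1, y1), fit_logistic(x2, y2)

grid = sm.add_constant(np.linspace(0, 10, 50))
delta, n_draws = 0.05, 2000
b1 = rng.multivariate_normal(m1.params, m1.cov_params(), n_draws)
b2 = rng.multivariate_normal(m2.params, m2.cov_params(), n_draws)
r1 = 1 / (1 + np.exp(-b1 @ grid.T))
r2 = 1 / (1 + np.exp(-b2 @ grid.T))
prob_agree = (np.abs(r1 - r2) <= delta).mean(axis=0)  # per grid point
print(prob_agree.round(2))
```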
CPHmodels-3.0--remote homology modeling using structure-guided sequence profiles.
Nielsen, Morten; Lundegaard, Claus; Lund, Ole; Petersen, Thomas Nordahl
2010-07-01
CPHmodels-3.0 is a web server predicting protein 3D structure by use of single template homology modeling. The server employs a hybrid of the scoring functions of CPHmodels-2.0 and a novel remote homology-modeling algorithm. Modeling of a query sequence is first attempted using the fast CPHmodels-2.0 profile-profile scoring function suitable for close homology modeling. The new, computationally costly remote homology-modeling algorithm is only engaged provided that no suitable PDB template is identified in the initial search. CPHmodels-3.0 was benchmarked in the CASP8 competition and produced models for 94% of the targets (117 out of 128); 74% were predicted as high-reliability models (87 out of 117). These achieved an average RMSD of 4.6 Å when superimposed onto the 3D structure. The remaining 26% low-reliability models (30 out of 117) superimposed onto the true 3D structure with an average RMSD of 9.3 Å. These performance values place the CPHmodels-3.0 method in the group of high performing 3D prediction tools. Beside its accuracy, one of the important features of the method is its speed. For most queries, the response time of the server is <20 min. The web server is available at http://www.cbs.dtu.dk/services/CPHmodels/.
Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo
2017-12-01
Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from few data representatives of the input/output relationship of the underlying model of interest. However, different performing ANNs might be obtained with the same training data as a result of the random initialization of the weight parameters in each network, leading to an uncertainty in selecting the best performing ANN. On the other hand, using cross-validation to select the best performing ANN as the one with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of a prediction made by an ANN. The approach is based on a systematic combination of identical trained ANNs, by coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
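A minimal sketch of the ensembling idea, not the authors' exact Bayesian model-averaging scheme, is shown below: train several identically configured networks from different random initializations, weight them by validation error, and report a percentile interval across the ensemble. Data, weighting heuristic, and architecture are assumptions.

```python
# Hypothetical sketch: average an ensemble of identically configured ANNs with
# weights derived from validation error, and report a percentile interval.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 400)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

nets, errs = [], []
for seed in range(10):               # same architecture, new random init each time
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                       random_state=seed).fit(X_tr, y_tr)
    nets.append(net)
    errs.append(np.mean((net.predict(X_val) - y_val) ** 2))

w = np.exp(-0.5 * np.array(errs) / min(errs))
w /= w.sum()                          # heuristic model-averaging weights
preds = np.array([net.predict(X_val) for net in nets])
mean_pred = w @ preds
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)   # ensemble spread interval
print(mean_pred[:3], lo[:3], hi[:3])
```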
Benchmark analysis of forecasted seasonal temperature over different climatic areas
NASA Astrophysics Data System (ADS)
Giunta, G.; Salerno, R.; Ceppi, A.; Ercolani, G.; Mancini, M.
2015-12-01
From a long-term perspective, an improvement in seasonal forecasting, which is often based exclusively on climatology, could provide a new capability for the management of energy resources on a time scale of just a few months. This paper presents a benchmark analysis of long-term temperature forecasts over Italy in 2010, comparing the eni-kassandra meteo forecast (e-kmf®) model, the Climate Forecast System-National Centers for Environmental Prediction (CFS-NCEP) model, and the climatological reference (based on 25-year data) with observations. Statistical indexes are used to assess the reliability of predictions of 2-m monthly air temperatures up to 12 weeks ahead. The results show that the best performance is achieved by the e-kmf® system, which improves the reliability of long-term forecasts compared with climatology and the CFS-NCEP model. By using the reliable high-performance forecast system, it is possible to optimize the natural gas portfolio and management operations, thereby obtaining a competitive advantage in the European energy market.
Reliability modelling and analysis of thermal MEMS
NASA Astrophysics Data System (ADS)
Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.
2006-04-01
This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour under specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MATLAB and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of operating cycles and specific operational conditions.
Computer models for economic and silvicultural decisions
Rosalie J. Ingram
1989-01-01
Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.
Rating the raters in a mixed model: An approach to deciphering the rater reliability
NASA Astrophysics Data System (ADS)
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of ratings, the inconsistency of ratings is a source of variance in scores, and it is therefore quite natural to verify the trustworthiness of ratings. Accordingly, estimation of rater reliability is of great interest and an appealing issue. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described by fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for the prediction of the random effects. To make a quantitative decision in identifying unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.
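The sketch below illustrates the general setup, not the paper's exact PIF: a mixed model with ratee fixed effects and rater random effects, with each rater's influence gauged by a simple case-deletion comparison of the predicted random effects (a crude proxy for comparing full-data and rater-deleted posteriors). Data and the influence measure are assumptions.

```python
# Hypothetical sketch: rater random effects via a mixed model, with a simple
# case-deletion comparison as a proxy for the predictive influence function.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
raters, ratees = list("ABCDE"), list(range(12))
rows = []
for r in raters:
    bias = rng.normal(0, 1.0) if r != "E" else 4.0   # rater E is unreliable
    for t in ratees:
        rows.append({"rater": r, "ratee": t,
                     "score": 50 + 2 * t + bias + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

full = smf.mixedlm("score ~ C(ratee)", df, groups=df["rater"]).fit()
full_re = {k: v.iloc[0] for k, v in full.random_effects.items()}

for r in raters:                      # influence of dropping each rater
    sub = df[df["rater"] != r]
    fit = smf.mixedlm("score ~ C(ratee)", sub, groups=sub["rater"]).fit()
    shift = np.mean([abs(full_re[k] - v.iloc[0])
                     for k, v in fit.random_effects.items()])
    print(r, round(shift, 3))
```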
Predicting oral relative bioavailability of arsenic in soil from in vitro bioaccessibility
Several investigations have been conducted to develop in vitro bioaccessibility (IVBA) assays that reliably predict in vivo oral relative bioavailability (RBA) of arsenic (As). This study describes a meta-regression model relating soil As RBA and IVBA that is based upon data comb...
Predicting acute pain after cesarean delivery using three simple questions.
Pan, Peter H; Tonidandel, Ashley M; Aschenbrenner, Carol A; Houle, Timothy T; Harris, Lynne C; Eisenach, James C
2013-05-01
Interindividual variability in postoperative pain presents a clinical challenge. Preoperative quantitative sensory testing is useful but time-consuming in predicting postoperative pain intensity. The current study was conducted to develop and validate a predictive model of acute postcesarean pain using a simple three-item preoperative questionnaire. A total of 200 women scheduled for elective cesarean delivery under subarachnoid anesthesia were enrolled (192 subjects analyzed). Patients were asked to rate the intensity of loudness of audio tones, their level of anxiety and anticipated pain, and analgesic need from surgery. Postoperatively, patients reported the intensity of evoked pain. Regression analysis was performed to generate a predictive model for pain from these measures. A validation cohort of 151 women was enrolled to test the reliability of the model (131 subjects analyzed). Responses from each of the three preoperative questions correlated moderately with 24-h evoked pain intensity (r = 0.24-0.33, P < 0.001). Audio tone rating added uniquely, but minimally, to the model and was not included in the predictive model. The multiple regression analysis yielded a statistically significant model (R = 0.20, P < 0.001), whereas the validation cohort showed reliably a very similar regression line (R = 0.18). In predicting the upper 20th percentile of evoked pain scores, the optimal cut point was 46.9 (z = 0.24), such that sensitivity of 0.68 and specificity of 0.67 were as balanced as possible. This simple three-item questionnaire is useful to help predict postcesarean evoked pain intensity, and could be applied in further research and clinical practice to tailor analgesic therapy to those who need it most.
Ruediger, T M; Allison, S C; Moore, J M; Wainner, R S
2014-09-01
The purposes of this descriptive and exploratory study were to examine electrophysiological measures of ulnar sensory nerve function in disease free adults to determine reliability, determine reference values computed with appropriate statistical methods, and examine predictive ability of anthropometric variables. Antidromic sensory nerve conduction studies of the ulnar nerve using surface electrodes were performed on 100 volunteers. Reference values were computed from optimally transformed data. Reliability was computed from 30 subjects. Multiple linear regression models were constructed from four predictor variables. Reliability was greater than 0.85 for all paired measures. Responses were elicited in all subjects; reference values for sensory nerve action potential (SNAP) amplitude from above elbow stimulation are 3.3 μV and decrement across-elbow less than 46%. No single predictor variable accounted for more than 15% of the variance in the response. Electrophysiologic measures of the ulnar sensory nerve are reliable. Absent SNAP responses are inconsistent with disease free individuals. Reference values recommended in this report are based on appropriate transformations of non-normally distributed data. No strong statistical model of prediction could be derived from the limited set of predictor variables. Reliability analyses combined with relatively low level of measurement error suggest that ulnar sensory reference values may be used with confidence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Soultan, Alaaeldin; Safi, Kamran
2017-01-01
Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
Person, M.; Konikow, Leonard F.
1986-01-01
A solute-transport model of an irrigated stream-aquifer system was recalibrated because of discrepancies between prior predictions of ground-water salinity trends during 1971-1982 and the observed outcome in February 1982. The original model was calibrated with a 1-year record of data collected during 1971-1972 in an 18-km reach of the Arkansas River Valley in southeastern Colorado. The model is improved by incorporating additional hydrologic processes (salt transport through the unsaturated zone) and through reexamination of the reliability of some input data (regression relationship used to estimate salinity from specific conductance data). Extended simulations using the recalibrated model are made to investigate the usefulness of the model for predicting long-term trends of salinity and water levels within the study area. Predicted ground-water levels during 1971-1982 are in good agreement with the observed, indicating that the original 1971-1972 study period was sufficient to calibrate the flow model. However, long-term simulations using the recalibrated model based on recycling the 1971-1972 data alone yield an average ground-water salinity for 1982 that is too low by about 10%. Simulations that incorporate observed surface-water salinity variations yield better results, in that the calculated average ground-water salinity for 1982 is within 3% of the observed value. Statistical analysis of temporal salinity variations of the applied surface water indicates that at least a 4-year sampling period is needed to accurately calibrate the transport model. © 1986.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
Measurement-based reliability prediction methodology. M.S. Thesis
NASA Technical Reports Server (NTRS)
Linn, Linda Shen
1991-01-01
In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.
Gao, Chao; Sun, Hanbo; Wang, Tuo; Tang, Ming; Bohnen, Nicolaas I; Müller, Martijn L T M; Herman, Talia; Giladi, Nir; Kalinin, Alexandr; Spino, Cathie; Dauer, William; Hausdorff, Jeffrey M; Dinov, Ivo D
2018-05-08
In this study, we apply a multidisciplinary approach to investigate falls in Parkinson's disease (PD) patients using clinical, demographic and neuroimaging data from two independent initiatives (University of Michigan and Tel Aviv Sourasky Medical Center). Using machine learning techniques, we construct predictive models to discriminate fallers and non-fallers. Through controlled feature selection, we identified the most salient predictors of patient falls, including gait speed, Hoehn and Yahr stage, and postural instability and gait difficulty-related measurements. The model-based and model-free analytical methods we employed included logistic regression, random forests, support vector machines, and XGBoost. The reliability of the forecasts was assessed by internal statistical (5-fold) cross-validation as well as by external out-of-bag validation. Four specific challenges were addressed in the study: Challenge 1, develop a protocol for harmonizing and aggregating complex, multisource, and multi-site Parkinson's disease data; Challenge 2, identify salient predictive features associated with specific clinical traits, e.g., patient falls; Challenge 3, forecast patient falls and evaluate the classification performance; and Challenge 4, predict tremor dominance (TD) vs. postural instability and gait difficulty (PIGD). Our findings suggest that, compared to other approaches, model-free machine learning based techniques provide a more reliable clinical outcome forecasting of falls in Parkinson's patients, for example, with a classification accuracy of about 70-80%.
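A minimal sketch of one of the classifiers named above (a random forest) with 5-fold cross-validation is shown below; the feature names and synthetic labels are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch: random-forest classification of fallers vs. non-fallers
# with 5-fold cross-validation (feature names and data are illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
n = 300
gait_speed = rng.normal(1.0, 0.2, n)          # m/s
hoehn_yahr = rng.integers(1, 5, n)            # disease stage
pigd_score = rng.normal(0.5, 0.2, n)          # postural instability/gait score
X = np.column_stack([gait_speed, hoehn_yahr, pigd_score])
y = (0.8 * pigd_score - 0.6 * gait_speed + 0.1 * hoehn_yahr
     + rng.normal(0, 0.2, n)) > 0             # synthetic fall labels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
print("5-fold accuracy:", scores.round(2), "mean:", scores.mean().round(2))
```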
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operating costs and assess the stability of environmental balance. To address the multivariable, uncertain, and nonlinear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that predicted values match measured values well, that the model is effective for simulation and prediction, and that it can be used to optimize the operating status. The prediction model provides a simple and practical approach for the operation and management of wastewater treatment plants, and has good research and engineering practical value.
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested based on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was proved to produce more accurate and reliable results and to degrade less in longer predictions than the three individual models. The average one-week-ahead prediction error has been reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average has been reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
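A minimal sketch of the weighting idea is given below: search for combination weights over three component forecasts by simulated annealing (here scipy's dual_annealing) to minimize the mean absolute percentage error on a training window. The data, normalization of weights, and error metric are assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch: simulated-annealing search for weights that combine
# three component forecasts to minimize MAPE on a training window.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(5)
actual = 100 + rng.normal(0, 5, 60)                    # synthetic weekly series
chaotic = actual * (1 + rng.normal(0, 0.11, 60))       # three imperfect
ann = actual * (1 + rng.normal(0, 0.13, 60))           # component forecasts
svm = actual * (1 + rng.normal(0, 0.13, 60))
preds = np.vstack([chaotic, ann, svm])

def mape(raw_w):
    w = np.abs(raw_w) / np.abs(raw_w).sum()            # normalize to a simplex
    combo = w @ preds
    return np.mean(np.abs((combo - actual) / actual)) * 100

res = dual_annealing(mape, bounds=[(0.01, 1.0)] * 3, seed=0)
best_w = np.abs(res.x) / np.abs(res.x).sum()
print("weights:", best_w.round(2), "MAPE: %.2f%%" % res.fun)
```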
Hydrologic Design in the Anthropocene
NASA Astrophysics Data System (ADS)
Vogel, R. M.; Farmer, W. H.; Read, L.
2014-12-01
In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWMs), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations which ignore model error produce model output which cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports
NASA Technical Reports Server (NTRS)
Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.
1982-01-01
Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.
Cryopreserved trout hepatocytes provide a convenient in vitro system for measuring the intrinsic clearance of xenobiotics. Measured clearance rates can then be extrapolated to the whole animal as a means of improving modeled bioaccumulation predictions. To date, however, the in...
Research agenda for integrated landscape modeling
Samuel A. Cushman; Donald McKenzie; David L. Peterson; Jeremy Littell; Kevin S. McKelvey
2007-01-01
Reliable predictions of how changing climate and disturbance regimes will affect forest ecosystems are crucial for effective forest management. Current fire and climate research in forest ecosystem and community ecology offers data and methods that can inform such predictions. However, research in these fields occurs at different scales, with disparate goals, methods,...
Research agenda for integrated landscape modeling
Samuel A. Cushman; Donald McKenzie; David L. Peterson; Jeremy Littell; Kevin S. McKelvey
2006-01-01
Reliable predictions of the effects that changing climate and disturbance regimes will have on forest ecosystems are crucial for effective forest management. Current fire and climate research in forest ecosystem and community ecology offers data and methods that can inform such predictions. However, research in these fields occurs at different scales, with disparate goals,...
Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...
Time Dependent Dielectric Breakdown in Copper Low-k Interconnects: Mechanisms and Reliability Models
Wong, Terence K.S.
2012-01-01
The time dependent dielectric breakdown phenomenon in copper low-k damascene interconnects for ultra large-scale integration is reviewed. The loss of insulation between neighboring interconnects represents an emerging back end-of-the-line reliability issue that is not fully understood. After describing the main dielectric leakage mechanisms in low-k materials (Poole-Frenkel and Schottky emission), the major dielectric reliability models that had appeared in the literature are discussed, namely: the Lloyd model, 1/E model, thermochemical E model, E^(1/2) models, E^2 model and the Haase model. These models can be broadly categorized into those that consider only intrinsic breakdown (Lloyd, 1/E, E and Haase) and those that take into account copper migration in low-k materials (E^(1/2), E^2). For each model, the physical assumptions and the proposed breakdown mechanism will be discussed, together with the quantitative relationship predicting the time to breakdown and supporting experimental data. Experimental attempts on validation of dielectric reliability models using data obtained from low field stressing are briefly discussed. The phenomenon of soft breakdown, which often precedes hard breakdown in porous ultra low-k materials, is highlighted for future research.
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
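A simplified sketch of the classification step is shown below: each compound is represented by binary matches against a set of pharmacophore hypotheses and an SVM separates blockers from nonblockers. The feature construction and data are synthetic simplifications of the ensemble approach described, not the authors' pipeline.

```python
# Hypothetical sketch: SVM classification of hERG blockers from binary
# pharmacophore-hypothesis match features (simplified, synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
n_compounds, n_hypotheses = 500, 8
X = rng.integers(0, 2, (n_compounds, n_hypotheses))    # 1 = matches hypothesis
weights = rng.uniform(0.5, 2.0, n_hypotheses)
y = (X @ weights + rng.normal(0, 1.0, n_compounds)) > weights.sum() / 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```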
Predicting operator workload during system design
NASA Technical Reports Server (NTRS)
Aldrich, Theodore B.; Szabo, Sandra M.
1988-01-01
A workload prediction methodology was developed in response to the need to measure workloads associated with operation of advanced aircraft. The application of the methodology will involve: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include reliability and validity of workload predictors and selection of appropriate criterion measures.
Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex
Storchi, Riccardo; Zippo, Antonio G.; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E. M.
2012-01-01
Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinics (e.g. for improving invasive Brain-Machine Interface devices). However the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role. PMID:22586452
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
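For readers who want to experiment with calibration-set design, here is a minimal NumPy sketch of the expected-reliability (prediction error variance) calculation that underlies criteria such as CDmean; the relationship matrix, heritability and set sizes are arbitrary stand-ins, and the contrast-based CDmean of the paper is simplified to per-individual expected reliabilities.

import numpy as np

def expected_reliability(A, calib_idx, h2=0.5):
    # Expected GBLUP reliabilities of all individuals when only `calib_idx` are phenotyped.
    # A is the realized additive relationship matrix; PEV comes from the mixed-model equations.
    n = A.shape[0]
    lam = (1.0 - h2) / h2                          # sigma2_e / sigma2_g
    Z = np.zeros((len(calib_idx), n))
    Z[np.arange(len(calib_idx)), calib_idx] = 1.0
    C = Z.T @ Z + lam * np.linalg.inv(A)
    PEV = lam * np.diag(np.linalg.inv(C))          # PEV expressed relative to sigma2_g
    return 1.0 - PEV / np.diag(A)                  # expected reliability of each breeding value

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))
A = X @ X.T / 500 + 1e-3 * np.eye(200)             # toy genomic relationship matrix
calib = rng.choice(200, size=50, replace=False)    # candidate calibration set
rel = expected_reliability(A, calib)
others = np.setdiff1d(np.arange(200), calib)
print("mean expected reliability of unphenotyped candidates:", round(rel[others].mean(), 3))

In a calibration-set optimization, one would evaluate this mean for many candidate sets (or exchange individuals greedily) and keep the set that maximizes it.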
Mei, Wenjuan; Zeng, Xianping; Yang, Chenglin; Zhou, Xiuyun
2017-01-01
The insulated gate bipolar transistor (IGBT) is a kind of excellent performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs’ RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on optimally pruned extreme learning machine (OPELM) and Volterra series is proposed to track the IGBT’s degradation trace and estimate its RUL; we refer to this model as Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct phase space for IGBTs’ ageing samples, and a new weight update algorithm, which can effectively reduce the influence of the outliers and noises, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and least squares estimation (LSE) method is used to calculate the output weights of OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches. PMID:29099811
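A much-simplified sketch of the same prognostic idea, with time-delay (phase-space) embedding and a k-nearest-neighbor regressor standing in for the Volterra/OPELM machinery of the VKOPP model; the synthetic ageing signal and all parameters are illustrative assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def delay_embed(series, dim=3, tau=2):
    # Time-delay (phase-space) reconstruction of a 1-D degradation signal.
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(2)
t = np.arange(500)
# Toy ageing precursor (e.g. a drifting electrical parameter) with measurement noise.
signal = 0.001 * t + 0.05 * np.exp(t / 200.0) + 0.02 * rng.standard_normal(t.size)

dim, tau, horizon = 3, 2, 1
emb = delay_embed(signal, dim, tau)
X, y = emb[:-horizon], signal[(dim - 1) * tau + horizon:]   # one-step-ahead targets
split = 350
model = KNeighborsRegressor(n_neighbors=5).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out ageing data: {rmse:.4f}")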
Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bin; Huang, Rui; Wang, Yubo
2016-05-02
Un-coordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behaviors, and other baseload in the distribution grid is one of the challenges that impede optimal control of the EV charging problem. Previous research did not fully solve this problem due to a lack of real-world EV charging data and proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on the historical data. At each time interval, the predictive optimization program is computed for optimal schedules given the estimated parameters. Only the first element from the algorithm outputs is implemented, according to the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. This system is validated with the real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which brings significance to more reliable and large-scale implementation.
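A hedged sketch of the receding-horizon logic described above (not the authors' PESA implementation): at each interval a small load-flattening problem is solved over the forecast window and only the first charging decision is kept; the loads, charger limit and horizon lengths are invented for illustration.

import numpy as np
from scipy.optimize import minimize

def mpc_step(baseload_forecast, energy_remaining, p_max):
    # One receding-horizon step: schedule charging power over the look-ahead window so the
    # combined load is as flat as possible, then keep only the first decision (MPC paradigm).
    H = len(baseload_forecast)
    objective = lambda p: np.var(baseload_forecast + p)
    constraint = {"type": "eq", "fun": lambda p: p.sum() - energy_remaining}
    x0 = np.full(H, min(energy_remaining / H, p_max))
    res = minimize(objective, x0, bounds=[(0.0, p_max)] * H,
                   constraints=[constraint], method="SLSQP")
    return float(res.x[0])

rng = np.random.default_rng(3)
baseload = 50 + 10 * np.sin(np.linspace(0, 4 * np.pi, 48)) + rng.normal(0, 2, 48)  # kW, hourly
energy_needed, p_max, schedule = 40.0, 6.6, []       # kWh still required, charger limit in kW
for t in range(24):                                   # simulate 24 hourly intervals
    p = mpc_step(baseload[t:t + 12], energy_needed, p_max)   # 12 h look-ahead forecast
    schedule.append(p)
    energy_needed = max(energy_needed - p, 0.0)       # 1 h interval, so kW equals kWh
print("implemented charging decisions (kW):", np.round(schedule[:6], 2))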
Park, Juhyun; Kang, Minyong; Jeong, Chang Wook; Oh, Sohee; Lee, Jeong Woo; Lee, Seung Bae; Son, Hwancheol; Jeong, Hyeon; Cho, Sung Yong
2015-08-01
The modified Seoul National University Renal Stone Complexity scoring system (S-ReSC-R) for retrograde intrarenal surgery (RIRS) was developed as a tool to predict stone-free rate (SFR) after RIRS. We externally validated the S-ReSC-R. We retrospectively reviewed 159 patients who underwent RIRS. The S-ReSC-R was assigned from 1 to 12 according to the location and number of sites involved. Stone-free status was defined as no evidence of a stone or only clinically insignificant residual fragments less than 2 mm. Interobserver and test-retest reliabilities were evaluated. Statistical performance of the prediction model was assessed by its predictive accuracy, predictive probability, and clinical usefulness. Overall SFR was 73.0%. The SFRs were 86.7%, 70.2%, and 48.6% in low-score (1-2), intermediate-score (3-4), and high-score (5-12) groups, respectively (p<0.001). External validation of S-ReSC-R revealed an area under the curve (AUC) of 0.731 (95% CI 0.650-0.813). The AUC of the three-tiered S-ReSC-R was 0.701 (95% CI 0.609-0.794). The calibration plot showed that the predicted probability of SFR had a concordance comparable to that of the observed frequency. The Hosmer-Lemeshow goodness of fit test revealed a p-value of 0.01 for the S-ReSC-R and 0.90 for the three-tiered S-ReSC-R. Interobserver and test-retest reliabilities revealed an almost perfect level of agreement. The present study proved the predictive value of S-ReSC-R to predict SFR following RIRS in an independent cohort. Interobserver and test-retest reliabilities confirmed that S-ReSC-R was reliable and valid.
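The discrimination and calibration statistics quoted above can be reproduced on synthetic data with a few lines of Python; the outcome simulation and the 10-group Hosmer-Lemeshow convention below are assumptions for illustration only.

import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, g=10):
    # Hosmer-Lemeshow goodness-of-fit statistic over g risk groups (df = g - 2 by convention).
    order = np.argsort(y_prob)
    hl = 0.0
    for idx in np.array_split(order, g):
        n, obs, exp = len(idx), y_true[idx].sum(), y_prob[idx].sum()
        hl += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return hl, chi2.sf(hl, df=g - 2)

rng = np.random.default_rng(4)
y_prob = rng.uniform(0.05, 0.95, 300)               # predicted stone-free probabilities
y_true = (rng.random(300) < y_prob).astype(int)     # outcomes generated consistent with them
auc = roc_auc_score(y_true, y_prob)
hl, p = hosmer_lemeshow(y_true, y_prob)
print(f"AUC = {auc:.3f}, Hosmer-Lemeshow p = {p:.2f}")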
NASA Astrophysics Data System (ADS)
Perera, Kushan C.; Western, Andrew W.; Robertson, David E.; George, Biju; Nawarathna, Bandara
2016-06-01
Irrigation demands fluctuate in response to weather variations and a range of irrigation management decisions, which creates challenges for water supply system operators. This paper develops a method for real-time ensemble forecasting of irrigation demand and applies it to irrigation command areas of various sizes for lead times of 1 to 5 days. The ensemble forecasts are based on a deterministic time series model coupled with ensemble representations of the various inputs to that model. Forecast inputs include past flow, precipitation, and potential evapotranspiration. These inputs are variously derived from flow observations from a modernized irrigation delivery system; short-term weather forecasts derived from numerical weather prediction models and observed weather data available from automatic weather stations. The predictive performance for the ensemble spread of irrigation demand was quantified using rank histograms, the mean continuous rank probability score (CRPS), the mean CRPS reliability and the temporal mean of the ensemble root mean squared error (MRMSE). The mean forecast was evaluated using root mean squared error (RMSE), Nash-Sutcliffe model efficiency (NSE) and bias. The NSE values for evaluation periods ranged between 0.96 (1 day lead time, whole study area) and 0.42 (5 days lead time, smallest command area). Rank histograms and comparison of MRMSE, mean CRPS, mean CRPS reliability and RMSE indicated that the ensemble spread is generally a reliable representation of the forecast uncertainty for short lead times but underestimates the uncertainty for long lead times.
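A small Python sketch of two of the verification scores used above, the sample-based CRPS and the Nash-Sutcliffe efficiency, applied to a synthetic demand series and a toy ensemble (the ensemble generation is purely illustrative).

import numpy as np

def crps_ensemble(ens, obs):
    # Sample-based CRPS for one ensemble and one observation:
    # CRPS = E|X - obs| - 0.5 * E|X - X'| over the empirical ensemble.
    ens = np.asarray(ens, dtype=float)
    return np.mean(np.abs(ens - obs)) - 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))

def nse(sim, obs):
    # Nash-Sutcliffe model efficiency of a (mean) forecast series.
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
obs = 100 + 20 * np.sin(np.linspace(0, 6, 60))             # synthetic irrigation demand series
ens = obs[:, None] + rng.normal(0, 8, (60, 50))            # 50-member ensemble forecasts
print("mean CRPS:", round(np.mean([crps_ensemble(e, o) for e, o in zip(ens, obs)]), 2))
print("NSE of the ensemble-mean forecast:", round(nse(ens.mean(axis=1), obs), 3))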
Přibyl, J; Madsen, P; Bauer, J; Přibylová, J; Simečková, M; Vostrý, L; Zavadilová, L
2013-03-01
Estimated breeding values (EBV) for first-lactation milk production of Holstein cattle in the Czech Republic were calculated using a conventional animal model and by single-step prediction of the genomic enhanced breeding value. Two overlapping data sets of milk production data were evaluated: (1) calving years 1991 to 2006, with 861,429 lactations and 1,918,901 animals in the pedigree and (2) calving years 1991 to 2010, with 1,097,319 lactations and 1,906,576 animals in the pedigree. Global Interbull (Uppsala, Sweden) deregressed proofs of 114,189 bulls were used in the analyses. Reliabilities of Interbull values were equivalent to an average of 8.53 effective records, which were used in a weighted analysis. A total of 1,341 bulls were genotyped using the Illumina BovineSNP50 BeadChip V2 (Illumina Inc., San Diego, CA). Among the genotyped bulls were 332 young bulls with no daughters in the first data set but more than 50 daughters (88.41, on average) with performance records in the second data set. For young bulls, correlations of EBV and genomic enhanced breeding value before and after progeny testing, corresponding average expected reliabilities, and effective daughter contributions (EDC) were calculated. The reliability of prediction of pedigree EBV for young bulls was 0.41, corresponding to EDC=10.6. Including Interbull deregressed proofs improved the reliability of prediction by EDC=13.4, and including genotyping improved prediction reliability by EDC=6.2. Total average expected reliability of prediction reached 0.67, corresponding to EDC=30.2. The combination of domestic and Interbull sources for both genotyped and nongenotyped animals is valuable for improving the accuracy of genetic prediction in small populations of dairy cattle. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Computational toxicology using the OpenTox application programming interface and Bioclipse
2011-01-01
Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
Lee, Yong Ju; Jung, Byeong Su; Kim, Kee-Tae; Paik, Hyun-Dong
2015-09-01
A predictive model was developed to describe the growth of Staphylococcus aureus in raw pork by using the Integrated Pathogen Modeling Program 2013 and a polynomial model as a secondary predictive model. S. aureus requires approximately 180 h to reach 5-6 log CFU/g at 10 °C. At 15 °C and 25 °C, approximately 48 and 20 h, respectively, are required to cause food poisoning. Predicted data using the Gompertz model were the most accurate in this study. For the lag time (LT) model, bias factor (Bf) and accuracy factor (Af) values were both 1.014, showing that the predictions were within a reliable range. For the specific growth rate (SGR) model, Bf and Af were 1.188 and 1.190, respectively. Additionally, both Bf and Af values of the LT and SGR models were close to 1, indicating that the IPMP Gompertz model is more adequate for predicting the growth of S. aureus on raw pork than other models. Copyright © 2015 Elsevier Ltd. All rights reserved.
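As a rough illustration of the quantities reported above, the following Python sketch evaluates a modified Gompertz growth curve and the bias (Bf) and accuracy (Af) factors; the parameter values and the "observed" lag times are invented, not the paper's data.

import numpy as np

def gompertz_log_count(t, n0, A, mu_max, lag):
    # Modified Gompertz curve commonly used in predictive microbiology:
    # log10 N(t) = n0 + A * exp(-exp(mu_max*e/A*(lag - t) + 1))
    return n0 + A * np.exp(-np.exp(mu_max * np.e / A * (lag - t) + 1.0))

def bias_accuracy_factors(pred, obs):
    # Bf = 10**mean(log10(pred/obs)); Af = 10**mean(|log10(pred/obs)|)
    ratio = np.log10(np.asarray(pred, float) / np.asarray(obs, float))
    return 10 ** ratio.mean(), 10 ** np.abs(ratio).mean()

t = np.linspace(0, 180, 19)                                        # hours, roughly the 10 degC case
print(np.round(gompertz_log_count(t, 3.0, 4.0, 0.08, 40.0), 2))    # predicted log10 CFU/g over time

obs_LT = np.array([95.0, 22.0, 7.5])     # illustrative observed lag times (h) at 10, 15, 25 degC
pred_LT = np.array([98.0, 21.5, 7.8])    # illustrative model-predicted lag times
Bf, Af = bias_accuracy_factors(pred_LT, obs_LT)
print(f"LT model: Bf = {Bf:.3f}, Af = {Af:.3f}")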
Design of Oil-Lubricated Machine for Life and Reliability
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.
2007-01-01
In the post-World War II era, the major technology drivers for improving the life, reliability, and performance of rolling-element bearings and gears have been the jet engine and the helicopter. By the late 1950s, most of the materials used for bearings and gears in the aerospace industry had been introduced into use. By the early 1960s, the life of most steels was increased over that experienced in the early 1940s, primarily by the introduction of vacuum degassing and vacuum melting processes in the late 1950s. The development of elastohydrodynamic (EHD) theory showed that most rolling bearings and gears have a thin film separating the contacting bodies during motion and it is that film which affects their lives. Computer programs modeling bearing and gear dynamics that incorporate probabilistic life prediction methods and EHD theory enable optimization of rotating machinery based on life and reliability. With improved manufacturing and processing, the potential improvement in bearing and gear life can be as much as 80 times that attainable in the early 1950s. The work presented summarizes the use of laboratory fatigue data for bearings and gears coupled with probabilistic life prediction and EHD theories to predict the life and reliability of a commercial turboprop gearbox. The resulting predictions are compared with field data.
NASA Technical Reports Server (NTRS)
Sproles, Darrell W.; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.
A comparison of hydrologic models for ecological flows and water availability
Peter V. Caldwell; Jonathan G. Kennen; Ge Sun; Julie E. Kiang; Jon B. Butcher; Michele C. Eddy; Lauren E. Hay; Jacob H. LaFontaine; Ernie F. Hain; Stacy A. C. Nelson; Steve G. McNulty
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow...
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
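A dependency-free sketch of the decomposition-ensemble idea: a crude moving-average decomposition stands in for CEEMD, support vector regression is fit per component, and a second SVR integrates the component forecasts; the GWO hyperparameter search is omitted and all data are synthetic.

import numpy as np
from sklearn.svm import SVR

def simple_decompose(x, windows=(3, 7, 15)):
    # Crude multi-scale decomposition (moving-average trends plus residuals) standing in
    # for CEEMD intrinsic mode functions, so the sketch stays dependency-free.
    comps, resid = [], x.astype(float)
    for w in windows:
        trend = np.convolve(resid, np.ones(w) / w, mode="same")
        comps.append(resid - trend)
        resid = trend
    comps.append(resid)
    return comps

def lagged(x, n_lags=5):
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

rng = np.random.default_rng(6)
pm25 = 60 + 20 * np.sin(np.arange(400) / 15.0) + rng.normal(0, 5, 400)   # synthetic daily PM2.5
split, preds_train, preds_test = 300, [], []
for comp in simple_decompose(pm25):
    X, y = lagged(comp)
    m = SVR(C=10.0, gamma="scale").fit(X[:split], y[:split])   # GWO tuning replaced by fixed values
    preds_train.append(m.predict(X[:split]))
    preds_test.append(m.predict(X[split:]))
_, y_all = lagged(pm25)
integrator = SVR(C=10.0).fit(np.column_stack(preds_train), y_all[:split])
final = integrator.predict(np.column_stack(preds_test))
print("test RMSE:", round(float(np.sqrt(np.mean((final - y_all[split:]) ** 2))), 2))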
Bayesian quantitative precipitation forecasts in terms of quantiles
NASA Astrophysics Data System (ADS)
Bentzien, Sabrina; Friederichs, Petra
2014-05-01
Ensemble prediction systems (EPS) for numerical weather predictions on the mesoscale are particularly developed to obtain probabilistic guidance for high impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable to describe precipitation at various locations, since no assumption is required on the distribution of precipitation. The focus is on prediction during high-impact events and is related to the Volkswagen Stiftung funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. The neighborhood method and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, a variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates by the use of appropriate prior distributions. Moreover, the implementation of an additional process layer for the regression parameters accounts for spatial variations of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. The QS is a proper scoring function and can be decomposed into reliability, resolution, and uncertainty parts. A quantile reliability plot gives detailed insights into the predictive performance of the quantile forecasts.
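The quantile score used for verification is the pinball loss; a short sketch (with synthetic, skewed "precipitation" data and a toy raw ensemble) of how it could be computed at several probability levels follows.

import numpy as np

def quantile_score(y_obs, q_pred, tau):
    # Pinball/quantile score for probability level tau (lower is better).
    diff = y_obs - q_pred
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(7)
obs = rng.gamma(shape=0.4, scale=5.0, size=1000)        # skewed precipitation amounts (mm)
ens = obs[:, None] * rng.uniform(0.5, 1.5, (1000, 20))  # toy 20-member ensemble forecasts
for tau in (0.5, 0.9, 0.99):
    q_raw = np.quantile(ens, tau, axis=1)               # raw-ensemble quantile forecast
    print(f"tau = {tau:.2f}  QS = {quantile_score(obs, q_raw, tau):.3f}")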
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
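The competing-random-variables idea behind the HEP estimate can be sketched with a few lines of Monte Carlo; the lognormal distributions and parameters below are illustrative assumptions, not the study's fitted values, and the Latin Hypercube and response-surface steps are omitted.

import numpy as np

rng = np.random.default_rng(8)
N = 100_000
# Phenomenological time: time available before core damage (e.g. from simulation results).
# Performance time: operator time to complete the manual core relocation (e.g. from interviews).
t_phenom = rng.lognormal(mean=np.log(45.0), sigma=0.3, size=N)    # minutes (illustrative)
t_perform = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=N)   # minutes (illustrative)
hep = np.mean(t_perform > t_phenom)      # failure = action not completed in time
print(f"Monte Carlo estimate of the human error probability: {hep:.4f}")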
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Quantitative metal magnetic memory reliability modeling for welded joints
NASA Astrophysics Data System (ADS)
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic field, and measurement noises make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in the tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter to establish a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the prediction reliability degree R_1 and the verification reliability degree R_2 is 9.15%. This method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
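The classical stress-strength interference calculation that the improved model builds on can be written compactly; the Gaussian parameters, and the way K_vs is mapped to the "stress" variable below, are illustrative assumptions only.

import numpy as np
from scipy.stats import norm

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    # Classical stress-strength interference for Gaussian variables:
    # R = P(strength > stress) = Phi((mu_S - mu_L) / sqrt(sd_S**2 + sd_L**2))
    z = (mu_strength - mu_stress) / np.sqrt(sd_strength**2 + sd_stress**2)
    return norm.cdf(z)

# Illustrative numbers: K_vs treated as the Gaussian "stress" variable whose mean grows
# as the residual life ratio T decreases.
for T, mu_kvs in [(0.9, 40.0), (0.5, 70.0), (0.1, 95.0)]:
    R = interference_reliability(mu_strength=120.0, sd_strength=10.0,
                                 mu_stress=mu_kvs, sd_stress=12.0)
    print(f"T = {T:.1f}  ->  reliability degree R = {R:.3f}")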
NASA Technical Reports Server (NTRS)
Trela, W.
1980-01-01
The paper reviews the progress of the major technical tasks of the DOE/NASA/Ford program Evaluation of Ceramics for Stator Applications in Automotive Gas Turbine Engines: reliability prediction, stator fabrication, material characterization, and stator evaluation. A fast fracture reliability model was prepared for a one-piece ceramic stator. Periodic inspection results are presented.
Brøndum, R F; Su, G; Janss, L; Sahana, G; Guldbrandtsen, B; Boichard, D; Lund, M S
2015-06-01
This study investigated the effect on the reliability of genomic prediction when a small number of significant variants from single marker analysis based on whole genome sequence data were added to the regular 54k single nucleotide polymorphism (SNP) array data. The extra markers were selected with the aim of augmenting the custom low-density Illumina BovineLD SNP chip (San Diego, CA) used in the Nordic countries. The single-marker analysis was done breed-wise on all 16 index traits included in the breeding goals for Nordic Holstein, Danish Jersey, and Nordic Red cattle plus the total merit index itself. Depending on the trait's economic weight, 15, 10, or 5 quantitative trait loci (QTL) were selected per trait per breed and 3 to 5 markers were selected to tag each QTL. After removing duplicate markers (same marker selected for more than one trait or breed) and filtering for high pairwise linkage disequilibrium and assaying performance on the array, a total of 1,623 QTL markers were selected for inclusion on the custom chip. Genomic prediction analyses were performed for Nordic and French Holstein and Nordic Red animals using either a genomic BLUP or a Bayesian variable selection model. When using the genomic BLUP model including the QTL markers in the analysis, reliability was increased by up to 4 percentage points for production traits in Nordic Holstein animals, up to 3 percentage points for Nordic Reds, and up to 5 percentage points for French Holstein. Smaller gains of up to 1 percentage point were observed for mastitis, but only a 0.5 percentage point increase was seen for fertility. When using a Bayesian model, accuracies were generally higher with only 54k data compared with the genomic BLUP approach, but increases in reliability were relatively smaller when QTL markers were included. Results from this study indicate that the reliability of genomic prediction can be increased by including markers significant in genome-wide association studies on whole genome sequence data alongside the 54k SNP set. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
NASA Astrophysics Data System (ADS)
Forbes, Kevin F.; St. Cyr, O. C.
2017-10-01
This paper addresses whether geomagnetic activity challenged the reliability of the electric power system during part of the declining phase of solar cycle 23. Operations by National Grid in England and Wales are examined over the period of 11 March 2003 through 31 March 2005. This paper examines the relationship between measures of geomagnetic activity and a metric of challenged electric power reliability known as the net imbalance volume (NIV). Measured in megawatt hours, NIV represents the sum of all energy deployments initiated by the system operator to balance the electric power system. The relationship between geomagnetic activity and NIV is assessed using a multivariate econometric model. The model was estimated using half-hour settlement data over the period of 11 March 2003 through 31 December 2004. The results indicate that geomagnetic activity had a demonstrable effect on NIV over the sample period. Based on the parameter estimates, out-of-sample predictions of NIV were generated for each half hour over the period of 1 January to 31 March 2005. Consistent with the existence of a causal relationship between geomagnetic activity and the electricity market imbalance, the root-mean-square error of the out-of-sample predictions of NIV is smaller; that is, the predictions are more accurate, when the statistically significant estimated effects of geomagnetic activity are included as drivers in the predictions.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or risk of failure probability. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.
Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D
2013-09-30
Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (the Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the most similar estimates to RUSLE2 by providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similar to the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest caution should be exercised when using watershed-scale models for field-level decision-making, while field-specific data is of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Ambler, Gareth; Omar, Rumana Z; Royston, Patrick
2007-06-01
Risk models that aim to predict the future course and outcome of disease processes are increasingly used in health research, and it is important that they are accurate and reliable. Most of these risk models are fitted using routinely collected data in hospitals or general practices. Clinical outcomes such as short-term mortality will be near-complete, but many of the predictors may have missing values. A common approach to dealing with this is to perform a complete-case analysis. However, this may lead to overfitted models and biased estimates if entire patient subgroups are excluded. The aim of this paper is to investigate a number of methods for imputing missing data to evaluate their effect on risk model estimation and the reliability of the predictions. Multiple imputation methods, including hotdecking and multiple imputation by chained equations (MICE), were investigated along with several single imputation methods. A large national cardiac surgery database was used to create simulated yet realistic datasets. The results suggest that complete case analysis may produce unreliable risk predictions and should be avoided. Conditional mean imputation performed well in our scenario, but may not be appropriate if using variable selection methods. MICE was amongst the best performing multiple imputation methods with regards to the quality of the predictions. Additionally, it produced the least biased estimates, with good coverage, and hence is recommended for use in practice.
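A minimal sketch of chained-equations-style imputation on simulated registry-like data, using scikit-learn's IterativeImputer as a stand-in for MICE; a full MICE analysis would generate several imputed data sets and pool estimates with Rubin's rules, which is omitted here, and all variables and effect sizes are invented.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 2000
age = rng.normal(65, 10, n)
ef = rng.normal(55, 8, n) - 0.1 * (age - 65)                  # a predictor correlated with age
risk = 1 / (1 + np.exp(-(-5 + 0.08 * age - 0.05 * ef)))
death = (rng.random(n) < risk).astype(int)                    # near-complete short-term outcome
X = np.column_stack([age, ef])
X[rng.random(n) < 0.3, 1] = np.nan                            # 30% missingness in one predictor

imputer = IterativeImputer(max_iter=10, sample_posterior=True, random_state=0)
X_imp = imputer.fit_transform(X)                              # one MICE-style imputation
model = LogisticRegression().fit(X_imp[:1500], death[:1500])
auc = roc_auc_score(death[1500:], model.predict_proba(X_imp[1500:])[:, 1])
print(f"validation AUC with imputed predictors: {auc:.3f}")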
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, where the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model which works well, but not for the right reasons. In addition, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. The hydrological model used is a conceptual distributed model called TETIS, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, with respect to the results of an unsuitable error model such as SLS. Finally, the most likely predictions in a validation period for the BJI+GA and SLS error models show a similar performance.
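To make the contrast with SLS concrete, here is a hedged sketch of a log-likelihood for a simple heteroscedastic, AR(1)-correlated additive error model; this is only one of many possible GA formulations, the variance function and all numbers are simplifying assumptions, and the first observation's marginal term is omitted for brevity.

import numpy as np

def log_likelihood_ga(obs, sim, sigma0, sigma1, phi):
    # Heteroscedastic Gaussian errors with sigma_t = sigma0 + sigma1*sim_t and AR(1)
    # autocorrelation phi in the standardized residuals (phi = 0, sigma1 = 0 recovers SLS).
    sigma = sigma0 + sigma1 * sim
    eta = (obs - sim) / sigma                        # standardized raw residuals
    innov = eta[1:] - phi * eta[:-1]                 # decorrelated innovations
    ll = (-0.5 * np.sum(innov**2 / (1 - phi**2))
          - 0.5 * innov.size * np.log(2 * np.pi * (1 - phi**2))
          - np.sum(np.log(sigma[1:])))               # Jacobian of the standardization
    return ll

rng = np.random.default_rng(10)
sim = 20 + 15 * np.abs(np.sin(np.arange(300) / 20.0))         # "simulated" flows
obs = sim + (0.5 + 0.1 * sim) * rng.standard_normal(300)      # heteroscedastic synthetic truth
print("SLS-like error model:", round(log_likelihood_ga(obs, sim, 2.0, 0.0, 0.0), 1))
print("GA error model      :", round(log_likelihood_ga(obs, sim, 0.5, 0.1, 0.0), 1))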
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
Li, Jiazhong; Gramatica, Paola
2010-11-01
Quantitative structure-activity relationship (QSAR) methodology aims to explore the relationship between molecular structures and experimental endpoints, producing a model for the prediction of new data; the predictive performance of the model must be checked by external validation. Clearly, the qualities of chemical structure information and experimental endpoints, as well as the statistical parameters used to verify the external predictivity, have a strong influence on QSAR model reliability. Here, we emphasize the importance of these three aspects by analyzing our models of estrogen receptor binders (Endocrine Disruptor Knowledge Base (EDKB) database). Endocrine disrupting chemicals, which mimic or antagonize endogenous hormones such as estrogens, are a hot topic in environmental and toxicological sciences. QSAR shows great value in predicting the estrogenic activity and exploring the interactions between the estrogen receptor and ligands. We have verified our previously published model for additional external validation on new EDKB chemicals. Having found some errors in the used 3D molecular conformations, we redeveloped a new model using the same data set with corrected structures, the same method (ordinary least-squares regression, OLS) and DRAGON descriptors. The new model, based on some different descriptors, is more predictive on external prediction sets. Three different formulas to calculate the correlation coefficient for the external prediction set (Q2EXT) were compared, and the results indicated that the new proposal of Consonni et al. had more reasonable results, consistent with the conclusions from the regression line, Williams plot and root mean square error (RMSE) values. Finally, the importance of reliable endpoint values has been highlighted by comparing the classification assignments of EDKB with those of another estrogen receptor binders database (METI): we found that 16.1% of the assignments of the common compounds were opposite (20 of 124 common compounds). In order to verify the real assignments for these inconsistent compounds, we predicted these samples, as a blind external set, by our regression models and compared the results with the two databases. The results indicated that most of the predictions were consistent with METI. Furthermore, we built a kNN classification model using the 104 consistent compounds to predict the inconsistent ones, and most of the predictions were also in agreement with the METI database.
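The external-validation statistics discussed above can be computed directly; below is a short sketch with three common external Q2 formulations, including the Consonni et al. proposal (Q2F3), applied to invented training and external-set activities (not the EDKB data).

import numpy as np

def external_q2(y_train, y_ext, y_ext_pred):
    # Three external-validation coefficients: Q2_F1, Q2_F2 and Q2_F3 (Consonni et al.).
    press = np.sum((y_ext - y_ext_pred) ** 2)
    q2f1 = 1 - press / np.sum((y_ext - y_train.mean()) ** 2)
    q2f2 = 1 - press / np.sum((y_ext - y_ext.mean()) ** 2)
    q2f3 = 1 - (press / len(y_ext)) / (np.sum((y_train - y_train.mean()) ** 2) / len(y_train))
    rmse = np.sqrt(press / len(y_ext))
    return q2f1, q2f2, q2f3, rmse

rng = np.random.default_rng(11)
y_train = rng.normal(6.0, 1.5, 80)                # e.g. hypothetical log activity values
y_ext = rng.normal(6.3, 0.6, 20)                  # external set with a narrower range
y_ext_pred = y_ext + rng.normal(0, 0.5, 20)       # hypothetical OLS model predictions
print([round(v, 3) for v in external_q2(y_train, y_ext, y_ext_pred)])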
Performance and Reliability of Bonded Interfaces for High-Temperature Packaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul P
2017-08-02
Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (>200 degrees C). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. Mechanical characterization tests that result in stress-strain curves and accelerated tests that produce cycles-to-failure results will be conducted. Also, we present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed.
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background: Despite the wide adoption of TMS, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled. Empirical validation of such models is scarce and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all 3 coil models in the motor cortex using a computer FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR-based phase mapping technique to get a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field. Modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to make reliable predictions of neuronal activation. PMID:28640923
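A compact sketch of the Biot-Savart evaluation for a wire discretized into straight segments, here idealizing one wing of a figure-of-eight coil as a single circular loop; the radius, current and target points are illustrative, and a realistic coil model would sum contributions over all winding turns.

import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart(wire_pts, current, field_pts):
    # Magnetic field of a discretized wire (straight segments between consecutive points)
    # via the Biot-Savart law: dB = mu0/(4*pi) * I * dl x r / |r|**3
    seg_mid = 0.5 * (wire_pts[1:] + wire_pts[:-1])
    dl = wire_pts[1:] - wire_pts[:-1]
    B = np.zeros_like(field_pts, dtype=float)
    for i, p in enumerate(field_pts):
        r = p - seg_mid
        r_norm = np.linalg.norm(r, axis=1, keepdims=True)
        B[i] = MU0 / (4 * np.pi) * current * np.sum(np.cross(dl, r) / r_norm**3, axis=0)
    return B

theta = np.linspace(0, 2 * np.pi, 201)
loop = np.column_stack([0.035 * np.cos(theta), 0.035 * np.sin(theta), np.zeros_like(theta)])
targets = np.array([[0.0, 0.0, 0.02], [0.0, 0.0, 0.04]])    # 2 and 4 cm below the coil plane
B = biot_savart(loop, current=5000.0, field_pts=targets)    # ~5 kA peak current, single turn
print("B_z at 2 cm and 4 cm (tesla):", np.round(B[:, 2], 3))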
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehlaghi, Vahab; Taghipour, Mostafa; Haghparast, Abbas
In this study, artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS) are investigated to predict the thickness of the compensator filter in radiation therapy. In the proposed models, the input parameters are field size (S), off-axis distance, and relative dose (D/D0), and the output is the thickness of the compensator. The obtained results show that the proposed ANN and ANFIS models are useful, reliable, and cheap tools to predict the thickness of the compensator filter in intensity-modulated radiation therapy.
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-01-01
This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474
Tian, Feifei; Tan, Rui; Guo, Tailin; Zhou, Peng; Yang, Li
2013-07-01
Domain-peptide recognition and interaction are fundamentally important for eukaryotic signaling and regulatory networks. It is thus essential to quantitatively infer the binding stability and specificity of such interactions based upon large-scale but low-accuracy complex structure models, which could be readily obtained from sophisticated molecular modeling procedures. In the present study, a new method is described for the fast and reliable prediction of domain-peptide binding affinity with coarse-grained structure models. This method is designed to tolerate strong random noises involved in domain-peptide complex structures and uses a statistical modeling approach to eliminate systematic bias associated with a group of investigated samples. As a paradigm, this method was employed to model and predict the binding behavior of various peptides to four evolutionarily unrelated peptide-recognition domains (PRDs), i.e. human amph SH3, human nherf PDZ, yeast syh GYF and yeast bmh 14-3-3, and moreover, we explored the molecular mechanism and biological implication underlying the binding of cognate and noncognate peptide ligands to their domain receptors. It is expected that the newly proposed method could be further used to perform genome-wide inference of domain-peptide binding at the three-dimensional structure level. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Prediction of monthly regional groundwater levels through hybrid soft-computing techniques
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng
2016-10-01
Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self Organized Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, the Kriging was used to interpolate the predictions of the adjusted SOM into finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations during 2000 and 2013 were used for modelling purpose. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Lynn
2017-05-31
The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.
Reliability analysis of C-130 turboprop engine components using artificial neural network
NASA Astrophysics Data System (ADS)
Qattan, Nizar A.
In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and several artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron networks) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine general failure rate, and the rate of failures that require overhaul maintenance. The second part covers the Artificial Neural Network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package is used to build a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures which require overhaul maintenance, using the radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation artificial neural network model is in close agreement with that of the radial basis neural network model and, when compared with the actual field data, is more accurate than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance and six categorical failures using the multilayer perceptron neural network (MLP) model in the DTREG commercial software. The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
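As a minimal sketch of the Weibull part of such an analysis (assuming synthetic times to failure rather than the C-130 field data), the following fits a two-parameter Weibull distribution and evaluates the implied failure (hazard) rate:

```python
# Sketch: fitting a Weibull model to time-to-failure data and evaluating the
# implied failure (hazard) rate, as a stand-in for the Weibull regression step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic times to failure (flight hours); the study itself used ~30 years of field data.
ttf = stats.weibull_min.rvs(c=1.4, scale=1200.0, size=200, random_state=rng)

# Maximum-likelihood fit with the location parameter fixed at zero.
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)

def hazard(t, c, s):
    """Weibull hazard (instantaneous failure rate): h(t) = (c/s) * (t/s)**(c-1)."""
    return (c / s) * (t / s) ** (c - 1)

for t in (500.0, 1000.0, 2000.0):
    print(f"t = {t:6.0f} h   failure rate = {hazard(t, shape, scale):.5f} per hour")
```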
At the Crossroads of Nanotoxicology: Past Achievements and Current Challenges
2015-01-01
rates of ionic dissolution, improving in vitro to in vivo predictive efficiencies, and establishing safety exposure limits. This Review will discuss...Oberdörster et al., 2005a), which drove the focus of in vitro and in vivo model selection to accommodate these areas of higher NM exposure. Most...Accordingly, a current challenge is the design of simple, in vitro models that reliably predict in vivo effects following a NM challenge. In order
Predictions of the residue cross-sections for the elements Z = 113 and Z = 114
NASA Astrophysics Data System (ADS)
Bouriquet, B.; Abe, Y.; Kosenko, G.
2004-10-01
A good reproduction of experimental excitation functions is obtained for the 1n reactions producing the elements with Z = 108, 110, 111 and 112 by the combined usage of the two-step model for fusion and the statistical decay code KEWPIE. Furthermore, the model provides reliable predictions of the production of the elements with Z = 113 and Z = 114, which will be a useful guide for the planning of experiments.
NASA Astrophysics Data System (ADS)
Fournier, A.; Morzfeld, M.; Hulot, G.
2013-12-01
For a suitable choice of parameters, the system of three ordinary differential equations (ODE) presented by Gissinger [1] was shown to exhibit chaotic reversals whose statistics compared well with those from the paleomagnetic record. In order to further assess the geophysical relevance of this low-dimensional model, we resort to data assimilation methods to calibrate it using reconstructions of the fluctuation of the virtual axial dipole moment spanning the past 2 million years. Moreover, we test to which extent a properly calibrated model could possibly be used to predict a reversal of the geomagnetic field. We calibrate the ODE model to the geomagnetic field over the past 2 Ma using the SINT data set of Valet et al. [2]. To this end, we consider four data assimilation algorithms: the ensemble Kalman filter (EnKF), a variational method and two Monte Carlo (MC) schemes, prior importance sampling and implicit sampling. We observe that the EnKF performs poorly and that prior importance sampling is inefficient. We obtain the most accurate reconstructions of the geomagnetic data using implicit sampling with five data points per assimilation sweep (of duration 5 kyr). The variational scheme performs equally well, but it does not provide us with quantitative information about the uncertainty of the estimates, which makes this method difficult to use for robust prediction under uncertainty. A calibration of the model using the PADM2M data set of Ziegler et al. [3] confirms these findings. We study the predictive capability of the ODE model using statistics computed from synthetic data experiments. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to the ones found in real data), then calibrate the model to this record and then check if this calibrated model can correctly and reliably predict a reversal within the next 10 kyr (say). By performing 100 such experiments, we can assess how reliably our calibrated model can predict a (non-)reversal. It is found that the 5 kyr ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed. [1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137. [2] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [3] Ziegler, L. B., Constable, C. G., Johnson, C. L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.
Advanced Stirling Convertor Heater Head Durability and Reliability Quantification
NASA Technical Reports Server (NTRS)
Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Due to the availability of very limited test data, assuring life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing and advanced analysis in order to demonstrate the integrity, life and reliability of the HH under expected mission conditions. The proposed paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry and operating conditions on life and reliability for the expected life. The sensitivity of the uncertainties in the design variables on the HH reliability is also quantified, and guidelines to improve reliability are discussed.
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of lithium-ion batteries is closely related to their capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as support vector machines (SVM) or Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can handle multimodality by fitting different segments of the trajectories with different GPR models separately, such that the small differences among these segments can be revealed. The method is demonstrated to be effective by the excellent predictive results of experiments on two commercial, rechargeable Type 1850 lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM can yield a predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
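A much-simplified sketch of the segment-wise Gaussian-process idea is given below; it fits separate GP regressions to two capacity segments of a synthetic fade curve with a regeneration jump. The change point, kernel and failure threshold are illustrative assumptions, not the GPM formulation of the paper.

```python
# Sketch: piecewise Gaussian-process fits to a capacity fade curve with a
# regeneration jump, as a simplified stand-in for the Gaussian Process Mixture.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
cycles = np.arange(0, 160)
capacity = 1.9 - 0.003 * cycles + rng.normal(0, 0.004, cycles.size)
capacity[80:] += 0.05                     # capacity-regeneration jump (synthetic)

segments = [(0, 80), (80, 160)]           # assumed known change point, for illustration
failure_threshold = 1.4                   # end-of-life capacity in Ah (assumed)

for lo, hi in segments:
    X = cycles[lo:hi, None].astype(float)
    y = capacity[lo:hi]
    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-4),
                                  normalize_y=True).fit(X, y)
    _, std = gp.predict(X, return_std=True)
    print(f"segment {lo}-{hi}: mean 95% band width = {float(np.mean(3.92 * std)):.4f} Ah")

# The RUL would follow by extrapolating the last segment's GP until its predicted
# mean crosses the failure threshold, with the GP std giving a confidence interval.
```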
A probabilistic-based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as transversely isotropic are another set of design tools for the next generation of very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data. Aspects in each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression, and moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method.
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, and a brief comparison is made to reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
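The following sketch illustrates Monte Carlo integration with importance sampling for a probability-of-failure estimate, using a generic scalar limit state (strength minus stress) rather than the anisotropic graphite failure function derived in the thesis; all distributions and values are assumed for illustration.

```python
# Sketch: Monte Carlo with importance sampling for a probability-of-failure
# integral, using a generic limit state g = strength - stress.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000

# Random strength (MPa) and applied stress (MPa); parameters are illustrative.
strength_dist = stats.norm(60.0, 5.0)
stress = 35.0

# Importance density shifted toward the failure region (low strengths).
proposal = stats.norm(40.0, 6.0)
s = proposal.rvs(size=n, random_state=rng)

weights = strength_dist.pdf(s) / proposal.pdf(s)   # likelihood ratio
fails = s <= stress                                # failure indicator
pf = np.mean(fails * weights)
se = np.std(fails * weights, ddof=1) / np.sqrt(n)
print(f"P_f ≈ {pf:.3e}  (std. error {se:.1e})")
```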
Astashkina, Anna; Grainger, David W
2014-04-01
Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of next-generation, improved 3-D cell culture models are expected to improve the predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research, significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges and future directions for the field to improve drug assay predictive power and the reliability of 3-D models are reviewed. Copyright © 2014 Elsevier B.V. All rights reserved.
[How exactly can we predict the prognosis of COPD].
Atiş, Sibel; Kanik, Arzu; Ozgür, Eylem Sercan; Eker, Suzan; Tümkaya, Münir; Ozge, Cengiz
2009-01-01
Predictive models play a pivotal role in the provision of accurate and useful probabilistic assessments of clinical outcomes in chronic diseases. This study aimed to develop a dedicated prognostic index for quantifying progression risk in chronic obstructive pulmonary disease (COPD). Data were collected prospectively from 75 COPD patients over a three-year period. A predictive model of the progression risk of COPD was developed using Bayesian logistic regression analysis with the Markov chain Monte Carlo method. One-year cycles were used for the disease progression in this model. Primary end points for progression were impairment in the baseline dyspnea index (BDI) score, FEV1 decline, and exacerbation frequency over the last three years. The time-varying covariates age, smoking, body mass index (BMI), severity of disease according to GOLD, PaO2, PaCO2, IC, RV/TLC, and DLCO were used in the study. The mean age was 57.1 ± 8.1 years. BDI was strongly correlated with exacerbation frequency (p= 0.001) but not with FEV1 decline. BMI was found to be a predictive factor for impairment in BDI (p= 0.03). The following independent risk factors were significant in predicting exacerbation frequency: GOLD staging (OR for GOLD I vs. II and III = 2.3 and 4.0), hypoxemia (OR for mild vs. moderate and severe = 2.1 and 5.1) and hyperinflation (OR= 1.6). PaO2 (p= 0.026), IC (p= 0.02) and RV/TLC (p= 0.03) were found to be predictive factors for FEV1 decline. The model estimated BDI, lung function and exacerbation frequency at the last time point from the initial three years of data with 95% reliability (p< 0.001). Accordingly, this model was evaluated as 95% confident for assessing the future status of COPD patients. Using Bayesian predictive models, it was possible to develop a risk-stratification index that accurately predicted the progression of COPD. This model can support decision-making about the future course of COPD patients with high reliability based on clinical data available at baseline.
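For readers unfamiliar with the machinery, the sketch below shows a Bayesian logistic regression fitted by a random-walk Metropolis sampler on synthetic data with a single covariate; the priors, proposal width and covariate are illustrative assumptions, not the study's model.

```python
# Sketch: Bayesian logistic regression fitted by random-walk Metropolis sampling,
# illustrating the kind of MCMC-based risk model described above.
import numpy as np

rng = np.random.default_rng(4)
n = 75
x = rng.normal(0.0, 1.0, n)                       # standardized covariate (e.g., BMI)
true_beta = np.array([-0.5, 1.2])
p = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * x)))
y = rng.binomial(1, p)                            # 0/1 progression outcome

def log_post(beta):
    """Log posterior: Bernoulli likelihood plus N(0, 10^2) priors on both coefficients."""
    eta = beta[0] + beta[1] * x
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta**2) / 100.0
    return loglik + logprior

beta = np.zeros(2)
samples = []
for it in range(20_000):
    prop = beta + rng.normal(0.0, 0.2, 2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    if it >= 5_000:                               # discard burn-in
        samples.append(beta.copy())

samples = np.array(samples)
odds_ratio = np.exp(samples[:, 1])
print("posterior median OR:", np.round(np.median(odds_ratio), 2))
print("95% credible interval:", np.round(np.percentile(odds_ratio, [2.5, 97.5]), 2))
```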
NASA Astrophysics Data System (ADS)
Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd
2018-04-01
A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with the experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. Evaluation was made by comparing the number of nodes, the number of elements, the elapsed computer processing unit (CPU) time, and the total percentage of errors of each initial FE model when compared with the EMA results. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.
NASA Astrophysics Data System (ADS)
Galve, J. P.; Gutiérrez, F.; Remondo, J.; Bonachea, J.; Lucha, P.; Cendrero, A.
2009-10-01
Multiple sinkhole susceptibility models have been generated in three study areas of the Ebro Valley evaporite karst (NE Spain) applying different methods (nearest neighbour distance, sinkhole density, heuristic scoring system and probabilistic analysis) for each sinkhole type separately (cover collapse sinkholes, cover and bedrock collapse sinkholes and cover and bedrock sagging sinkholes). The quantitative and independent evaluation of the predictive capability of the models reveals that: (1) The most reliable susceptibility models are those derived from the nearest neighbour distance and sinkhole density. These models can be generated in a simple and rapid way from detailed geomorphological maps. (2) The reliability of the nearest neighbour distance and density models is conditioned by the degree of clustering of the sinkholes. Consequently, the karst areas in which sinkholes show a higher clustering are a priori more favourable for predicting new occurrences. (3) The predictive capability of the best models obtained in this research is significantly higher (12.5-82.5%) than that of the heuristic sinkhole susceptibility model incorporated into the General Urban Plan for the municipality of Zaragoza. Although the probabilistic approach provides lower quality results than the methods based on sinkhole proximity and density, it helps to identify the most significant factors and select the most effective mitigation strategies and may be applied to model susceptibility in different future scenarios.
USDA-ARS?s Scientific Manuscript database
Materials and Methods: The simulation exercise and model improvement were implemented in a phase-wise manner. In the first modelling activities, the model sensitivities were evaluated for given CO2 concentrations varying from 360 to 720 µmol mol-1 at an interval of 90 µmol mol-1 and air temperature increments...
2014-10-27
a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems . Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions
Reliability of analog quantum simulation
Sarovar, Mohan; Zhang, Jun; Zeng, Lishan
2017-01-03
Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this paper we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. Finally, to demonstrate the approach, we characterize the robust features of a variety of quantum many-body models.
The Art and Science of Long-Range Space Weather Forecasting
NASA Technical Reports Server (NTRS)
Hathaway, David H.; Wilson, Robert M.
2006-01-01
Long-range space weather forecasts are akin to seasonal forecasts of terrestrial weather. We don't expect to forecast individual events, but we do hope to forecast the underlying level of activity important for satellite operations and mission planning. Forecasting space weather conditions years or decades into the future has traditionally been based on empirical models of the solar cycle. Models for the shape of the cycle as a function of its amplitude become reliable once the amplitude is well determined - usually two to three years after minimum. Forecasting the amplitude of a cycle well before that time has been more of an art than a science - usually based on cycle statistics and trends. Recent developments in dynamo theory - the theory explaining the generation of the Sun's magnetic field and the solar activity cycle - have now produced models with predictive capabilities. Testing these models with historical sunspot cycle data indicates that these predictions may be highly reliable one, or even two, cycles into the future.
A scoring function based on solvation thermodynamics for protein structure prediction
Du, Shiqiao; Harano, Yuichi; Kinoshita, Masahiro; Sakurai, Minoru
2012-01-01
We predict protein structure using our recently developed free energy function for describing protein stability, which is focused on solvation thermodynamics. The function is combined with the current most reliable sampling methods, i.e., fragment assembly (FA) and comparative modeling (CM). The prediction is tested using 11 small proteins for which high-resolution crystal structures are available. For 8 of these proteins, sequence similarities are found in the database, and the prediction is performed with CM. Fairly accurate models with average Cα root mean square deviation (RMSD) ∼ 2.0 Å are successfully obtained for all cases. For the rest of the target proteins, we perform the prediction following FA protocols. For 2 cases, we obtain predicted models with an RMSD ∼ 3.0 Å as the best-scored structures. For the other case, the RMSD remains larger than 7 Å. For all 11 target proteins, our scoring function identifies the experimentally determined native structure as the best structure. Starting from the predicted structures, replica exchange molecular dynamics is performed to further refine the structures. However, we are unable to improve the RMSD toward the experimental structures. Exhaustive sampling by coarse-grained normal mode analysis around the native structures reveals that our function correlates linearly with RMSD for RMSDs < 3.0 Å. These results suggest that the function is quite reliable for protein structure prediction, while the sampling method remains one of the major limiting factors. The aspects through which the methodology could further be improved are discussed. PMID:27493529
NASA Astrophysics Data System (ADS)
Velázquez, Juan Alberto; Anctil, François; Ramos, Maria-Helena; Perrin, Charles
2010-05-01
An ensemble forecasting system seeks to assess and to communicate the uncertainty of hydrological predictions by proposing, at each time step, an ensemble of forecasts from which one can estimate the probability distribution of the predictand (the probabilistic forecast), in contrast with a single estimate of the flow, for which no distribution is obtainable (the deterministic forecast). In past years, efforts towards the development of probabilistic hydrological prediction systems were made with the adoption of ensembles of numerical weather predictions (NWPs). The additional information provided by the different available Ensemble Prediction Systems (EPS) was evaluated in a hydrological context in various case studies (see the review by Cloke and Pappenberger, 2009). For example, the European ECMWF-EPS was explored in case studies by Roulin et al. (2005), Bartholmes et al. (2005), Jaun et al. (2008), and Renner et al. (2009). The Canadian EC-EPS was also evaluated by Velázquez et al. (2009). Most of these case studies investigate the ensemble predictions of a given hydrological model, set up over a limited number of catchments. Uncertainty from weather predictions is assessed through the use of meteorological ensembles. However, uncertainty from the tested hydrological model and the statistical robustness of the forecasting system when coping with different hydro-meteorological conditions are less frequently evaluated. The aim of this study is to evaluate and compare the performance and the reliability of 18 lumped hydrological models applied to a large number of catchments in an operational ensemble forecasting context. Some of these models were evaluated in a previous study (Perrin et al. 2001) for their ability to simulate streamflow. Results demonstrated that very simple models can achieve a level of performance almost as high as (and sometimes higher than) models with more parameters. In the present study, we focus on the ability of the hydrological models to provide reliable probabilistic forecasts of streamflow, based on ensemble weather predictions. The models were therefore adapted to run in a forecasting mode, i.e., to update initial conditions according to the last observed discharge at the time of the forecast, and to cope with ensemble weather scenarios. All models are lumped, i.e., the hydrological behavior is integrated over the spatial scale of the catchment, and run at daily time steps. The complexity of the tested models varies between 3 and 13 parameters. The models are tested on 29 French catchments. Daily streamflow time series extend over 17 months, from March 2005 to July 2006. Catchment areas range between 1470 km2 and 9390 km2, and represent a variety of hydrological and meteorological conditions. The 12 UTC 10-day ECMWF rainfall ensemble (51 members) was used, which led to daily streamflow forecasts for a 9-day lead time. In order to assess the performance and reliability of the hydrological ensemble predictions, we computed the Continuous Ranked Probability Score (CRPS) (Matheson and Winkler, 1976), as well as the reliability diagram (e.g. Wilks, 1995) and the rank histogram (Talagrand et al., 1999). Since the ECMWF deterministic forecasts are also available, the performance of the hydrological forecasting systems was also evaluated by comparing the deterministic score (MAE) with the probabilistic score (CRPS).
The results obtained for the 18 hydrological models and the 29 studied catchments are discussed in the perspective of improving the operational use of ensemble forecasting in hydrology.
References:
Bartholmes, J. and Todini, E.: Coupling meteorological and hydrological models for flood forecasting, Hydrol. Earth Syst. Sci., 9, 333-346, 2005.
Cloke, H. and Pappenberger, F.: Ensemble Flood Forecasting: A Review, Journal of Hydrology, 375 (3-4), 613-626, 2009.
Jaun, S., Ahrens, B., Walser, A., Ewen, T., and Schär, C.: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Nat. Hazards Earth Syst. Sci., 8, 281-291, 2008.
Matheson, J. E. and Winkler, R. L.: Scoring rules for continuous probability distributions, Manage. Sci., 22, 1087-1096, 1976.
Perrin, C., Michel, C. and Andréassian, V.: Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments, J. Hydrol., 242, 275-301, 2001.
Renner, M., Werner, M. G. F., Rademacher, S., and Sprokkereef, E.: Verification of ensemble flow forecast for the River Rhine, J. Hydrol., 376, 463-475, 2009.
Roulin, E. and Vannitsem, S.: Skill of medium-range hydrological ensemble predictions, J. Hydrometeorol., 6, 729-744, 2005.
Talagrand, O., Vautard, R., and Strauss, B.: Evaluation of the probabilistic prediction systems, in: Proceedings, ECMWF Workshop on Predictability, Shinfield Park, Reading, Berkshire, ECMWF, 1-25, 1999.
Velázquez, J. A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V., and Anctil, F.: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrol. Earth Syst. Sci., 13, 2221-2231, 2009.
Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, Academic Press, San Diego, CA, 465 pp., 1995.
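As a small illustration of the probabilistic verification step, the sketch below computes the CRPS of an ensemble forecast with the standard kernel formula and the MAE of the ensemble mean; the flows and ensemble here are synthetic, not the study data.

```python
# Sketch: Continuous Ranked Probability Score for an ensemble forecast, using
# the kernel formula CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|, averaged
# over verification dates, alongside the MAE of the ensemble mean.
import numpy as np

rng = np.random.default_rng(5)
n_days, n_members = 120, 51
obs = rng.gamma(3.0, 20.0, n_days)                               # observed daily flows
ens = obs[:, None] + rng.normal(0.0, 15.0, (n_days, n_members))  # ensemble forecasts

def crps_ensemble(forecasts, y):
    term1 = np.mean(np.abs(forecasts - y))
    term2 = 0.5 * np.mean(np.abs(forecasts[:, None] - forecasts[None, :]))
    return term1 - term2

crps = np.mean([crps_ensemble(ens[d], obs[d]) for d in range(n_days)])
mae_det = np.mean(np.abs(ens.mean(axis=1) - obs))                # deterministic counterpart
print(f"mean CRPS = {crps:.2f}, MAE of ensemble mean = {mae_det:.2f}")
```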
Overview of the 1986--1987 atomic mass predictions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1988-07-01
The need for a comprehensive update of earlier sets of atomic mass predictions is documented. A project that grew from this need and which resulted in the preparation of the 1986-1987 Atomic Mass Predictions is summarized. Ten sets of new mass predictions and expository text from a variety of types of mass models are combined with the latest evaluation of experimentally determined atomic masses. The methodology employed in constructing these mass predictions is outlined. The models are compared with regard to their reproduction of the experimental mass surface and their use of varying numbers of adjustable parameters. Plots are presented, for each set of predictions, of the differences between model calculations and the measured masses. These plots may be used to estimate the reliability of the new mass predictions in unmeasured regions that border the experimentally known mass surface. copyright 1988 Academic Press, Inc.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, the structural analysis model, and the material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large errors if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
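The sketch below is a drastic simplification of the adaptive scheme described above: the observation domain is split into segments, each segment is screened with a simple validation check, and only accepted segments drive a conjugate Bayesian update of a model bias term. The model, thresholds and priors are assumptions for illustration, not the paper's Bayesian-network formulation.

```python
# Sketch: segment-wise screening of observations before Bayesian updating of a
# model bias parameter (simplified stand-in for adaptive validation/calibration).
import numpy as np

rng = np.random.default_rng(6)
x_obs = np.linspace(0.0, 10.0, 200)
model_pred = 2.0 * x_obs                      # computational model prediction
truth = 2.0 * x_obs + 1.0                     # true response carries a constant bias of 1.0
truth[x_obs > 7.0] += 0.5 * (x_obs[x_obs > 7.0] - 7.0) ** 2   # model-form error at high x
y_obs = truth + rng.normal(0.0, 0.3, x_obs.size)

# Prior on the bias parameter: N(0, 2^2); observation noise std assumed known (0.3).
mu, var, sigma2 = 0.0, 4.0, 0.3**2

for lo in np.arange(0.0, 10.0, 2.0):          # five observation segments
    mask = (x_obs >= lo) & (x_obs < lo + 2.0)
    resid = y_obs[mask] - model_pred[mask]
    # Validation step: accept the segment only if residual scatter looks like noise.
    if np.std(resid) > 2.0 * np.sqrt(sigma2):
        print(f"segment [{lo:.0f},{lo+2:.0f}): rejected for updating")
        continue
    n = resid.size                            # conjugate normal update of the bias
    var_post = 1.0 / (1.0 / var + n / sigma2)
    mu = var_post * (mu / var + resid.sum() / sigma2)
    var = var_post
    print(f"segment [{lo:.0f},{lo+2:.0f}): bias ≈ {mu:.2f} ± {np.sqrt(var):.2f}")
```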
Nettleton, D; Muñiz, J
2001-09-01
In this article, we revise and try to resolve some of the problems inherent in questionnaire screening of sleep apnea cases and apnea diagnosis based on attributes which are relevant and reliable. We present a way of learning information about the relevance of the data, comparing this with the definition of the information by the medical expert. We generate a predictive data model using a data aggregation operator which takes relevance and reliability information about the data into account to produce a diagnosis for each case. We also introduce a grade of membership for each question response which allows the patient to indicate a level of confidence or doubt in their own judgement. The method is tested with data collected from patients in a Sleep Clinic using questionnaires specially designed for the study. Other artificial intelligence predictive modeling algorithms are also tested on the same data and their predictive accuracy compared to that of the aggregation operator.
Force Project Technology Presentation to the NRCC
2014-02-04
Functional Bridge components Smart Odometer Adv Pretreatment Smart Bridge Multi-functional Gap Crossing Fuel Automated Tracking System Adv...comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s...Validated Dynamic Modeling tool based on parametric study using material models to reliably predict the textile mechanics of the hose
QSAR study of curcumine derivatives as HIV-1 integrase inhibitors.
Gupta, Pawan; Sharma, Anju; Garg, Prabha; Roy, Nilanjan
2013-03-01
A QSAR study was performed on curcumine derivatives as HIV-1 integrase inhibitors using multiple linear regression. A statistically significant model was developed with a squared correlation coefficient (r2) of 0.891 and a cross-validated r2 (r2cv) of 0.825. The developed model revealed that electronic, shape, size, geometry, substitution information and hydrophilicity were important atomic properties for determining the inhibitory activity of these molecules. The model was also tested successfully for external validation (r2pred = 0.849), as well as with Tropsha's test for model predictability. Furthermore, the domain analysis was carried out to evaluate the prediction reliability for external set molecules. The model was statistically robust and had good predictive power, and can be successfully utilized for the screening of new molecules.
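A generic sketch of this kind of MLR-based QSAR workflow, with leave-one-out cross-validation and an external test set, is shown below; the descriptors and activities are synthetic placeholders rather than the curcumine-derivative data.

```python
# Sketch: multiple linear regression QSAR with leave-one-out cross-validation
# (q^2) and an external test set (r^2_pred).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
n_train, n_test, n_desc = 25, 8, 4
X = rng.normal(size=(n_train + n_test, n_desc))          # descriptor matrix
coef = np.array([0.8, -0.5, 0.3, 0.1])
activity = X @ coef + rng.normal(0, 0.2, n_train + n_test)  # e.g. pIC50 values

X_tr, y_tr = X[:n_train], activity[:n_train]
X_te, y_te = X[n_train:], activity[n_train:]

model = LinearRegression().fit(X_tr, y_tr)
r2 = model.score(X_tr, y_tr)

loo_pred = cross_val_predict(LinearRegression(), X_tr, y_tr, cv=LeaveOneOut())
q2 = 1 - np.sum((y_tr - loo_pred) ** 2) / np.sum((y_tr - y_tr.mean()) ** 2)

r2_pred = 1 - np.sum((y_te - model.predict(X_te)) ** 2) / np.sum((y_te - y_tr.mean()) ** 2)
print(f"r2 = {r2:.3f}, q2(LOO) = {q2:.3f}, r2_pred = {r2_pred:.3f}")
```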
Fitamo, T; Triolo, J M; Boldrin, A; Scheutz, C
2017-08-01
The anaerobic digestibility of various biomass feedstocks in biogas plants is determined with biochemical methane potential (BMP) assays. However, experimental BMP analysis is time-consuming and costly, which makes it challenging to optimise stock management and feeding to achieve improved biogas production. The aim of the present study is to develop a fast and reliable model based on near-infrared reflectance spectroscopy (NIRS) for the BMP prediction of urban organic waste (UOW). The model was developed from 87 UOW samples. Additionally, 88 plant biomass samples were included to develop a combined model for predicting BMP. The coefficient of determination (R2) and root mean square error of prediction (RMSEP) of the UOW model were 0.88 and 44 mL CH4/g VS, respectively, while those of the combined model were 0.89 and 50 mL CH4/g VS. Improved model performance was obtained for the two individual models compared to the combined version. The BMP prediction with NIRS was satisfactory and moderately successful. Copyright © 2017 Elsevier Ltd. All rights reserved.
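A typical way to build such a calibration is partial least squares regression of BMP on the NIR spectra; the sketch below assumes synthetic spectra and reports R2 and RMSEP on a held-out set, mirroring the statistics quoted above (it is not the study's calibration).

```python
# Sketch: PLS regression of BMP on NIR spectra with held-out R2 and RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(8)
n_samples, n_wavelengths = 87, 300
spectra = rng.normal(size=(n_samples, n_wavelengths))            # synthetic NIR spectra
bmp = 450 + spectra[:, :5].sum(axis=1) * 40 + rng.normal(0, 30, n_samples)  # mL CH4/g VS

X_tr, X_te, y_tr, y_te = train_test_split(spectra, bmp, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

print(f"R2 = {r2_score(y_te, y_hat):.2f}")
print(f"RMSEP = {np.sqrt(mean_squared_error(y_te, y_hat)):.1f} mL CH4/g VS")
```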
Advances in Homology Protein Structure Modeling
Xiang, Zhexin
2007-01-01
Homology modeling plays a central role in determining protein structure in the structural genomics project. The importance of homology modeling has been steadily increasing because of the large gap that exists between the overwhelming number of available protein sequences and experimentally solved protein structures, and also, more importantly, because of the increasing reliability and accuracy of the method. In fact, a protein sequence with over 30% identity to a known structure can often be predicted with an accuracy equivalent to a low-resolution X-ray structure. The recent advances in homology modeling, especially in detecting distant homologues, aligning sequences with template structures, modeling of loops and side chains, as well as detecting errors in a model, have contributed to reliable prediction of protein structure, which was not possible even several years ago. The ongoing efforts in solving protein structures, which can be time-consuming and often difficult, will continue to spur the development of a host of new computational methods that can fill in the gap and further contribute to understanding the relationship between protein structure and function. PMID:16787261
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50 × 2.1 mm, packed with sub-2 μm particles) providing short analysis times. The performance of the commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We have demonstrated that the reliability of the predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the relative errors in retention time were <1.0%, while the errors in the predicted critical resolution were between 6.9 and 17.2%. Because simulated robustness testing requires significantly less experimental work than DoE-based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2 μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min) by proper adjustment of the variables. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying the uncertainties inherent in the modeling process are critical components of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance the detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints
NASA Astrophysics Data System (ADS)
Wei, Helin; Wang, Kuisheng
2011-11-01
Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
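The sketch below illustrates the general idea of propagating input variability through an Engelmaier/Coffin-Manson-type fatigue relation by Monte Carlo sampling; the strain-range expression, material constants and input distributions are illustrative assumptions, not the exact formulation used in the study.

```python
# Sketch: Monte Carlo propagation of geometry and thermal-cycle variability
# through a Coffin-Manson/Engelmaier-type solder fatigue relation.
import numpy as np

rng = np.random.default_rng(9)
n = 50_000

# Random inputs: joint standoff height (mm), distance from neutral point (mm),
# CTE mismatch (1/°C) and cyclic temperature range (°C). All values assumed.
h = rng.normal(0.50, 0.03, n)
L = rng.normal(6.0, 0.2, n)
d_alpha = 12e-6
dT = rng.normal(80.0, 10.0, n)

eps_f = 0.325            # fatigue ductility coefficient (typical assumption)
c = -0.442               # fatigue ductility exponent (assumed constant)

d_gamma = (L / h) * d_alpha * dT                     # cyclic shear strain range
N_f = 0.5 * (d_gamma / (2.0 * eps_f)) ** (1.0 / c)   # cycles to failure

print(f"median life = {np.median(N_f):8.0f} cycles")
print(f"1st-percentile life = {np.percentile(N_f, 1):8.0f} cycles")
print(f"P(N_f < 2000 cycles) = {np.mean(N_f < 2000):.4f}")
```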
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
Bastiaens, Tim; Smits, Dirk; De Hert, Marc; Vanwalleghem, Dominique; Claes, Laurence
2016-04-30
The Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012) is a dimensional self-report questionnaire designed to measure personality pathology according to Criterion B of the DSM-5 Section III personality model. In the current edition of the DSM, this dimensional Section III personality model co-exists with the Section II categorical personality model derived from DSM-IV-TR. Therefore, investigation of the inter-relatedness of both models across populations and languages is warranted. In this study, we first examined the factor structure and reliability of the PID-5 in a Flemish community sample (N=509) by means of exploratory structural equation modeling and alpha coefficients. Next, we investigated the predictive ability of Section III personality traits in relation to Section II personality disorders through correlations and stepwise regression analyses. Results revealed a five-factor solution for the PID-5, with adequate reliability of the facet scales. The variance in Section II personality disorders could be predicted by their theoretically comprising Section III personality traits, but additional Section III personality traits augmented this prediction. Based on the current results, we discuss the Section II personality disorder conceptualization and the Section III personality disorder operationalization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Risk Management and Physical Modelling for Mountainous Natural Hazards
NASA Astrophysics Data System (ADS)
Lehning, Michael; Wilhelm, Christian
Population growth and climate change cause rapid changes in mountainous regions, resulting in increased risks of floods, avalanches, debris flows and other natural hazards. Extreme events (X-events) are of particular concern, since attempts to protect against them result in exponentially growing costs. In this contribution, we suggest an integral risk management approach to dealing with natural hazards that occur in mountainous areas. Using the example of a mountain pass road, which can be protected from the danger of an avalanche by engineering (galleries) and/or organisational (road closure) measures, we show the advantage of an optimal combination of both versus the traditional approach, which is to rely solely on engineering structures. Organisational measures become especially important for X-events because engineering structures cannot be designed for those events. However, organisational measures need a reliable and objective forecast of the hazard. Therefore, we further suggest that such forecasts should be developed using physical numerical modelling. We present the status of current approaches to using physical modelling to predict snow cover stability for avalanche warnings and peak runoff from mountain catchments for flood warnings. While detailed physical models can already predict peak runoff reliably, they are only used to support avalanche warnings. With increased process knowledge and computer power, current developments should lead to an enhanced role for detailed physical models in natural mountain hazard prediction.
Mid-frequency Band Dynamics of Large Space Structures
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.; Adams, Douglas S.
2004-01-01
High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.
Can shoulder dystocia be reliably predicted?
Dodd, Jodie M; Catcheside, Britt; Scheil, Wendy
2012-06-01
To evaluate factors reported to increase the risk of shoulder dystocia, and to evaluate their predictive value at a population level. The South Australian Pregnancy Outcome Unit's population database from 2005 to 2010 was accessed to determine the occurrence of shoulder dystocia in addition to reported risk factors, including age, parity, self-reported ethnicity, presence of diabetes and infant birth weight. Odds ratios (with 95% confidence intervals) for shoulder dystocia were calculated for each risk factor, and the factors were then incorporated into a logistic regression model. Test characteristics for each variable in predicting shoulder dystocia were calculated. As a proportion of all births, the reported rate of shoulder dystocia increased significantly from 0.95% in 2005 to 1.38% in 2010 (P = 0.0002). Using a logistic regression model, induction of labour and infant birth weight greater than both 4000 and 4500 g were identified as significant independent predictors of shoulder dystocia. The risk factors, both alone and when incorporated into the logistic regression model, were poorly predictive of the occurrence of shoulder dystocia. While there are a number of factors associated with an increased risk of shoulder dystocia, none are of sufficient sensitivity or positive predictive value to allow their clinical use to reliably and accurately identify the occurrence of shoulder dystocia. © 2012 The Authors ANZJOG © 2012 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
NASA Astrophysics Data System (ADS)
Peng, L.; Pan, H.; Ma, H.; Zhao, P.; Qin, R.; Deng, C.
2017-12-01
The irreducible water saturation (Swir) is a vital parameter for permeability prediction and original oil and gas estimation. However, the complex pore structure of the rocks makes the parameter difficult to calculate from both laboratory measurements and conventional well logging methods. In this study, an effective statistical method to predict Swir is derived directly from nuclear magnetic resonance (NMR) data based on fractal theory. The transverse relaxation time (T2) spectrum is normally considered an indicator of pore size distribution, and the fractal dimensions of micro- and meso-pores are calculated over two specific ranges of the T2 spectrum. Based on the analysis of the fractal characteristics of 22 core samples, which were drilled from four boreholes of tight lithologic oil reservoirs of the Ordos Basin in China, a positive correlation between Swir and porosity is derived. Afterwards, a predictive model for Swir based on linear regressions of the fractal dimensions is proposed. It reveals that Swir is controlled by the pore size and the roughness of the pores. The reliability of this model is tested, and close consistency between the predicted results and the experimental data is found. This model is a reliable supplement for predicting the irreducible water saturation when the T2 cutoff value cannot be accurately determined.
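A common way to obtain such fractal dimensions is from the slope of the cumulative T2 amplitude fraction in log-log space, using the frequently cited relation S_v = (T2/T2max)^(3-D); the sketch below applies this to a synthetic T2 spectrum, with the micro-/meso-pore T2 ranges assumed purely for illustration.

```python
# Sketch: pore fractal dimension from an NMR T2 distribution, taking
# D = 3 - slope of log(cumulative amplitude fraction) versus log(T2).
import numpy as np

t2 = np.logspace(-1, 3, 100)                              # T2 bins, 0.1 to 1000 ms
amp = np.exp(-0.5 * ((np.log10(t2) - 1.2) / 0.6) ** 2)    # synthetic T2 spectrum
amp /= amp.sum()

def fractal_dimension(t2_bins, amplitudes, t2_lo, t2_hi):
    """Fit D = 3 - slope over one T2 range (one pore-size class)."""
    cum = np.cumsum(amplitudes) / amplitudes.sum()         # cumulative fraction S_v
    mask = (t2_bins >= t2_lo) & (t2_bins <= t2_hi) & (cum > 0)
    slope, _ = np.polyfit(np.log10(t2_bins[mask]), np.log10(cum[mask]), 1)
    return 3.0 - slope

d_micro = fractal_dimension(t2, amp, 0.5, 10.0)            # micro-pore range (assumed)
d_meso = fractal_dimension(t2, amp, 10.0, 200.0)           # meso-pore range (assumed)
print(f"D_micro = {d_micro:.2f}, D_meso = {d_meso:.2f}")
# A linear regression of measured Swir on (porosity, D_micro, D_meso) across core
# samples would then give the kind of predictive model described above.
```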
Predictive modeling of neuroanatomic structures for brain atrophy detection
NASA Astrophysics Data System (ADS)
Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming
2010-03-01
In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as a significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and regions of interest (ROIs) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that for vertices in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated by using both simulated atrophies and MRI images of Alzheimer's disease.
Kim, Chang-Sei; Ansermino, J. Mark; Hahn, Jin-Oh
2016-01-01
The goal of this study is to derive a minimally complex but credible model of respiratory CO2 gas exchange that may be used in systematic design and pilot testing of closed-loop end-tidal CO2 controllers in mechanical ventilation. We first derived a candidate model that captures the essential mechanisms involved in the respiratory CO2 gas exchange process. Then, we simplified the candidate model to derive two lower-order candidate models. We compared these candidate models for predictive capability and reliability using experimental data collected from 25 pediatric subjects undergoing dynamically varying mechanical ventilation during surgical procedures. A two-compartment model equipped with transport delay to account for CO2 delivery between the lungs and the tissues showed modest but statistically significant improvement in predictive capability over the same model without transport delay. Aggregating the lungs and the tissues into a single compartment further degraded the predictive fidelity of the model. In addition, the model equipped with transport delay demonstrated superior reliability to the one without transport delay. Further, the respiratory parameters derived from the model equipped with transport delay, but not the one without transport delay, were physiologically plausible. The results suggest that gas transport between the lungs and the tissues must be taken into account to accurately reproduce the respiratory CO2 gas exchange process under conditions of wide-ranging and dynamically varying mechanical ventilation conditions. PMID:26870728
A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2017-01-01
The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high quality capacitors is much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) to predict reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
A general software reliability process simulation technique
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1991-01-01
The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
García-Ramos, Amador; Haff, Guy Gregory; Pestaña-Melero, Francisco Luis; Pérez-Castilla, Alejandro; Rojas, Francisco Javier; Balsalobre-Fernández, Carlos; Jaric, Slobodan
2017-09-05
This study compared the concurrent validity and reliability of previously proposed generalized group equations for estimating the bench press (BP) one-repetition maximum (1RM) with the individualized load-velocity relationship modelled with a two-point method. Thirty men (BP 1RM relative to body mass: 1.08 ± 0.18 kg·kg⁻¹) performed two incremental loading tests in the concentric-only BP exercise and another two in the eccentric-concentric BP exercise to assess their actual 1RM and load-velocity relationships. A high velocity (≈1 m·s⁻¹) and a low velocity (≈0.5 m·s⁻¹) were selected from their load-velocity relationships to estimate the 1RM from generalized group equations and through an individual linear model obtained from the two velocities. The directly measured 1RM was highly correlated with all predicted 1RMs (r range: 0.847-0.977). The generalized group equations systematically underestimated the actual 1RM when predicted from the concentric-only BP (P < 0.001; effect size [ES] range: 0.15-0.94), but overestimated it when predicted from the eccentric-concentric BP (P < 0.001; ES range: 0.36-0.98). Conversely, a low systematic bias (range: -2.3 to 0.5 kg) and random errors (range: 3.0-3.8 kg), no heteroscedasticity of errors (r² range: 0.053-0.082), and trivial ES (range: -0.17 to 0.04) were observed when the prediction was based on the two-point method. Although all examined methods reported the 1RM with high reliability (CV ≤ 5.1%; ICC ≥ 0.89), the direct method was the most reliable (CV < 2.0%; ICC ≥ 0.98). The quick, fatigue-free, and practical two-point method was able to predict the BP 1RM with high reliability and practically perfect validity, and we therefore recommend its use over generalized group equations.
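For readers unfamiliar with the two-point method, the sketch below shows the arithmetic it implies: a straight load-velocity line through one light and one heavy load, extrapolated to an assumed minimal velocity at 1RM. The loads, velocities, and the 0.17 m·s⁻¹ threshold are illustrative assumptions, not values from the study.

```python
def one_rm_two_point(load_light, v_light, load_heavy, v_heavy, v_1rm=0.17):
    """Estimate the bench-press 1RM from the individual load-velocity line defined
    by two points (a light and a heavy load). `v_1rm` is an assumed minimal
    velocity threshold at 1RM (m/s); values here are purely illustrative."""
    slope = (load_heavy - load_light) / (v_heavy - v_light)   # kg per (m/s), negative
    intercept = load_light - slope * v_light                  # extrapolated load at zero velocity
    return intercept + slope * v_1rm

# illustrative example: 40 kg moved at ~1.0 m/s, 70 kg at ~0.5 m/s
print(f"estimated 1RM: {one_rm_two_point(40, 1.00, 70, 0.50):.1f} kg")
```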
Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time
NASA Astrophysics Data System (ADS)
Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan
2016-03-01
A mathematical model known as the one-order, one-variable grey differential equation model GM(1, 1) has been employed here successfully for ultra short-term (<10 days) predictions of universal time (UT1-UTC). The results of the predictions are analyzed and compared with those obtained by other methods. It is shown that the accuracy of the predictions is comparable with that obtained by other prediction methods. The proposed method is able to yield an exact prediction even when only a few observations are provided; hence it is very valuable in the case of a small dataset, since traditional methods, e.g., least-squares (LS) extrapolation, require a longer data span to make a good forecast. In addition, these results can be obtained without making any assumption about the original dataset, so the method is of high reliability. Another advantage is that the developed method is easy to use. All this reveals the great potential of the GM(1, 1) model for UT1-UTC predictions.
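A minimal sketch of the GM(1, 1) recipe referred to above (accumulated generating operation, least-squares fit of the grey differential equation, exponential time response, inverse accumulation) is given below; the input series is made up for illustration and is not a UT1-UTC record.

```python
import numpy as np

def gm11_forecast(x0, horizon=5):
    """Minimal GM(1,1) grey-model sketch: fit the one-order, one-variable grey
    differential equation to a short series x0 and extrapolate `horizon` steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # develop coefficient a, grey input b
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the AGO series
    x1_hat = np.concatenate(([x0[0]], x1_hat))
    x0_hat = np.diff(x1_hat)                             # inverse AGO restores the original scale
    return np.concatenate(([x0[0]], x0_hat))

series = [2.67, 2.71, 2.74, 2.78, 2.83, 2.87]            # made-up short series
print("fitted + predicted:", np.round(gm11_forecast(series, horizon=3), 3))
```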
Mansberger, Steven L; Sheppler, Christina R; McClure, Tina M; Vanalstine, Cory L; Swanson, Ingrid L; Stoumbos, Zoey; Lambert, William E
2013-09-01
To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R²) of .40. Test-retest reliability was 90%. The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.
Accuracy of genomic predictions in Gyr (Bos indicus) dairy cattle.
Boison, S A; Utsunomiya, A T H; Santos, D J A; Neves, H H R; Carvalheiro, R; Mészáros, G; Utsunomiya, Y T; do Carmo, A S; Verneque, R S; Machado, M A; Panetto, J C C; Garcia, J F; Sölkner, J; da Silva, M V G B
2017-07-01
Genomic selection may accelerate genetic progress in breeding programs of indicine breeds when compared with traditional selection methods. We present results of genomic predictions in Gyr (Bos indicus) dairy cattle of Brazil for milk yield (MY), fat yield (FY), protein yield (PY), and age at first calving using information from bulls and cows. Four different single nucleotide polymorphism (SNP) chips were studied. Additionally, the effect of the use of imputed data on genomic prediction accuracy was studied. A total of 474 bulls and 1,688 cows were genotyped with the Illumina BovineHD (HD; San Diego, CA) and BovineSNP50 (50K) chip, respectively. Genotypes of cows were imputed to HD using FImpute v2.2. After quality checks of the data, 496,606 markers remained. The HD markers present on the GeneSeek SGGP-20Ki (15,727; Lincoln, NE), 50K (22,152), and GeneSeek GGP-75Ki (65,018) were subset and used to assess the effect of lower SNP density on accuracy of prediction. Deregressed breeding values were used as pseudophenotypes for model training. Data were split into reference and validation sets to mimic a forward prediction scheme. The reference population consisted of animals whose birth year was ≤2004 and comprised either only bulls (TR1) or a combination of bulls and dams (TR2), whereas the validation set consisted of younger bulls (born after 2004). Genomic BLUP was used to estimate genomic breeding values (GEBV), and the reliability of GEBV (R²PEV) was based on the prediction error variance approach. Reliability of GEBV ranged from ∼0.46 (FY and PY) to 0.56 (MY) with TR1 and from 0.51 (PY) to 0.65 (MY) with TR2. When averaged across all traits, R²PEV was substantially higher (0.50 for TR1 and 0.57 for TR2) compared with the reliabilities of parent averages (0.35) computed from pedigree data and based on diagonals of the coefficient matrix (prediction error variance approach). Reliability was similar for all 4 marker panels using either TR1 or TR2, except that the imputed HD cow data set led to an inflation of reliability. Reliability of GEBV could be increased by enlarging the limited bull reference population with cow information. A reduced panel of ∼15K markers resulted in reliabilities similar to those obtained using HD markers.
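The genomic BLUP step described above can be sketched as follows, assuming a VanRaden-type genomic relationship matrix and a single overall mean; the heritability, ridge term, and toy genotypes are illustrative assumptions rather than the study's actual implementation.

```python
import numpy as np

def gblup(M, y, h2=0.3):
    """Minimal GBLUP sketch: genomic breeding values from a marker matrix M
    (individuals x SNPs, coded 0/1/2) and pseudophenotypes y. The variance
    ratio comes from an assumed heritability h2; illustrative only."""
    p = M.mean(axis=0) / 2.0                           # allele frequencies
    Z = M - 2.0 * p                                    # centred genotypes
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))        # VanRaden genomic relationship matrix
    G += np.eye(G.shape[0]) * 1e-3                     # small ridge for invertibility
    lam = (1.0 - h2) / h2                              # residual-to-genetic variance ratio
    n = len(y)
    X = np.ones((n, 1))                                # fixed effect: overall mean only
    # mixed-model equations for [mean; genomic effects]
    lhs = np.block([[X.T @ X, X.T], [X, np.eye(n) + lam * np.linalg.inv(G)]])
    rhs = np.concatenate([X.T @ y, y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[1:]                                     # GEBV for each genotyped animal

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(50, 500)).astype(float)   # toy genotypes
true_u = (M - M.mean(0)) @ rng.normal(0, 0.05, 500)    # simulated true genetic values
y = true_u + rng.normal(0, 1.0, 50)
gebv = gblup(M, y)
print("correlation with simulated values:", round(np.corrcoef(gebv, true_u)[0, 1], 2))
```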
Multi-Scale Approach for Predicting Fish Species Distributions across Coral Reef Seascapes
Pittman, Simon J.; Brown, Kerry A.
2011-01-01
Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5–300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided ‘outstanding’ model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided ‘outstanding’ model predictions for two of five species, with the remaining three models considered ‘excellent’ (AUC = 0.8–0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management. PMID:21637787
Evapotranspiration and canopy resistance at an undeveloped prairie in a humid subtropical climate
Bidlake, W.R.
2002-01-01
Reliable estimates of evapotranspiration from areas of wildland vegetation are needed for many types of water-resource investigations. However, little is known about surface fluxes from many areally important vegetation types, and relatively few comparisons have been made to examine how well evapotranspiration models can predict evapotranspiration for soil-, climate-, or vegetation-types that differ from those under which the models have been calibrated. In this investigation at a prairie site in west-central Florida, latent heat flux (λE) computed from the energy balance and alternatively by eddy covariance during a 15-month period differed by 4 percent and 7 percent on hourly and daily time scales, respectively. Annual evapotranspiration computed from the energy balance and by eddy covariance were 978 and 944 mm, respectively. An hourly Penman-Monteith (PM) evapotranspiration model with stomatal control predicated on water-vapor-pressure deficit at canopy level, incoming solar radiation intensity, and soil water deficit was developed and calibrated using surface fluxes from eddy covariance. Model-predicted λE agreed closely with λE computed from the energy balance except when moisture from dew or precipitation covered vegetation surfaces. Finally, an hourly PM model developed for an Amazonian pasture predicted λE for the Florida prairie with unexpected reliability. Additional comparisons of PM-type models that have been developed for differing types of short vegetation could aid in assessing interchangeability of such models.
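The Penman-Monteith form referred to above can be written as λE = [Δ(Rn − G) + ρa·cp·VPD/ra] / [Δ + γ(1 + rs/ra)]; a minimal sketch is given below, with assumed resistance values and standard constants rather than the calibrated stomatal-control submodel of the study.

```python
import math

def penman_monteith_le(rn, g, vpd, ta, ra, rs, pa=101.3):
    """Hourly Penman-Monteith latent heat flux (lambda-E, W/m2) sketch.
    rn, g: net radiation and soil heat flux [W/m2]; vpd [kPa]; ta [deg C];
    ra, rs: aerodynamic and bulk surface (canopy) resistance [s/m].
    The constants and the example resistances are assumptions."""
    es = 0.6108 * math.exp(17.27 * ta / (ta + 237.3))     # saturation vapour pressure [kPa]
    delta = 4098.0 * es / (ta + 237.3) ** 2               # slope of the saturation curve [kPa/C]
    gamma = 0.000665 * pa                                 # psychrometric constant [kPa/C]
    rho_cp = 1.013 * 1.204                                # air heat capacity x density [kJ m-3 C-1]
    num = delta * (rn - g) + rho_cp * 1000.0 * vpd / ra   # both terms in W m-2 kPa C-1 units
    den = delta + gamma * (1.0 + rs / ra)
    return num / den

# illustrative mid-day prairie conditions
print(f"lambda-E ~ {penman_monteith_le(rn=450, g=50, vpd=1.5, ta=28, ra=40, rs=120):.0f} W/m2")
```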
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2001-01-01
A literature survey related to EBC/TBC (environmental barrier coating/thermal barrier coating) life models and failure mechanisms in EBC/TBC was completed, the initial work plan for the proposed EBC/TBC life prediction methods development was developed, and the finite element model for the thermal/stress analysis of the GRC-developed EBC system was prepared. A technical report for these activities is given in the subsequent sections.
NASA Astrophysics Data System (ADS)
Hancock, G. R.; Webb, A. A.; Turner, L.
2017-11-01
Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessment. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods. Firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties and secondly from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.
Purposes and methods of scoring earthquake forecasts
NASA Astrophysics Data System (ADS)
Zhuang, J.
2010-12-01
There are two kinds of purposes in studies on earthquake prediction or forecasting: one is to give a systematic estimation of earthquake risks in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first case, a complete score is necessary, while for the latter, a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is necessary. This study reviews different scoring methods for evaluating the performance of earthquake prediction and forecasts. In particular, the gambling scoring method, which was developed recently, shows its capacity to find good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than that of the reference model.
Mapping The Temporal and Spatial Variability of Soil Moisture Content Using Proximal Soil Sensing
NASA Astrophysics Data System (ADS)
Virgawati, S.; Mawardi, M.; Sutiarso, L.; Shibusawa, S.; Segah, H.; Kodaira, M.
2018-05-01
In studies related to soil optical properties, it has been proven that the visible and NIR soil spectral response can predict soil moisture content (SMC) when proper data analysis techniques are used. SMC is one of the most important soil properties, influencing most physical, chemical, and biological soil processes. The problem is how to provide reliable, fast, and inexpensive information on subsurface SMC from numerous soil samples and repeated measurements. Spectroscopy has emerged as a rapid and low-cost tool for extensive investigation of soil properties. The objective of this research was to develop calibration models based on laboratory Vis-NIR spectroscopy to estimate the SMC at four different growth stages of the soybean crop in Yogyakarta Province. An ASD field spectrophotoradiometer was used to measure the reflectance of soil samples. Partial least squares regression (PLSR) was performed to establish the relationship between the SMC and the Vis-NIR soil reflectance spectra. The selected calibration model was used to predict the SMC of new samples. The temporal and spatial variability of SMC was presented in digital maps. The results revealed that the calibration model was excellent for SMC prediction, and Vis-NIR spectroscopy was a reliable tool for the prediction of SMC.
NUCLEAR AND HEAVY ION PHYSICS: α-decay half-lives of superheavy nuclei and general predictions
NASA Astrophysics Data System (ADS)
Dong, Jian-Min; Zhang, Hong-Fei; Wang, Yan-Zhao; Zuo, Wei; Su, Xin-Ning; Li, Jun-Qing
2009-08-01
The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the α-decay half-lives of superheavy nuclei (SHN) using the experimental α-decay Q values. The results of the cluster model are slightly poorer than those from the GLDM if experimental Q values are used. The prediction powers of these two models with theoretical Q values from Audi et al. (QAudi) and Muntian et al. (QM) have been tested; the cluster model with QAudi and QM provides reliable results for Z > 112, while the GLDM with QAudi does so for Z ≤ 112. The half-lives of some still unknown nuclei are predicted by these two models, and these results may be useful for future experimental assignment and identification.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
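For orientation, the reliability function of the exponentiated Weibull model is R(t) = 1 − [1 − exp(−(t/α)^β)]^θ. The sketch below evaluates it and forms a posterior-mean (squared-error loss) Bayes estimate from parameter draws; the draws here are synthetic placeholders standing in for actual MCMC output.

```python
import numpy as np

def exp_weibull_reliability(t, alpha, beta, theta):
    """Reliability of the exponentiated Weibull model:
    R(t) = 1 - [1 - exp(-(t/alpha)**beta)]**theta."""
    t = np.asarray(t, dtype=float)
    return 1.0 - (1.0 - np.exp(-(t / alpha) ** beta)) ** theta

def bayes_reliability(t, alpha_draws, beta_draws, theta_draws):
    """Posterior mean of R(t) given parameter draws (here synthetic; in the
    paper they would come from the MCMC sampler, which is not shown)."""
    draws = [exp_weibull_reliability(t, a, b, th)
             for a, b, th in zip(alpha_draws, beta_draws, theta_draws)]
    return np.mean(draws, axis=0)

rng = np.random.default_rng(0)
a_d = rng.gamma(50, 0.02, 200)     # placeholder draws centred near alpha ~ 1.0
b_d = rng.gamma(40, 0.05, 200)     # beta ~ 2.0
th_d = rng.gamma(30, 0.05, 200)    # theta ~ 1.5
print(np.round(bayes_reliability([0.5, 1.0, 2.0], a_d, b_d, th_d), 3))
```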
EM calibration based on Post OPC layout analysis
NASA Astrophysics Data System (ADS)
Sreedhar, Aswin; Kundu, Sandip
2010-03-01
Design for Manufacturability (DFM) involves changes to the design and CAD tools to help increase pattern printability and improve process control. Design for Reliability (DFR) does the same to improve the reliability of devices against failures such as electromigration (EM), gate-oxide breakdown, hot carrier injection (HCI), Negative Bias Temperature Instability (NBTI) and mechanical stress effects. Electromigration occurs due to the migration or displacement of atoms as a result of the movement of electrons through a conducting medium. The rate of migration determines the Mean Time to Failure (MTTF), which is modeled as a function of temperature and current density. The model itself is calibrated through failure analysis (FA) of parts that are deemed to have failed due to EM, against design parameters such as linewidth. Reliability Verification (RV) of a design involves verifying that every conducting line in the design meets a certain MTTF threshold. In order to perform RV, the current density for each wire must be computed. The current itself is a function of the parasitics that are determined through RC extraction. The standard practice is to perform the RC extraction and current density calculation on drawn, pre-OPC layouts. If a wire fails to meet the MTTF threshold, it may be resized. Subsequently, mask preparation steps such as OPC and PSM introduce extra features such as SRAFs, jogs, hammerheads and serifs that change the wires' resistance, capacitance and current density values. Hence, calibrating an EM model based on pre-OPC layouts will lead to different results compared with post-OPC layouts. In this work, we compare EM model calibration and reliability checking based on the drawn layout versus the predicted layout, where the drawn layout is the pre-OPC layout and the predicted layout is based on litho simulation of the post-OPC layout. Results show significant divergence between these two approaches, making a case for a methodology based on the predicted layout.
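The MTTF model referred to above is commonly written in the Black's-equation form MTTF = A·J⁻ⁿ·exp(Ea/kT); the sketch below shows how a litho-predicted (post-OPC) wire width changes current density and hence the predicted MTTF. The constants, wire current, and geometry are assumptions for illustration only, not calibrated values.

```python
import math

def em_mttf(j, temp_c, a=1e10, n=2.0, ea=0.9):
    """Electromigration MTTF sketch in the Black's-equation form
    MTTF = A * J**(-n) * exp(Ea / (k*T)). The prefactor A, exponent n and
    activation energy Ea are illustrative assumptions (calibrated from
    failure analysis in practice); j is current density in A/cm2."""
    k_b = 8.617e-5                               # Boltzmann constant [eV/K]
    return a * j ** (-n) * math.exp(ea / (k_b * (temp_c + 273.15)))

# OPC features that narrow a wire raise its current density and lower MTTF:
i_wire, thickness_cm = 1e-4, 0.1e-4              # 0.1 mA through a 0.1 um thick wire (assumed)
for width_um in (0.10, 0.08):                    # drawn vs. litho-predicted width (assumed)
    j = i_wire / (width_um * 1e-4 * thickness_cm)
    print(f"width {width_um:.2f} um: J = {j:.2e} A/cm2, relative MTTF = {em_mttf(j, 105):.2e}")
```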
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
Social network models predict movement and connectivity in ecological landscapes
Fletcher, R.J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, W.M.
2011-01-01
Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, and these parameters cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Lance A. Vickers; Thomas R. Fox; David L. Loftis; David A. Boucugnani
2013-01-01
The difficulty of achieving reliable oak (Quercus spp.) regeneration is well documented. Application of silvicultural techniques to facilitate oak regeneration largely depends on current regeneration potential. A computer model to assess regeneration potential based on existing advanced reproduction in Appalachian hardwoods was developed by David...
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
Prediction of ECS and SSC Models for Flux-Limited Samples of Gamma-Ray Blazars
NASA Technical Reports Server (NTRS)
Lister, Matthew L.; Marscher, Alan P.
1999-01-01
The external Compton scattering (ECS) and synchrotron self-Compton (SSC) models make distinct predictions for the amount of Doppler boosting of high-energy gamma-rays emitted by blazars. We examine how these differences affect the predicted properties of active galactic nucleus (AGN) samples selected on the basis of gamma-ray emission. We create simulated flux-limited samples based on the ECS and SSC models, and compare their properties to those of identified EGRET blazars. We find that for small gamma-ray-selected samples, the two models make very similar predictions, and cannot be reliably distinguished. This is primarily due to the fact that not only the Doppler factor, but also the cosmological distance and intrinsic luminosity play a role in determining whether an AGN is included in a flux-limited gamma-ray sample.
Modeling the Monthly Water Balance of a First Order Coastal Forested Watershed
S. V. Harder; Devendra M. Amatya; T. J. Callahan; Carl C. Trettin
2006-01-01
A study has been conducted to evaluate a spreadsheet-based conceptual Thornthwaite monthly water balance model and the process-based DRAINMOD model for their reliability in predicting monthly water budgets of a poorly drained, first order forested watershed at the Santee Experimental Forest located along the Lower Coastal Plain of South Carolina. Measured precipitation...
NASA Technical Reports Server (NTRS)
Hurley, K.; Anderson, K. A.
1972-01-01
Models of Jupiter's magnetosphere were examined to predict the X-ray flux that would be emitted in auroral or radiation zone processes. Various types of X-ray detection were investigated for energy resolution, efficiency, reliability, and background. From the model fluxes it was determined under what models Jovian X-rays could be detected.
Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture
NASA Astrophysics Data System (ADS)
Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo
2018-03-01
For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared regarding performance in calculating phase equilibria of CO2-impurity binary mixtures and with the collected literature data. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.
Composite Overwrapped Pressure Vessel (COPV) Stress Rupture Testing
NASA Technical Reports Server (NTRS)
Greene, Nathanael J.; Saulsberry, Regor L.; Leifeste, Mark R.; Yoder, Tommy B.; Keddy, Chris P.; Forth, Scott C.; Russell, Rick W.
2010-01-01
This paper reports stress rupture testing of Kevlar™ composite overwrapped pressure vessels (COPVs) at NASA White Sands Test Facility. This 6-year test program was part of the larger effort to predict and extend the lifetime of flight vessels. Tests were performed to characterize control parameters for stress rupture testing, and vessel life was predicted by statistical modeling. One highly instrumented 102-cm (40-in.) diameter Kevlar™ COPV was tested to failure (burst) as a single-point model verification. Significant data were generated that will enhance development of improved NDE methods and predictive modeling techniques, and thus better address stress rupture and other composite durability concerns that affect pressure vessel safety, reliability and mission assurance.
Fatigue reliability of deck structures subjected to correlated crack growth
NASA Astrophysics Data System (ADS)
Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.
2013-12-01
The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, and based on these the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinals. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
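A minimal sketch of the Paris-Erdogan-based Monte Carlo idea (without the correlation structure or the series/parallel system model of the paper) is given below: crack growth is integrated to a critical size for random draws of the Paris coefficient, and reliability is read off as the fraction of samples surviving an assumed service life. All numerical values are illustrative.

```python
import math
import numpy as np

def cycles_to_failure(a0, a_crit, c, m, dsigma, y=1.12, da=2e-5):
    """Integrate the Paris-Erdogan law da/dN = C * (dK)**m, with
    dK = Y * dsigma * sqrt(pi * a), from the initial to the critical crack size.
    Simple fixed-step integration; constants are illustrative."""
    a, n = a0, 0.0
    while a < a_crit:
        dk = y * dsigma * math.sqrt(math.pi * a)   # stress intensity factor range [MPa*sqrt(m)]
        n += da / (c * dk ** m)                    # cycles spent growing the crack by da
        a += da
    return n

rng = np.random.default_rng(42)
c_draws = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=400)   # scatter in the Paris coefficient
n_fail = np.array([cycles_to_failure(0.5e-3, 20e-3, c, m=3.0, dsigma=80.0) for c in c_draws])
n_service = 3e6                                                    # assumed service life in cycles
print(f"P(failure before {n_service:.0e} cycles) ~ {np.mean(n_fail < n_service):.3f}")
```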
A model to predict accommodations needed by disabled persons.
Babski-Reeves, Kari; Williams, Sabrina; Waters, Tzer Nan; Crumpton-Young, Lesia L; McCauley-Bell, Pamela
2005-09-01
In this paper, several approaches to assist employers in the accommodation process for disabled employees are discussed and a mathematical model is proposed to assist employers in predicting the accommodation level needed by an individual with a mobility-related disability. This study investigates the validity and reliability of this model in assessing the accommodation level needed by individuals utilizing data collected from twelve individuals with mobility-related disabilities. Based on the results of the statistical analyses, this proposed model produces a feasible preliminary measure for assessing the accommodation level needed for persons with mobility-related disabilities. Suggestions for practical application of this model in an industrial setting are addressed.
NASA Astrophysics Data System (ADS)
Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom
2010-07-01
The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are primarily available for use in real-time GPS applications. The IGU orbit precision has improved remarkably since late 2007, but the clock products have not shown acceptably high-quality prediction performance. One reason is that satellite atomic clocks in space are easily influenced by various factors such as temperature and the space environment, which leads to complicated behaviour such as periodic variations that are not sufficiently described by conventional models. A more reliable prediction model is thus proposed in this paper, designed particularly to describe the periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to account for the periodic effects and adopts delay coordinate embedding, which offers the possibility of accessing linear or nonlinear coupling characteristics of satellite behaviour. The simulation results show that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.
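As a simple illustration of the "polynomial plus cyclic terms" idea (leaving aside the delay coordinate embedding), the sketch below fits a quadratic trend plus two assumed orbit-related periods to synthetic clock offsets by least squares and extrapolates a few hours ahead; the periods and the toy data are assumptions.

```python
import numpy as np

def design(t, periods):
    """Regressor matrix: quadratic trend plus sine/cosine pairs for each period."""
    cols = [np.ones_like(t), t, t ** 2]
    for p in periods:
        w = 2.0 * np.pi / p
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

def fit_clock_model(t, x, periods=(11.967, 5.984)):
    """Least-squares fit of a quadratic polynomial plus cyclic terms to clock
    offsets x(t); t in hours. The assumed ~12 h and ~6 h periods mimic the
    orbit-related periodic variations mentioned in the text; this is not the
    paper's embedding-based model."""
    coef, *_ = np.linalg.lstsq(design(t, periods), x, rcond=None)
    return lambda t_new: design(t_new, periods) @ coef

# toy data: deterministic drift plus a 12-hour periodic signature and noise [ns]
rng = np.random.default_rng(3)
t = np.arange(0.0, 24.0, 0.25)
x = 5.0 + 0.4 * t + 0.8 * np.sin(2 * np.pi * t / 11.967) + rng.normal(0, 0.05, t.size)
predict = fit_clock_model(t, x)
t_future = np.arange(24.0, 30.0, 1.0)
print("predicted clock offset for the next 6 h [ns]:", np.round(predict(t_future), 2))
```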
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.; Britt, J.; Birkmire, R.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Prediction of mechanical property loss in polyamide during immersion in sea water
NASA Astrophysics Data System (ADS)
Le Gac, Pierre Yves; Arhant, Mael; Le Gall, Maelenn; Burtin, Christian; Davies, Peter
2016-05-01
It is well known that water absorption in polyamide leads to a large reduction in the mechanical properties of the polymer, induced by plasticization of the amorphous phase. However, predicting such a loss in a marine environment is not straightforward, especially when thick samples are considered. This study presents a model of water absorption in polyamide 6 based on the free volume theory. By coupling this model with a description of the change in yield stress with Tg, it is possible to predict the long-term behavior of thick samples immersed in sea water. The reliability of the prediction is checked by comparison with experimental results.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
Finite element modelling of aluminum alloy 2024-T3 under transverse impact loading
NASA Astrophysics Data System (ADS)
Abdullah, Ahmad Sufian; Kuntjoro, Wahyu; Yamin, A. F. M.
2017-12-01
A fiber metal laminate named GLARE is a new aerospace material with great potential to be widely used in future lightweight aircraft. It consists of aluminum alloy 2024-T3 and glass-fiber reinforced laminate. In order to produce a reliable finite element model of the impact response or crashworthiness of a structure made of GLARE, one can initially model and validate the finite element models of the impact response of its constituents separately. The objective of this study was to develop a reliable finite element model of aluminum alloy 2024-T3 under low-velocity transverse impact loading using the commercial software ABAQUS. Johnson-Cook plasticity and damage models were used to predict the alloy's material properties and impact behavior. The results of the finite element analysis were compared to experiments with similar material and impact conditions. Results showed good correlations in terms of impact forces, deformation, and failure progression, which led to the conclusion that the finite element model of 2024-T3 aluminum alloy under low-velocity transverse impact, using the Johnson-Cook plasticity and damage models, is reliable.
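For reference, the Johnson-Cook flow stress used in such plasticity models has the form sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m); the sketch below evaluates it with typical literature constants for 2024-T3 aluminium, which may differ from the parameters calibrated in the study.

```python
import math

def johnson_cook_stress(eps_p, eps_rate, temp_c,
                        a=369.0, b=684.0, n=0.73, c=0.0083, m=1.7,
                        eps_rate0=1.0, t_room=25.0, t_melt=502.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m).
    The constants are typical literature values for 2024-T3, used only for
    illustration; the paper's calibrated parameters may differ."""
    t_star = max(0.0, min(1.0, (temp_c - t_room) / (t_melt - t_room)))
    rate_term = 1.0 + c * math.log(max(eps_rate / eps_rate0, 1e-12))
    return (a + b * eps_p ** n) * rate_term * (1.0 - t_star ** m)

# quasi-static vs. impact-rate flow stress at 5% plastic strain, room temperature
print(f"{johnson_cook_stress(0.05, 1.0, 25.0):.0f} MPa vs {johnson_cook_stress(0.05, 1e3, 25.0):.0f} MPa")
```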
Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
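A Hammerstein-Wiener structure of the kind mentioned above chains a static input nonlinearity, a linear dynamic block, and a static output nonlinearity; the toy sketch below uses a logarithmic bitrate mapping, a first-order filter for hysteresis, and a bounded score. These choices and all constants are illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

def hammerstein_wiener_tvsq(bitrate, alpha=0.9, a=4.5, b=0.8, c=15.0):
    """Toy Hammerstein-Wiener predictor of time-varying subjective quality:
    input nonlinearity (diminishing returns of bitrate) -> first-order linear
    filter (sluggish response / hysteresis) -> output nonlinearity (0-100 score)."""
    q, out = 0.0, []
    for r in bitrate:
        u = a * np.log1p(b * r)                        # static input nonlinearity
        q = alpha * q + (1.0 - alpha) * u              # linear dynamic block
        out.append(float(np.clip(c * q, 0.0, 100.0)))  # static output nonlinearity
    return out

# bitrate drops mid-stream; the predicted quality falls and recovers only gradually
bitrate = [3.0] * 20 + [0.5] * 20 + [3.0] * 20          # Mbps, illustrative
tvsq = hammerstein_wiener_tvsq(bitrate)
print([round(v, 1) for v in tvsq[::10]])
```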
NASA Astrophysics Data System (ADS)
Dong, Hancheng; Jin, Xiaoning; Lou, Yangbing; Wang, Changhong
2014-12-01
Lithium-ion batteries are used as the main power source in many electronic and electrical devices. In particular, with the growth in battery-powered electric vehicle development, the lithium-ion battery plays a critical role in the reliability of vehicle systems. In order to provide timely maintenance and replacement of battery systems, it is necessary to develop a reliable and accurate battery health diagnostic that takes a prognostic approach. Therefore, this paper focuses on two main methods to determine a battery's health: (1) Battery State-of-Health (SOH) monitoring and (2) Remaining Useful Life (RUL) prediction. Both of these are calculated by using a filter algorithm known as the Support Vector Regression-Particle Filter (SVR-PF). Models for battery SOH monitoring based on SVR-PF are developed with novel capacity degradation parameters introduced to determine battery health in real time. Moreover, the RUL prediction model is proposed, which is able to provide the RUL value and update the RUL probability distribution to the End-of-Life cycle. Results for both methods are presented, showing that the proposed SOH monitoring and RUL prediction methods have good performance and that the SVR-PF has better monitoring and prediction capability than the standard particle filter (PF).
A real-time prediction model for post-irradiation malignant cervical lymph nodes.
Lo, W-C; Cheng, P-W; Shueng, P-W; Hsieh, C-H; Chang, Y-L; Liao, L-J
2018-04-01
To establish a real-time predictive scoring model based on sonographic characteristics for identifying malignant cervical lymph nodes (LNs) in cancer patients after neck irradiation. One hundred forty-four irradiation-treated patients underwent ultrasonography and ultrasound-guided fine-needle aspirations (USgFNAs), and the resultant data were used to construct a real-time, computerised predictive scoring model. This scoring system was further compared with our previously proposed prediction model. A predictive scoring model, 1.35 × (L axis) + 2.03 × (S axis) + 2.27 × (margin) + 1.48 × (echogenic hilum) + 3.7, was generated by stepwise multivariate logistic regression analysis. Neck LNs were considered to be malignant when the score was ≥ 7, corresponding to a sensitivity of 85.5%, specificity of 79.4%, positive predictive value (PPV) of 82.3%, negative predictive value (NPV) of 83.1%, and overall accuracy of 82.6%. When this new model and the original model were compared, the areas under the receiver operating characteristic curve (c-statistic) were 0.89 and 0.81, respectively (P < .05). A real-time sonographic predictive scoring model was constructed to provide prompt and reliable guidance for USgFNA biopsies to manage cervical LNs after neck irradiation.
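The published score can be evaluated directly, as sketched below; note that the abstract does not state how each sonographic feature is coded (axes presumably in cm, margin and echogenic hilum presumably as 0/1 indicators), so the coding and the example values used here are assumptions.

```python
def lymph_node_score(l_axis, s_axis, margin, echogenic_hilum):
    """Score from the abstract:
    1.35*(L axis) + 2.03*(S axis) + 2.27*(margin) + 1.48*(echogenic hilum) + 3.7,
    with a node classified malignant when the score is >= 7. The coding of the
    four inputs (cm for the axes, 0/1 for margin and hilum) is an assumption."""
    score = 1.35 * l_axis + 2.03 * s_axis + 2.27 * margin + 1.48 * echogenic_hilum + 3.7
    return score, score >= 7.0

score, malignant = lymph_node_score(l_axis=1.8, s_axis=1.1, margin=1, echogenic_hilum=0)
print(f"score = {score:.2f}, classified malignant: {malignant}")
```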
ERIC Educational Resources Information Center
Hancock, Thomas E.; And Others
1995-01-01
In machine-mediated learning environments, there is a need for more reliable methods of calculating the probability that a learner's response will be correct in future trials. A combination of domain-independent response-state measures of cognition and two instructional variables is demonstrated to provide maximum predictive ability. (Author/LRW)
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
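As one concrete example of a "transformational" approach, the sketch below assumes Gaussian residuals with constant standard deviation in Box-Cox-transformed flow space and back-transforms the limits, which makes the prediction bounds widen with flow magnitude. The transformation parameter and error standard deviation are illustrative choices, not the study's fitted values.

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox transformation z = (q^lam - 1) / lam (natural log when lam = 0)."""
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0.0 else (q ** lam - 1.0) / lam

def probabilistic_bounds(q_pred, sigma_z, lam=0.2, z_score=1.645):
    """Sketch of heteroscedastic prediction limits: constant-variance Gaussian
    errors in Box-Cox space, back-transformed to flow space. lam and sigma_z
    are assumed values for illustration."""
    z = boxcox(q_pred, lam)
    lower_core = np.maximum(lam * (z - z_score * sigma_z) + 1.0, 0.0)
    lower = np.maximum(lower_core ** (1.0 / lam), 0.0)
    upper = (lam * (z + z_score * sigma_z) + 1.0) ** (1.0 / lam)
    return lower, upper

q_pred = np.array([0.5, 5.0, 50.0])        # simulated streamflow [m3/s], illustrative
lo, hi = probabilistic_bounds(q_pred, sigma_z=0.3)
for q, l, h in zip(q_pred, lo, hi):
    print(f"Q = {q:5.1f} -> 90% limits [{l:6.2f}, {h:6.2f}]")   # absolute width grows with flow
```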
Gerber, Brian D.; Kendall, William L.
2017-01-01
Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
Learning to rank image tags with limited training examples.
Songhe Feng; Zheyun Feng; Rong Jin
2015-04-01
With an increasing number of images that are available in social media, image annotation has emerged as an important research topic due to its application in image matching and retrieval. Most studies cast image annotation into a multilabel classification problem. The main shortcoming of this approach is that it requires a large number of training images with clean and complete annotations in order to learn a reliable model for tag prediction. We address this limitation by developing a novel approach that combines the strength of tag ranking with the power of matrix recovery. Instead of having to make a binary decision for each tag, our approach ranks tags in the descending order of their relevance to the given image, significantly simplifying the problem. In addition, the proposed method aggregates the prediction models for different tags into a matrix, and casts tag ranking into a matrix recovery problem. It introduces the matrix trace norm to explicitly control the model complexity, so that a reliable prediction model can be learned for tag ranking even when the tag space is large and the number of training images is limited. Experiments on multiple well-known image data sets demonstrate the effectiveness of the proposed framework for tag ranking compared with the state-of-the-art approaches for image annotation and tag ranking.
Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T
2018-05-18
Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.
NASA Astrophysics Data System (ADS)
Popov, A.; Zolotarev, V.; Bychkov, S.
2016-11-01
This paper examines the results of experimental studies of a previously proposed combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, experimental data from simulation modeling were compared with data from the operation of a real information system. A hypothesis of the homogeneity of the logical structure of information systems was formulated, which makes it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning, and the algorithm can serve as a means of providing information security.
Predictive model of muscle fatigue after spinal cord injury in humans.
Shields, Richard K; Chang, Ya-Ju; Dudley-Javoroski, Shauna; Lin, Cheng-Hsiang
2006-07-01
The fatigability of paralyzed muscle limits its ability to deliver physiological loads to paralyzed extremities during repetitive electrical stimulation. The purposes of this study were to determine the reliability of measuring paralyzed muscle fatigue and to develop a model to predict the temporal changes in muscle fatigue that occur after spinal cord injury (SCI). Thirty-four subjects underwent soleus fatigue testing with a modified Burke electrical stimulation fatigue protocol. The between-day reliability of this protocol was high (intraclass correlation, 0.96). We fit the fatigue index (FI) data to a quadratic-linear segmental polynomial model. FI declined rapidly (0.3854 per year) for the first 1.7 years, and more slowly (0.01 per year) thereafter. The rapid decline of FI immediately after SCI implies that a "window of opportunity" exists for the clinician if the goal is to prevent these changes. Understanding the timing of change in muscle endurance properties (and, therefore, load-generating capacity) after SCI may assist clinicians when developing therapeutic interventions to maintain musculoskeletal integrity.
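A quadratic-linear segmental fit of this kind can be sketched with a generic piecewise model; the breakpoint, coefficients, and data below are synthetic placeholders, not the study's values or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def quad_linear(t, a, b, c, d, tb):
    """Quadratic segment up to the breakpoint tb, then a linear segment, continuous at tb."""
    quad = a + b * t + c * t ** 2
    lin = (a + b * tb + c * tb ** 2) + d * (t - tb)
    return np.where(t < tb, quad, lin)

# synthetic illustration: fatigue index (FI) drops quickly for roughly two years, then slowly
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.1, 10, 80))                        # hypothetical years post-injury
fi = quad_linear(t, 0.9, -0.55, 0.12, -0.01, 1.7) + rng.normal(scale=0.03, size=t.size)

popt, _ = curve_fit(quad_linear, t, fi, p0=[0.9, -0.4, 0.1, -0.01, 2.0])
print("estimated breakpoint (years post-SCI):", round(popt[-1], 2))
```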
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and imprecise estimates of the degradation rate remain, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the Bayesian method commonly applied to this problem loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian approach is proposed to handle degradation data and address the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated, with the estimates updated iteratively; third, lifetime and reliability values are estimated from the resulting parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
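For orientation only, a Wiener-process degradation model with drift mu and diffusion sigma has a closed-form reliability, since the first passage time over a failure threshold follows an inverse Gaussian distribution. The sketch below estimates mu and sigma from a simulated path and evaluates reliability; it is a simplification, not the paper's Bayesian updating scheme, and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def reliability(t, mu, sigma, D):
    """R(t) = P(first passage of X(t) = mu*t + sigma*B(t) over threshold D occurs after t)."""
    a = (D - mu * t) / (sigma * np.sqrt(t))
    b = (-D - mu * t) / (sigma * np.sqrt(t))
    return norm.cdf(a) - np.exp(2 * mu * D / sigma ** 2) * norm.cdf(b)

# crude moment estimates of drift/diffusion from one simulated degradation path
rng = np.random.default_rng(2)
dt = 0.1
increments = 0.8 * dt + 0.3 * np.sqrt(dt) * rng.normal(size=500)   # true mu = 0.8, sigma = 0.3
mu_hat = increments.mean() / dt
sigma_hat = np.sqrt(increments.var(ddof=1) / dt)
print("mu_hat =", round(mu_hat, 3), " sigma_hat =", round(sigma_hat, 3))
print("R(t = 10) =", round(reliability(10.0, mu_hat, sigma_hat, D=12.0), 4))
```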
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
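One common way to encode such applicability uncertainty, sketched below purely as an illustration of the idea rather than the presentation's actual guidelines, is a lognormal prior centred on the generic point estimate whose error factor widens as the data source becomes less applicable; the error factors and failure rate used here are hypothetical.

```python
import numpy as np
from scipy.stats import lognorm

def lognormal_prior(median_rate, error_factor):
    """Lognormal prior for a failure rate, given a generic-database median and an
    error factor EF = 95th percentile / median that encodes applicability uncertainty."""
    sigma = np.log(error_factor) / 1.645          # EF spans the 50th-to-95th percentile
    return lognorm(s=sigma, scale=median_rate)

# heritage hardware judged highly applicable vs. only loosely applicable (illustrative EFs)
for label, ef in [("high applicability, EF = 3", 3.0), ("low applicability, EF = 10", 10.0)]:
    prior = lognormal_prior(1e-5, ef)
    lo, hi = prior.ppf([0.05, 0.95])
    print(f"{label}: 90% prior interval = [{lo:.2e}, {hi:.2e}] failures per hour")
```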
MEMS reliability: The challenge and the promise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, W.M.; Tanner, D.M.; Miller, S.L.
1998-05-01
MicroElectroMechanical Systems (MEMS) that think, sense, act and communicate will open up a broad new array of cost-effective solutions only if they prove to be sufficiently reliable. A valid reliability assessment of MEMS has three prerequisites: (1) statistical significance; (2) a technique for accelerating fundamental failure mechanisms; and (3) valid physical models to allow prediction of failures during actual use. These already exist for the microelectronics portion of such integrated systems. The challenge lies in the less well understood micromachine portions and their synergistic effects with microelectronics. This paper presents a methodology addressing these prerequisites and a description of the underlying physics of reliability for micromachines.
Predictable and reliable ECG monitoring over IEEE 802.11 WLANs within a hospital.
Park, Juyoung; Kang, Kyungtae
2014-09-01
Telecardiology provides mobility for patients who require constant electrocardiogram (ECG) monitoring. However, its safety is dependent on the predictability and robustness of data delivery, which must overcome errors in the wireless channel through which the ECG data are transmitted. We report here a framework that can be used to gauge the applicability of IEEE 802.11 wireless local area network (WLAN) technology to ECG monitoring systems in terms of delay constraints and transmission reliability. For this purpose, a medical-grade WLAN architecture achieved predictable delay through the combination of a medium access control mechanism based on the point coordination function provided by IEEE 802.11 and an error control scheme based on Reed-Solomon coding and block interleaving. The size of the jitter buffer needed was determined by this architecture to avoid service dropout caused by buffer underrun, through analysis of variations in transmission delay. Finally, we assessed this architecture in terms of service latency and reliability by modeling the transmission of uncompressed two-lead electrocardiogram data from the MIT-BIH Arrhythmia Database and highlighted the applicability of this wireless technology to telecardiology.
Reliability prediction of large fuel cell stack based on structure stress analysis
NASA Astrophysics Data System (ADS)
Liu, L. F.; Liu, B.; Wu, C. W.
2017-09-01
The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We have investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress, at which the component reaches its highest reliability level, is obtained from the stress-strength interference model. To obtain the optimal contact stress between the contacting components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of the stack structure for a highly reliable fuel cell stack.
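As a minimal illustration of the stress-strength interference idea referenced above (not the paper's equivalent-stiffness analysis), if both contact stress and component strength are treated as normal random variables, reliability has a closed form; the means and variation coefficients below are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference with normal stress and strength:
    R = P(strength > stress) = Phi((mu_S - mu_L) / sqrt(sd_S**2 + sd_L**2))."""
    return norm.cdf((mu_strength - mu_stress) / np.hypot(sd_strength, sd_stress))

mu_strength, cov_strength = 2.0, 0.05      # hypothetical component strength (MPa) and its c.o.v.
mu_stress = 1.2                            # hypothetical nominal contact stress (MPa)
for cov_stress in (0.05, 0.10, 0.20):      # growing scatter from production/clamping uncertainty
    R = interference_reliability(mu_strength, cov_strength * mu_strength,
                                 mu_stress, cov_stress * mu_stress)
    print(f"contact-stress c.o.v. = {cov_stress:.2f} -> component reliability = {R:.4f}")
```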
2007-12-01
...there are no reliable alternatives to animal testing in the determination of toxicity. QSARs are only as reliable as the corroborating toxicological ... (2) QSAR approaches can also be used to estimate toxicological impact. Toxicity QSAR models can often predict many toxicity parameters without ... Toxicology Study No. 87-XE-03N3-05, Assessing the Potential Environmental Consequences of a New Energetic Material: A Phased Approach, September 2005
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments, with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs).
Learned helplessness: validity and reliability of depressive-like states in mice.
Chourbaji, S; Zacher, C; Sanchis-Segura, C; Dormann, C; Vollmayr, B; Gass, P
2005-12-01
The learned helplessness paradigm is a depression model in which animals are exposed to unpredictable and uncontrollable stress, e.g. electroshocks, and subsequently develop coping deficits for aversive but escapable situations (J.B. Overmier, M.E. Seligman, Effects of inescapable shock upon subsequent escape and avoidance responding, J. Comp. Physiol. Psychol. 63 (1967) 28-33). It represents a model with good similarity to the symptoms of depression, construct, and predictive validity in rats. Despite an increased need to investigate emotional, in particular depression-like behaviors in transgenic mice, so far only a few studies have been published using the learned helplessness paradigm. One reason may be the fact that, in contrast to rats (B. Vollmayr, F.A. Henn, Learned helplessness in the rat: improvements in validity and reliability, Brain Res. Brain Res. Protoc. 8 (2001) 1-7), there is no generally accepted learned helplessness protocol available for mice. This prompted us to develop a reliable helplessness procedure in C57BL/6N mice, to exclude possible artifacts, and to establish a protocol, which yields a consistent fraction of helpless mice following the shock exposure. Furthermore, we validated this protocol pharmacologically using the tricyclic antidepressant imipramine. Here, we present a mouse model with good face and predictive validity that can be used for transgenic, behavioral, and pharmacological studies.
J. X. Zhang; J. Q. Wu; K. Chang; W. J. Elliot; S. Dun
2009-01-01
The recent modification of the Water Erosion Prediction Project (WEPP) model has improved its applicability to hydrology and erosion modeling in forest watersheds. To generate reliable topographic and hydrologic inputs for the WEPP model, carefully selecting digital elevation models (DEMs) with appropriate resolution and accuracy is essential because topography is a...
Kyongho Son; Christina Tague; Carolyn Hunsaker
2016-01-01
The effect of fine-scale topographic variability on model estimates of ecohydrologic responses to climate variability in California's Sierra Nevada watersheds has not been adequately quantified and may be important for supporting reliable climate-impact assessments. This study tested the effect of digital elevation model (DEM) resolution on model accuracy and estimates...
NASA Astrophysics Data System (ADS)
David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera
2017-04-01
This study provides guidance to hydrological researchers that enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find that the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
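The Box-Cox residual error schemes compared in the study can be pictured with a short sketch: residuals are formed in transformed space, where they are roughly homoscedastic, and probabilistic predictions are obtained by sampling transformed-space errors and back-transforming. The data, error structure, and lambda = 0.2 below are illustrative assumptions, not the study's catchments or code.

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox transform; lam = 0 reduces to the log scheme."""
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else np.maximum(lam * z + 1.0, 1e-9) ** (1.0 / lam)

rng = np.random.default_rng(3)
q_sim = rng.gamma(shape=2.0, scale=5.0, size=1000)            # hypothetical simulated daily flows
q_obs = q_sim * np.exp(rng.normal(0, 0.3, size=q_sim.size))   # "observations" with heteroscedastic error

lam = 0.2
resid = boxcox(q_obs, lam) - boxcox(q_sim, lam)               # approximately homoscedastic residuals
mu_e, sd_e = resid.mean(), resid.std(ddof=1)

# probabilistic prediction: sample transformed-space errors, back-transform, take quantiles
reps = inv_boxcox(boxcox(q_sim, lam) + rng.normal(mu_e, sd_e, size=(500, q_sim.size)), lam)
lower, upper = np.percentile(reps, [5, 95], axis=0)
print("coverage of the 90% limits:", round(np.mean((q_obs >= lower) & (q_obs <= upper)), 3))
```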
Lei, Tailong; Sun, Huiyong; Kang, Yu; Zhu, Feng; Liu, Hui; Zhou, Wenfang; Wang, Zhe; Li, Dan; Li, Youyong; Hou, Tingjun
2017-11-06
Xenobiotic chemicals and their metabolites are mainly excreted out of our bodies by the urinary tract through the urine. Chemical-induced urinary tract toxicity is one of the main reasons that cause failure during drug development, and it is a common adverse event for medications, natural supplements, and environmental chemicals. Despite its importance, there are only a few in silico models for assessing urinary tract toxicity for a large number of compounds with diverse chemical structures. Here, we developed a series of qualitative and quantitative structure-activity relationship (QSAR) models for predicting urinary tract toxicity. In our study, the recursive feature elimination method incorporated with random forests (RFE-RF) was used for dimension reduction, and then eight machine learning approaches were used for QSAR modeling, i.e., relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), C5.0 trees, eXtreme gradient boosting (XGBoost), AdaBoost.M1, SVM boosting (SVMBoost), and RVM boosting (RVMBoost). For building classification models, the synthetic minority oversampling technique was used to handle the imbalanced data set problem. Among all the machine learning approaches, SVMBoost based on the RBF kernel achieves both the best quantitative (q_ext^2 = 0.845) and qualitative predictions for the test set (MCC of 0.787, AUC of 0.893, sensitivity of 89.6%, specificity of 94.1%, and global accuracy of 90.8%). The application domains were then analyzed, and all of the tested chemicals fall within the application domain coverage. We also examined the structure features of the chemicals with large prediction errors. In brief, both the regression and classification models developed by the SVMBoost approach have reliable prediction capability for assessing chemical-induced urinary tract toxicity.
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but it is also important that this model uses only a few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating the stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for the behaviour of the stability assessment it is most important that a measure contains a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
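A small sketch of the kind of stability measure discussed above: represent each resample's feature selection as a binary indicator vector and average the pairwise Pearson correlations. This is a generic illustration on assumed toy data, not the authors' exact measure or correction terms.

```python
import numpy as np
from itertools import combinations

def pearson_stability(selection_matrix):
    """Mean pairwise Pearson correlation of binary selection vectors (rows = resamples)."""
    S = np.asarray(selection_matrix, dtype=float)
    pairs = combinations(range(S.shape[0]), 2)
    return float(np.mean([np.corrcoef(S[i], S[j])[0, 1] for i, j in pairs]))

# toy example: 5 resamples over 50 candidate features (1 = feature selected in that resample)
stable = np.array([[1] * 5 + [0] * 45] * 5)                       # same 5 features every time
unstable = (np.random.default_rng(4).random((5, 50)) < 0.3).astype(int)
print("stable selection:  ", round(pearson_stability(stable), 3))
print("unstable selection:", round(pearson_stability(unstable), 3))
```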
Predictability of the Indian Ocean Dipole in the coupled models
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multiple model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to be improved but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with a high skill during the 1960s and the early 1990s, and a low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
Reliability Modeling of Microelectromechanical Systems Using Neural Networks
NASA Technical Reports Server (NTRS)
Perera, J. Sebastian
2000-01-01
Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a newly proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
NASA Astrophysics Data System (ADS)
Zubov, N. O.; Kaban'kov, O. N.; Yagov, V. V.; Sukomel, L. A.
2017-12-01
Wide use of natural circulation loops operating at low reduced pressures creates a real need to develop reliable methods for predicting flow regimes and friction pressure drop for two-phase flows in this parameter region. Although water-air flows at close-to-atmospheric pressures are the most widely studied subject in the field of two-phase hydrodynamics, the problem of reliably calculating friction pressure drop can hardly be regarded to have been fully solved. The specific volumes of liquid differ very much from those of steam (gas) under such conditions, due to which even a small change in flow quality may cause the flow pattern to alter very significantly. Frequent attempts to use one or another universal approach to calculating friction pressure drop in a wide range of steam quality values do not seem to be justified and yield predicted values that are poorly consistent with experimentally measured data. The article analyzes the existing methods used to calculate friction pressure drop for two-phase flows at low pressures by comparing their results with the experimentally obtained data. The advisability of elaborating calculation procedures for determining the friction pressure drop and void fraction for two-phase flows taking their pattern (flow regime) into account is demonstrated. It is shown that, for flows characterized by low reduced pressures, satisfactory results are obtained from using a homogeneous model for quasi-homogeneous flows, whereas satisfactory results are obtained from using an annular flow model for flows characterized by high values of void fraction. Recommendations for making a shift from one model to another in carrying out engineering calculations are formulated and tested. By using the modified annular flow model, it is possible to obtain reliable predictions for not only the pressure gradient but also for the liquid film thickness; the consideration of droplet entrainment and deposition phenomena allows reasonable corrections to be introduced into calculations. To the best of the authors' knowledge, this is the first time that the entrainment of droplets from the film surface has been taken into account in the dispersed-annular flow model.
Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M
2012-01-01
Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operator characteristic curve (AUC), per cent individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUC for TRF only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models were 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimation was compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with TRF-only model, per cent reclassified, and test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
2003-01-01
This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
Bangera, Rama; Correa, Katharina; Lhorente, Jean P; Figueroa, René; Yáñez, José M
2017-01-31
Salmon Rickettsial Syndrome (SRS) caused by Piscirickettsia salmonis is a major disease affecting the Chilean salmon industry. Genomic selection (GS) is a method wherein genome-wide markers and phenotype information of full-sibs are used to predict genomic EBV (GEBV) of selection candidates and is expected to have increased accuracy and response to selection over traditional pedigree based Best Linear Unbiased Prediction (PBLUP). Widely used GS methods such as genomic BLUP (GBLUP), SNPBLUP, Bayes C and Bayesian Lasso may perform differently with respect to accuracy of GEBV prediction. Our aim was to compare the accuracy, in terms of reliability of genome-enabled prediction, from different GS methods with PBLUP for resistance to SRS in an Atlantic salmon breeding program. Number of days to death (DAYS), binary survival status (STATUS) phenotypes, and 50 K SNP array genotypes were obtained from 2601 smolts challenged with P. salmonis. The reliability of different GS methods at different SNP densities with and without pedigree were compared to PBLUP using a five-fold cross validation scheme. Heritability estimated from GS methods was significantly higher than PBLUP. Pearson's correlation between predicted GEBV from PBLUP and GS models ranged from 0.79 to 0.91 and 0.79-0.95 for DAYS and STATUS, respectively. The relative increase in reliability from different GS methods for DAYS and STATUS with 50 K SNP ranged from 8 to 25% and 27-30%, respectively. All GS methods outperformed PBLUP at all marker densities. DAYS and STATUS showed superior reliability over PBLUP even at the lowest marker density of 3 K and 500 SNP, respectively. 20 K SNP showed close to maximal reliability for both traits with little improvement using higher densities. These results indicate that genomic predictions can accelerate genetic progress for SRS resistance in Atlantic salmon and implementation of this approach will contribute to the control of SRS in Chile. We recommend GBLUP for routine GS evaluation because this method is computationally faster and the results are very similar with other GS methods. The use of lower density SNP or the combination of low density SNP and an imputation strategy may help to reduce genotyping costs without compromising gain in reliability.
Kryshev, I I; Boyer, P; Monte, L; Brittain, J E; Dzyuba, N N; Krylov, A L; Kryshev, A I; Nosov, A V; Sanina, K D; Zheleznyak, M I
2009-03-15
This paper presents results of testing models for the radioactive contamination of river water and bottom sediments by (90)Sr, (137)Cs and (239,240)Pu. The scenario for the model testing was based on data from the Techa River (Southern Urals, Russia), which was contaminated as a result of discharges of liquid radioactive waste into the river. The endpoints of the scenario were model predictions of the activity concentrations of (90)Sr, (137)Cs and (239,240)Pu in water and bottom sediments along the Techa River in 1996. Calculations for the Techa scenario were performed by six participant teams from France (model CASTEAUR), Italy (model MARTE), Russia (models TRANSFER-2, CASSANDRA, GIDRO-W) and Ukraine (model RIVTOX), all using different models. As a whole, the radionuclide predictions for (90)Sr in water for all considered models, (137)Cs for MARTE and TRANSFER-2, and (239,240)Pu for TRANSFER-2 and CASSANDRA can be considered sufficiently reliable, whereas the prediction for sediments should be considered cautiously. At the same time the CASTEAUR and RIVTOX models estimate the activity concentrations of (137)Cs and (239,240)Pu in water more reliably than in bottom sediments. The models MARTE ((239,240)Pu) and CASSANDRA ((137)Cs) evaluated the activity concentrations of radionuclides in sediments with about the same agreement with observations as for water. For (90)Sr and (137)Cs the agreement between empirical data and model predictions was good, but not for all the observations of (239,240)Pu in the river water-bottom sediment system. The modelling of (239,240)Pu distribution proved difficult because, in contrast to (137)Cs and (90)Sr, most of models have not been previously tested or validated for plutonium.
NASA Technical Reports Server (NTRS)
French, V. (Principal Investigator)
1982-01-01
An evaluation was made of Thompson-Type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.
Mansberger, Steven L.; Sheppler, Christina R.; McClure, Tina M.; VanAlstine, Cory L.; Swanson, Ingrid L.; Stoumbos, Zoey; Lambert, William E.
2013-01-01
Purpose: To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. Methods: We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Results: Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach’s alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R²) of .40. Test-retest reliability was 90%. Conclusion: The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.
Projecting technology change to improve space technology planning and systems management
NASA Astrophysics Data System (ADS)
Walk, Steven Robert
2011-04-01
Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45 % faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the wide-spread adoption of ME models for researchers in these fields.
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
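For readers unfamiliar with RANSAC, the sketch below shows the consensus idea on toy regression data with injected outliers, using scikit-learn's RANSACRegressor; the descriptors, coefficients, and outlier fraction are made up, and this is not the authors' material-informatics pipeline.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression

# toy "QSAR-like" data: a linear property from 4 hypothetical descriptors, plus gross outliers
rng = np.random.default_rng(5)
X = rng.normal(size=(120, 4))
y = X @ np.array([1.5, -2.0, 0.7, 0.0]) + rng.normal(0, 0.2, 120)
y[:10] += rng.normal(8, 2, 10)                      # inject outlier samples

ransac = RANSACRegressor(random_state=0).fit(X, y)  # consensus model fitted on inliers only
ols = LinearRegression().fit(X, y)                  # ordinary least squares for comparison

print("samples flagged as outliers:", int((~ransac.inlier_mask_).sum()))
print("RANSAC coefficients:", np.round(ransac.estimator_.coef_, 2))
print("OLS coefficients:   ", np.round(ols.coef_, 2))
```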
Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao
2017-03-15
As oil transport increases in Texas bays, the greater risk of ship collisions becomes a challenge, with oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. The state-of-the-art operational oil spill forecast modeling system brings oil spill response to a new stage. However, uncertainty due to predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Thus, understanding forecast uncertainty and reliability becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A HyosPy-based simple statistic model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment.
Sabater-Galindo, Marta; Sabater-Hernández, Daniel; Ruiz de Maya, Salvador; Gastelurrutia, Miguel Angel; Martínez-Martínez, Fernando; Benrimoj, Shalom I
2017-06-01
Professional pharmaceutical services may impact patients' health behaviour as well as influence patients' perceptions of the pharmacist image. The Health Belief Model predicts health-related behaviours using patients' beliefs. However, health beliefs (HBs) may extend beyond predicting health behaviour and may have an impact on the patients' perceptions of the pharmacist image. The objective of this study was to develop and test a model that relates patients' HBs to patients' perception of the image of the pharmacist, and to assess if the provision of pharmacy services (intervention group, IG) influences this perception compared to usual care (control group). A qualitative study was undertaken and a questionnaire was created for the development of the model. The content, dimensions, validity and reliability of the questionnaire were pre-tested qualitatively and in a pilot mail survey. The reliability and validity of the proposed model were tested using Confirmatory Factor Analysis (CFA). Structural Equation Modelling (SEM) was used to explain relationships between dimensions of the final model and to analyse differences between groups. As a result, a final model was developed. CFA concluded that the model was valid and reliable (Goodness of Fit indices: χ²(80) = 125.726, p = .001, RMSEA = .04, SRMR = .04, GFI = .997, NFI = .93, CFI = .974). SEM indicated that 'Perceived benefits' were significantly associated with 'Perceived Pharmacist Image' in the whole sample. Differences were found in the IG, where 'Self-efficacy' also significantly influenced 'Perceived pharmacist image'. A model of patients' HBs related to their image of the pharmacist was developed and tested. When pharmacists deliver professional services, these services modify some patients' HBs that in turn influence public perception of the pharmacist.
Emission of pesticides into the air
Van Den Berg; Kubiak, R.; Benjey, W.G.; Majewski, M.S.; Yates, S.R.; Reeves, G.L.; Smelt, J.H.; Van Der Linden, A. M. A.
1999-01-01
During and after the application of a pesticide in agriculture, a substantial fraction of the dosage may enter the atmosphere and be transported over varying distances downwind of the target. The rate and extent of the emission during application, predominantly as spray particle drift, depends primarily on the application method (equipment and technique), the formulation and environmental conditions, whereas the emission after application depends primarily on the properties of the pesticide, soils, crops and environmental conditions. The fraction of the dosage that misses the target area may be high in some cases and more experimental data on this loss term are needed for various application types and weather conditions. Such data are necessary to test spray drift models, and for further model development and verification as well. Following application, the emission of soil fumigants and soil incorporated pesticides into the air can be measured and computed with reasonable accuracy, but further model development is needed to improve the reliability of the model predictions. For soil surface applied pesticides reliable measurement methods are available, but there is not yet a reliable model. Further model development is required which must be verified by field experiments. Few data are available on pesticide volatilization from plants and more field experiments are also needed to study the fate processes on the plants. Once this information is available, a model needs to be developed to predict the volatilization of pesticides from plants, which, again, should be verified with field measurements. For regional emission estimates, a link between data on the temporal and spatial pesticide use and a geographical information system for crops and soils with their characteristics is needed.
Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan
2017-01-01
Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at the confined levels of density, and there is no specific computational method for calculating thermal conductivity in the wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels which are higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units is increased in the hidden layer. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as, ANN is a valuable method for this purpose.
Xu, Xiaogang; Wang, Songling; Liu, Jinlian; Liu, Xinyu
2014-01-01
Blower and exhaust fans consume over 30% of electricity in a thermal power plant, and faults of these fans due to rotation stalls are one of the most frequent reasons for power plant outage failures. To accurately predict the occurrence of fan rotation stalls, we propose a support vector regression machine (SVRM) model that predicts the fan internal pressures during operation, leaving ample time for rotation stall detection. We train the SVRM model using experimental data samples, and perform pressure data prediction using the trained SVRM model. To prove the feasibility of using the SVRM model for rotation stall prediction, we further process the predicted pressure data via wavelet-transform-based stall detection. By comparison of the detection results from the predicted and measured pressure data, we demonstrate that the SVRM model can accurately predict the fan pressure and guarantee reliable stall detection with a time advance of up to 0.0625 s. This superior pressure data prediction capability leaves significant time for effective control and prevention of fan rotation stall faults. This model has great potential for use in intelligent fan systems with stall prevention capability, which will ensure safe operation and improve the energy efficiency of power plants.
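The abstract does not give the regression setup; a plausible minimal version, sketched here with an RBF-kernel support vector regressor predicting the next pressure sample from lagged samples, is shown below. The signal, lag length, and hyperparameters are invented for illustration, and the wavelet-based stall detection step is omitted.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# purely synthetic pressure signal with an emerging high-frequency (stall-precursor-like) component
rng = np.random.default_rng(6)
t = np.arange(3000) * 1e-3
p = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.normal(size=t.size)
p[2000:] += 0.4 * np.sin(2 * np.pi * 40 * t[2000:])

def lag_matrix(x, n_lags):
    """Rows of past samples x[k .. k+n_lags-1] used to predict the next sample x[k+n_lags]."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

X, y = lag_matrix(p, n_lags=25)
split = 1500
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print("one-step-ahead RMSE on the held-out segment:", round(rmse, 4))
```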
Large-scale structure prediction by improved contact predictions and model quality assessment.
Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne
2017-07-15
Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were limited to very big protein families, decreasing their utility. However, recent progress by combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline that uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimations to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that reliably can be identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all the 558 Pfam families are available at http://c3.pcons.net/. All programs used here are freely available. arne@bioinfo.se.
Uncertainties in (E)UV model atmosphere fluxes
NASA Astrophysics Data System (ADS)
Rauch, T.
2008-04-01
Context: During the comparison of synthetic spectra calculated with two NLTE model atmosphere codes, namely TMAP and TLUSTY, we encounter systematic differences in the EUV fluxes due to the treatment of level dissolution by pressure ionization. Aims: In the case of Sirius B, we demonstrate an uncertainty in modeling the EUV flux reliably in order to challenge theoreticians to improve the theory of level dissolution. Methods: We calculated synthetic spectra for hot, compact stars using state-of-the-art NLTE model-atmosphere techniques. Results: Systematic differences may occur due to a code-specific cutoff frequency of the H I Lyman bound-free opacity. This is the case for TMAP and TLUSTY. Both codes predict the same flux level at wavelengths lower than about 1500 Å for stars with effective temperatures (T_eff) below about 30 000 K only, if the same cutoff frequency is chosen. Conclusions: The theory of level dissolution in high-density plasmas, which is available only for hydrogen, should be generalized to all species. In particular, the cutoff frequencies for the bound-free opacities should be defined in order to make predictions of UV fluxes more reliable.
O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite
2012-01-01
Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. It almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
Zhang, Qingqing; Huo, Mengqi; Zhang, Yanling; Qiao, Yanjiang; Gao, Xiaoyan
2018-06-01
High-resolution mass spectrometry (HRMS) provides a powerful tool for the rapid analysis and identification of compounds in herbs. However, the diversity and large differences in the content of the chemical constituents in herbal medicines, especially isomers, are a great challenge for mass spectrometry-based structural identification. In the current study, a new strategy for the structural characterization of potential new phthalide compounds was proposed by isomer structure predictions combined with a quantitative structure-retention relationship (QSRR) analysis using phthalide compounds in Chuanxiong as an example. This strategy consists of three steps. First, the structures of phthalide compounds were reasonably predicted on the basis of the structure features and MS/MS fragmentation patterns: (1) the collected raw HRMS data were preliminarily screened by an in-house database; (2) the MS/MS fragmentation patterns of the analogous compounds were summarized; (3) the reported phthalide compounds were identified, and the structures of the isomers were reasonably predicted. Second, the QSRR model was established and verified using representative phthalide compound standards. Finally, the retention times of the predicted isomers were calculated by the QSRR model, and the structures of these peaks were rationally characterized by matching retention times of the detected chromatographic peaks and the predicted isomers. A multiple linear regression QSRR model in which 6 physicochemical variables were screened was built using 23 phthalide standards. The retention times of the phthalide isomers in Chuanxiong were well predicted by the QSRR model combined with reasonable structure predictions (R² = 0.955). A total of 81 peaks were detected from Chuanxiong and assigned to reasonable structures, and 26 potential new phthalide compounds were structurally characterized. This strategy can improve the identification efficiency and reliability of homologues in complex materials.
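The QSRR step amounts to an ordinary multiple linear regression from a handful of physicochemical descriptors to retention time; a generic sketch with placeholder descriptors and synthetic standards (not the study's data or descriptor set) is given below.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# hypothetical QSRR setup: 23 standards, 6 physicochemical descriptors -> retention time (min)
rng = np.random.default_rng(7)
X = rng.normal(size=(23, 6))                               # placeholder descriptor values
rt = X @ np.array([2.1, 0.8, -0.5, 0.3, 0.0, 1.2]) + 15 + rng.normal(0, 0.4, 23)

qsrr = LinearRegression().fit(X, rt)
rt_loo = cross_val_predict(LinearRegression(), X, rt, cv=LeaveOneOut())
q2 = 1 - np.sum((rt - rt_loo) ** 2) / np.sum((rt - rt.mean()) ** 2)
print("leave-one-out Q^2:", round(q2, 3))

# predicted retention time for a putative isomer, to be matched against detected peaks
isomer = rng.normal(size=(1, 6))
print("predicted RT for the putative isomer (min):", round(float(qsrr.predict(isomer)[0]), 2))
```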
Roy, Kunal; Mitra, Indrani
2011-07-01
Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, and related areas. Validation has been recognized as a very important step in QSAR model development. Because one of the main objectives of QSAR modeling is to predict the activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models, and because QSARs are being used for regulatory decisions, checking the reliability of the models and the confidence of their predictions is a very important aspect, which can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for the design of focused libraries, which may subsequently be screened for the selection of hits. The present review focuses on various metrics used for the validation of predictive QSAR models, together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds, with citation of some recent examples.
Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass
NASA Astrophysics Data System (ADS)
Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.
2018-04-01
Systems of anaerobic digestion should be used for the processing of organic waste. Managing the anaerobic recycling of organic waste requires reliable prediction of biogas production. Development of a mathematical model of the organic waste digestion process allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the anaerobic processing of organic waste studied here, is carried out. The dependences of biogas output and its rate on time are established and may be used to predict the course of anaerobic processing of organic waste.
A Method for Evaluating the Safety Impacts of Air Traffic Automation
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Bonesteel, Charles
1998-01-01
This report describes a methodology for analyzing the safety and operational impacts of emerging air traffic technologies. The approach integrates traditional reliability models of the system infrastructure with models that analyze the environment within which the system operates, and models of how the system responds to different scenarios. Products of the analysis include safety measures such as predicted incident rates, predicted accident statistics, and false alarm rates; and operational availability data. The report demonstrates the methodology with an analysis of the operation of the Center-TRACON Automation System at Dallas-Fort Worth International Airport.
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and are sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
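A minimal FOSM sketch under stated assumptions (a placeholder RUL function, not the paper's lithium-ion battery state-space model): the RUL function is linearized about the input means, and the input covariance is propagated through the numerical gradient to approximate the mean and standard deviation of RUL.

```python
import numpy as np

def rul_model(x):
    # Placeholder RUL model: x = [capacity_fade_rate, load_factor] (assumed inputs).
    capacity_margin = 1.0 - 0.2 * x[1]
    return capacity_margin / x[0]          # cycles until an end-of-life threshold

mu = np.array([2.0e-3, 0.8])               # input means (assumed)
cov = np.diag([2.5e-7, 1.0e-2])            # input covariance (assumed independent)

# First-Order Second-Moment: linearize about the mean, propagate the covariance.
eps = 1e-6
grad = np.array([(rul_model(mu + eps * np.eye(2)[i]) - rul_model(mu)) / eps
                 for i in range(2)])
rul_mean = rul_model(mu)
rul_std = np.sqrt(grad @ cov @ grad)
print(f"RUL ~ {rul_mean:.0f} +/- {rul_std:.0f} cycles (FOSM approximation)")
```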
A predictive framework for evaluating models of semantic organization in free recall
Morton, Neal W; Polyn, Sean M.
2016-01-01
Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization are driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243
Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota
2017-11-25
Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for the development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. To evaluate the reliability of the step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) the relationship between the actual step count and the step count detected by the sensors, 2) the reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and while driving. On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability, with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504, and the relative error ratios of the sensor-detected step count to the actual step count reached ±48.7%-49.4%. The level of agreement between phones of the same model was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. These results suggest the need for appropriate adjustment of step counts measured by the sensors, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation.
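ICC(3,1), the two-way mixed-effects, consistency, single-measurement form, can be computed directly from ANOVA mean squares. The sketch below uses invented step counts from two phones of the same model rather than the study's data.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    ratings: (n_subjects, k_raters) array, e.g. repeated walk tests (rows)
    measured by two phones of the same model (columns)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical sensor-detected step counts from two phones of the same model.
steps = np.array([[612, 608], [590, 597], [640, 636], [575, 580]], dtype=float)
print(round(icc_3_1(steps), 3))
```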
Towards a generalized energy prediction model for machine tools
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan
2017-01-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
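A minimal sketch of the Gaussian Process Regression step using scikit-learn; the feature set, kernel choice, and synthetic energy data are assumptions for illustration and are not the configuration used with the Mori Seiki NVD1500.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical per-operation features: [spindle speed (rpm), feed rate (mm/min),
# depth of cut (mm), material code], with a synthetic energy target in kJ.
rng = np.random.default_rng(1)
X = rng.uniform([1000, 100, 0.5, 0], [6000, 1200, 3.0, 2], size=(200, 4))
energy = 0.02 * X[:, 0] + 0.1 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 5, 200)

# Anisotropic RBF kernel plus a noise term; GP gives mean and uncertainty.
kernel = RBF(length_scale=[1.0] * 4) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, energy)

# Predictive mean and standard deviation define an uncertainty interval per operation.
x_new = np.array([[3500, 600, 1.5, 1]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted energy: {mean[0]:.1f} kJ +/- {1.96 * std[0]:.1f} kJ")
```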
An evidence-based decision assistance model for predicting training outcome in juvenile guide dogs.
Harvey, Naomi D; Craigon, Peter J; Blythe, Simon A; England, Gary C W; Asher, Lucy
2017-01-01
Working dog organisations, such as Guide Dogs, need to regularly assess the behaviour of the dogs they train. In this study we developed a questionnaire-style behaviour assessment completed by training supervisors of juvenile guide dogs aged 5, 8 and 12 months old (n = 1,401), and evaluated aspects of its reliability and validity. Specifically, internal reliability, temporal consistency, construct validity, predictive criterion validity (comparing against later training outcome) and concurrent criterion validity (comparing against a standardised behaviour test) were evaluated. Thirty-nine questions were sourced either from previously published literature or created to meet requirements identified via Guide Dogs staff surveys and staff feedback. Internal reliability analyses revealed seven reliable and interpretable trait scales, named according to the questions within them as: Adaptability; Body Sensitivity; Distractibility; Excitability; General Anxiety; Trainability and Stair Anxiety. Intra-individual temporal consistency of the scale scores between 5-8, 8-12 and 5-12 months was high. All scales except Body Sensitivity showed some degree of concurrent criterion validity. Predictive criterion validity was supported for all seven scales, since associations with training outcome were found at at least one age. Thresholds of z-scores on the scales were identified that were able to distinguish later training outcome, identifying 8.4% of all dogs withdrawn for behaviour and 8.5% of all qualified dogs, with 84% and 85% specificity respectively. The questionnaire assessment was reliable and could detect traits that are consistent within individuals over time, despite juvenile dogs undergoing development during the study period. By applying thresholds to scores produced from the questionnaire, this assessment could prove to be a highly valuable decision-making tool for Guide Dogs. This is the first questionnaire-style assessment of juvenile dogs that has shown value in predicting the training outcome of individual working dogs.
Spread prediction model of continuous steel tube based on BP neural network
NASA Astrophysics Data System (ADS)
Zhai, Jian-wei; Yu, Hui; Zou, Hai-bei; Wang, San-zhong; Liu, Li-gang
2017-07-01
Based on the roll pass geometry and process parameters of a three-roller continuous mandrel rolling mill at a plant, a finite element model was established to simulate the continuous rolling process of seamless steel tube, and the reliability of the finite element model was verified by comparing simulated and measured values of rolling force, wall thickness and outer diameter of the tube. The effects of roller reduction, roller rotation speed and blooming temperature on the spread rule were studied. Based on BP (Back Propagation) neural network technology, a spread prediction model of the continuously rolled tube was established by training on the wall thickness coefficient and spread coefficient of the tube, enabling rapid and accurate prediction of the continuously rolled tube size.
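A minimal sketch of a back-propagation (multilayer perceptron) regressor trained on finite-element results to predict the wall-thickness and spread coefficients; the input ranges and synthetic targets below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical FE-simulation results: inputs are roller reduction (mm),
# roller rotation speed (rad/s) and blooming temperature (deg C); targets are
# the wall-thickness coefficient and the spread coefficient of the tube.
rng = np.random.default_rng(2)
X = rng.uniform([1.0, 5.0, 950.0], [4.0, 12.0, 1100.0], size=(150, 3))
y = np.column_stack([
    1.0 + 0.05 * X[:, 0] - 1e-4 * X[:, 2] + rng.normal(0, 0.01, 150),
    1.0 + 0.03 * X[:, 1] + rng.normal(0, 0.01, 150),
])

# Small back-propagation network (multilayer perceptron) trained on the FE data.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[2.5, 8.0, 1020.0]]))   # predicted [thickness coeff., spread coeff.]
```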
NASA Astrophysics Data System (ADS)
Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.
2014-01-01
The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔKth or Kth) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI) making it beneficial as an aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted at two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6 - 8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective predictions of outcome in severe TBI.
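A minimal sketch of fitting such a three-predictor binary logistic regression with scikit-learn; the data are synthetic and the coefficients are seeded only loosely from the odds ratios quoted above, so this illustrates the model form rather than reproducing the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical prehospital records: [SpO2 >= 90% (0/1), GCS 6-8 (0/1, vs 3-5),
# pupils bilaterally reactive (0/1)]; outcome 1 = good outcome at discharge.
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(200, 3))
# Log-odds assembled from assumed log odds ratios (roughly ln 3.1, ln 5.1, ln 4.4).
logit = -2.0 + 1.15 * X[:, 0] + 1.63 * X[:, 1] + 1.48 * X[:, 2]
y = rng.random(200) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("estimated odds ratios:", np.exp(model.coef_).round(2))
print("P(good outcome | SpO2>=90, GCS 6-8, reactive pupils):",
      round(float(model.predict_proba([[1, 1, 1]])[0, 1]), 2))
```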
Validation Evidence of the Motivation for Teaching Scale in Secondary Education.
Abós, Ángel; Sevil, Javier; Martín-Albo, José; Aibar, Alberto; García-González, Luis
2018-04-10
Grounded in self-determination theory, the aim of this study was to develop a scale with adequate psychometric properties to assess motivation for teaching and to explain some outcomes of secondary education teachers at work. The sample comprised 584 secondary education teachers. Analyses supported the five-factor model (intrinsic motivation, identified regulation, introjected regulation, external regulation and amotivation) and indicated the presence of a continuum of self-determination. Evidence of reliability was provided by Cronbach's alpha, composite reliability and average variance extracted. Multigroup confirmatory factor analyses supported the partial invariance (configural and metric) of the scale in different sub-samples, in terms of gender and type of school. Concurrent validity was analyzed by a structural equation modeling that explained 71% of the work dedication variance and 69% of the boredom at work variance. Work dedication was positively predicted by intrinsic motivation (β = .56, p < .001) and external regulation (β = .29, p < .001) and negatively predicted by introjected regulation (β = -.22, p < .001) and amotivation (β = -.49, p < .001). Boredom at work was negatively predicted by intrinsic motivation (β = -.28, p < .005) and positively predicted by amotivation (β = .68, p < .001). The Motivation for Teaching Scale in Secondary Education (Spanish acronym EME-ES, Escala de Motivación por la Enseñanza en Educación Secundaria) is discussed as a valid and reliable instrument. This is the first specific scale in the work context of secondary teachers that has integrated the five-factor structure together with their dedication and boredom at work.
Hydrologic modeling strategy for the Islamic Republic of Mauritania, Africa
Friedel, Michael J.
2008-01-01
The government of Mauritania is interested in how to maintain hydrologic balance to ensure a long-term stable water supply for minerals-related, domestic, and other purposes. Because of the many complicating and competing natural and anthropogenic factors, hydrologists will perform quantitative analysis with specific objectives and relevant computer models in mind. Whereas various computer models are available for studying water-resource priorities, the ability of these models to provide reliable predictions largely depends on the adequacy of the model-calibration process. Predictive analysis helps evaluate the accuracy and uncertainty associated with simulated dependent variables of the calibrated model. In this report, the hydrologic modeling process is reviewed and a strategy is summarized for future Mauritanian hydrologic modeling studies.
Reliability evaluation of oil pipelines operating in aggressive environment
NASA Astrophysics Data System (ADS)
Magomedov, R. M.; Paizulaev, M. M.; Gebel, E. S.
2017-08-01
In connection with modern, more stringent requirements for environmental protection and safety, developing a comprehensive set of diagnostic services that ensures reliable operation of the gas transportation infrastructure is both obligatory and necessary. Assessment of the technical condition of oil pipelines should be carried out not only to establish the current values of the equipment's technological parameters in operation, but also to predict the dynamics of changes in the physical and mechanical characteristics of the material, the appearance of defects, etc., so as to ensure reliable and safe operation. In this paper, existing Russian and foreign methods for evaluating oil pipeline reliability are reviewed, taking into account one of the main factors leading to the appearance of crevices in the pipeline material and to changes in the shape of its cross-section: corrosion. Without loss of generality, uniform corrosion wear of an initially rectangular cross-section is assumed. As a result, a formula for calculating the probability of failure-free operation is derived. The proposed mathematical model makes it possible to anticipate emergency situations, as well as to determine optimal operating conditions for oil pipelines.
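The abstract mentions, but does not give, the derived formula; the sketch below is only a generic limit-state illustration of the idea (uniform corrosion thins the wall, stress rises, and the failure-free probability is the chance that stress stays below a random strength). The geometry, corrosion rate, and strength distribution are all assumptions, not the authors' values.

```python
import numpy as np
from scipy.stats import norm

def prob_failure_free(t_years, wall0_mm=10.0, rate_mm_yr=0.15,
                      pressure_mpa=6.0, diameter_mm=530.0,
                      strength_mean=360.0, strength_std=25.0):
    """Probability that hoop stress stays below a normally distributed strength
    after t years of uniform corrosion (thin-wall approximation, assumed values)."""
    wall = wall0_mm - rate_mm_yr * t_years            # remaining wall thickness
    stress = pressure_mpa * diameter_mm / (2.0 * wall)  # hoop stress, MPa
    return norm.cdf((strength_mean - stress) / strength_std)

for t in (0, 10, 20, 30):
    print(t, "years:", round(prob_failure_free(t), 4))
```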
A microRNA-based prediction model for lymph node metastasis in hepatocellular carcinoma.
Zhang, Li; Xiang, Zuo-Lin; Zeng, Zhao-Chong; Fan, Jia; Tang, Zhao-You; Zhao, Xiao-Mei
2016-01-19
We developed an efficient microRNA (miRNA) model that can predict the risk of lymph node metastasis (LNM) in hepatocellular carcinoma (HCC). We first evaluated a training cohort of 192 HCC patients after hepatectomy and found five LNM-associated predictive factors: vascular invasion, Barcelona Clinic Liver Cancer stage, miR-145, miR-31, and miR-92a. These five statistically independent factors were used to develop a predictive model. The predictive value of the miRNA-based model was confirmed in a validation cohort of 209 consecutive HCC patients. The prediction model scores LNM risk from 0 to 8, and a cutoff value of 4 was used to distinguish high-risk and low-risk groups. The model sensitivity and specificity were 69.6 and 80.2%, respectively, over 5 years in the validation cohort, and the area under the curve (AUC) for the miRNA-based prognostic model was 0.860. The 5-year positive and negative predictive values of the model in the validation cohort were 30.3 and 95.5%, respectively. Cox regression analysis revealed that the LNM hazard ratio of the high-risk versus low-risk groups was 11.751 (95% CI, 5.110-27.021; P < 0.001) in the validation cohort. In conclusion, the miRNA-based model is reliable and accurate for the early prediction of LNM in patients with HCC.
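The abstract gives the score range (0-8) and the cutoff (4) but not the points assigned to each factor, so the sketch below is only a hypothetical illustration of how such a points-based model is applied in practice.

```python
# Hypothetical point assignments (not from the paper): each adverse factor adds
# points; the total, compared with the cutoff of 4, labels the patient high/low risk.
def lnm_risk_score(vascular_invasion, bclc_advanced, mir145_low, mir31_high, mir92a_high):
    points = (2 * vascular_invasion + 2 * bclc_advanced +
              2 * mir145_low + 1 * mir31_high + 1 * mir92a_high)
    return points, ("high risk" if points >= 4 else "low risk")

print(lnm_risk_score(1, 1, 0, 1, 0))   # -> (5, 'high risk')
```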
Simultaneous Co-Clustering and Classification in Customers Insight
NASA Astrophysics Data System (ADS)
Anggistia, M.; Saefuddin, A.; Sartono, B.
2017-04-01
Building a predictive model on a heterogeneous dataset may lead to several problems, such as imprecise parameter estimates and reduced prediction accuracy. Such problems can be addressed by segmenting the data into relatively homogeneous groups and then building a predictive model for each cluster. The advantage of this strategy is that it usually yields models that are simpler, more interpretable, and more actionable, without any loss in accuracy or reliability. This work concerns a marketing data set that records customer behaviour across products, with several variables describing customer and product attributes. The basic idea of the approach is to combine co-clustering and classification simultaneously. The objective of this research is to analyse customer characteristics across products so that marketing strategies can be implemented precisely.
NASA Astrophysics Data System (ADS)
Buzzelli, Christopher; Doering, Peter H.; Wan, Yongshan; Sun, Detong; Fugate, David
2014-12-01
Variations in freshwater inflow have ecological consequences for estuaries ranging among eutrophication, flushing and transport, and high and low salinity impacts on biota. Predicting the potential effects of the magnitude and composition of inflow on estuaries over a range of spatial and temporal scales requires reliable mathematical models. The goal of this study was to develop and test a model of ecosystem processes with variable freshwater inflow to the sub-tropical Caloosahatchee River Estuary (CRE) in southwest Florida from 2002 to 2009. The modeling framework combined empirically derived inputs of freshwater and materials from the watershed, daily predictions of salinity, a box model for physical transport, and simulation models of biogeochemical and seagrass dynamics. The CRE was split into 3 segments to estimate advective and dispersive transport of water column constituents. Each segment contained a sub-model to simulate changes in the concentrations of organic nitrogen and phosphorus (ON and OP), ammonium (NH4+), nitrate-nitrite (NOx-), ortho-phosphate (PO4-3), phytoplankton chlorophyll a (CHL), and sediment microalgae (SM). The seaward segment also had sub-models for seagrasses (Halodule wrightii and Thalassia testudinum). The model provided realistic predictions of ON in the upper estuary during wet conditions since organic nitrogen is associated with freshwater inflow and low salinity. Although simulated CHL concentrations were variable, the model proved to be a reliable predictor in time and space. While predicted NOx- concentrations were proportional to freshwater inflow, NH4+ was less predictable due to the complexity of internal cycling during times of reduced freshwater inflow. Overall, the model provided a representation of seagrass biomass changes despite the absence of epiphytes, nutrient effects, or sophisticated translocation in the formulation. The model is being used to investigate the relative importance of colored dissolved organic matter (CDOM) vs. CHL in submarine light availability throughout the CRE, assess if reductions in nutrient loads are more feasible by controlling freshwater quantity or N and P concentrations, and explore the role of inflow and flushing on the fates of externally and internally derived dissolved and particulate constituents.
Tan, Christine L; Hassali, Mohamed A; Saleem, Fahad; Shafie, Asrul A; Aljadhey, Hisham; Gan, Vincent B
2015-01-01
(i) To develop the Pharmacy Value-Added Services Questionnaire (PVASQ) using emerging themes generated from interviews. (ii) To establish the reliability and validity of the questionnaire instrument. Using an extended Theory of Planned Behavior as the theoretical model, face-to-face interviews generated salient beliefs about pharmacy value-added services. The PVASQ was constructed initially in English, incorporating the important themes, and later translated into the Malay language with forward and backward translation. Intention (INT) to adopt pharmacy value-added services is predicted by attitudes (ATT), subjective norms (SN), perceived behavioral control (PBC), knowledge and expectations. Using a 7-point Likert-type scale and a dichotomous scale, test-retest reliability (N=25) was assessed by administering the questionnaire instrument twice, one week apart. Internal consistency was measured by Cronbach's alpha, and construct validity between the two administrations was assessed using the kappa statistic and the intraclass correlation coefficient (ICC). Confirmatory Factor Analysis, CFA (N=410), was conducted to assess the construct validity of the PVASQ. The kappa coefficients indicate a moderate to almost perfect strength of agreement between test and retest. The ICC for all scales tested for intra-rater (test-retest) reliability was good. The overall Cronbach's alpha (N=25) is 0.912 and 0.908 for the two time points. The result of the CFA (N=410) showed that most items loaded strongly and correctly onto the corresponding factors. Only one item was eliminated. This study is the first to develop and establish the reliability and validity of the Pharmacy Value-Added Services Questionnaire instrument using the Theory of Planned Behavior as the theoretical model. The translated Malay-language version of the PVASQ is reliable and valid for predicting Malaysian patients' intention to adopt pharmacy value-added services to collect partial medicine supply.
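Cronbach's alpha for one construct can be computed directly from the item score matrix; the sketch below uses invented 7-point responses, not the PVASQ data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of Likert scores for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 7-point responses to a 4-item "Intention" scale.
scores = np.array([[6, 7, 6, 7],
                   [5, 5, 6, 5],
                   [7, 7, 7, 6],
                   [4, 5, 4, 4],
                   [6, 6, 5, 6]])
print(round(cronbach_alpha(scores), 3))
```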
Prediction of Seasonal Climate-induced Variations in Global Food Production
NASA Technical Reports Server (NTRS)
Iizumi, Toshichika; Sakuma, Hirofumi; Yokozawa, Masayuki; Luo, Jing-Jia; Challinor, Andrew J.; Brown, Molly E.; Sakurai, Gen; Yamagata, Toshio
2013-01-01
Consumers, including the poor in many countries, are increasingly dependent on food imports and are therefore exposed to variations in yields, production, and export prices in the major food-producing regions of the world. National governments and commercial entities are paying increased attention to the cropping forecasts of major food-exporting countries as well as to their own domestic food production. Given the increased volatility of food markets and the rising incidence of climatic extremes affecting food production, food price spikes may increase in prevalence in future years. Here we present a global assessment of the reliability of crop failure hindcasts for major crops at two lead times derived by linking ensemble seasonal climatic forecasts with statistical crop models. We assessed the reliability of hindcasts (i.e., retrospective forecasts for the past) of crop yield loss relative to the previous year for two lead times. Pre-season yield predictions employ climatic forecasts and have lead times of approximately 3 to 5 months for providing information regarding variations in yields for the coming cropping season. Within-season yield predictions use climatic forecasts with lead times of 1 to 3 months. Pre-season predictions can be of value to national governments and commercial concerns, complemented by subsequent updates from within-season predictions. The latter incorporate information on the most recent climatic data for the upcoming period of reproductive growth. In addition to such predictions, hindcasts using observations from satellites were performed to demonstrate the upper limit of the reliability of crop forecasting.
NASA Astrophysics Data System (ADS)
Trachtenberg, I.
How a reliability model might be developed with new data from accelerated stress testing, failure mechanisms, process control monitoring, and test structure evaluations is illustrated. The effects of temperature acceleration on operating life are discussed, and test structures that will further accelerate the failure rate are described. Corrosion testing is addressed: the uncoated structure is encapsulated in a variety of mold compounds and subjected to pressure-cooker testing.
Jalalian, Mehrdad; Latiff, Latiffah; Hassan, Syed Tajuddin Syed; Hanachi, Parichehr; Othman, Mohamed
2010-05-01
University students are a target group for blood donor programs. To develop a blood donation culture among university students, it is important to identify the factors that predict their intent to donate blood. This study attempted to develop a valid and reliable measurement tool for assessing the variables in a blood donation behavior model based on the Theory of Planned Behavior (TPB), a commonly used theoretical foundation for social psychology studies. We conducted an elicitation study, in which we determined the commonly held behavioral and normative beliefs about blood donation. We used the results of the elicitation study and a standard format for creating questionnaire items for all constructs of the TPB model to prepare the first draft of the measurement tool. After piloting the questionnaire, we prepared the final draft of the questionnaire to be used in our main study. Examination of internal consistency using Cronbach's alpha coefficient and item-total statistics indicated that the constructs "Intention" and "Self efficacy" had the highest reliability. Removing one item from each of the constructs "Attitude," "Subjective norm," "Self efficacy," or "Behavioral beliefs" can considerably increase the reliability of the measurement tool; however, such action is controversial, especially for the variables "attitude" and "subjective norm." We therefore retain all the items of our first draft questionnaire in our main study to make it a reliable measurement tool.
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Fournier, A.; Hulot, G.
2014-12-01
We investigate the geophysical relevance of low-dimensional models of the geomagnetic dipole field by comparing these models to the signed relative paleomagnetic intensity over the past 2 Myr. The comparison is done via Bayesian statistics, implemented numerically by Monte Carlo (MC) sampling. We consider several MC schemes, as well as two data sets, to show the robustness of our approach with respect to its numerical implementation and to the details of how the data are collected. The data we consider are the Sint-2000 [1] and PADM2M [2] data sets. We consider three stochastic differential equation (SDE) models and one deterministic model. Experiments with synthetic data show that it is feasible for a low-dimensional model to learn the geophysical state from data of only the dipole field, and they reveal the limitations of the low-dimensional models. For example, the G12 model [3] (a deterministic model that generates dipole reversals by crisis-induced intermittency) can only match one of the two important time scales we find in the data. The MC sampling approach also allows us to use the models to make predictions of the dipole field. We assess how reliably dipole reversals can be predicted with our approach by hind-casting five reversals documented over the past 2 Myr. We find that, despite its limitations, G12 can be used to predict reversals reliably, however only with short lead times and over short horizons. The scalar SDE models, on the other hand, are not useful for prediction of dipole reversals. References: [1] Valet, J.P., Maynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [2] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089. [3] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137.
Predicting risk and outcomes for frail older adults: an umbrella review of frailty screening tools
Apóstolo, João; Cooke, Richard; Bobrowicz-Campos, Elzbieta; Santana, Silvina; Marcucci, Maura; Cano, Antonio; Vollenbroek-Hutten, Miriam; Germini, Federico; Holland, Carol
2017-01-01
EXECUTIVE SUMMARY Background A scoping search identified systematic reviews on diagnostic accuracy and predictive ability of frailty measures in older adults. In most cases, research was confined to specific assessment measures related to a specific clinical model. Objectives To summarize the best available evidence from systematic reviews in relation to reliability, validity, diagnostic accuracy and predictive ability of frailty measures in older adults. Inclusion criteria Population Older adults aged 60 years or older recruited from community, primary care, long-term residential care and hospitals. Index test Available frailty measures in older adults. Reference test Cardiovascular Health Study phenotype model, the Canadian Study of Health and Aging cumulative deficit model, Comprehensive Geriatric Assessment or other reference tests. Diagnosis of interest Frailty defined as an age-related state of decreased physiological reserves characterized by an increased risk of poor clinical outcomes. Types of studies Quantitative systematic reviews. Search strategy A three-step search strategy was utilized to find systematic reviews, available in English, published between January 2001 and October 2015. Methodological quality Assessed by two independent reviewers using the Joanna Briggs Institute critical appraisal checklist for systematic reviews and research synthesis. Data extraction Two independent reviewers extracted data using the standardized data extraction tool designed for umbrella reviews. Data synthesis Data were only presented in a narrative form due to the heterogeneity of included reviews. Results Five reviews with a total of 227,381 participants were included in this umbrella review. Two reviews focused on reliability, validity and diagnostic accuracy; two examined predictive ability for adverse health outcomes; and one investigated validity, diagnostic accuracy and predictive ability. In total, 26 questionnaires and brief assessments and eight frailty indicators were analyzed, most of which were applied to community-dwelling older people. The Frailty Index was examined in almost all these dimensions, with the exception of reliability, and its diagnostic and predictive characteristics were shown to be satisfactory. Gait speed showed high sensitivity, but only moderate specificity, and excellent predictive ability for future disability in activities of daily living. The Tilburg Frailty Indicator was shown to be a reliable and valid measure for frailty screening, but its diagnostic accuracy was not evaluated. Screening Letter, Timed-up-and-go test and PRISMA 7 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) demonstrated high sensitivity and moderate specificity for identifying frailty. In general, low physical activity, variously measured, was one of the most powerful predictors of future decline in activities of daily living. Conclusion Only a few frailty measures seem to be demonstrably valid, reliable and diagnostically accurate, and have good predictive ability. Among them, the Frailty Index and gait speed emerged as the most useful in routine care and community settings. However, none of the included systematic reviews provided responses that met all of our research questions on their own and there is a need for studies that could fill this gap, covering all these issues within the same study. Nevertheless, it was clear that no suitable tool for assessing frailty appropriately in emergency departments was identified. PMID:28398987
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1990-01-01
A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.
Assessing the predictive value of the American Board of Family Practice In-training Examination.
Replogle, William H; Johnson, William D
2004-03-01
The American Board of Family Practice In-training Examination (ABFP ITE) is a cognitive examination similar in content to the ABFP Certification Examination (CE). The ABFP ITE is widely used in family medicine residency programs. It was originally developed and intended to be used for the assessment of groups of residents. Despite the lack of empirical support, however, some residency programs are using ABFP ITE scores as individual resident performance indicators. This study's objective was to estimate the positive predictive value of the ABFP ITE for identifying residents at risk for poor performance on the ABFP CE or a subsequent ABFP ITE. We used a normal distribution model for correlated test scores and Monte Carlo simulation to investigate the effect of test reliability (measurement error) on the positive predictive value of the ABFP ITE. The positive predictive value of the composite score was .72. The positive predictive values of the eight specialty subscales ranged from .26 to .57. Only the composite score of the ABFP ITE has acceptable positive predictive value to be used as part of a comprehensive resident evaluation system. The ABFP ITE specialty subscales do not have sufficient positive predictive value or reliability to warrant use as performance indicators.
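A minimal sketch of the simulation idea: a latent ability is shared between the ITE and CE, both observed scores carry measurement error governed by an assumed reliability, and the positive predictive value is the fraction of flagged residents who actually perform poorly. The reliability value and the 10% flagging/failure thresholds are assumptions, not the study's figures.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
reliability = 0.80                              # assumed score reliability

# Observed scores = true ability plus independent measurement error.
ability = rng.normal(size=n)
ite = np.sqrt(reliability) * ability + np.sqrt(1 - reliability) * rng.normal(size=n)
ce = np.sqrt(reliability) * ability + np.sqrt(1 - reliability) * rng.normal(size=n)

flagged = ite < np.quantile(ite, 0.10)          # residents flagged "at risk" by the ITE
poor_ce = ce < np.quantile(ce, 0.10)            # residents who actually do poorly on the CE
ppv = (flagged & poor_ce).sum() / flagged.sum()
print("positive predictive value:", round(float(ppv), 2))
```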
NASA Astrophysics Data System (ADS)
Brochero, D.; Peña, J.; Anctil, F.; Boucher, M. A.; Nogales, J.; Reyes, N.
2016-12-01
The impacts of floods in Colombia during 2010 and 2011, as a result of ENSO in its cold phase (La Niña), marked a milestone in Colombian politics. In the La Mojana region the toll was around 100,000 people left homeless and 3 km2 of flooded crops. We model the upstream basin of La Mojana (3,600 km2, with mean annual precipitation ranging from 1,000 mm in the valleys to 4,500 mm in the mountains). A forecasting system with a lead time of at least three days was judged prudent. This basin receives streamflow that is highly regulated by multiple reservoirs, which we model with recurrent neural networks from 1 to 3 days ahead. For hydrological modeling we use the GR4J, HBV, and SIMHYD models; records of daily precipitation, temperature, and streamflow; and 110 prediction scenarios of precipitation and temperature from Canada, USA, Brazil, and Europe extracted from the TIGGE database (MEPS). The calibration period runs from January 2004 to August 2011, and validation covers September to December 2011, with the MEPS as meteorological input. We analysed four alternatives for the 3-day Hydrological Ensemble Prediction System (HEPS) calibration: 1) only the GR4J model and observed values; 2) as 1 but with HBV and SIMHYD included; 3) simultaneous optimization of the three hydrological models based on reliability maximisation and CRPS minimisation using multiobjective calibration, with observed and forecasted temperature and precipitation from the MEPS; and 4) as 3 but adding daily streamflow data assimilation. Results show that the use of multiple hydrological models is clearly advantageous, and even more so is the simultaneous optimization of the hydrological models directly in the probabilistic context. The MAE on the reliability diagram (MAE-RD) is 43%, 27%, 17% and 15%, respectively, for the four alternatives. Regarding the CRPS, results show that the probabilistic prediction improves on the deterministic estimate based on the daily mean HEPS scenario, although the improvement in reliability is not necessarily reflected in the CRPS for the four alternatives: 4.3, 3.06, 9.98, and 3.94, values that accompany mean-scenario Nash-Sutcliffe efficiencies of 0.93, 0.96, 0.51, and 0.93, respectively. In conclusion, alternative 4 reaches a good compromise between deterministic and probabilistic performance (NS = 0.93 and MAE-RD = 15%).
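For reference, the CRPS quoted above can be computed from an ensemble with the standard sample estimator CRPS = E|X − y| − ½E|X − X′|; the sketch below uses invented ensemble members and is not tied to the study's data.

```python
import numpy as np

def ensemble_crps(members, obs):
    """Sample-based CRPS for one forecast: E|X - y| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Hypothetical 3-day-ahead streamflow ensemble (m3/s) against one observation.
print(round(ensemble_crps([120, 135, 150, 160, 142], obs=138.0), 2))
```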
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
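As a minimal illustration of the implicit Box-Cox (BC) route, residuals are computed in transformed space, where their spread depends less on flow magnitude; the explicit LM alternative would instead model the residual standard deviation as a linear function of the simulated flow. The flow values and the lambda used here are invented.

```python
import numpy as np

def boxcox(q, lam=0.3):
    """Box-Cox transform used to stabilise the variance of streamflow residuals."""
    return (q ** lam - 1.0) / lam if lam != 0 else np.log(q)

# Hypothetical simulated vs. observed daily flows (m3/s): residuals computed in
# transformed space are closer to homoscedastic than the raw residuals.
sim = np.array([12.0, 35.0, 80.0, 210.0, 540.0])
obs = np.array([10.5, 40.2, 70.3, 260.0, 480.0])
raw_resid = obs - sim                      # spread grows with flow magnitude
bc_resid = boxcox(obs) - boxcox(sim)       # spread is much more uniform
print(raw_resid.round(1), bc_resid.round(3))
```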
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
Accurate prediction of the NDVI (Normalized Difference Vegetation Index) is critically important, as it helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
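The abstract does not spell out the exact entropy-weight formulation used, so the sketch below shows one common variant: each model's calibration-period accuracy series is normalized, its information entropy is computed, and the combination weights are proportional to one minus that entropy. All numbers are invented.

```python
import numpy as np

def entropy_weights(accuracy):
    """One common entropy-weight formulation. Columns are candidate models,
    rows are calibration periods, entries are a positive accuracy measure."""
    p = accuracy / accuracy.sum(axis=0)                 # accuracy share per period
    n = accuracy.shape[0]
    e = -(p * np.log(p)).sum(axis=0) / np.log(n)        # entropy of each model's series
    d = 1.0 - e                                         # degree of differentiation
    return d / d.sum()

# Hypothetical per-period accuracies (e.g. 1 - |relative error|) for MLR, ANN, SVM.
acc = np.array([[0.82, 0.93, 0.90],
                [0.78, 0.95, 0.88],
                [0.85, 0.91, 0.92],
                [0.80, 0.94, 0.89]])
w = entropy_weights(acc)
ndvi_forecasts = np.array([0.42, 0.45, 0.44])           # individual model predictions
print(w.round(3), round(float(w @ ndvi_forecasts), 3))  # weights and combined forecast
```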
Test-Retest Reliability and Predictive Validity of the Implicit Association Test in Children
ERIC Educational Resources Information Center
Rae, James R.; Olson, Kristina R.
2018-01-01
The Implicit Association Test (IAT) is increasingly used in developmental research despite minimal evidence of whether children's IAT scores are reliable across time or predictive of behavior. When test-retest reliability and predictive validity have been assessed, the results have been mixed, and because these studies have differed on many…
Determination of Turboprop Reduction Gearbox System Fatigue Life and Reliability
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Lewicki, David G.; Savage, Michael; Vlcek, Brian L.
2007-01-01
Two computational models to determine the fatigue life and reliability of a commercial turboprop gearbox are compared with each other and with field data. These models are (1) Monte Carlo simulation of randomly selected lives of the individual bearings and gears comprising the system and (2) a two-parameter Weibull distribution function for the bearings and gears comprising the system, using strict-series system reliability to combine the calculated individual component lives in the gearbox. The Monte Carlo simulation included the virtual testing of 744,450 gearboxes. Two sets of field data were obtained from 64 gearboxes that were first-run to removal for cause, were refurbished and placed back in service, and then were second-run until removal for cause. A series of equations were empirically developed from the Monte Carlo simulation to determine the statistical variation in predicted life and Weibull slope as a function of the number of gearboxes failed. The resultant L10 life from the field data was 5,627 hr. From strict-series system reliability, the predicted L10 life was 774 hr. From the Monte Carlo simulation, the median value for the L10 gearbox lives equaled 757 hr; half of the gearbox L10 lives will be less than this value and the other half more. The resultant L10 life of the second-run (refurbished) gearboxes was 1,334 hr. The apparent load-life exponent p for the roller bearings is 5.2. Were the bearing lives to be recalculated with a load-life exponent p equal to 5.2, the predicted L10 life of the gearbox would be equal to the actual life obtained in the field. The component failure distribution of the gearbox from the Monte Carlo simulation was nearly identical to that using the strict-series system reliability analysis, proving the compatibility of these methods.
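A minimal sketch of the strict-series combination: the system survival probability is the product of the component Weibull survival probabilities, and the system L10 life is the time at which that product falls to 0.90. The component characteristic lives and slopes below are placeholders, not values from the study.

```python
import numpy as np

def system_l10_strict_series(etas, betas, target=0.90):
    """Strict-series system reliability: R_sys(t) = prod_i exp[-(t/eta_i)**beta_i].
    Returns the life t at which system reliability drops to `target` (L10 for 0.90)."""
    etas = np.asarray(etas, dtype=float)
    betas = np.asarray(betas, dtype=float)

    def r_sys(t):
        return np.exp(-np.sum((t / etas) ** betas))

    lo, hi = 1.0, 1.0e6                      # bracketing lives in hours
    for _ in range(100):                     # bisection on the monotone R_sys(t)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r_sys(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical characteristic lives (hr) and Weibull slopes for bearings and gears.
etas = [9000, 12000, 15000, 20000, 25000]
betas = [1.5, 1.5, 1.2, 2.0, 2.5]
print(round(system_l10_strict_series(etas, betas), 0), "hr system L10 life")
```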
Catchment-scale groundwater recharge and vegetation water use efficiency
NASA Astrophysics Data System (ADS)
Troch, P. A. A.; Dwivedi, R.; Liu, T.; Meira, A.; Roy, T.; Valdés-Pineda, R.; Durcik, M.; Arciniega, S.; Brena-Naranjo, J. A.
2017-12-01
Precipitation undergoes a two-step partitioning when it falls on the land surface. At the land surface and in the shallow subsurface, rainfall or snowmelt can either runoff as infiltration/saturation excess or quick subsurface flow. The rest will be stored temporarily in the root zone. From the root zone, water can leave the catchment as evapotranspiration or percolate further and recharge deep storage (e.g. fractured bedrock aquifer). Quantifying the average amount of water that recharges deep storage and sustains low flows is extremely challenging, as we lack reliable methods to quantify this flux at the catchment scale. It was recently shown, however, that for semi-arid catchments in Mexico, an index of vegetation water use efficiency, i.e. the Horton index (HI), could predict deep storage dynamics. Here we test this finding using 247 MOPEX catchments across the conterminous US, including energy-limited catchments. Our results show that the observed HI is indeed a reliable predictor of deep storage dynamics in space and time. We further investigate whether the HI can also predict average recharge rates across the conterminous US. We find that the HI can reliably predict the average recharge rate, estimated from the 50th percentile flow of the flow duration curve. Our results compare favorably with estimates of average recharge rates from the US Geological Survey. Previous research has shown that HI can be reliably estimated based on aridity index, mean slope and mean elevation of a catchment (Voepel et al., 2011). We recalibrated Voepel's model and used it to predict the HI for our 247 catchments. We then used these predicted values of the HI to estimate average recharge rates for our catchments, and compared them with those estimated from observed HI. We find that the accuracies of our predictions based on observed and predicted HI are similar. This provides an estimation method of catchment-scale average recharge rates based on easily derived catchment characteristics, such as climate and topography, and free of discharge measurements.
VerSeDa: vertebrate secretome database
Cortazar, Ana R.; Oguiza, José A.
2017-01-01
Based on the current tools, de novo secretome (full set of proteins secreted by an organism) prediction is a time consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through the classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, thus, saving an important amount of time spent in the prediction of protein features that can be retrieved from this repository directly. Database URL: VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php PMID:28365718
NASA Astrophysics Data System (ADS)
To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian
2008-09-01
Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability needs to be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence, predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full-system testing and only limited sub-system testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion, material instability due to radiation and thermo-mechanical cycling, and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans the length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics and continuum mechanics software, and state-of-the-art model identification strategies in which atomistic properties are calibrated by quantum calculations. We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain where they are needed, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified. A UQ methodology that determines the quality of predictive information available from experimental measurements, and packages the information in a form suitable for UQ at various scales, needs to be developed. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed.
The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power, advanced verification, validation, and UQ methodologies.
Discharge prediction in the Upper Senegal River using remote sensing data
NASA Astrophysics Data System (ADS)
Ceccarini, Iacopo; Raso, Luciano; Steele-Dunne, Susan; Hrachowitz, Markus; Nijzink, Remko; Bodian, Ansoumana; Claps, Pierluigi
2017-04-01
The Upper Senegal River, West Africa, is a poorly gauged basin. Nevertheless, discharge predictions are required in this river for the optimal operation of the downstream Manantali reservoir, flood forecasting, development plans for the entire basin and studies for adaptation to climate change. Despite the need for reliable discharge predictions, currently available rainfall-runoff models for this basin perform poorly, particularly during extreme regimes, both low-flow and high-flow. In this research we develop a rainfall-runoff model that combines remote-sensing input data and a-priori knowledge of catchment physical characteristics. This semi-distributed model is based on conceptual numerical descriptions of hydrological processes at the catchment scale. Because of the lack of reliable input data from ground observations, we use Tropical Rainfall Measuring Mission (TRMM) remote-sensing data for precipitation and the Global Land Evaporation Amsterdam Model (GLEAM) for terrestrial potential evaporation. The model parameters are selected by a combination of calibration against observed output, evaluated over a large set of hydrological signatures, and a-priori knowledge of the catchment. The Generalized Likelihood Uncertainty Estimation (GLUE) method was used to identify the most likely range within which the parameter sets lie. Analysis of different experiments enhances our understanding of the added value of distributed remote-sensing data and a-priori information in rainfall-runoff modelling. Results of this research will be used for decision making at different scales, contributing to a rational use of water resources in this river.
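For readers unfamiliar with GLUE, the sketch below illustrates the general procedure on a toy bucket model with synthetic rainfall: sample parameter sets from prior ranges, score each against observations with a likelihood measure, and retain the behavioural sets. The toy model, parameter ranges, and acceptance threshold are assumptions for illustration, not the semi-distributed model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_runoff(rain, k, s_max):
    """Very simple bucket model standing in for a conceptual rainfall-runoff model."""
    storage, flow = 0.0, []
    for r in rain:
        storage = min(storage + r, s_max)
        q = k * storage
        storage -= q
        flow.append(q)
    return np.array(flow)

rain = rng.gamma(2.0, 3.0, size=200)                          # synthetic rainfall forcing
q_obs = toy_runoff(rain, k=0.3, s_max=40.0) + rng.normal(0, 0.2, 200)

# GLUE: sample parameter sets from prior ranges, score each with a likelihood
# measure, and keep only the "behavioural" sets above an acceptance threshold.
n_sets = 5000
k_s = rng.uniform(0.05, 0.8, n_sets)
s_s = rng.uniform(10.0, 100.0, n_sets)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency used here as the likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(toy_runoff(rain, k, s), q_obs) for k, s in zip(k_s, s_s)])
behavioural = scores > 0.7                                    # assumed threshold
print("behavioural sets:", behavioural.sum())
print("k range:", k_s[behavioural].min().round(3), "-", k_s[behavioural].max().round(3))
```

The retained parameter ranges then describe the uncertainty carried forward into the discharge predictions.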
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, namely neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation to assess prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
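A minimal sketch of the two-stage idea (LASSO variable selection followed by a classifier, evaluated with fivefold cross-validation) is shown below using scikit-learn on synthetic data; the feature set, class balance, and hyperparameters are illustrative assumptions, not the TEJ variables or the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the GCD/NGCD financial-ratio data (172 firms, 30 ratios).
X, y = make_classification(n_samples=172, n_features=30, n_informative=8,
                           weights=[0.72, 0.28], random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    # Stage 1: LASSO keeps only variables with non-zero coefficients.
    ("select", SelectFromModel(LassoCV(cv=5, random_state=0))),
    # Stage 2: SVM classifier on the reduced variable set.
    ("svm", SVC(kernel="rbf", C=1.0)),
])

acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"fivefold accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```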
Landscape capability predicts upland game bird abundance and occurrence
Loman, Zachary G.; Blomberg, Erik J.; DeLuca, William; Harrison, Daniel J.; Loftin, Cyndy; Wood, Petra B.
2017-01-01
Landscape capability (LC) models are a spatial tool with potential applications in conservation planning. We used survey data to validate LC models as predictors of occurrence and abundance at broad and fine scales for American woodcock (Scolopax minor) and ruffed grouse (Bonasa umbellus). Landscape capability models were reliable predictors of occurrence but were less indicative of relative abundance at route (11.5–14.6 km) and point scales (0.5–1 km). As predictors of occurrence, LC models had high sensitivity (0.71–0.93) and were accurate (0.71–0.88) and precise (0.88 and 0.92 for woodcock and grouse, respectively). Models did not predict point-scale abundance independent of the ability to predict occurrence of either species. The LC models are useful predictors of patterns of occurrences in the northeastern United States, but they have limited utility as predictors of fine-scale or route-specific abundances.
Utility of the PRE-DELIRIC delirium prediction model in a Scottish ICU cohort.
Paton, Lia; Elliott, Sara; Chohan, Sanjiv
2016-08-01
The PREdiction of DELIRium for Intensive Care (PRE-DELIRIC) model reliably predicts at 24 h the development of delirium during intensive care admission. However, the model does not take account of alcohol misuse, which has a high prevalence in Scottish intensive care patients. We used the PRE-DELIRIC model to calculate the risk of delirium for patients in our ICU from May to July 2013. These patients were screened for delirium on each day of their ICU stay using the Confusion Assessment Method for ICU (CAM-ICU). Outcomes were ascertained from the national ICU database. In the 39 patients screened daily, the risk of delirium given by the PRE-DELIRIC model was positively associated with prevalence of delirium, length of ICU stay and mortality. The PRE-DELIRIC model can therefore be usefully applied to a Scottish cohort with a high prevalence of substance misuse, allowing preventive measures to be targeted.
Preliminary study of soil permeability properties using principal component analysis
NASA Astrophysics Data System (ADS)
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from Indonesian soils, which suggests that these permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites comprising 91 samples obtained from the Batanghari Watershed. Findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model, which has potential for further development.
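As a rough sketch of the PCA step, the code below standardizes a synthetic sample-by-parameter matrix and inspects explained variance and loadings; the data are placeholders for the 16 measured soil parameters, not the Batanghari measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# 91 samples x 16 soil parameters (synthetic stand-in for the field data).
X = rng.normal(size=(91, 16))

Z = StandardScaler().fit_transform(X)          # PCA is scale-sensitive, so standardize first
pca = PCA().fit(Z)

print("explained variance ratios:", np.round(pca.explained_variance_ratio_[:5], 3))

# Loadings show which original parameters dominate each component; parameters
# loading strongly on the same components as permeability are candidates for
# a regression-based permeability model.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("loadings shape (parameters x components):", loadings.shape)
```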
The Shutdown Dissociation Scale (Shut-D)
Schalinski, Inga; Schauer, Maggie; Elbert, Thomas
2015-01-01
The evolutionary model of the defense cascade by Schauer and Elbert (2010) provides a theoretical frame for a short interview to assess problems underlying and leading to the dissociative subtype of posttraumatic stress disorder. Based on known characteristics of the defense stages “fright,” “flag,” and “faint,” we designed a structured interview to assess the vulnerability for the respective types of dissociation. Most of the scales that assess dissociative phenomena are designed as self-report questionnaires. Their items are usually selected based on heuristic considerations rather than a theoretical model and thus include anything from minor dissociative experiences to major pathological dissociation. The shutdown dissociation scale (Shut-D) was applied in several studies in patients with a history of multiple traumatic events and different disorders that have previously been shown to be prone to symptoms of dissociation. The goal of the present investigation was to obtain psychometric characteristics of the Shut-D (including factor structure, internal consistency, retest reliability, and predictive, convergent and criterion-related concurrent validity). A total of 225 patients and 68 healthy controls were assessed. The Shut-D appears to have sufficient internal reliability, excellent retest reliability, high convergent validity, and satisfactory predictive validity, while the summed score of the scale reliably separates patients with exposure to trauma (in different diagnostic groups) from healthy controls. The Shut-D is a brief structured interview for assessing the vulnerability to dissociate as a consequence of exposure to traumatic stressors. The scale demonstrates high-quality psychometric properties and may be useful for researchers and clinicians in assessing shutdown dissociation as well as in predicting the risk of dissociative responding. PMID:25976478
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect, i.e., detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
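The proposed model itself is not reproduced here, but the sketch below fits the basic NHPP mean value function m(t) = a(1 − e^(−bt)) (Goel-Okumoto), which is the starting point that such SRGMs extend with testing coverage and fault removal efficiency; the cumulative fault counts are illustrative, not the paper's data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative faults by time t."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative cumulative fault counts over 20 testing weeks (assumed data).
t = np.arange(1, 21)
m_obs = np.array([5, 9, 14, 17, 21, 24, 26, 29, 30, 32,
                  34, 35, 36, 37, 38, 38, 39, 39, 40, 40])

(a_hat, b_hat), _ = curve_fit(mean_value, t, m_obs, p0=[50.0, 0.1])
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print("predicted cumulative faults by week 30:", mean_value(30, a_hat, b_hat).round(1))
```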
Predicting bone strength with ultrasonic guided waves
Bochud, Nicolas; Vallet, Quentin; Minonzio, Jean-Gabriel; Laugier, Pascal
2017-01-01
Recent bone quantitative ultrasound approaches exploit the multimode waveguide response of long bones for assessing properties such as cortical thickness and stiffness. Clinical applications remain, however, challenging, as the impact of soft tissue on guided waves characteristics is not fully understood yet. In particular, it must be clarified whether soft tissue must be incorporated in waveguide models needed to infer reliable cortical bone properties. We hypothesize that an inverse procedure using a free plate model can be applied to retrieve the thickness and stiffness of cortical bone from experimental data. This approach is first validated on a series of laboratory-controlled measurements performed on assemblies of bone- and soft tissue mimicking phantoms and then on in vivo measurements. The accuracy of the estimates is evaluated by comparison with reference values. To further support our hypothesis, these estimates are subsequently inserted into a bilayer model to test its accuracy. Our results show that the free plate model allows retrieving reliable waveguide properties, despite the presence of soft tissue. They also suggest that the more sophisticated bilayer model, although it is more precise to predict experimental data in the forward problem, could turn out to be hardly manageable for solving the inverse problem. PMID:28256568
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the planned error-burst analyses. The pitch axis problem provides data for use in constructing a model to predict the reliability of software in systems with feedback. The study is undertaken to find means to perform reliability evaluations of flight control software.
Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Koch, M. B.; Beck, F. B.; Cockrell, C. R.
1992-01-01
Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.
TWT transmitter fault prediction based on ANFIS
NASA Astrophysics Data System (ADS)
Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen
2017-11-01
Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of TWT cathode performance is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and training an adaptive neuro-fuzzy inference system (ANFIS), this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.
2012-01-01
In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration is required at multiple locations along the Santa Cruz River to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and evaluate the effect of multi-gage calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of flow predictions in the Santa Cruz watershed. The parameters to which flow was most sensitive were found to be the curve number (CN2), soil evaporation compensation coefficient (ESCO), threshold water depth in the shallow aquifer for return flow to occur (GWQMN), base flow alpha factor (Alpha_Bf), and effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was established with a single calibration at the watershed outlet, flow predictions at the other monitoring gages were inaccurate. This study emphasizes the importance of multi-gage calibration for developing a reliable watershed model in arid and semiarid environments. The developed model, with further calibration of water quality parameters, will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.
Aggregation Trade Offs in Family Based Recommendations
NASA Astrophysics Data System (ADS)
Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac
Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fall back when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
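A compact sketch of the two aggregation strategies compared above follows, using a toy family rating matrix and a deliberately trivial predictor; both the data and the predictor are assumptions chosen only to show where the aggregation happens, not the eHealth portal logs or the study's recommender.

```python
import numpy as np

# Rows: users in one family, columns: items; NaN = not yet browsed/rated.
ratings = np.array([[5.0, 3.0, np.nan, 1.0],
                    [4.0, np.nan, 4.0, 1.0],
                    [np.nan, 2.0, 5.0, 2.0]])

def predict_from_profile(profile):
    """Trivial predictor: fill unknown items with the profile's mean rating."""
    filled = profile.copy()
    filled[np.isnan(filled)] = np.nanmean(profile)
    return filled

# Strategy 1: aggregate individual profiles into one group model, then predict.
group_profile = np.nanmean(ratings, axis=0)
pred_group_model = predict_from_profile(group_profile)

# Strategy 2: predict for each individual, then aggregate the predictions.
pred_aggregated = np.mean([predict_from_profile(r) for r in ratings], axis=0)

print("aggregated group model :", np.round(pred_group_model, 2))
print("aggregated predictions :", np.round(pred_aggregated, 2))
```

The two strategies generally give different scores for unseen items, which is exactly the trade-off the study evaluates against the browsing logs.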
Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.
2001-01-01
Issues and design requirements for environmental barrier coating (EBC)/thermal barrier coating (TBC) life that are general and those specific to the NASA Ultra-Efficient Engine Technology (UEET) development program are described. The current state and trends of research, the methods in vogue for failure analysis, and the long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach based on oxidation kinetics described by Arrhenius theory is proposed to develop a life prediction model for EBC/TBC systems. A stochastic process-based reliability approach that includes physical variables such as gas pressure, temperature, velocity, moisture content, crack density, oxygen content, etc., is suggested. Benefits of the reliability-based approach are also discussed in the report.
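As a hedged illustration of the Arrhenius-based reasoning mentioned above (not the proposed EBC/TBC model), the sketch below scales an oxidation-limited coating life from a test temperature to a service temperature using k = A·exp(−Ea/RT); the activation energy, temperatures, and test life are assumed values.

```python
import numpy as np

R = 8.314            # J/(mol*K), gas constant
Ea = 250e3           # J/mol, assumed activation energy for the oxidation reaction
T_test, T_service = 1500.0, 1300.0   # K, assumed test and service temperatures
life_test_hours = 300.0              # assumed measured life at the test temperature

# Arrhenius rate ratio: life scales inversely with the oxidation rate constant,
# so the acceleration factor is k(T_test) / k(T_service).
accel = np.exp(-Ea / (R * T_test)) / np.exp(-Ea / (R * T_service))
life_service_hours = life_test_hours * accel
print(f"acceleration factor: {accel:.1f}")
print(f"estimated service life: {life_service_hours:.0f} h")
```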
Testing the reliability of ice-cream cone model
NASA Astrophysics Data System (ADS)
Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui
2015-04-01
The properties of coronal mass ejections (CMEs) are important not only to the physics of the events themselves but also to space-weather prediction. Several models (such as the cone model and the GCS model) have been proposed to remove projection effects from the properties observed by spacecraft. From SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full-halo coronal mass ejections) within the 24th solar cycle up to July 2012 using the ice-cream cone model. Because 3D parameters obtained from multi-satellite, multi-angle CME observations have higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those from the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.
Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model
ERIC Educational Resources Information Center
de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.
2011-01-01
Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…
Validation of NE-TWIGS for tolerant hardwood stands in Ontario
Jacek Bankowski; Daniel C. Dey; Eric Boysen; Murray Woods; Jim Rice
1996-01-01
The individual-tree, distance-independent stand growth simulator NE-TWIGS has been tested for Ontario's tolerant hardwood stands using data from long-term permanent sample plots. NE-TWIGS provides reliable short-term (5-year) predictions of stand basal area (modelling efficiency from 77% to 99%), but in longer projections the efficiency of the model drops...
Co-Attention Based Neural Network for Source-Dependent Essay Scoring
ERIC Educational Resources Information Center
Zhang, Haoran; Litman, Diane
2018-01-01
This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. Also, this paper shows that the co-attention based neural network model provides reliable score prediction of…
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly, and these sources often dominate component-level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to inform the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system-level test data or operational data. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.
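To make the caution concrete, here is a minimal sketch of the kind of bottom-up estimate the paper questions: a series-system reliability computed from constant component failure rates under an exponential model. The component names and rates are illustrative placeholders, not MIL-HDBK-217F values.

```python
import math

# Assumed component failure rates in failures per million hours (illustrative only).
lambda_fpmh = {"valve": 12.0, "controller": 4.5, "sensor": 7.2, "harness": 1.1}

mission_hours = 500.0
lam_total = sum(lambda_fpmh.values()) * 1e-6     # convert to failures per hour

# Exponential model: series-system reliability over the mission duration.
reliability = math.exp(-lam_total * mission_hours)
print(f"predicted mission reliability: {reliability:.4f}")
print(f"predicted probability of failure: {1 - reliability:.4f}")
# Integration and workmanship risks are not captured by these component rates,
# which is the source of underestimation the paper warns about.
```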
[Spatial distribution prediction of surface soil Pb in a battery contaminated site].
Liu, Geng; Niu, Jun-Jie; Zhang, Chao; Zhao, Xin; Guo, Guan-Lin
2014-12-01
In order to enhance the reliability of risk estimation and to improve the accuracy of pollution scope determination in a battery-contaminated site with the soil characteristic pollutant Pb, four spatial interpolation models, including a combination prediction model (OK(LG) + TIN), a kriging model (OK(BC)), an inverse distance weighting model (IDW), and a spline model, were employed to compare their effects on the spatial distribution and pollution assessment of soil Pb. The results showed that Pb concentration varied significantly and the data were severely skewed. The coefficient of variation was higher in local regions of the site. OK(LG) + TIN was found to be more accurate than the other three models in predicting the actual pollution situation of the contaminated site. The prediction accuracy of the other models was lower, owing to the differing principles of the models and the characteristics of the data. The interpolation results of OK(BC), IDW and Spline could not reflect the detailed characteristics of seriously contaminated areas and were not suitable for mapping and spatial distribution prediction of soil Pb in this site. This study provides useful references for defining the remediation boundary and making remediation decisions for contaminated sites.
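A short sketch of inverse distance weighting, one of the four interpolators compared, applied to made-up Pb sample points; the coordinates, concentrations, and power parameter are assumptions, not the site data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Sampled locations (x, y) in metres and measured Pb concentrations (mg/kg), synthetic.
xy = rng.uniform(0, 100, size=(30, 2))
pb = rng.lognormal(mean=5.0, sigma=0.8, size=30)     # skewed, like the site data

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted estimate at each query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-9)                          # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

# Estimate Pb on a coarse grid, e.g. for mapping a remediation boundary.
gx, gy = np.meshgrid(np.linspace(0, 100, 11), np.linspace(0, 100, 11))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pb_grid = idw(xy, pb, grid)
print("max interpolated Pb:", pb_grid.max().round(1), "mg/kg")
```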
An, Qingyu; Yao, Wei; Wu, Jun
2015-03-01
This study describes our development of a model to predict the incidence of clinically diagnosed dysentery in Dalian, Liaoning Province, China, using time series analysis. The model was developed using the seasonal autoregressive integrated moving average (SARIMA). Spearman correlation analysis was conducted to explore the relationship between meteorological variables and the incidence of clinically diagnosed dysentery. The meteorological variables which significantly correlated with the incidence of clinically diagnosed dysentery were then used as covariables in the model, which incorporated the monthly incidence of clinically diagnosed dysentery from 2005 to 2010 in Dalian. After model development, a simulation was conducted for the year 2011 and the results of this prediction were compared with the real observed values. The model performed best when the temperature data for the preceding month was used to predict clinically diagnosed dysentery during the following month. The developed model was effective and reliable in predicting the incidence of clinically diagnosed dysentery for most but not all months, and may be a useful tool for dysentery disease control and prevention, but further studies are needed to fine tune the model.
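A minimal sketch of a SARIMA model with a lagged-temperature covariate using statsmodels is shown below; the synthetic monthly series and the order (1,0,1)(1,1,1,12) are illustrative assumptions rather than the fitted Dalian model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
months = pd.date_range("2005-01", periods=72, freq="MS")
temp = 10 + 12 * np.sin(2 * np.pi * (np.arange(72) % 12) / 12) + rng.normal(0, 1, 72)
cases = 20 + 0.8 * np.roll(temp, 1) + rng.normal(0, 2, 72)   # incidence lags temperature

df = pd.DataFrame({"cases": cases, "temp_prev_month": np.roll(temp, 1)}, index=months)

# Fit on the first five years, then forecast the remaining twelve months.
train, test = df.iloc[:60], df.iloc[60:]
model = SARIMAX(train["cases"], exog=train[["temp_prev_month"]],
                order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=len(test), exog=test[["temp_prev_month"]])
print(forecast.round(1))
```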
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
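A hedged sketch of the Box-Cox residual-error idea follows: transform simulated and observed flows with a fixed λ, treat residuals as homoscedastic Gaussian in the transformed space, and map predictive limits back to flow space. The synthetic flows and λ = 0.2 echo the ranges discussed above but are not the study's data or code.

```python
import numpy as np

def boxcox(q, lam=0.2):
    """Box-Cox transform; lam = 0 corresponds to the log scheme."""
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

def inv_boxcox(z, lam=0.2):
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

rng = np.random.default_rng(4)
q_sim = rng.gamma(2.0, 5.0, size=1000)                       # simulated daily flows
q_obs = q_sim * np.exp(rng.normal(0, 0.3, size=1000))        # heteroscedastic "observations"

# Residuals in the transformed space are treated as i.i.d. Gaussian.
resid = boxcox(q_obs) - boxcox(q_sim)
sigma = resid.std()

# 90% predictive limits for a new simulated flow, mapped back to flow space.
q_new = 12.0
lower = inv_boxcox(boxcox(q_new) - 1.645 * sigma)
upper = inv_boxcox(boxcox(q_new) + 1.645 * sigma)
print(f"90% predictive interval for q_sim={q_new}: [{lower:.1f}, {upper:.1f}]")
```

The interval widens with flow magnitude once mapped back, which is how the transformation captures the heteroscedasticity described above.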
Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore
2009-01-01
Currently, the assessment of sperm function in a raw or processed semen sample cannot reliably predict sperm ability to withstand freezing and thawing procedures, in vivo fertility, or assisted reproductive biotechnology (ART) outcomes. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability, and hence blastocyst output, in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). To assess the predictive value of viability, computer-assisted sperm analysis (CASA) motility parameters and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output after an in vitro fertility test, logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained a deviance of 72% (p < 0.0001), directly related to the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value for IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of many possible ways to explain differences in embryo output following IVF with different semen donors and may represent a useful tool for selecting the most suitable donors for semen cryopreservation. PMID:19900288
Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao
2016-04-01
Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of the PubMed and EMBASE databases to identify articles published before January 1, 2015, which described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on the development of each prediction model, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could compromise model development, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and accuracy of the probability estimates for predicting postoperative pancreatic fistula.
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
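The MLE rule referenced here has a simple closed form: the fused estimate weights each cue by its inverse variance (its reliability). The sketch below is a generic illustration with assumed auditory and visual noise levels, not the study's fitted parameters.

```python
def mle_fusion(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood (inverse-variance weighted) combination of two cues."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)   # visual weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_a                # fused location estimate
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)             # fused variance (never larger than either cue's)
    return mu, var, w_v

# Assumed single-cue location estimates (degrees) and variances.
mu_hat, var_hat, w_visual = mle_fusion(mu_a=5.0, var_a=4.0, mu_v=2.0, var_v=1.0)
print(f"fused location = {mu_hat:.2f} deg, variance = {var_hat:.2f}, visual weight = {w_visual:.2f}")
# Degrading visual reliability (larger var_v) shifts the weight toward audition;
# the study's point is that attention and report modulate this bottom-up weighting.
```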
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether color parameters of tongue inspection (TI) using a digital camera were reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). Two hundred female subjects' tongue substances were photographed with a megapixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white balance-corrected, and finally the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. Color parameters showed good or excellent reliability (0.627-0.887 for intra-class correlation coefficients) and significant criterion validity (0.523-0.718 for Spearman's correlation). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.
1993-01-01
Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting the in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.
Prediction Model for Relativistic Electrons at Geostationary Orbit
NASA Technical Reports Server (NTRS)
Khazanov, George V.; Lyatsky, Wladislaw
2008-01-01
We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. This model assumes multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses and uses solar wind data as its only input parameters. As the source, we used a coupling function that is a best-fit combination of solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for the whole four-year period, as well as for each of these years, is stable and remarkably high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
Ridge regression for predicting elastic moduli and hardness of calcium aluminosilicate glasses
NASA Astrophysics Data System (ADS)
Deng, Yifan; Zeng, Huidan; Jiang, Yejia; Chen, Guorong; Chen, Jianding; Sun, Luyi
2018-03-01
It is of great significance to design glasses with satisfactory mechanical properties predictively through modeling. Among various modeling methods, data-driven modeling is a reliable approach that can dramatically shorten research time, cut research cost and accelerate the development of glass materials. In this work, ridge regression (RR) analysis was used to construct regression models for predicting the compositional dependence of CaO-Al2O3-SiO2 glass elastic moduli (shear, bulk, and Young's moduli) and hardness based on the ternary diagram of the compositions. Property prediction over a large glass composition space was accomplished with known experimental data for various compositions in the literature, and the simulated results are in good agreement with the measured ones. This regression model can serve as a facile and effective tool for studying the relationship between composition and properties, enabling highly efficient design of glasses to meet the requirements for specific elasticity and hardness.
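A compact sketch of a ridge regression fit on composition features is shown below; the synthetic CaO-Al2O3-SiO2 compositions, interaction terms, and modulus values are placeholders for the literature data used in the study.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Synthetic compositions (mole fractions of CaO and Al2O3; SiO2 is the remainder).
cao = rng.uniform(0.05, 0.45, 200)
al2o3 = rng.uniform(0.05, 0.35, 200)
sio2 = 1.0 - cao - al2o3
# Composition fractions plus pairwise interaction terms as regression features.
X = np.column_stack([cao, al2o3, sio2, cao * al2o3, cao * sio2, al2o3 * sio2])

# Synthetic Young's modulus (GPa) with a mild composition dependence plus noise.
E = 70 + 40 * cao + 25 * al2o3 + rng.normal(0, 1.5, 200)

ridge = RidgeCV(alphas=np.logspace(-4, 2, 25))   # ridge penalty chosen by cross-validation
r2 = cross_val_score(ridge, X, E, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.3f}")
ridge.fit(X, E)
print("selected ridge penalty alpha:", ridge.alpha_)
```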
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees historically have been managed for crop pollination, however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity analysis informed model optimization; and, 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to expert opinion-informed model, while sensitivity-analysis informed optimization improved model performance by 54%. This suggests that expert opinion may not result in the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields, however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
Callwood, Alison; Cooke, Debbie; Bolger, Sarah; Lemanska, Agnieszka; Allan, Helen
2018-01-01
Universities in the United Kingdom (UK) are required to incorporate values-based recruitment (VBR) into their healthcare student selection processes. This reflects an international drive to strengthen the quality of healthcare service provision. This paper presents novel findings in relation to the reliability and predictive validity of multiple mini interviews (MMIs), one approach to VBR widely employed by universities. To examine the reliability (internal consistency) and predictive validity of MMIs using end of Year One practice outcomes of undergraduate pre-registration adult, child, mental health nursing, midwifery and paramedic practice students. Cross-discipline evaluation study. One university in the United Kingdom. Data were collected in two streams: applicants to A) the September 2014 and 2015 Midwifery Studies programmes; B) the September 2015 adult, child and mental health nursing and paramedic practice programmes. Fifty-seven midwifery students commenced their programme in 2014 and 69 in 2015; 47 and 54 agreed to participate and completed Year One, respectively. 333 healthcare students commenced their programmes in September 2015. Of these, 281 agreed to participate and completed their first year (180 adult, 33 child and 34 mental health nursing and 34 paramedic practice students). Stream A featured a seven-station, four-minute model with one interviewer at each station, and in Stream B a six-station model was employed. Cronbach's alpha was used to assess MMI station internal consistency and Pearson's product-moment correlation coefficient to explore associations between participants' admission MMI scores and end of Year One clinical practice outcomes (OSCE and mentor grading). Stream A: significant correlations are reported between midwifery applicants' MMI scores and end of Year One practice outcomes. A multivariate linear regression model demonstrated that MMI score significantly predicted end of Year One practice outcomes controlling for age and academic entry level: coefficients 0.195 (p=0.002) and 0.116 (p=0.002) for OSCE and mentor grading, respectively. In Stream B no significant correlations were found between MMI score and practice outcomes measured by mentor grading. Internal consistency for each MMI station was 'excellent', with values ranging from 0.966-0.974 across Streams A and B. This novel, cross-discipline study shows that MMIs are reliable VBR tools which have predictive validity when a seven-station model is used. These data are important given the current international use of different MMI models in healthcare student selection processes. Copyright © 2017. Published by Elsevier Ltd.
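For reference, the internal-consistency statistic used here can be computed directly from an applicants-by-stations score matrix; the sketch below uses simulated scores, not the study's MMI data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (applicants x stations) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(6)
ability = rng.normal(0, 1, (120, 1))                   # latent applicant ability
stations = ability + rng.normal(0, 0.4, (120, 7))      # seven correlated station scores
print(f"Cronbach's alpha: {cronbach_alpha(stations):.3f}")
```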
NASA Astrophysics Data System (ADS)
Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia
2017-11-01
In this study, models developed using data mining methods are proposed for predicting wastewater quality indicators: biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus at the inflow to a wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and on daily wastewater inflows. Independent prediction systems that can be used in case of monitoring device malfunction are also provided. Models of wastewater quality indicators were developed using the multivariate adaptive regression spline (MARS) method, artificial neural networks (ANN) of the multilayer perceptron type combined with a self-organizing map (SOM) classification model, and cascade neural networks (CNN). The lowest values of absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two developed independent prediction systems. Such models can ensure reliable WWTP operation when wastewater quality monitoring systems become inoperable or are under maintenance.
Weighted integration of short-term memory and sensory signals in the oculomotor system.
Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe
2018-05-01
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which support modeling work. Furthermore, we suggest that saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
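A small sketch of the Markov evaluation idea follows, collapsed to three states (operational, degraded after a detected and isolated failure, failed) rather than the 27-state RSDIMU model; the failure rate and coverage probability are assumed values for illustration.

```python
import numpy as np
from scipy.linalg import expm

lam = 1e-4      # per-hour sensor failure rate (assumed)
cov = 0.95      # probability a failure is correctly detected and isolated (assumed)

# Continuous-time Markov generator over states [operational, degraded, failed].
# A covered failure moves the system to the degraded state; an uncovered one fails it.
Q = np.array([
    [-lam,       cov * lam,  (1 - cov) * lam],
    [0.0,       -lam,         lam           ],
    [0.0,        0.0,         0.0           ],
])

p0 = np.array([1.0, 0.0, 0.0])                 # start fully operational
for t in (10.0, 100.0, 1000.0):                # mission times in hours
    p_t = p0 @ expm(Q * t)                     # state probabilities at time t
    print(f"t={t:6.0f} h  reliability (not failed) = {1.0 - p_t[2]:.6f}")
```

The full model adds states for false alarms, imperfect isolation, and damage effects, which is where the parametric studies above come in.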
VHSIC/VHSIC-Like Reliability Prediction Modeling
1989-10-01
prediction would require ’ kowledge of event statistics as well as device robustness. Ii1 Additionally, although this is primarily a theoretical, bottom...Degradation in Section 5.3 P = Power PDIP = Plastic DIP P(f) = Probability of Failure due to EOS or ESD P(flc) = Probability of Failure given Contact from an...the results of those stresses: Device Stress Part Number Power Dissipation Manufacturer Test Type Part Description Junction Teniperatune Package Type
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
Burgess, C. P.; Holman, R.; Tasinato, G.
2016-01-26
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflation predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so this too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.
Operating temperatures of open-rack installed photovoltaic inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z.; Wang, L.; Kurtz, S.
This paper presents a model for evaluating the heat-sink and component temperatures of open-rack installed photovoltaic inverters. These temperatures can be used for predicting inverter reliability. Inverter heat-sink temperatures were measured for inverters connected to three grid-connected photovoltaic (PV) test systems in Golden, Colorado, US. A model is proposed for calculating the inverter heat-sink temperature based on the ambient temperature, the ratio of the consumed power to the rated power of the inverter, and the measured wind speed. To verify and study this model, more than one year of inverter DC/AC power, irradiance, wind speed, and heat-sink temperature rise data were collected and analyzed. The model is shown to be accurate in predicting average inverter temperatures, but will require further refinement for prediction of transient temperatures.
1989-10-28
develop mathematical models of nature so as to study and predict the behavior of physical systems. The remarkable advances in technology over the last half... met for three days to discuss and study this subject. This volume contains invited papers and selected contributed papers presented at this meeting. The... interesting mixture of application of existing methods to issues of reliability as well as studies of new methods that touch upon or depend upon the
Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru
2015-11-01
The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.
Electrochemistry-based Battery Modeling for Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Kulkarni, Chetan Shrikant
2013-01-01
Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is essential to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model validity and accurate EOD predictions.
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, since complex, multiple, and interacting authentic distortions usually appear in them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map the image representation onto images' subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)
2002-01-01
Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is far more efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles over the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data for stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angle of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provides an efficient method of optimizing the ANN model, with the GA-optimized ANN predicting aerodynamic coefficients to an accuracy of about ±10%. In our problem, we sought an optimized neural network architecture and a minimum data set; this was accomplished within 500 training cycles of the neural network. After removing training pairs (outliers), the GA produced much better results. The neural network constructed is a feed-forward neural network with a back-propagation learning mechanism. The main goal has been to free the network design process from the constraints of human biases and to discover better forms of neural network architecture. Automating the architecture search with genetic algorithms appears to have been the best way to achieve this goal.
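A minimal sketch of the underlying idea, fitting a feed-forward network to map (angle of attack, Mach number) to the lift, drag and pitching-moment coefficients, is given below; the synthetic training data and the fixed network size are illustrative assumptions, and the GA-driven architecture search described in the abstract is not reproduced here.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for wind-tunnel/CFD data: inputs are angle of attack (deg)
# and Mach number; outputs are CL, CD, Cm from a made-up smooth relationship.
alpha = rng.uniform(-5.0, 20.0, size=2000)
mach = rng.uniform(0.2, 0.9, size=2000)
X = np.column_stack([alpha, mach])
cl = 0.1 * alpha * (1.0 - 0.3 * mach)
cd = 0.02 + 0.001 * alpha**2 + 0.05 * mach**2
cm = -0.05 - 0.01 * alpha * mach
Y = np.column_stack([cl, cd, cm]) + rng.normal(scale=0.01, size=(2000, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
net.fit(X_tr, Y_tr)
print("Held-out R^2:", round(net.score(X_te, Y_te), 3))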
Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota
2017-01-01
Objectives: Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for the development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. Methods: To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) the relationship between the actual step count and the step count detected by the sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and while driving. Results: For five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability, with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504, and the relative errors of the sensor-detected step count with respect to the actual step count were ±48.7%-49.4%. The level of agreement between devices of the same model was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. Conclusions: These results suggest the need for appropriate adjustment of sensor-measured step counts, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation. PMID:28835575
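For readers unfamiliar with the statistic, ICC(3,1) can be computed from a two-way mean-squares decomposition (Shrout and Fleiss, single measures, two-way mixed). The sketch below uses made-up step-count data for a handful of walkers measured by three devices and is not the study's analysis code.

import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed, consistency, single measures.

    data: (n_subjects, k_raters) array, e.g. step counts per subject
    recorded simultaneously by k devices.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical example: 5 walkers, actual count vs. two phone models.
counts = np.array([[620, 615, 640],
                   [700, 690, 710],
                   [655, 660, 648],
                   [580, 575, 590],
                   [730, 735, 722]])
print("ICC(3,1) = %.3f" % icc_3_1(counts))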
Gurm, Hitinder S.; Kooiman, Judith; LaLonde, Thomas; Grines, Cindy; Share, David; Seth, Milan
2014-01-01
Background Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. Methods Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance in estimating transfusion was evaluated in an independent validation dataset using the area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. Results Our study cohort comprised 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877–0.899), reduced model AUC = 0.880 (95% CI, 0.868–0.892), p for difference 0.003, NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate- and high-risk patients, but no clinically relevant impact was noted in low-risk patients, who made up 70% of the total cohort. Conclusions The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction algorithm may prove useful both for bedside clinical decision making and for risk adjustment in the assessment of quality. PMID:24816645
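A minimal sketch of the modelling approach described (a random forest on pre-procedural variables, evaluated by ROC AUC on a held-out set) is shown below; the data are synthetic, the variable names are hypothetical, and none of it reflects the registry variables or the published model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20000
# Synthetic stand-ins for pre-procedural variables (hypothetical names).
X = np.column_stack([
    rng.normal(13.5, 1.8, n),        # baseline haemoglobin
    rng.normal(70.0, 25.0, n),       # creatinine clearance
    rng.integers(0, 2, n),           # female sex
    rng.integers(0, 2, n),           # presentation with shock
])
logit = (-6.0 - 0.4 * (X[:, 0] - 13.5) - 0.02 * (X[:, 1] - 70)
         + 0.8 * X[:, 2] + 2.0 * X[:, 3])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # simulated transfusion events

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=20, random_state=0)
rf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print("Validation AUC: %.3f" % auc)
print("Variable importances:", np.round(rf.feature_importances_, 3))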
Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E
2018-05-10
On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario connecting a network of five LiDAR sensors is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrate that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
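A toy sketch of the last step, using a Q-learning-style update to choose how many LiDAR sensors to activate by trading detection reliability against a per-sensor cost, is shown below; the reward model, detection probability and cost weight are invented for illustration and have no connection to the Webots setup described in the paper.

import numpy as np

rng = np.random.default_rng(0)
n_actions = 5                     # action a means: use (a + 1) LiDAR sensors
q = np.zeros(n_actions)
alpha, epsilon, cost = 0.1, 0.1, 0.15

def reward(n_sensors):
    # Hypothetical: each sensor detects an obstacle with probability 0.6;
    # reward 1 if any sensor detects it, minus a per-sensor cost.
    detected = rng.random(n_sensors) < 0.6
    return float(detected.any()) - cost * n_sensors

for episode in range(5000):
    a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q))
    r = reward(a + 1)
    q[a] += alpha * (r - q[a])    # stateless (bandit-style) Q-value update

print("Learned action values:", np.round(q, 3))
print("Selected number of sensors:", int(np.argmax(q)) + 1)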
Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM
NASA Astrophysics Data System (ADS)
Sheng, Hanlin; Zhang, Tianhong
2017-08-01
A highly precise and reliable thrust estimator is needed to achieve direct thrust control of an aircraft engine. Based on support vector regression (SVR), specifically the least squares support vector machine (LSSVM), together with a new optimization algorithm, the gravitational search algorithm (GSA), a GSA-LSSVM-based thrust estimator design solution is proposed through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the needs of direct thrust control of an aircraft engine.
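For orientation, LSSVM regression reduces to solving a single linear system in the bias and support values with a kernel matrix; the sketch below implements that system with an RBF kernel on a toy surrogate for thrust data. In the paper the regularization and kernel parameters are tuned by GSA, which the sketch replaces with fixed, illustrative values.

import numpy as np

def rbf_kernel(a, b, sigma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(x, y, gamma=100.0, sigma=0.5):
    """Least-squares SVM regression: solve the dual linear system.

    gamma (regularization) and sigma (RBF width) would be tuned by GSA
    in the paper; here they are fixed, illustrative values.
    """
    n = len(y)
    k = rbf_kernel(x, x, sigma)
    a = np.zeros((n + 1, n + 1))
    a[0, 1:] = 1.0
    a[1:, 0] = 1.0
    a[1:, 1:] = k + np.eye(n) / gamma
    sol = np.linalg.solve(a, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda x_new: rbf_kernel(x_new, x, sigma) @ alpha + b

# Toy surrogate: "thrust" as a nonlinear function of two engine measurements.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + rng.normal(scale=0.02, size=200)
predict = lssvm_fit(x, y)
x_test = rng.uniform(0.0, 1.0, size=(50, 2))
y_test = np.sin(3 * x_test[:, 0]) + 0.5 * x_test[:, 1] ** 2
print("Test RMSE: %.4f" % np.sqrt(np.mean((predict(x_test) - y_test) ** 2)))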
Are there reliable constitutive laws for dynamic friction?
Woodhouse, Jim; Putelat, Thibaut; McKay, Andrew
2015-09-28
Structural vibration controlled by interfacial friction is widespread, ranging from friction dampers in gas turbines to the motion of violin strings. To predict, control or prevent such vibration, a constitutive description of frictional interactions is inevitably required. A variety of friction models are discussed to assess their scope and validity, in the light of constraints provided by different experimental observations. Three contrasting case studies are used to illustrate how predicted behaviour can be extremely sensitive to the choice of frictional constitutive model, and to explore possible experimental paths to discriminate between and calibrate dynamic friction models over the full parameter range needed for real applications. © 2015 The Author(s).
Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices
NASA Astrophysics Data System (ADS)
Wei, H. L.
2016-12-01
A massive amount of data is produced and stored in the research areas of space weather and space climate. However, the value of the vast majority of the data acquired every day may not be effectively or efficiently exploited in daily practice when forecasting solar wind parameters and geomagnetic field indices from these recorded measurements, largely because of the challenges of dealing with big data, which are characterized by the four V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement for quick processing of the data), and veracity (the trustworthiness and usability of the data). Obtaining more reliable and accurate predictive models for geomagnetic field indices therefore requires, or at least benefits from, developing models from a big data analytics perspective. This study proposes a few data-based modelling frameworks that aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependence of the Dst and Kp indices on a few solar wind and interplanetary magnetic field parameters, namely solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
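A minimal sketch of one data-based (system identification) approach is given below: a linear autoregressive model with exogenous solar wind inputs (ARX), fit by least squares, predicting Dst one step ahead. The synthetic driver series, coefficients and lag choices are purely illustrative and are not the models developed in the study.

import numpy as np

rng = np.random.default_rng(7)
n = 2000
# Synthetic hourly drivers standing in for solar wind speed V, southward IMF Bs,
# rectified electric field VBs and dynamic pressure P (illustrative only).
v = 400 + 100 * rng.random(n)
bs = np.clip(rng.normal(0.0, 3.0, n), 0.0, None)
vbs = v * bs * 1e-3
p = 2.0 + rng.random(n)
dst = np.zeros(n)
for t in range(1, n):
    dst[t] = 0.9 * dst[t - 1] - 4.0 * vbs[t - 1] + 0.5 * p[t - 1] + rng.normal(scale=1.0)

# One-step-ahead ARX regression: Dst(t) ~ lagged Dst and lagged inputs.
lag = 2
rows, targets = [], []
for t in range(lag, n):
    feats = [dst[t - i] for i in range(1, lag + 1)]
    feats += [vbs[t - i] for i in range(1, lag + 1)]
    feats += [p[t - i] for i in range(1, lag + 1)]
    rows.append(feats)
    targets.append(dst[t])
A = np.column_stack([np.ones(len(rows)), np.array(rows)])
coef, *_ = np.linalg.lstsq(A, np.array(targets), rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - np.array(targets)) ** 2))
print("One-step-ahead RMSE on synthetic data: %.2f nT" % rmse)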
Mining data from hemodynamic simulations for generating prediction and explanation models.
Bosnić, Zoran; Vračar, Petar; Radović, Milos D; Devedžić, Goran; Filipović, Nenad D; Kononenko, Igor
2012-03-01
One of the most common causes of human death is stroke, which can be caused by carotid bifurcation stenosis. In this work, we propose a prototype of a medical expert system that could significantly aid medical experts in detecting hemodynamic abnormalities (increased artery wall shear stress). Based on the acquired simulated data, we apply several methodologies for: 1) predicting the magnitudes and locations of maximum wall shear stress in the artery, 2) estimating the reliability of the computed predictions, and 3) providing a user-friendly explanation of the model's decisions. The obtained results indicate that the evaluated methodologies can provide a useful tool for the given problem domain. © 2012 IEEE
Transonic cascade flow prediction using the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, A.; Stecco, S. S.
1991-01-01
This paper presents results that summarize work carried out over the last three years to improve the efficiency and accuracy of numerical predictions in turbomachinery flow calculations. A new kind of non-periodic C-type grid is presented, and a Runge-Kutta scheme with acceleration strategies is used as the flow solver. The code's capability is demonstrated by testing four different blades at different exit Mach numbers in transonic regimes. Comparison with experiments shows the very good reliability of the numerical predictions. In particular, the loss coefficient appears to be correctly predicted by using the well-known Baldwin-Lomax turbulence model.
DOT National Transportation Integrated Search
2008-08-01
Bridge management is an important activity of transportation agencies in the US : and in many other countries. A critical aspect of bridge management is to reliably predict : the deterioration of bridge structures, so that appropriate or optimal acti...
NASA Astrophysics Data System (ADS)
Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.
2018-01-01
Computational protein binding affinity prediction can play an important role in drug research, but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2, we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method, and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained from our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by combining an applicability domain (AD) analysis with protein-ligand interaction profiling. The outcomes of, and agreement between, our AD and interaction-profile analyses in distinguishing and rationalizing the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training, and demonstrated that whether this has been achieved can be evaluated quantitatively and efficiently using simulation data alone.
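For reference, the standard LIE estimate of the binding free energy has the form below, where the angle brackets denote MD ensemble averages of the ligand-surroundings van der Waals and electrostatic interaction energies in the protein-bound and free (solvated) states, and alpha, beta and gamma are the (pre-)calibrated model parameters; the exact parameterization used by eTOX ALLIES may differ in detail.

\Delta G_{\mathrm{bind}} \approx
  \alpha \left( \langle V^{\mathrm{vdW}}_{\mathrm{lig}} \rangle_{\mathrm{bound}}
              - \langle V^{\mathrm{vdW}}_{\mathrm{lig}} \rangle_{\mathrm{free}} \right)
+ \beta  \left( \langle V^{\mathrm{el}}_{\mathrm{lig}} \rangle_{\mathrm{bound}}
              - \langle V^{\mathrm{el}}_{\mathrm{lig}} \rangle_{\mathrm{free}} \right)
+ \gamma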
Xu, Dong; Zhang, Yang
2013-01-01
Genome-wide protein structure prediction and structure-based function annotation have been a long-term goal in molecular biology, but have not yet become possible due to difficulties in modeling distant-homology targets. We developed a hybrid pipeline combining ab initio folding and template-based modeling for genome-wide structure prediction, applied to the Escherichia coli genome. The pipeline was tested on 43 known sequences, for which QUARK-based ab initio folding simulations generated models with TM-scores 17% higher than those obtained by traditional comparative modeling methods. Of 495 unknown hard sequences, 72 are predicted to have a correct fold (TM-score > 0.5) and 321 have a substantial portion of their structure correctly modeled (TM-score > 0.35). 317 sequences can be reliably assigned to a SCOP fold family based on structural analogy to existing proteins in the PDB. The presented results, as a case study of E. coli, represent promising progress towards genome-wide structure modeling and fold family assignment using state-of-the-art ab initio folding algorithms. PMID:23719418
Cao, Pengxing
2017-01-01
Models of within-host influenza viral dynamics have contributed to an improved understanding of viral dynamics and antiviral effects over the past decade. Existing models can be classified into two broad types based on the mechanism of viral control: models utilising target cell depletion to limit the progress of infection and models which rely on timely activation of innate and adaptive immune responses to control the infection. In this paper, we compare how two exemplar models based on these different mechanisms behave and investigate how the mechanistic difference affects the assessment and prediction of antiviral treatment. We find that the assumed mechanism for viral control strongly influences the predicted outcomes of treatment. Furthermore, we observe that for the target cell-limited model the assumed drug efficacy strongly influences the predicted treatment outcomes. The area under the viral load curve is identified as the most reliable predictor of drug efficacy, and is robust to model selection. Moreover, with support from previous clinical studies, we suggest that the target cell-limited model is more suitable for modelling in vitro assays or infection in some immunocompromised/immunosuppressed patients while the immune response model is preferred for predicting the infection/antiviral effect in immunocompetent animals/patients. PMID:28933757
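A minimal sketch of the standard target cell-limited model (target cells T, infected cells I, free virus V) is given below, integrated with SciPy and summarised by the area under the log viral load curve, the treatment-outcome measure highlighted in the abstract. The parameter values are generic illustrative choices (with an assumed antiviral efficacy that scales viral production), not the estimates from the paper.

import numpy as np
from scipy.integrate import odeint, trapezoid

def tiv(y, t, beta, delta, p, c, eps):
    """Target cell-limited model; eps is an assumed antiviral efficacy
    acting on viral production (illustrative parameterization)."""
    T, I, V = y
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = (1.0 - eps) * p * I - c * V
    return [dT, dI, dV]

# Generic illustrative parameters (per day), not fitted values from the paper.
beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0
t = np.linspace(0.0, 10.0, 1001)            # days post-infection
y0 = [4e8, 0.0, 1.0]                        # initial target cells, infected cells, virus

for eps in (0.0, 0.9):
    sol = odeint(tiv, y0, t, args=(beta, delta, p, c, eps))
    v = np.clip(sol[:, 2], 1e-8, None)
    auc = trapezoid(np.log10(v), t)          # area under the log10 viral load curve
    print(f"efficacy eps={eps:.1f}: AUC(log10 V) = {auc:.1f}")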
Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M
2015-10-01
To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven years of monthly WA state-wide public hospital ED presentation data, from 2006/07 to 2012/13, were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy in predicting ED demand. The accuracy of the best models was evaluated using forecast error measures. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided a more precise and accurate forecast, with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA, than the ARMA and Winters' methods. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they underestimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
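A minimal sketch of fitting a VARMA model to multivariate monthly demand series and producing a 12-month forecast with statsmodels is shown below; the two synthetic series (and their names) are placeholders standing in for the WA ED presentation categories, and the model order is an illustrative choice.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

rng = np.random.default_rng(11)
months = pd.date_range("2006-07-01", periods=84, freq="MS")   # 7 years, monthly
trend = np.arange(84)
season = 10 * np.sin(2 * np.pi * trend / 12)
# Two synthetic, correlated demand series (hypothetical category names).
admitted = 800 + 3 * trend + season + rng.normal(scale=15, size=84)
discharged = (1500 + 5 * trend + 0.5 * season
              + 0.4 * (admitted - 800) + rng.normal(scale=20, size=84))
data = pd.DataFrame({"admitted": admitted, "discharged": discharged}, index=months)

model = VARMAX(data, order=(1, 1), trend="ct")     # VARMA(1,1) with linear trend
result = model.fit(disp=False)
forecast = result.forecast(steps=12)               # next 12 months of ED demand
print(forecast.round(0).head())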
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.
García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel
2010-06-01
A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. Feedstocks with a mild acid content have been used as fresh feed. The process flowsheet follows a traditional alkaline transesterification scheme constituted by esterification, transesterification and purification stages. Kinetic models taking into account the concentrations of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties is discussed. Additionally, a comparison between different thermodynamic packages has been performed, and the NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and gives simulators great value as both predictive and optimization tools. (c) 2010 Elsevier Ltd. All rights reserved.
Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.
2015-01-01
Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use an MVPA approach to test 1) whether trauma memory vs. neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and the reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain the core neural processing mechanisms mediating PTSD. PMID:26241958
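A minimal sketch of the generic MVPA step (cross-validated classification of trauma vs. neutral recall trials from multivoxel patterns) is shown below; the data are random placeholders standing in for the trial-level activation maps, with a weak signal injected so the classifier has something to find, and this is not the authors' pipeline.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(5)
n_trials, n_voxels = 60, 5000
# Placeholder multivoxel patterns: one row per memory-recall trial.
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)           # 0 = neutral, 1 = trauma recall
X[y == 1, :50] += 0.6                          # inject a weak discriminative signal

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=20000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print("Cross-validated accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))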