Sample records for quality model predictions

  1. Evaluating Predictive Models of Software Quality

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this: stability and predictability are of paramount importance, and a short turn-around time for the defect discovery-correction-deployment cycle is required. One way to reconcile these opposing demands is to use a software quality model to approximate the risk before releasing a program, so that only software with a risk below an agreed threshold is delivered. In this article we evaluate two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  2. Large-scale structure prediction by improved contact predictions and model quality assessment.

    PubMed

    Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne

    2017-07-15

    Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were restricted to very large protein families, decreasing their utility. However, recent progress in combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline, which uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimation to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that can be reliably identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all 558 Pfam families are available at http://c3.pcons.net/. All programs used here are freely available.

  3. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With rising standards for food quality and safety, people pay more attention to the internal quality of fruit, so the measurement of fruit internal quality is increasingly important. Nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS_SVM classifiers are designed and implemented. Then, in the NSCT domain, median and Savitzky-Golay filters are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction model on the basis of a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of each proposed model, with the reliability and variance of the non-parametric evaluation of each prediction model used to assess the prediction result, and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach can better achieve an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
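    The "PCA + predictor" pattern shared by the paper's candidate models can be sketched in a few lines; the synthetic spectra, component count, and plain least-squares predictor below are illustrative stand-ins, not the paper's data or classifiers.

```python
import numpy as np

# Minimal sketch of the "PCA + predictor" pattern (PCA + BP, PCA + GRNN, etc.).
# Data and the linear predictor are illustrative stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))             # 120 spectra, 50 wavelengths
w = rng.normal(size=50)
y = X @ w + 0.01 * rng.normal(size=120)    # synthetic SSC values

# PCA via SVD of the mean-centered spectra.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
scores = Xc @ Vt[:k].T                     # project onto k principal components

# Any regressor can sit on top of the scores; here, ordinary least squares.
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
pred = scores @ coef + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    In the paper, the regressor on top of the PCA scores is swapped among several learners (BP, GRNN, ELM, LS_SVM) and the best is chosen by competition.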

  4. A comparison of different functions for predicted protein model quality assessment.

    PubMed

    Li, Juan; Fang, Huisheng

    2016-07-01

    In protein structure prediction, a considerable number of models are usually produced by either the Template-Based Method (TBM) or ab initio prediction. The purpose of this study is to find the critical parameter in assessing the quality of the predicted models. A non-redundant template library was developed and 138 target sequences were modeled. The target sequences were all distant from the proteins in the template library and were aligned with template library proteins on the basis of the transformation matrix. The quality of each model was first assessed with QMEAN and its six parameters, which are C_β interaction energy (C_beta), all-atom pairwise energy (PE), solvation energy (SE), torsion angle energy (TAE), secondary structure agreement (SSA), and solvent accessibility agreement (SAE). Finally, the alignment score (score) was also used to assess the quality of each model. Hence, a total of eight parameters (i.e., QMEAN, C_beta, PE, SE, TAE, SSA, SAE, score) were independently used to assess the quality of each model. The results indicate that SSA is the best parameter for estimating the quality of the model.

  5. A model for predicting air quality along highways.

    DOT National Transportation Integrated Search

    1973-01-01

    The subject of this report is an air quality prediction model for highways, AIRPOL Version 2, July 1973. AIRPOL has been developed by modifying the basic Gaussian approach to gaseous dispersion. The resultant model is smooth and continuous throughout...
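    The Gaussian basis that AIRPOL modifies can be illustrated with the standard textbook Gaussian plume equation; the function and numbers below are a generic form with a ground-reflection term, not AIRPOL's actual modified formulation or its parameter values.

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Textbook Gaussian plume concentration at crosswind offset y (m) and
    height z (m), for emission rate q (g/s), wind speed u (m/s), dispersion
    coefficients sigma_y/sigma_z (m), and effective source height h (m).
    Includes the usual ground-reflection term."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration on and off the plume centerline,
# downwind of a roadway-height source (all inputs hypothetical).
c0 = gaussian_plume(q=10.0, u=3.0, y=0.0, z=0.0, sigma_y=30.0, sigma_z=15.0, h=2.0)
c_off = gaussian_plume(q=10.0, u=3.0, y=60.0, z=0.0, sigma_y=30.0, sigma_z=15.0, h=2.0)
```

    The smooth, continuous behavior noted in the abstract follows from the exponential form: concentration decays monotonically with crosswind distance.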

  6. Assessment and prediction of air quality using fuzzy logic and autoregressive models

    NASA Astrophysics Data System (ADS)

    Carbajal-Hernández, José Juan; Sánchez-Fernández, Luis P.; Carrasco-Ochoa, Jesús A.; Martínez-Trinidad, José Fco.

    2012-12-01

    In recent years, artificial intelligence methods have been used for the treatment of environmental problems. This work presents two models for the assessment and prediction of air quality. First, we develop a new computational model for air quality assessment in order to evaluate toxic compounds that can harm sensitive people in urban areas, affecting their normal activities. In this model we propose a Sigma operator to statistically assess air quality parameters using their historical data and to determine their negative impact on air quality based on toxicity limits, frequency averages and deviations of toxicological tests. We also introduce a fuzzy inference system that classifies the parameters through a reasoning process and integrates them into an air quality index describing the pollution level in five stages: excellent, good, regular, bad and danger. The second model proposed in this work predicts air quality concentrations using an autoregressive model, providing a predicted air quality index based on the fuzzy inference system developed previously. Using data from the Mexico City Atmospheric Monitoring System, we compare our indices with air quality indices developed by environmental agencies and with similar models. Our results show that our models are an appropriate tool for assessing site pollution and for guiding contingency actions in urban areas.

  7. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at the 95% confidence limit; the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which provides a very good fit. All parameters except pH and WT cross the limits prescribed by the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
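    The ARIMA forecasting step can be sketched in miniature with a plain AR(1) fit; the synthetic series, the model order, and the least-squares estimator below are deliberate simplifications of a full ARIMA workflow, not the paper's model or data.

```python
import numpy as np

# Synthetic monthly series following y[t] = c + phi * y[t-1] + noise.
rng = np.random.default_rng(1)
phi_true, c_true = 0.7, 2.0
y = np.zeros(300)
for t in range(1, 300):
    y[t] = c_true + phi_true * y[t - 1] + 0.1 * rng.normal()

# Fit AR(1) by least squares: regress y[t] on [1, y[t-1]].
A = np.column_stack([np.ones(299), y[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(A, y[1:], rcond=None)

# One-step-ahead forecast with a crude 95% interval from the residual spread.
resid = y[1:] - A @ np.array([c_hat, phi_hat])
forecast = c_hat + phi_hat * y[-1]
lo, hi = forecast - 1.96 * resid.std(), forecast + 1.96 * resid.std()
```

    The 95% confidence limits reported in the abstract play the same role as the `lo`/`hi` interval here, just computed within the full ARIMA machinery.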

  8. Prediction of global and local model quality in CASP8 using the ModFOLD server.

    PubMed

    McGuffin, Liam J

    2009-01-01

    The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) to predict the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model, machine-learning-based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from http://www.reading.ac.uk/bioinf/downloads/.

  9. Early experiences building a software quality prediction model

    NASA Technical Reports Server (NTRS)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.

  10. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models or rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applies, for the first time, 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment demonstrates that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at http://sysbio.rnet.missouri.edu/multicom_cluster/human/.
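    The consensus-ranking idea (combine the rankings of many QA methods, then favor the model with the best average rank) can be sketched as follows; the method names and scores are made up for illustration and do not come from the paper.

```python
from statistics import mean

# Each QA method scores every candidate model (higher = better); the
# consensus rank of a model is its average rank across the methods.
# Names and scores are illustrative only.
qa_scores = {
    "qa_method_1": {"model_a": 0.81, "model_b": 0.74, "model_c": 0.60},
    "qa_method_2": {"model_a": 0.78, "model_b": 0.80, "model_c": 0.55},
    "qa_method_3": {"model_a": 0.90, "model_b": 0.70, "model_c": 0.65},
}

def consensus_ranking(scores_by_method):
    models = next(iter(scores_by_method.values())).keys()
    avg_rank = {}
    for m in models:
        ranks = []
        for scores in scores_by_method.values():
            ordered = sorted(scores, key=scores.get, reverse=True)
            ranks.append(ordered.index(m) + 1)   # rank 1 = best
        avg_rank[m] = mean(ranks)
    return sorted(avg_rank, key=avg_rank.get)    # best consensus rank first

ranking = consensus_ranking(qa_scores)
```

    The paper's method additionally refines the top-ranked models by averaging their coordinates, which the sketch omits.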

  11. Utility of NCEP Operational and Emerging Meteorological Models for Driving Air Quality Prediction

    NASA Astrophysics Data System (ADS)

    McQueen, J.; Huang, J.; Huang, H. C.; Shafran, P.; Lee, P.; Pan, L.; Sleinkofer, A. M.; Stajner, I.; Upadhayay, S.; Tallapragada, V.

    2017-12-01

    Operational air quality predictions for the United States (U.S.) are provided at NOAA by the National Air Quality Forecasting Capability (NAQFC). NAQFC provides nationwide operational predictions of ozone and particulate matter twice per day (at the 06 and 12 UTC cycles) at 12 km resolution and 1 hour time intervals through 48 hours, distributed at http://airquality.weather.gov. The NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) 12 km weather prediction model is used to drive the Community Multiscale Air Quality (CMAQ) model. In 2017, the NAM was upgraded (to V4), in part to reduce a warm summer 2 m temperature bias; at the same time, CMAQ was updated to V5.0.2. Both versions of the models were run in parallel for several months, so the impact of improvements in the atmospheric chemistry model could be separated from upgrades to the weather prediction model. Improvements in CMAQ predictions were related to the reduced NAM 2 m temperature bias: increasing the opacity of clouds reduced downward shortwave radiation, which in turn reduced ozone photolysis. Higher-resolution operational NWP models have recently been introduced as part of the NCEP modeling suite. These include the NAM CONUS Nest (3 km horizontal resolution), run four times per day through 60 hours, and the High Resolution Rapid Refresh (HRRR, 3 km), run hourly out to 18 hours. In addition, NCEP, with other NOAA labs, has begun to develop and test the Next Generation Global Prediction System (NGGPS) based on the FV3 global model. This presentation also overviews recent developments in operational numerical weather prediction and evaluates the ability of these models to predict low-level temperatures and clouds and to capture boundary-layer processes important for driving air quality prediction in complex terrain. The assessed meteorological model errors could help determine the magnitude of the pollutant errors that would result if these models were used to drive CMAQ.

  12. Quality by control: Towards model predictive control of mammalian cell culture bioprocesses.

    PubMed

    Sommeregger, Wolfgang; Sissolak, Bernhard; Kandra, Kulwant; von Stosch, Moritz; Mayer, Martin; Striedner, Gerald

    2017-07-01

    The industrial production of complex biopharmaceuticals using recombinant mammalian cell lines is still mainly built on a quality-by-testing approach, which is represented by fixed process conditions and extensive testing of the end product. In 2004 the FDA launched the process analytical technology initiative, aiming to guide the industry towards advanced process monitoring and a better understanding of how critical process parameters affect the critical quality attributes. Implementation of process analytical technology in the bio-production process enables moving from quality by testing to a more flexible quality-by-design approach. The application of advanced sensor systems in combination with mathematical modelling techniques offers enhanced process understanding, allows on-line prediction of critical quality attributes and subsequently enables real-time product quality control. In this review, opportunities and unsolved issues on the road to a successful quality-by-design and dynamic control implementation are discussed. A major focus is directed at the preconditions for the application of model predictive control to mammalian cell culture bioprocesses. Design of experiments providing information about the process dynamics upon parameter change, dynamic process models, on-line process state predictions and powerful software environments appear to be prerequisites for quality-by-control realization.

  13. Deep learning architecture for air quality predictions.

    PubMed

    Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe

    2016-11-01

    With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time-series prediction models, our model can predict the air quality of all stations simultaneously and shows temporal stability in all seasons. Moreover, a comparison with spatiotemporal artificial neural network (STANN), autoregressive moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed air quality prediction method achieves superior performance.

  14. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed, and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets, which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783

  15. Modelling of beef sensory quality for a better prediction of palatability.

    PubMed

    Hocquette, Jean-François; Van Wezemael, Lynn; Chriki, Sghaier; Legrand, Isabelle; Verbeke, Wim; Farmer, Linda; Scollan, Nigel D; Polkinghorne, Rod; Rødbotten, Rune; Allen, Paul; Pethick, David W

    2014-07-01

    Despite efforts by the industry to control the eating quality of beef, there remains a high level of variability in palatability, which is one reason for consumer dissatisfaction. In Europe, there is still no reliable on-line tool to predict beef quality and deliver consistent quality beef to consumers. Beef quality traits depend in part on the physical and chemical properties of the muscles. The determination of these properties (known as muscle profiling) will allow more informed decisions to be made in the selection of individual muscles for the production of value-added products. Therefore, scientists and professional partners of the ProSafeBeef project have brought together all the data they have accumulated over 20 years. The resulting BIF-Beef (Integrated and Functional Biology of Beef) data warehouse contains available data on animal growth, carcass composition, muscle tissue characteristics and beef quality traits. This database is useful for determining the most important muscle characteristics associated with high tenderness, high flavour or generally high quality. Another, more consumer-driven, modelling tool was developed in Australia: the Meat Standards Australia (MSA) grading scheme, which predicts beef quality for each individual muscle × cooking-method combination using various information on the corresponding animals and post-slaughter processing factors. This system also has the potential to detect variability in quality within muscles. The MSA system proved to be effective in predicting beef palatability not only in Australia but also in many other countries. The results of the work conducted in Europe within the ProSafeBeef project indicate that it would be possible to manage a grading system in Europe similar to the MSA system. The combination of the different modelling approaches (namely muscle biochemistry and an MSA-like meat grading system adapted to the European market) is a promising area of research to improve the prediction of beef palatability.

  16. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    PubMed

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models to predict a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO, as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has an R equal to 0.96, while for the hourly model it reaches 0.98. Overall, the results show the ability of the models to monitor the ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible.

  18. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
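    The pairwise (clustering) global quality measure used by methods like MULTICOM-REFINE can be sketched directly: a model's global quality is its average structural similarity to every other model in the pool. The similarity matrix below is made up for illustration; in practice it would hold pairwise structural similarity scores such as GDT-TS.

```python
import numpy as np

# Illustrative symmetric pairwise-similarity matrix for four models.
sim = np.array([
    [1.00, 0.80, 0.75, 0.30],
    [0.80, 1.00, 0.70, 0.25],
    [0.75, 0.70, 1.00, 0.35],
    [0.30, 0.25, 0.35, 1.00],   # an outlier model, dissimilar to the rest
])

n = sim.shape[0]
# Average similarity to the *other* models (exclude self-similarity).
global_quality = (sim.sum(axis=1) - np.diag(sim)) / (n - 1)
best = int(np.argmax(global_quality))
```

    This also shows why pairwise methods need a pool with many good models: a lone correct model surrounded by bad ones scores like an outlier, which is where the single-model methods in the abstract take over.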

  19. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used in system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
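    The minimum-entropy idea can be sketched as follows: among candidate parameter values, pick the one whose modeling-error distribution has the lowest (histogram-estimated) entropy, i.e. the most concentrated errors. The linear model and grid search below are illustrative stand-ins for the paper's LS-SVM model and its optimization.

```python
import numpy as np

# Synthetic data from a linear model with true slope 3.0.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=500)
y = 3.0 * x + 0.02 * rng.normal(size=500)

def error_entropy(residuals, bins=80):
    """Shannon entropy of the histogram-estimated residual distribution."""
    counts, _ = np.histogram(residuals, bins=bins, range=(-2, 2))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Grid search over the slope parameter: the best value is the one whose
# residuals are most concentrated, i.e. have minimum entropy.
candidates = np.linspace(2.0, 4.0, 21)
entropies = [error_entropy(y - a * x) for a in candidates]
best = candidates[int(np.argmin(entropies))]
```

    When the model fits well the residuals collapse into a few histogram bins, so entropy minimization and good fit coincide, which is the intuition behind the paper's approach.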

  20. Prediction models for Arabica coffee beverage quality based on aroma analyses and chemometrics.

    PubMed

    Ribeiro, J S; Augusto, F; Salva, T J G; Ferreira, M M C

    2012-11-15

    In this work, soft modeling based on chemometric analyses of coffee beverage sensory data and the chromatographic profiles of volatile roasted coffee compounds is proposed to predict the scores of acidity, bitterness, flavor, cleanliness, body, and overall quality of the coffee beverage. A partial least squares (PLS) regression method was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the compounds for the regression model of each sensory attribute in order to take only significant chromatographic peaks into account. The prediction errors of these models, using 4 or 5 latent variables, were 0.28, 0.33, 0.35, 0.33, 0.34 and 0.41 for the respective attributes, compatible with the errors of the mean scores of the experts. Thus, the results proved the feasibility of using a similar methodology in on-line or routine applications to predict the sensory quality of Brazilian Arabica coffee.
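    The select-then-regress pattern behind the paper's workflow can be sketched on fake data: rank peaks by their association with the sensory score, keep the most informative ones, and fit a linear model on those. The real paper uses PLS with the OPS algorithm; this simplified version with correlation ranking and ordinary least squares only illustrates the idea.

```python
import numpy as np

# 60 hypothetical coffees, 40 chromatographic peak areas; the sensory
# score depends on two of the peaks (indices 5 and 12) plus noise.
rng = np.random.default_rng(3)
peaks = rng.normal(size=(60, 40))
score = 2.0 * peaks[:, 5] - 1.5 * peaks[:, 12] + 0.05 * rng.normal(size=60)

# Rank peaks by absolute correlation with the score; keep the top 5.
corr = np.abs([np.corrcoef(peaks[:, j], score)[0, 1] for j in range(40)])
keep = np.argsort(corr)[-5:]

# Fit a linear model on the selected peaks only.
X = peaks[:, keep]
coef, *_ = np.linalg.lstsq(X, score - score.mean(), rcond=None)
pred = X @ coef + score.mean()
rmsep = float(np.sqrt(np.mean((score - pred) ** 2)))
```

    Restricting the regression to selected peaks is what keeps the prediction errors of such models close to the panel's own scoring noise.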

  21. Developing and implementing the use of predictive models for estimating water quality at Great Lakes beaches

    USGS Publications Warehouse

    Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.

    2013-01-01

    Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts by multiple agencies to collect data and develop predictive models, and compiles existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations, such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources, either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches.
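    The exceedance-probability side of a nowcast can be sketched as follows: a linear regression predicts log10 E. coli from easily measured surrogates, and the probability of exceeding a standard follows from the residual spread, assumed normal. The coefficients, residual spread, and the 235 CFU/100 mL threshold used here are hypothetical illustrations, not taken from any beach model in the report.

```python
import math

# Hypothetical fitted coefficients for log10(E. coli) and residual spread.
B0, B_TURB, B_WAVE = 1.2, 0.015, 0.40    # intercept, turbidity, wave height
SIGMA = 0.35                              # residual std. dev. (log10 units)
THRESHOLD_LOG = math.log10(235)           # illustrative standard, CFU/100 mL

def exceedance_probability(turbidity_ntu, wave_height_m):
    """P(E. coli exceeds the standard) under a normal residual model."""
    pred = B0 + B_TURB * turbidity_ntu + B_WAVE * wave_height_m
    z = (THRESHOLD_LOG - pred) / SIGMA
    # P(exceed) = 1 - Phi(z), computed via the error function.
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

p_calm = exceedance_probability(turbidity_ntu=5, wave_height_m=0.2)
p_storm = exceedance_probability(turbidity_ntu=80, wave_height_m=1.5)
```

    A nowcast would compare such a probability against a decision threshold before posting an advisory; culture-based monitoring alone cannot do this until the next day.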

  2. Performance of ANFIS versus MLP-NN dissolved oxygen prediction models in water quality monitoring.

    PubMed

    Najah, A; El-Shafie, A; Karim, O A; El-Shafie, Amr H

    2014-02-01

    We discuss the accuracy and performance of the adaptive neuro-fuzzy inference system (ANFIS) in training and prediction of dissolved oxygen (DO) concentrations. The model was used to analyze historical data generated through continuous monitoring of water quality parameters at several stations on the Johor River to predict DO concentrations. Four water quality parameters were selected for ANFIS modeling, including temperature, pH, nitrate (NO3) concentration, and ammoniacal nitrogen concentration (NH3-NL). Sensitivity analysis was performed to evaluate the effects of the input parameters. The inputs with the greatest effect were those related to oxygen content (NO3) or oxygen demand (NH3-NL). Temperature was the parameter with the least effect, whereas pH provided the lowest contribution to the proposed model. To evaluate the performance of the model, three statistical indices were used: the coefficient of determination (R²), the mean absolute prediction error, and the correlation coefficient. The performance of the ANFIS model was compared with an artificial neural network model. The ANFIS model was capable of providing greater accuracy, particularly in the case of extreme events.
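
    The three evaluation indices named above can be computed directly; the DO values below are hypothetical.

```python
import math

def r_squared(obs, pred):
    """Coefficient of determination (R^2)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def mean_absolute_error(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def correlation(obs, pred):
    """Pearson correlation coefficient."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Hypothetical observed vs. predicted DO concentrations (mg/L).
do_obs = [5.0, 6.0, 7.0, 8.0]
do_pred = [5.1, 5.9, 7.2, 7.8]
r2 = r_squared(do_obs, do_pred)             # 0.98
mae = mean_absolute_error(do_obs, do_pred)  # 0.15
```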

  3. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  5. Predicting the Overall Spatial Quality of Automotive Audio Systems

    NASA Astrophysics Data System (ADS)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. 
The resulting model predicts the overall spatial
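
    As a sketch of what an IACC-based metric measures: the interaural cross-correlation coefficient is the peak of the normalized cross-correlation between the two ear signals over a short lag range (roughly ±1 ms). The signals below are synthetic; the QESTRAL metrics themselves are more elaborate.

```python
import math

def iacc(left, right, max_lag):
    """Interaural cross-correlation coefficient: peak of the normalized
    cross-correlation between ear signals over lags of +/- max_lag samples."""
    denom = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[n] * right[n + lag]
                for n in range(len(left)) if 0 <= n + lag < len(right))
        best = max(best, abs(s) / denom)
    return best

# Synthetic ear signals: identical signals are fully correlated (IACC = 1).
sig = [math.sin(0.1 * n) for n in range(200)]
identical = iacc(sig, sig, max_lag=20)
shifted = iacc(sig, [0.0] * 5 + sig[:-5], max_lag=20)  # delay still inside the lag range
```

Lower IACC values correspond to more decorrelated ear signals, which listeners tend to associate with greater perceived width and envelopment.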

  6. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines

    PubMed Central

    2014-01-01

    Background: It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. Results: We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion: SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231

  7. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines.

    PubMed

    Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin

    2014-04-28

    It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
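
    The abstract does not specify how SMOQ maps local scores to a global score; one common mapping for this purpose (assumed here for illustration, in the style of S-score/GDT-type measures) averages a bounded transform of the per-residue deviations.

```python
def local_to_global(deviations, d0=3.0):
    """Map per-residue predicted distance deviations (in Angstroms) to a
    single global quality score in (0, 1]; d0 controls the falloff."""
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in deviations) / len(deviations)

perfect = local_to_global([0.0] * 100)  # 1.0: every residue predicted on target
poor = local_to_global([10.0] * 100)    # ~0.08: large deviations everywhere
```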

  8. Blind prediction of natural video quality.

    PubMed

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.

  9. Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process

    NASA Astrophysics Data System (ADS)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted with eight sequential runs and replicated three times. The TSK fuzzy predictive model achieved a 99% accuracy rate, which suggests that it is a suitable and practical method for the non-linear laser lathing process.
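
    A minimal first-order Sugeno (TSK) inference sketch with two rules over a single input. The membership and consequent parameters are invented for illustration; the actual model uses all three machining inputs (feed rate, cutting speed, depth of cut).

```python
import math

def gaussian(x, center, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def tsk_predict(feed_rate):
    """First-order Sugeno model: rule firing strengths weight linear consequents."""
    # Rule 1: IF feed rate is LOW  THEN taper = 0.02 * feed + 0.10
    # Rule 2: IF feed rate is HIGH THEN taper = 0.05 * feed + 0.30
    w1 = gaussian(feed_rate, center=10.0, sigma=5.0)
    w2 = gaussian(feed_rate, center=30.0, sigma=5.0)
    y1 = 0.02 * feed_rate + 0.10
    y2 = 0.05 * feed_rate + 0.30
    return (w1 * y1 + w2 * y2) / (w1 + w2)  # weighted-average defuzzification

low = tsk_predict(10.0)   # dominated by rule 1 -> close to 0.3
high = tsk_predict(30.0)  # dominated by rule 2 -> close to 1.8
```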

  10. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    NASA Astrophysics Data System (ADS)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have attempted to investigate the spatial trends of water quality, more attention is yet to be paid to the consequences of considering and ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that the SAC inherently possessed by a response variable (i.e., a water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R², Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC during the modeling procedure. The main objective was to analyze whether water quality parameters with higher Moran's I values (an inherent SAC measure) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g. USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and a decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater the improvement in model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of
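
    Moran's I, the inherent-SAC measure used here, can be computed for a toy set of sites with a binary adjacency matrix (the layout and values below are hypothetical).

```python
def morans_i(values, weights):
    """Moran's I for a list of site values and a symmetric adjacency (weight)
    matrix; I > 0 indicates positive spatial autocorrelation."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Four sites along a stream; neighbours share an edge (hypothetical layout).
adjacency = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
# Clustered water-quality values (high upstream, low downstream) -> positive I.
i_clustered = morans_i([1.0, 1.0, -1.0, -1.0], adjacency)  # 1/3
```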

  11. Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality

    NASA Astrophysics Data System (ADS)

    Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.

    2017-12-01

    Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
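
    The database-then-data-driven-model workflow can be illustrated in miniature, with a nearest-neighbour lookup standing in for the neural network and SVM models trained in the study (all scenario numbers are invented).

```python
# Hypothetical simulation database: (rainfall mm/h, slope %) -> runoff mm/h,
# as if generated by many runs of a physically-based model such as HYDRUS-1D.
database = [
    ((10.0, 1.0), 0.5),
    ((10.0, 5.0), 1.2),
    ((30.0, 1.0), 4.0),
    ((30.0, 5.0), 7.5),
    ((50.0, 5.0), 15.0),
]

def predict_runoff(rainfall, slope):
    """1-nearest-neighbour surrogate: return the output of the closest
    simulated scenario (a stand-in for the trained NN/SVM/RNN models)."""
    def dist(entry):
        (r, s), _ = entry
        return (r - rainfall) ** 2 + (s - slope) ** 2
    return min(database, key=dist)[1]

estimate = predict_runoff(29.0, 4.5)  # closest scenario is (30, 5) -> 7.5
```

The point of the surrogate is speed: once trained on the simulation database, it answers queries without re-running the physically-based model.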

  12. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes

    PubMed Central

    Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-01-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated, perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed‐batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647–1661, 2017 PMID:28786215

  13. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. Composite scaled sensitivities, which
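
    PEST's parameter-estimation mode minimizes a weighted least squares objective. The sketch below uses a toy one-parameter model and a grid search standing in for PEST's iterative Gauss-Marquardt-Levenberg updates (data hypothetical).

```python
def objective(param, observations, weights, model):
    """Weighted least squares objective: sum of squared weighted residuals."""
    return sum((w * (obs - model(param, x))) ** 2
               for (x, obs), w in zip(observations, weights))

def simulate(p, x):
    """Toy N-flux 'model': simulated flux = param * drain flow."""
    return p * x

# Hypothetical (drain flow, observed N flux) pairs; true parameter is near 2.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
wts = [1.0, 1.0, 1.0]

# Coarse parameter sweep standing in for PEST's iterative adjustment.
candidates = [p / 100 for p in range(100, 301)]
best = min(candidates, key=lambda p: objective(p, data, wts, simulate))  # 1.99
```

PEST additionally reports parameter sensitivities and confidence intervals around the minimizing parameter set, which this sketch omits.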

  14. Comprehensive model for predicting perceptual image quality of smart mobile devices.

    PubMed

    Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng

    2015-01-01

    An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted from its two constituent attributes with multiple linear regression functions for different types of images, respectively, and then mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data.
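
    Predicting overall quality from two constituent attributes by multiple linear regression can be sketched with a small normal-equations fit. The visual scores below are hypothetical and constructed to follow an exact linear relation, so the fit recovers the coefficients.

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Hypothetical categorical-judgment scores: (naturalness, contrast) per image.
samples = [(3.0, 2.0), (4.0, 3.0), (5.0, 3.5), (2.0, 4.0), (4.5, 4.5)]
overall = [0.6 * nat + 0.3 * con + 0.5 for nat, con in samples]

# Normal equations X'X beta = X'y with columns [naturalness, contrast, 1].
rows = [[nat, con, 1.0] for nat, con in samples]
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * y for r, y in zip(rows, overall)) for i in range(3)]
beta = solve(xtx, xty)  # recovers [0.6, 0.3, 0.5]
```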

  15. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    PubMed

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated, perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647-1661, 2017. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers.
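
    The serialized step-change identification idea can be sketched by fitting a first-order discrete model y[k+1] = a*y[k] + b*u[k] to step-response data, the kind of model a model predictive controller then uses for prediction. Parameter values are invented; the real manipulated input is galactose concentration and the controlled attribute is %galactosylation.

```python
# Simulated step-change experiment: first-order response with a = 0.9, b = 0.2.
a_true, b_true = 0.9, 0.2
u = [1.0] * 30                       # step input (e.g., galactose feed level)
y = [0.0]
for k in range(len(u) - 1):
    y.append(a_true * y[k] + b_true * u[k])

# Least-squares fit of y[k+1] = a*y[k] + b*u[k] via 2x2 normal equations.
s_yy = sum(y[k] * y[k] for k in range(len(u) - 1))
s_yu = sum(y[k] * u[k] for k in range(len(u) - 1))
s_uu = sum(u[k] * u[k] for k in range(len(u) - 1))
s_y1 = sum(y[k] * y[k + 1] for k in range(len(u) - 1))
s_u1 = sum(u[k] * y[k + 1] for k in range(len(u) - 1))
det = s_yy * s_uu - s_yu * s_yu
a_hat = (s_y1 * s_uu - s_u1 * s_yu) / det  # recovers ~0.9
b_hat = (s_yy * s_u1 - s_yu * s_y1) / det  # recovers ~0.2
```

With noiseless data the fit is exact; real perfusion data would need multiple serialized steps and noise handling, which is what the automated apparatus in the study provides.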

  16. Congestion Prediction Modeling for Quality of Service Improvement in Wireless Sensor Networks

    PubMed Central

    Lee, Ga-Won; Lee, Sung-Young; Huh, Eui-Nam

    2014-01-01

    Information technology (IT) is pushing ahead with drastic reforms of modern life for improvement of human welfare. Objects constitute “Information Networks” through smart, self-regulated information gathering that also recognizes and controls current information states in Wireless Sensor Networks (WSNs). Information observed from sensor networks in real-time is used to increase quality of life (QoL) in various industries and daily life. One of the key challenges of the WSNs is how to achieve lossless data transmission. Although nowadays sensor nodes have enhanced capacities, it is hard to assure lossless and reliable end-to-end data transmission in WSNs due to the unstable wireless links and low hardware resources to satisfy high quality of service (QoS) requirements. We propose a node and path traffic prediction model to predict and minimize the congestion. This solution includes prediction of packet generation due to network congestion from both periodic and event data generation. Simulation using NS-2 and Matlab is used to demonstrate the effectiveness of the proposed solution. PMID:24784035

  17. Prediction of Indoor Air Exposure from Outdoor Air Quality Using an Artificial Neural Network Model for Inner City Commercial Buildings.

    PubMed

    Challoner, Avril; Pilla, Francesco; Gill, Laurence

    2015-12-01

    NO₂ and particulate matter are the air pollutants of most concern in Ireland, with possible links to the higher respiratory and cardiovascular mortality and morbidity rates found in the country compared to the rest of Europe. Currently, air quality limits in Europe only cover outdoor environments, yet the quality of indoor air is an essential determinant of a person's well-being, especially since the average person spends more than 90% of their time indoors. The modelling conducted in this research aims to provide a framework for epidemiological studies by the use of publicly available data from fixed outdoor monitoring stations to predict indoor air quality more accurately. Predictions are made using two modelling techniques: the Personal-exposure Activity Location Model (PALM), to predict outdoor air quality at a particular building, and Artificial Neural Networks, to model the indoor/outdoor relationship of the building. This joint approach has been used to predict indoor air concentrations for three inner city commercial buildings in Dublin, where parallel indoor and outdoor diurnal monitoring had been carried out on site. This modelling methodology has been shown to provide reasonable predictions of average NO₂ indoor air quality compared to the monitored data, but did not perform well in the prediction of indoor PM2.5 concentrations. Hence, this approach could be used to determine more rigorously the NO₂ exposures of those who work and/or live in the city centre, which can then be linked to potential health impacts.

  18. Prediction of Indoor Air Exposure from Outdoor Air Quality Using an Artificial Neural Network Model for Inner City Commercial Buildings

    PubMed Central

    Challoner, Avril; Pilla, Francesco; Gill, Laurence

    2015-01-01

    NO2 and particulate matter are the air pollutants of most concern in Ireland, with possible links to the higher respiratory and cardiovascular mortality and morbidity rates found in the country compared to the rest of Europe. Currently, air quality limits in Europe only cover outdoor environments, yet the quality of indoor air is an essential determinant of a person’s well-being, especially since the average person spends more than 90% of their time indoors. The modelling conducted in this research aims to provide a framework for epidemiological studies by the use of publicly available data from fixed outdoor monitoring stations to predict indoor air quality more accurately. Predictions are made using two modelling techniques: the Personal-exposure Activity Location Model (PALM), to predict outdoor air quality at a particular building, and Artificial Neural Networks, to model the indoor/outdoor relationship of the building. This joint approach has been used to predict indoor air concentrations for three inner city commercial buildings in Dublin, where parallel indoor and outdoor diurnal monitoring had been carried out on site. This modelling methodology has been shown to provide reasonable predictions of average NO2 indoor air quality compared to the monitored data, but did not perform well in the prediction of indoor PM2.5 concentrations. Hence, this approach could be used to determine more rigorously the NO2 exposures of those who work and/or live in the city centre, which can then be linked to potential health impacts. PMID:26633448

  19. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    NASA Astrophysics Data System (ADS)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    Aiming at the difficulty in quality prediction of sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation on the basis of the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. As the process is simplified in the mechanism models, these models are not able to describe high nonlinearity. Therefore, errors are inevitable. For this reason, the time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has a high accuracy and can meet the requirement for industrial applications.
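
    A sketch of the hybrid structure, with a simple exponentially time-weighted residual average standing in for the ELM-based compensator (drum-index numbers are invented).

```python
def hybrid_predict(mechanism_pred, past_errors, decay=0.6):
    """Mechanism-model prediction plus a time-weighted compensation term:
    recent residuals (actual - mechanism) are weighted more heavily."""
    if not past_errors:
        return mechanism_pred
    weights = [decay ** i for i in range(len(past_errors))]  # newest first
    compensation = (sum(w * e for w, e in zip(weights, past_errors))
                    / sum(weights))
    return mechanism_pred + compensation

# Hypothetical drum-index history: the mechanism model underestimates by ~2 units.
actual = [80.0, 81.0, 80.5, 80.8]
mechanism = [78.1, 79.0, 78.4, 78.9]
errors_newest_first = [a - m for a, m in zip(actual, mechanism)][::-1]

raw = 78.6                      # next mechanism-model prediction
compensated = hybrid_predict(raw, errors_newest_first)  # ~80.6
```

The hybrid idea is that the mechanism model carries the physics while the data-driven term absorbs the unmodeled nonlinearity; the study trains an ELM for that term rather than this simple weighted average.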

  20. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
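
    The extrapolative approach can be sketched with simple exponential smoothing, one member of the family of techniques compared in the study (the monthly counts are hypothetical).

```python
def exponential_smoothing_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the forecast for the next period is the
    smoothed level, which weights recent observations more heavily."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical monthly incident counts.
counts = [120, 118, 125, 130, 128, 135, 140]
forecast = exponential_smoothing_forecast(counts)

# On a constant series the forecast reproduces the constant.
flat = exponential_smoothing_forecast([50.0] * 12)
```

Because the level is a lagged average, the forecast trails a trending series; the study's quadratic and exponential functional forms address exactly that limitation.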

  1. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Postprocessing for Air Quality Predictions

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
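
    A minimal Kalman-filter-inspired running bias correction of the kind described; the gain and the ozone values are hypothetical.

```python
def bias_corrected(forecasts, observations, gain=0.3):
    """Correct each forecast with a recursive estimate of the systematic bias,
    updated after the matching observation arrives."""
    bias = 0.0
    corrected = []
    for fcst, obs in zip(forecasts, observations):
        corrected.append(fcst - bias)            # use current bias estimate
        bias += gain * ((fcst - obs) - bias)     # then learn from the new error
    return corrected

# Hypothetical surface ozone forecasts (ppb) with a persistent +10 ppb bias.
observed = [40.0, 42.0, 41.0, 43.0, 44.0, 42.0, 45.0, 43.0]
forecast = [o + 10.0 for o in observed]

raw_mae = sum(abs(f - o) for f, o in zip(forecast, observed)) / len(observed)
corrected = bias_corrected(forecast, observed)
corr_mae = sum(abs(c - o) for c, o in zip(corrected, observed)) / len(observed)
```

The recursion removes only the systematic component of the error; random errors remain, which is why the review also covers regression and analog-based methods.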

  3. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  4. Hyperspectral Imaging for Predicting the Internal Quality of Kiwifruits Based on Variable Selection Algorithms and Chemometric Models.

    PubMed

    Zhu, Hongyan; Chu, Bingquan; Fan, Yangyang; Tao, Xiaoya; Yin, Wenxin; He, Yong

    2017-08-10

    We investigated the feasibility and potential of determining firmness, soluble solids content (SSC), and pH in kiwifruits using hyperspectral imaging combined with variable selection methods and calibration models. The images were acquired by a push-broom hyperspectral reflectance imaging system covering two spectral ranges. Weighted regression coefficients (BW), the successive projections algorithm (SPA), and genetic algorithm-partial least squares (GAPLS) were compared and evaluated for the selection of effective wavelengths. Moreover, multiple linear regression (MLR), partial least squares regression, and least squares support vector machine (LS-SVM) models were developed to predict the quality attributes quantitatively from the effective wavelengths. The established models, particularly SPA-MLR, SPA-LS-SVM, and GAPLS-LS-SVM, performed well. The SPA-MLR models for firmness (Rpre = 0.9812, RPD = 5.17) and SSC (Rpre = 0.9523, RPD = 3.26) at 380-1023 nm showed excellent performance, whereas GAPLS-LS-SVM was the optimal model at 874-1734 nm for predicting pH (Rpre = 0.9070, RPD = 2.60). Image processing algorithms were developed to apply the predictive model to every pixel to generate prediction maps that visualize the spatial distribution of firmness and SSC. Hence, the results clearly demonstrate that hyperspectral imaging has potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.

  5. Comparison of modelling accuracy with and without exploiting automated optical monitoring information in predicting the treated wastewater quality.

    PubMed

    Tomperi, Jani; Leiviskä, Kauko

    2018-06-01

    Traditionally, modelling in an activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has increased, image analysis results have in recent years been used more frequently to predict the characteristics of wastewater. This study shows that neither the traditional process measurements nor the automated optical monitoring variables by themselves yield the best predictive models for the treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which show the level of and changes in treated wastewater quality, are achieved by using these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important in modelling a certain quality parameter, regardless of the other input variables available.

  6. Predictive Techniques for Spacecraft Cabin Air Quality Control

    NASA Technical Reports Server (NTRS)

    Perry, J. L.; Cromes, Scott D. (Technical Monitor)

    2001-01-01

    As assembly of the International Space Station (ISS) proceeds, predictive techniques are used to determine the best approach for handling a variety of cabin air quality challenges. These techniques use equipment offgassing data collected from each ISS module before flight to characterize the trace chemical contaminant load. Combined with crew metabolic loads, these data serve as input to a predictive model for assessing the capability of the onboard atmosphere revitalization systems to handle the overall trace contaminant load as station assembly progresses. The techniques for predicting in-flight air quality are summarized along with results from early ISS mission analyses. Results from ground-based analyses of in-flight air quality samples are compared with the predictions to demonstrate the technique's relative conservatism.

  7. Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process

    NASA Astrophysics Data System (ADS)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of a conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat-affected zone, and poor finishing and dimensional accuracy, especially taper quality when machining stock with a high length-to-diameter ratio. Therefore, a novel approach was devised to transform a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed, comprising eight sequential runs replicated three times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model, which yielded an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach was found to be suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.

  8. Predicting aged pork quality using a portable Raman device.

    PubMed

    Santos, C C; Zhao, J; Dong, X; Lonergan, S M; Huff-Lonergan, E; Outhouse, A; Carlson, K B; Prusa, K J; Fedler, C A; Yu, C; Shackelford, S D; King, D A; Wheeler, T L

    2018-05-29

    The utility of Raman spectroscopic signatures of fresh pork loin (1 d and 15 d postmortem) in predicting fresh pork tenderness and slice shear force (SSF) was determined. Partial least squares (PLS) models showed that sensory tenderness and SSF are weakly correlated (R² = 0.2). Raman spectral data were collected in 6 s using a portable Raman spectrometer (RS). A PLS regression model was developed to quantitatively predict the tenderness scores and SSF values from the Raman spectral data, with very limited success. It was discovered that the prediction accuracies for day 15 postmortem samples are significantly greater than those for day 1 postmortem samples. Classification models were developed to predict tenderness at the two ends of sensory quality, "poor" vs. "good". The accuracies of classification into the different quality categories (1st to 4th percentile) are also greater for the day 15 postmortem samples for sensory tenderness (93.5% vs. 76.3%) and SSF (92.8% vs. 76.1%). RS has the potential to become a rapid on-line screening tool for pork producers to quickly select meats with superior quality and/or cull poor quality to meet market demands and expectations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Identifying pollution sources and predicting urban air quality using ensemble learning methods

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar P.; Gupta, Shikha; Rai, Premanjali

    2013-12-01

    In this study, principal components analysis (PCA) was performed to identify air pollution sources, and tree-based ensemble learning models were constructed to predict the urban air quality of Lucknow (India) using air quality and meteorological databases covering a period of five years. PCA identified vehicular emissions and fuel combustion as the major air pollution sources. The air quality indices revealed that the air quality was unhealthy during the summer and winter. Ensemble models were constructed to discriminate between the seasonal air qualities and the factors responsible for the discrimination, and to predict the air quality indices. Accordingly, single decision tree (SDT), decision tree forest (DTF), and decision treeboost (DTB) models were constructed, and their generalization and predictive performance was evaluated in terms of several statistical parameters and compared with a conventional machine learning benchmark, support vector machines (SVM). The DT and SVM models discriminated the seasonal air quality with misclassification rates (MR) of 8.32% (SDT), 4.12% (DTF), 5.62% (DTB), and 6.18% (SVM) on the complete data. The AQI and CAQI regression models yielded correlations between measured and predicted values and root mean squared errors of 0.901, 6.67 and 0.825, 9.45 (SDT); 0.951, 4.85 and 0.922, 6.56 (DTF); 0.959, 4.38 and 0.929, 6.30 (DTB); and 0.890, 7.00 and 0.836, 9.16 (SVR) on the complete data. The DTF and DTB models outperformed the SVM in both classification and regression, which could be attributed to the incorporation of the bagging and boosting algorithms in these models. The proposed ensemble models successfully predicted the urban ambient air quality and can be used as effective tools for its management.
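    A minimal analogue of the tree ensembles above, using scikit-learn's RandomForestRegressor as a stand-in for the "decision tree forest" (bagging) and GradientBoostingRegressor for "decision treeboost" (boosting). The pollutant and meteorology features and the AQI targets are synthetic.

```python
# Bagged vs. boosted tree ensembles for AQI regression on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 6))      # e.g. SO2, NO2, PM10, temperature, RH, wind
aqi = 100 + 30 * X[:, 2] + 10 * X[:, 0] - 5 * X[:, 4] \
      + rng.normal(scale=5, size=n)

scores = {}
for name, model in [("DTF-like", RandomForestRegressor(random_state=0)),
                    ("DTB-like", GradientBoostingRegressor(random_state=0))]:
    scores[name] = cross_val_score(model, X, aqi, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {scores[name]:.3f}")
```

    Bagging averages trees grown on bootstrap resamples, while boosting fits each tree to the residuals of its predecessors; both mechanisms are what the abstract credits for the DTF/DTB advantage over a single tree or SVM.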

  10. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over-estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique, and compares it with a common technique, for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the Southeast based on split-sample validation. The approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT-simulated values. The common approach is a regression-based technique that uses ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescales is also discussed.

  11. [A predictive model for the quality of sexual life in hysterectomized women].

    PubMed

    Urrutia, María Teresa; Araya, Alejandra; Rivera, Soledad; Viviani, Paola; Villarroel, Luis

    2007-03-01

    The effects of hysterectomy on sexuality have been extensively studied. The aim was to establish a model to predict the quality of sexual life in hysterectomized women six months after surgery. An analytical, longitudinal, and prospective study of 90 hysterectomized women aged 45+/-7 years was conducted. Two structured interviews, at the time of surgery and six months later, were carried out to determine the characteristics of sexuality and communication within the couple. In the two interviews, communication and the quality of sexual life were described as "good" by 72% and 77% of women, respectively (NS). The variables that had a 40% influence on the quality of sexual life six months after surgery were oophorectomy status, the presence of orgasm, the characteristics of communication, and baseline sexuality with the partner. The sexuality of hysterectomized women depends, to a great extent, on pre-surgical variables. Therefore, it is important to consider these variables in the education of hysterectomized women.

  12. A statistical model for water quality predictions from a river discharge using coastal observations

    NASA Astrophysics Data System (ADS)

    Kim, S.; Terrill, E. J.

    2007-12-01

    Understanding and predicting coastal ocean water quality has benefits for reducing human health risks, protecting the environment, and improving local economies which depend on clean beaches. Continuous observations of coastal physical oceanography increase the understanding of the processes which control the fate and transport of a riverine plume which potentially contains high levels of contaminants from the upstream watershed. A data-driven model of the fate and transport of river plume water from the Tijuana River has been developed using surface current observations provided by a network of HF radar operated as part of a local coastal observatory that has been in place since 2002. The model outputs are compared with water quality sampling of shoreline indicator bacteria, and the skill of an alarm for low water quality is evaluated using the receiver operating characteristic (ROC) curve. In addition, statistical analysis of beach closures in comparison with environmental variables is also discussed.
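    A minimal illustration of the evaluation described above: scoring a low-water-quality alarm with a receiver operating characteristic (ROC) curve. The "plume exposure" scores and exceedance labels below are synthetic placeholders for the model output and shoreline bacteria samples.

```python
# ROC evaluation of a binary water-quality alarm on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
n = 500
exceeded = rng.random(n) < 0.2       # days when indicator bacteria exceeded limits
# Alarm score: higher when quality was truly low, plus noise
score = exceeded * 1.0 + rng.normal(scale=0.7, size=n)

auc = roc_auc_score(exceeded, score)
fpr, tpr, thresholds = roc_curve(exceeded, score)
print(f"AUC = {auc:.2f} over {len(thresholds)} candidate alarm thresholds")
```

    Sweeping the threshold trades missed closures (false negatives) against unnecessary ones (false positives); the area under the curve summarizes alarm skill independently of any single threshold choice.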

  13. Sleep Quality Prediction From Wearable Data Using Deep Learning

    PubMed Central

    Sathyanarayana, Aarti; Joty, Shafiq; Ofli, Ferda; Srivastava, Jaideep; Elmagarmid, Ahmed; Arora, Teresa; Taheri, Shahrad

    2016-01-01

    Background The importance of sleep is paramount to health. Insufficient sleep can reduce physical, emotional, and mental well-being and can lead to a multitude of health complications among people with chronic conditions. Physical activity and sleep are highly interrelated health behaviors. Our physical activity during the day (ie, awake time) influences our quality of sleep, and vice versa. The current popularity of wearables for tracking physical activity and sleep, including actigraphy devices, can foster the development of new advanced data analytics. This can help to develop new electronic health (eHealth) applications and provide more insights into sleep science. Objective The objective of this study was to evaluate the feasibility of predicting sleep quality (ie, poor or adequate sleep efficiency) given the physical activity wearable data during awake time. In this study, we focused on predicting good or poor sleep efficiency as an indicator of sleep quality. Methods Actigraphy sensors are wearable medical devices used to study sleep and physical activity patterns. The dataset used in our experiments contained the complete actigraphy data from a subset of 92 adolescents over 1 full week. Physical activity data during awake time was used to create predictive models for sleep quality, in particular, poor or good sleep efficiency. The physical activity data from sleep time was used for the evaluation. We compared the predictive performance of traditional logistic regression with more advanced deep learning methods: multilayer perceptron (MLP), convolutional neural network (CNN), simple Elman-type recurrent neural network (RNN), long short-term memory (LSTM-RNN), and a time-batched version of LSTM-RNN (TB-LSTM). Results Deep learning models were able to predict the quality of sleep (ie, poor or good sleep efficiency) based on wearable data from awake periods. More specifically, the deep learning methods performed better than traditional logistic regression. CNN
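    The traditional baseline the study compares against can be sketched as follows: a logistic regression classifying poor vs. good sleep efficiency from daytime activity features. The activity features below are synthetic, not the adolescents' actigraphy data.

```python
# Logistic-regression baseline for sleep-efficiency classification on
# synthetic daytime activity features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 92 * 7                          # e.g. 92 adolescents x 7 nights
X = rng.normal(size=(n, 4))         # e.g. mean/peak activity, sedentary time, steps
good_sleep = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.8, size=n)) > 0

acc = cross_val_score(LogisticRegression(), X, good_sleep, cv=5).mean()
print(f"logistic-regression baseline accuracy: {acc:.2f}")
```

    The deep models in the study consume the raw minute-level activity sequence instead of summary features, which is where CNNs and LSTMs gain their advantage over this flat baseline.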

  14. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms the physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as multilayer perceptron, RBF, and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.

  15. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417

  16. Prediction of passenger ride quality in a multifactor environment

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Leatherwood, J. D.

    1976-01-01

    A model being developed permits the understanding and prediction of passenger discomfort in a multifactor environment, with particular emphasis on combined noise and vibration. The model has general applicability to diverse transportation systems and provides a means of developing ride quality design criteria as well as a diagnostic tool for identifying the vibration and/or noise stimuli causing discomfort. Presented are: (1) a review of the basic theoretical and mathematical computations associated with the model, (2) a discussion of methodological and criteria investigations for both the vertical and roll axes of vibration, (3) a description of within-axis masking of discomfort responses for the vertical axis, thereby allowing prediction of the total discomfort due to any random vertical vibration, (4) a discussion of initial data on between-axis masking, and (5) a discussion of a study directed toward extension of the vibration model to the more general case of predicting ride quality in combined noise and vibration environments.

  17. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models, with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, i.e., a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For σc comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals, as well as correlation analyses and laboratory data, in judging model physics. In general, σc models and predictions of the probability distribution of the fluctuating concentration c, p(c), are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that p(c) approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and p(c) for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool.
    However, it predicts an important quantity for applications: the uncertainty in the very high and

  18. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting-edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server's development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine-readable data files for each set of predictions are also provided for developers, and comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0, for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  19. Impact of inherent meteorology uncertainty on air quality model predictions

    EPA Science Inventory

    It is well established that there are a number of different classifications and sources of uncertainties in environmental modeling systems. Air quality models rely on two key inputs, namely, meteorology and emissions. When using air quality models for decision making, it is impor...

  20. NOAA's National Air Quality Predictions and Development of Aerosol and Atmospheric Composition Prediction Components for the Next Generation Global Prediction System

    NASA Astrophysics Data System (ADS)

    Stajner, I.; Hou, Y. T.; McQueen, J.; Lee, P.; Stein, A. F.; Tong, D.; Pan, L.; Huang, J.; Huang, H. C.; Upadhayay, S.

    2016-12-01

    NOAA provides operational air quality predictions using the National Air Quality Forecast Capability (NAQFC): ozone and wildfire smoke for the United States and airborne dust for the contiguous 48 states at http://airquality.weather.gov. NOAA's predictions of fine particulate matter (PM2.5) became publicly available in February 2016. Ozone and PM2.5 predictions are produced using a system that operationally links the Community Multiscale Air Quality (CMAQ) model with meteorological inputs from the North American Mesoscale forecast model (NAM). Smoke and dust predictions are provided using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model. Current NAQFC work focuses on updating CMAQ to version 5.0.2, improving PM2.5 predictions, and updating emissions estimates, especially for NOx using recently observed trends. Wildfire smoke emissions from a newer version of the USFS BlueSky system are being included in a new configuration of the NAQFC NAM-CMAQ system, which is re-run for the previous 24 hours when wildfires were observed from satellites, to better represent wildfire emissions prior to initiating predictions for the next 48 hours. In addition, NOAA is developing the Next Generation Global Prediction System (NGGPS) to represent the earth system for extended weather prediction. NGGPS will include a representation of atmospheric dynamics, physics, aerosols, and atmospheric composition, as well as coupling with ocean, wave, ice, and land components. NGGPS is being developed with broad community involvement, including community-developed components and academic research to develop and test improvements for potential inclusion in NGGPS. Several investigators at NOAA's research laboratories and in academia are working to improve the aerosol and gaseous chemistry representation for NGGPS, to develop and evaluate the representation of atmospheric composition, and to establish and improve the coupling with radiation and microphysics

  1. Ensemble prediction of air quality using the WRF/CMAQ model system for health effect studies in China

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Li, Xun; Huang, Lin; Ying, Qi; Zhang, Qiang; Zhao, Bin; Wang, Shuxiao; Zhang, Hongliang

    2017-11-01

    Accurate exposure estimates are required for health effect analyses of severe air pollution in China. Chemical transport models (CTMs) are widely used to provide the spatial distribution, chemical composition, particle size fractions, and source origins of air pollutants. The accuracy of air quality predictions in China is greatly affected by uncertainties in emission inventories. The Community Multiscale Air Quality (CMAQ) model with meteorological inputs from the Weather Research and Forecasting (WRF) model was used in this study to simulate air pollutants in China in 2013. Four simulations were conducted with four different anthropogenic emission inventories: the Multi-resolution Emission Inventory for China (MEIC), the Emission Inventory for China by the School of Environment at Tsinghua University (SOE), the Emissions Database for Global Atmospheric Research (EDGAR), and the Regional Emission inventory in Asia version 2 (REAS2). Model performance of each simulation was evaluated against available observation data from 422 sites in 60 cities across China. Model predictions of O3 and PM2.5 generally meet the model performance criteria, but performance differences exist among regions, pollutants, and inventories. Ensemble predictions were calculated by linearly combining the results from the different inventories to minimize the sum of the squared errors between the ensemble results and the observations in all cities. The ensemble concentrations show improved agreement with observations in most cities. The mean fractional bias (MFB) and mean fractional error (MFE) of the ensemble annual PM2.5 in the 60 cities are -0.11 and 0.24, respectively, which are better than the MFB (-0.25 to -0.16) and MFE (0.26-0.31) of the individual simulations. The ensemble annual daily maximum 1 h O3 (O3-1h) concentrations are also improved, with a mean normalized bias (MNB) of 0.03 and mean normalized error (MNE) of 0.14, compared to MNB of 0.06-0.19 and
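    The ensemble step described above, linearly combining per-inventory simulations with weights that minimize the sum of squared errors against observations, can be sketched with an ordinary least-squares solve. The concentrations here are made up; in the paper the fit spans cities for CMAQ runs driven by MEIC, SOE, EDGAR, and REAS2.

```python
# Least-squares ensemble weights for four biased "simulations" of the truth.
import numpy as np

rng = np.random.default_rng(7)
n_obs = 60                                   # e.g. 60 cities
truth = rng.uniform(20, 120, size=n_obs)     # "observed" annual PM2.5 (ug/m3)
# Four simulations, each with its own multiplicative bias and noise
sims = np.column_stack([truth * b + rng.normal(scale=5, size=n_obs)
                        for b in (0.7, 0.9, 1.2, 1.4)])

# Weights minimizing sum((sims @ w - truth)^2)
weights, *_ = np.linalg.lstsq(sims, truth, rcond=None)
ensemble = sims @ weights
rmse_single = np.sqrt(((sims - truth[:, None]) ** 2).mean(axis=0))
rmse_ens = np.sqrt(((ensemble - truth) ** 2).mean())
print("weights:", weights.round(3))
print("best single RMSE:", rmse_single.min().round(2), "ensemble RMSE:", rmse_ens.round(2))
```

    Because the least-squares optimum is taken over all weight vectors, including the ones that select a single member, the in-sample ensemble error can never exceed that of the best individual simulation.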

  2. a Bayesian Synthesis of Predictions from Different Models for Setting Water Quality Criteria

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G. B.; Ecological Modelling Laboratory

    2011-12-01

    Skeptical views of the scientific value of modelling argue that there is no true model of an ecological system, but rather several adequate descriptions with different conceptual bases and structures. In this regard, rather than picking the single "best-fit" model to predict future system responses, we can use Bayesian model averaging to synthesize the forecasts from different models. Hence, by acknowledging that models from different areas of the complexity spectrum have different strengths and weaknesses, Bayesian model averaging is an appealing approach for improving predictive capacity and for overcoming the ambiguity surrounding model selection or the risk of basing ecological forecasts on a single model. Our study addresses this question using a complex ecological model, developed by Ramin et al. (2011; Environ Modell Softw 26, 337-353) to guide the water quality criteria setting process in Hamilton Harbour (Ontario, Canada), along with a simpler plankton model that considers the interplay among phosphate, detritus, and generic phytoplankton and zooplankton state variables. This simpler approach is more easily subjected to detailed sensitivity analysis and also has the advantage of fewer unconstrained parameters. Using Markov chain Monte Carlo simulations, we calculate the relative mean standard error to assess the posterior support of the two models given the existing data. Predictions from the two models are then combined using the respective standard error estimates as weights in a weighted model average. The model averaging approach is used to examine the robustness of predictive statements made in our earlier work regarding the response of Hamilton Harbour to different nutrient loading reduction strategies. The two eutrophication models are then used in conjunction with the SPAtially Referenced Regressions On Watershed attributes (SPARROW) watershed model. The Bayesian nature of our work is used: (i) to alleviate problems of spatiotemporal
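    A toy version of the averaging idea above: combine predictions from a "complex" and a "simple" model with weights derived from each model's error against data. Inverse mean-squared-error weights are used here as a simple stand-in for the posterior support the abstract describes; all numbers are illustrative.

```python
# Error-weighted model averaging of two competing model predictions.
import numpy as np

observed = np.array([10.0, 12.0, 9.0, 14.0, 11.0])       # e.g. chlorophyll-a
complex_model = np.array([10.5, 11.5, 9.5, 13.0, 11.2])  # closer to the data
simple_model = np.array([12.0, 14.0, 11.0, 16.5, 13.0])  # systematically high

mse = np.array([((m - observed) ** 2).mean()
                for m in (complex_model, simple_model)])
weights = (1 / mse) / (1 / mse).sum()    # better model gets the larger weight
averaged = weights[0] * complex_model + weights[1] * simple_model
print("weights:", weights.round(3))
print("averaged prediction:", averaged.round(2))
```

    Full Bayesian model averaging would replace these fixed weights with posterior model probabilities estimated by MCMC, but the mechanics of the weighted combination are the same.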

  3. Sleep Quality Prediction From Wearable Data Using Deep Learning.

    PubMed

    Sathyanarayana, Aarti; Joty, Shafiq; Fernandez-Luque, Luis; Ofli, Ferda; Srivastava, Jaideep; Elmagarmid, Ahmed; Arora, Teresa; Taheri, Shahrad

    2016-11-04

    The importance of sleep is paramount to health. Insufficient sleep can reduce physical, emotional, and mental well-being and can lead to a multitude of health complications among people with chronic conditions. Physical activity and sleep are highly interrelated health behaviors. Our physical activity during the day (ie, awake time) influences our quality of sleep, and vice versa. The current popularity of wearables for tracking physical activity and sleep, including actigraphy devices, can foster the development of new advanced data analytics. This can help to develop new electronic health (eHealth) applications and provide more insights into sleep science. The objective of this study was to evaluate the feasibility of predicting sleep quality (ie, poor or adequate sleep efficiency) given the physical activity wearable data during awake time. In this study, we focused on predicting good or poor sleep efficiency as an indicator of sleep quality. Actigraphy sensors are wearable medical devices used to study sleep and physical activity patterns. The dataset used in our experiments contained the complete actigraphy data from a subset of 92 adolescents over 1 full week. Physical activity data during awake time was used to create predictive models for sleep quality, in particular, poor or good sleep efficiency. The physical activity data from sleep time was used for the evaluation. We compared the predictive performance of traditional logistic regression with more advanced deep learning methods: multilayer perceptron (MLP), convolutional neural network (CNN), simple Elman-type recurrent neural network (RNN), long short-term memory (LSTM-RNN), and a time-batched version of LSTM-RNN (TB-LSTM). Deep learning models were able to predict the quality of sleep (ie, poor or good sleep efficiency) based on wearable data from awake periods. More specifically, the deep learning methods performed better than traditional logistic regression. CNN had the highest specificity and
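
    As a rough illustration of the logistic-regression baseline the deep models are compared against, a toy scikit-learn classifier on two invented awake-time summary features (the study itself fed full actigraphy time series to the networks; the feature values below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented awake-time features per subject-night:
# [mean activity count, fraction of awake time sedentary]
X = np.array([[320, 0.20], [400, 0.15], [150, 0.70],
              [120, 0.80], [380, 0.25], [140, 0.75]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = good sleep efficiency, 0 = poor

# Fit the baseline and classify two unseen subject-nights.
clf = LogisticRegression().fit(X, y)
preds = clf.predict([[350, 0.30], [130, 0.72]])
print(preds)
```
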

  4. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

    Protein structure prediction has achieved a lot of progress during the last few decades, and a growing number of models can be predicted for a given sequence. Consequently, assessing the qualities of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, which can be roughly divided into three categories: single methods, quasi-single methods and clustering (or consensus) methods. Although these methods achieve much success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models using a single-model method based on a learning-to-rank algorithm to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the qualities of the other models using the average GDT_TS scores between the reference models and the other models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment with state-of-the-art performance, and it is useful for protein structure prediction and model quality assessment.
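
    The reference-based second stage can be sketched with numpy; the pairwise similarity matrix and the learning-to-rank ordering below are assumed values (real GDT_TS scores come from structural superposition):

```python
import numpy as np

# Assumed pairwise GDT_TS-like similarities among 5 decoy models.
S = np.array([
    [1.00, 0.80, 0.75, 0.40, 0.35],
    [0.80, 1.00, 0.70, 0.45, 0.30],
    [0.75, 0.70, 1.00, 0.50, 0.40],
    [0.40, 0.45, 0.50, 1.00, 0.60],
    [0.35, 0.30, 0.40, 0.60, 1.00],
])

ranking = [0, 1, 2, 3, 4]  # assumed output of the learning-to-rank stage
refs = ranking[:2]         # MQAPRank takes the top five; two suffice here

# Predicted quality of each remaining decoy: mean similarity to the references.
scores = {m: S[m, refs].mean() for m in ranking[2:]}
print(scores)
```
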

  5. Model-based monitoring of stormwater runoff quality.

    PubMed

    Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen

    2013-01-01

    Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combining a model with field sampling) affect the information obtained about MP discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by automatic volume-proportional sampling and passive sampling in a storm drainage system on the outskirts of Copenhagen (Denmark) and a 10-year rain series was used to find annual average (AA) and maximum event mean concentrations. Use of this model reduced the uncertainty of predicted AA concentrations compared to a simple stochastic method based solely on data. The predicted AA concentration, obtained by using passive sampler measurements (1 month installation) for calibration of the model, resulted in the same predicted level but with narrower model prediction bounds than by using volume-proportional samples for calibration. This shows that passive sampling allows for a better exploitation of the resources allocated for stormwater quality monitoring.

  6. Predicting perceptual quality of images in realistic scenario using deep filter banks

    NASA Astrophysics Data System (ADS)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

    Classical image perceptual quality assessment models usually resort to natural scene statistic methods, which are based on an assumption that certain reliable statistical regularities hold on undistorted images and will be corrupted by introduced distortions. However, these models usually fail to accurately predict degradation severity of images in realistic scenarios since complex, multiple, and interactive authentic distortions usually appear on them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation and finally, a linear support vector regression model is trained to map the image representation into images' subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.

  7. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained in this way therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. However, all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
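
    The PIT histogram used above to diagnose under-dispersion can be sketched in a few lines; the ensemble forecasts and observations below are invented:

```python
import numpy as np

# Each row: ensemble streamflow volume forecasts for one prediction period.
ensembles = np.array([
    [10., 12., 14., 16., 18.],
    [20., 22., 24., 26., 28.],
    [ 5.,  6.,  7.,  8.,  9.],
])
observed = np.array([13., 27., 5.5])

# PIT value: fraction of ensemble members falling below the observation.
pit = np.array([(ens < obs).mean() for ens, obs in zip(ensembles, observed)])

# A flat histogram indicates a well-dispersed ensemble; a U shape indicates
# under-dispersion, a hump shape over-dispersion.
counts, _ = np.histogram(pit, bins=5, range=(0.0, 1.0))
print(pit, counts)
```
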

  8. Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.

    PubMed

    Woodward, S J

    2001-09-01

    The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.

  9. A new air quality monitoring and early warning system: Air quality assessment and air pollutant concentration prediction.

    PubMed

    Yang, Zhongshan; Wang, Jian

    2017-10-01

    Air pollution in many countries is worsening with industrialization and urbanization, resulting in climate change and affecting people's health, thus making the work of policymakers more difficult. It is therefore both urgent and necessary to establish a more scientific air quality monitoring and early warning system to evaluate the degree of air pollution objectively, and predict pollutant concentrations accurately. However, the integration of air quality assessment and air pollutant concentration prediction to establish an air quality system is not common. In this paper, we propose a new air quality monitoring and early warning system, including an assessment module and forecasting module. In the air quality assessment module, fuzzy comprehensive evaluation is used to determine the main pollutants and evaluate the degree of air pollution more scientifically. In the air pollutant concentration prediction module, a novel hybridization model combining complementary ensemble empirical mode decomposition, a modified cuckoo search and differential evolution algorithm, and an Elman neural network, is proposed to improve the forecasting accuracy of six main air pollutant concentrations. To verify the effectiveness of this system, pollutant data for two cities in China are used. The result of the fuzzy comprehensive evaluation shows that the major air pollutants in Xi'an and Jinan are PM10 and PM2.5, respectively, and that the air quality of Xi'an is better than that of Jinan. The forecasting results indicate that the proposed hybrid model is remarkably superior to all benchmark models on account of its higher prediction accuracy and stability. Copyright © 2017 Elsevier Inc. All rights reserved.
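
    The fuzzy comprehensive evaluation step can be sketched as a weighted composition of membership degrees; all numbers below are assumed for illustration:

```python
import numpy as np

# Assumed membership of each pollutant's concentration in three air-quality
# grades (good, moderate, poor), e.g. from triangular membership functions.
R = np.array([
    [0.1, 0.6, 0.3],   # PM2.5
    [0.0, 0.4, 0.6],   # PM10
    [0.7, 0.3, 0.0],   # SO2
])
w = np.array([0.5, 0.3, 0.2])  # assumed pollutant weights

B = w @ R  # composite membership of the city in each grade
grade = ["good", "moderate", "poor"][int(B.argmax())]
print(B, grade)
```
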

  10. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  11. Prediction of specialty coffee cup quality based on near infrared spectra of green coffee beans.

    PubMed

    Tolessa, Kassaye; Rademaker, Michael; De Baets, Bernard; Boeckx, Pascal

    2016-04-01

    The growing global demand for specialty coffee increases the need for improved coffee quality assessment methods. Green bean coffee quality analysis is usually carried out by physical (e.g. black beans, immature beans) and cup quality (e.g. acidity, flavour) evaluation. However, these evaluation methods are subjective, costly, time consuming, require sample preparation and may end up in poor grading systems. This calls for the development of a rapid, low-cost, reliable and reproducible analytical method to evaluate coffee quality attributes and eventually chemical compounds of interest (e.g. chlorogenic acid) in coffee beans. The aim of this study was to develop a model able to predict coffee cup quality based on NIR spectra of green coffee beans. NIR spectra of 86 samples of green Arabica beans of varying quality were analysed. Partial least squares (PLS) regression method was used to develop a model correlating spectral data to cupping score data (cup quality). The selected PLS model had good predictive power for total specialty cup quality and its individual quality attributes (overall cup preference, acidity, body and aftertaste), showing a high correlation coefficient with r-values of 90, 90, 78, 72 and 72, respectively, between measured and predicted cupping scores for 20 out of 86 samples. The corresponding root mean square error of prediction (RMSEP) was 1.04, 0.22, 0.27, 0.24 and 0.27 for total specialty cup quality, overall cup preference, acidity, body and aftertaste, respectively. The results obtained suggest that NIR spectra of green coffee beans are a promising tool for fast and accurate prediction of coffee quality and for classifying green coffee beans into different specialty grades. However, the model should be further tested on coffee samples from different regions in Ethiopia to determine whether one generic model or region-specific models should be developed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Development and implementation of a regression model for predicting recreational water quality in the Cuyahoga River, Cuyahoga Valley National Park, Ohio 2009-11

    USGS Publications Warehouse

    Brady, Amie M.G.; Plona, Meg B.

    2012-01-01

    The Cuyahoga River within Cuyahoga Valley National Park (CVNP) is at times impaired for recreational use due to elevated concentrations of Escherichia coli (E. coli), a fecal-indicator bacterium. During the recreational seasons of mid-May through September during 2009–11, samples were collected 4 days per week and analyzed for E. coli concentrations at two sites within CVNP. Other water-quality and environmental data, including turbidity, rainfall, and streamflow, were measured and (or) tabulated for analysis. Regression models developed to predict recreational water quality in the river were implemented during the recreational seasons of 2009–11 for one site within CVNP–Jaite. For the 2009 and 2010 seasons, the regression models were better at predicting exceedances of Ohio's single-sample standard for primary-contact recreation compared to the traditional method of using the previous day's E. coli concentration. During 2009, the regression model was based on data collected during 2005 through 2008, excluding available 2004 data. The resulting model for 2009 did not perform as well as expected (based on the calibration data set) and tended to overestimate concentrations (correct responses at 69 percent). During 2010, the regression model was based on data collected during 2004 through 2009, including all of the available data. The 2010 model performed well, correctly predicting 89 percent of the samples above or below the single-sample standard, even though the predictions tended to be lower than actual sample concentrations. During 2011, the regression model was based on data collected during 2004 through 2010 and tended to overestimate concentrations. The 2011 model did not perform as well as the traditional method or as expected, based on the calibration dataset (correct responses at 56 percent). At a second site—Lock 29, approximately 5 river miles upstream from Jaite, a regression model based on data collected at the site during the recreational

  13. Improved model quality assessment using ProQ2.

    PubMed

    Ray, Arjun; Lindahl, Erik; Wallner, Björn

    2012-09-10

    Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied on any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and local predicted score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for global score to the correct GDT_TS from 0.75 to 0.80 and from 0.77 to 0.80 again compared to the second-best single methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2

  14. Experiments with data assimilation in comprehensive air quality models: Impacts on model predictions and observation requirements (Invited)

    NASA Astrophysics Data System (ADS)

    Mathur, R.

    2009-12-01

    Emerging regional scale atmospheric simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions. Sophisticated air quality modeling systems are needed to develop effective abatement strategies that focus on simultaneously controlling multiple criteria pollutants, as well as for use in providing short-term air quality forecasts. In recent years the applications of such models have been continuously extended to address atmospheric pollution phenomena from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physical and chemical atmospheric processes occurring at these disparate spatial and temporal scales requires the use of observation data beyond traditional in-situ networks so that the model simulations can be reasonably constrained. Preliminary applications of assimilation of remote sensing and aloft observations within a comprehensive regional scale atmospheric chemistry-transport modeling system will be presented: (1) A methodology is developed to assimilate MODIS aerosol optical depths in the model to represent the impacts of long-range transport associated with the summer 2004 Alaskan fires on surface-level regional fine particulate matter (PM2.5) concentrations across the Eastern U.S. The episodic impact of this pollution transport event on PM2.5 concentrations over the eastern U.S. during mid-July 2004, is quantified through the complementary use of the model with remotely-sensed, aloft, and surface measurements; (2) Simple nudging experiments with limited aloft measurements are performed to identify uncertainties in model representations of physical processes and assess the potential use of such measurements in improving the predictive capability of atmospheric chemistry-transport models. The results from these early applications will be discussed in context of uncertainties in the model and in the remote sensing

  15. Geostatistical Prediction of Microbial Water Quality Throughout a Stream Network Using Meteorology, Land Cover, and Spatiotemporal Autocorrelation.

    PubMed

    Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-25

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
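
    The impairment designation rule is simple enough to state directly in code; the thresholds are those given in the record above:

```python
def classify(p_exceed):
    """Status from the modeled probability of exceeding the state standard."""
    if p_exceed >= 0.90:
        return "impaired"
    if p_exceed <= 0.10:
        return "unimpaired"
    return "unassessed"

# One hypothetical stream location per probability.
print([classify(p) for p in (0.95, 0.05, 0.50)])
```
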

  16. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    PubMed

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, to precisely characterize the spatial distribution of regional soil quality, the main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered: mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See 5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables; for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
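
    The two-stage idea (mutual-information screening, then a decision tree) can be sketched with scikit-learn; See 5.0 is proprietary, so a CART tree stands in, and the covariates below are simulated rather than real environmental factors:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Simulated covariates: column 0 drives the soil-quality grade, column 1 is
# noise (stand-ins for soil type, land use, distance to road, etc.).
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)  # binarized soil-quality grade

# Screen factors by mutual information, then fit the tree on the survivors.
mi = mutual_info_classif(X, y, random_state=0)
keep = mi > 0.1
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X[:, keep], y)
print(keep, tree.score(X[:, keep], y))
```
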

  17. PconsFold: improved contact predictions improve protein models.

    PubMed

    Michel, Mirco; Hayat, Sikander; Skwark, Marcin J; Sander, Chris; Marks, Debora S; Elofsson, Arne

    2014-09-01

    Recently it has been shown that the quality of protein contact prediction from evolutionary information can be improved significantly if direct and indirect information is separated. Given sufficiently large protein families, the contact predictions contain sufficient information to predict the structure of many protein families. However, contact prediction methods have improved since those first studies. Here, we ask how much the final models are improved if improved contact predictions are used. In a small benchmark of 15 proteins, we show that the TM-scores of top-ranked models are improved by on average 33% using PconsFold compared with the original version of EVfold. In a larger benchmark, we find that the quality is improved by 15-30% when using PconsC in comparison with earlier contact prediction methods. Further, using Rosetta instead of CNS does not significantly improve global model accuracy, but the chemistry of models generated with Rosetta is improved. PconsFold is a fully automated pipeline for ab initio protein structure prediction based on evolutionary information. PconsFold is based on PconsC contact prediction and uses the Rosetta folding protocol. Due to its modularity, the contact prediction tool can be easily exchanged. The source code of PconsFold is available on GitHub at https://www.github.com/ElofssonLab/pcons-fold under the MIT license. PconsC is available from http://c.pcons.net/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  18. Air Quality Response Modeling for Decision Support

    EPA Pesticide Factsheets

    Air quality management relies on photochemical models to predict the responses of pollutant concentrations to changes in emissions. Such modeling is especially important for secondary pollutants such as ozone and fine particulate matter which vary nonlinearly with changes in emissions. Numerous techniques for probing pollutant-emission relationships within photochemical models have been developed and deployed for a variety of decision support applications. However, atmospheric response modeling remains complicated by the challenge of validating sensitivity results against observable data. This manuscript reviews the state of the science of atmospheric response modeling as well as efforts to characterize the accuracy and uncertainty of sensitivity results. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used

  19. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
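
    The pair-wise comparison idea can be sketched in a few lines of numpy; the similarity values below are assumed (APOLLO derives them from structural comparisons of the models in the pool):

```python
import numpy as np

# Assumed pairwise structural similarities (e.g. GDT_TS) among 4 models.
S = np.array([
    [1.00, 0.70, 0.80, 0.30],
    [0.70, 1.00, 0.75, 0.35],
    [0.80, 0.75, 1.00, 0.40],
    [0.30, 0.35, 0.40, 1.00],
])
n = S.shape[0]

# Pair-wise quality: mean similarity of each model to all the others;
# models close to the consensus score high, outliers score low.
quality = (S.sum(axis=1) - 1.0) / (n - 1)
print(quality)
```
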

  20. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases, the conventional model still has to be used, in which a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The results of this research are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.

  1. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    NASA Astrophysics Data System (ADS)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

Evaluation of software quality is an important aspect of controlling and managing software, since such evaluation enables improvements in the software process. Software quality depends significantly on software usability. Many researchers have proposed usability models; each considers a set of usability factors, but none covers all usability aspects. Practical implementation of these models is still missing because usability lacks a precise definition, and it is very difficult to integrate the models into current software engineering practices. To overcome these challenges, this paper defines the term `usability' through a proposed hierarchical usability model with a detailed taxonomy. The taxonomy uses generic evaluation criteria to identify the quality components, bringing together the factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system, named the fuzzy hierarchical usability model, can be easily integrated into current software engineering practices. To validate the work, a dataset of six software development life cycle models is created and employed, and these models are ranked according to their predicted usability values. The research also presents a detailed comparison of the proposed model with existing usability models.

  2. Atmospheric Model Evaluation Tool for meteorological and air quality simulations

    EPA Pesticide Factsheets

    The Atmospheric Model Evaluation Tool compares model predictions to observed data from various meteorological and air quality observation networks to help evaluate meteorological and air quality simulations.

  3. Predicting indoor pollutant concentrations, and applications to air quality management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenzetti, David M.

Because most people spend more than 90% of their time indoors, predicting exposure to airborne pollutants requires models that incorporate the effect of buildings. Buildings affect the exposure of their occupants in a number of ways, both by design (for example, filters in ventilation systems remove particles) and incidentally (for example, sorption on walls can reduce peak concentrations, but prolong exposure to semivolatile organic compounds). Furthermore, building materials and occupant activities can generate pollutants. Indoor air quality depends not only on outdoor air quality, but also on the design, maintenance, and use of the building. For example, "sick building" symptoms such as respiratory problems and headaches have been related to the presence of air-conditioning systems, to carpeting, to low ventilation rates, and to high occupant density (1). The physical processes of interest apply even in simple structures such as homes. Indoor air quality models simulate the processes, such as ventilation and filtration, that control pollutant concentrations in a building. Section 2 describes the modeling approach, and the important transport processes in buildings. Because advection usually dominates among the transport processes, Sections 3 and 4 describe methods for predicting airflows. The concluding section summarizes the application of these models.
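The ventilation-dominated transport the abstract describes is commonly illustrated with a well-mixed single-zone mass balance, dC/dt = λ(C_out − C) + S/V. A minimal sketch with assumed parameter values (not taken from this report):

```python
# Well-mixed single-zone indoor air quality mass balance (illustrative):
#   dC/dt = lam * (C_out - C) + S / V
lam = 0.5        # air exchange rate, 1/h (assumed)
c_out = 10.0     # outdoor concentration, ug/m3 (assumed)
S = 100.0        # indoor source strength, ug/h (assumed)
V = 250.0        # zone volume, m3 (assumed)

dt = 0.01        # h, explicit Euler time step
c = 0.0          # start with clean indoor air
for _ in range(int(24.0 / dt)):          # simulate one day
    c += dt * (lam * (c_out - c) + S / V)

steady = c_out + S / (lam * V)           # analytic steady state
print(round(c, 2), round(steady, 2))
```

After a day the simulated concentration has converged to the analytic steady state, which shows both effects the abstract mentions: the indoor level tracks outdoor air (the `c_out` term) plus an increment from indoor sources (the `S / (lam * V)` term).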

  4. A Review of Surface Water Quality Models

    PubMed Central

    Li, Shibei; Jia, Peng; Qi, Changjun; Ding, Feng

    2013-01-01

Surface water quality models can be useful tools to simulate and predict the levels, distributions, and risks of chemical pollutants in a given water body. The modeling results from these models under different pollution scenarios are important components of environmental impact assessment and can provide a basis and technical support for environmental management agencies to make sound decisions. The validity of the model results affects the soundness of approved construction projects and the effectiveness of pollution control measures. We reviewed the development of surface water quality models in three stages and analyzed the suitability, precision, and methods of the different models. Standardization of water quality models can help environmental management agencies ensure consistency in the application of these models for regulatory purposes. We summarized the status of standardization of these models in developed countries and put forward practical measures for the standardization of surface water quality models, especially in developing countries. PMID:23853533

  5. Data-Driven Nonlinear Subspace Modeling for Prediction and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Song, Heda; Wang, Hong

Blast furnace (BF) ironmaking is a nonlinear dynamic process with complicated physical-chemical reactions, in which multi-phase and multi-field coupling and large time delays occur during operation. In BF operation, the molten iron temperature (MIT) and the Si, P and S contents of molten iron are the most essential molten iron quality (MIQ) indices, whose measurement, modeling and control have long been important issues in the metallurgical engineering and automation fields. This paper develops a novel data-driven nonlinear state space modeling approach for the prediction and control of multivariate MIQ indices by integrating hybrid modeling and control techniques. First, to improve modeling efficiency, a data-driven hybrid method combining canonical correlation analysis and correlation analysis is proposed to identify the most influential controllable variables, from among the multitude of factors that affect the MIQ indices, as the modeling inputs. Then, a Hammerstein model for the prediction of MIQ indices is established using the LS-SVM based nonlinear subspace identification method. This model is further simplified by using the piecewise cubic Hermite interpolating polynomial method to fit the complex nonlinear kernel function. Compared with the original Hammerstein model, the simplified model not only significantly reduces the computational complexity, but also retains almost the same reliability and accuracy for a stable prediction of MIQ indices. Last, to verify the practicability of the developed model, it is applied in designing a genetic algorithm based nonlinear predictive controller for multivariate MIQ indices by directly taking the established model as a predictor. Industrial experiments show the advantages and effectiveness of the proposed approach.

  6. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods as far as ranking models and selecting best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.
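Distance-driven consensus scoring of the kind PconsD performs can be illustrated by comparing intra-model Cα distance matrices, which avoids structural superposition entirely. A toy sketch with synthetic coordinates (this illustrates the general idea, not the PconsD algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical C-alpha coordinates for 5 models of a 50-residue protein:
# four mutually similar models plus one unrelated outlier.
base = rng.normal(size=(50, 3)) * 5
models = [base + rng.normal(0, 0.5, (50, 3)) for _ in range(4)]
models.append(rng.normal(size=(50, 3)) * 5)          # outlier model

def dist_matrix(coords):
    """Intra-model C-alpha distance matrix (superposition-free)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

mats = [dist_matrix(m) for m in models]

def score(i):
    """Consensus score: negative mean deviation from all other models."""
    errs = [np.mean(np.abs(mats[i] - mats[j]))
            for j in range(len(mats)) if j != i]
    return -np.mean(errs)

scores = [score(i) for i in range(len(models))]
print(int(np.argmin(scores)))   # the outlier ranks worst
```

Because every model is compared against every other, the cost grows quadratically with pool size, which is why a fast, GPU-friendly implementation like PconsD matters for large model pools.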

  7. CIEL*a*b* color space predictive models for colorimetry devices--analysis of perfume quality.

    PubMed

    Korifi, Rabia; Le Dréau, Yveline; Antinelli, Jean-François; Valls, Robert; Dupuy, Nathalie

    2013-01-30

    Color perception plays a major role in the consumer evaluation of perfume quality. Consumers need first to be entirely satisfied with the sensory properties of products, before other quality dimensions become relevant. The evaluation of complex mixtures color presents a challenge even for modern analytical techniques. A variety of instruments are available for color measurement. They can be classified as tristimulus colorimeters and spectrophotometers. Obsolescence of the electronics of old tristimulus colorimeter arises from the difficulty in finding repair parts and leads to its replacement by more modern instruments. High quality levels in color measurement, i.e., accuracy and reliability in color control are the major advantages of the new generation of color instrumentation, the integrating sphere spectrophotometer. Two models of spectrophotometer were tested in transmittance mode, employing the d/0° geometry. The CIEL(*)a(*)b(*) color space parameters were measured with each instrument for 380 samples of raw materials and bases used in the perfume compositions. The results were graphically compared between the colorimeter device and the spectrophotometer devices. All color space parameters obtained with the colorimeter were used as dependent variables to generate regression equations with values obtained from the spectrophotometers. The data was statistically analyzed to create predictive model between the reference and the target instruments through two methods. The first method uses linear regression analysis and the second method consists of partial least square regression (PLS) on each component. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Application of Time-series Model to Predict Groundwater Quality Parameters for Agriculture: (Plain Mehran Case Study)

    NASA Astrophysics Data System (ADS)

    Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh

    2018-03-01

Groundwater is a major water source in arid and semi-arid regions where surface water is scarce. Forecasting of hydrological variables is a useful tool in water resources management, and time-series methods are efficient means of such forecasting. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) from 17 groundwater wells in the Mehran Plain were used to model the trend of these parameters over time. Using the selected model, the qualitative parameters of the groundwater were predicted for the next seven years. Data from 2003 to 2016 were fitted with AR, MA, ARMA, ARIMA and SARIMA models, and the best model was selected using the Akaike information criterion (AIC) and the correlation coefficient. After modelling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the average predicted SAR (sodium adsorption ratio) in all wells will increase in 2023 compared with 2016, while the average EC (electrical conductivity) will increase in the ninth and fifteenth wells and decrease in the other wells. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.

  9. Prediction of wastewater quality indicators at the inflow to the wastewater treatment plant using data mining methods

    NASA Astrophysics Data System (ADS)

    Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia

    2017-11-01

In this study, models developed using data mining methods are proposed for predicting wastewater quality indicators (biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus) at the inflow to a wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and on daily wastewater inflows. Independent prediction systems that can be used when monitoring devices malfunction are also provided. Models of the wastewater quality indicators were developed using the multivariate adaptive regression spline (MARS) method, artificial neural networks (ANN) of the multilayer perceptron type combined with a classification model (SOM), and cascade neural networks (CNN). The lowest absolute and relative errors were obtained with ANN+SOM, whereas the MARS method produced the highest errors. It was shown that for the analysed WWTP, continuous prediction of selected wastewater quality indicators is possible using the two independent prediction systems developed. Such models can ensure reliable WWTP operation when the wastewater quality monitoring system is inoperable or under maintenance.

  10. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    NASA Astrophysics Data System (ADS)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

  11. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    EPA Science Inventory

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  12. A Mass-balance nitrate model for predicting the effects of land use on ground-water quality in municipal wellhead-protection areas

    USGS Publications Warehouse

    Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.

    1990-01-01

    A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
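The mass-balance accounting itself is a one-line computation: at steady state, the nitrate concentration in the well equals the total nitrogen load divided by the total water volume entering the zone of contribution. A sketch with hypothetical loads and recharge volumes (illustrative numbers, not the values tabulated in the report's Appendix A):

```python
# Hypothetical nitrogen sources in a zone of contribution:
# source name -> (nitrate-N load in kg/yr, water volume in m3/yr)
sources = {
    "septic systems":   (250.0,  40_000.0),
    "lawn fertilizer":  (120.0,  15_000.0),
    "natural recharge": (30.0,  180_000.0),
}

total_n_kg = sum(load for load, _ in sources.values())
total_v_m3 = sum(vol for _, vol in sources.values())

# Steady-state mixed concentration; 1 kg/m3 = 1000 mg/L
conc_mg_l = total_n_kg / total_v_m3 * 1000.0
print(round(conc_mg_l, 2))
```

Comparing the result against a regulatory limit (for instance, the 10 mg/L drinking-water standard for nitrate-N) for several development scenarios is exactly the planning comparison the abstract describes; the report's spreadsheet in Appendix B automates the same arithmetic.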

  13. Logistic regression models for predicting physical and mental health-related quality of life in rheumatoid arthritis patients.

    PubMed

    Alishiri, Gholam Hossein; Bayat, Noushin; Fathi Ashtiani, Ali; Tavallaii, Seyed Abbas; Assari, Shervin; Moharamzad, Yashar

    2008-01-01

    The aim of this work was to develop two logistic regression models capable of predicting physical and mental health related quality of life (HRQOL) among rheumatoid arthritis (RA) patients. In this cross-sectional study which was conducted during 2006 in the outpatient rheumatology clinic of our university hospital, Short Form 36 (SF-36) was used for HRQOL measurements in 411 RA patients. A cutoff point to define poor versus good HRQOL was calculated using the first quartiles of SF-36 physical and mental component scores (33.4 and 36.8, respectively). Two distinct logistic regression models were used to derive predictive variables including demographic, clinical, and psychological factors. The sensitivity, specificity, and accuracy of each model were calculated. Poor physical HRQOL was positively associated with pain score, disease duration, monthly family income below 300 US$, comorbidity, patient global assessment of disease activity or PGA, and depression (odds ratios: 1.1; 1.004; 15.5; 1.1; 1.02; 2.08, respectively). The variables that entered into the poor mental HRQOL prediction model were monthly family income below 300 US$, comorbidity, PGA, and bodily pain (odds ratios: 6.7; 1.1; 1.01; 1.01, respectively). Optimal sensitivity and specificity were achieved at a cutoff point of 0.39 for the estimated probability of poor physical HRQOL and 0.18 for mental HRQOL. Sensitivity, specificity, and accuracy of the physical and mental models were 73.8, 87, 83.7% and 90.38, 70.36, 75.43%, respectively. The results show that the suggested models can be used to predict poor physical and mental HRQOL separately among RA patients using simple variables with acceptable accuracy. These models can be of use in the clinical decision-making of RA patients and to recognize patients with poor physical or mental HRQOL in advance, for better management.
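The modelling approach, logistic regression with a tuned probability cutoff evaluated by sensitivity and specificity, can be sketched on synthetic data (the predictors, coefficients, and sample below are invented, not the paper's fitted values):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
# Hypothetical predictors: pain score, disease duration, comorbidity flag
pain = rng.uniform(0, 10, n)
duration = rng.uniform(0, 20, n)
comorbid = rng.integers(0, 2, n)
logit = -4 + 0.4 * pain + 0.05 * duration + 0.8 * comorbid
poor_hrqol = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([pain, duration, comorbid])
model = LogisticRegression().fit(X, poor_hrqol)

# Classify with a tuned probability cutoff rather than the default 0.5
cutoff = 0.39
pred = (model.predict_proba(X)[:, 1] >= cutoff).astype(int)
tp = np.sum((pred == 1) & (poor_hrqol == 1))
tn = np.sum((pred == 0) & (poor_hrqol == 0))
sens = tp / np.sum(poor_hrqol == 1)
spec = tn / np.sum(poor_hrqol == 0)
print(round(sens, 2), round(spec, 2))
```

Lowering the cutoff below 0.5, as the paper does, trades specificity for sensitivity, which is the sensible direction when the goal is to flag patients with poor HRQOL early rather than to minimize false alarms.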

  14. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  15. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.
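Feature-based density estimation of the kind Qprob uses can be illustrated by fitting a kernel density over (feature error, quality score) pairs and reading off the most probable quality for a new error value. A toy sketch (synthetic data; not the actual Qprob features or real GDT-TS scores):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Hypothetical training set: a feature error that tends to shrink as the
# true quality score of the model increases.
quality = rng.uniform(0.2, 0.9, 500)
feature_error = np.abs(rng.normal(0.0, 0.3 * (1 - quality)))

# Joint density of (error, quality); predict by scanning a quality grid
kde = gaussian_kde(np.vstack([feature_error, quality]))
grid = np.linspace(0.2, 0.9, 71)

def predict_quality(err):
    """Most probable quality score given an observed feature error."""
    dens = kde(np.vstack([np.full_like(grid, err), grid]))
    return grid[np.argmax(dens)]

print(round(predict_quality(0.05), 2), round(predict_quality(0.4), 2))
```

A small feature error maps to a high predicted quality and a large error to a low one; Qprob combines many such feature-conditioned densities rather than the single synthetic feature used here.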

  16. NOAA's National Air Quality Prediction and Development of Aerosol and Atmospheric Composition Prediction Components for NGGPS

    NASA Astrophysics Data System (ADS)

    Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Wilczak, J. M.; Upadhayay, S.; daSilva, A.; Lu, C. H.; Grell, G. A.; Pierce, R. B.

    2017-12-01

    NOAA's operational air quality predictions of ozone, fine particulate matter (PM2.5) and wildfire smoke over the United States and airborne dust over the contiguous 48 states are distributed at http://airquality.weather.gov. The National Air Quality Forecast Capability (NAQFC) providing these predictions was updated in June 2017. Ozone and PM2.5 predictions are now produced using the system linking the Community Multiscale Air Quality model (CMAQ) version 5.0.2 with meteorological inputs from the North American Mesoscale Forecast System (NAM) version 4. Predictions of PM2.5 include intermittent dust emissions and wildfire emissions from an updated version of BlueSky system. For the latter, the CMAQ system is initialized by rerunning it over the previous 24 hours to include wildfire emissions at the time when they were observed from the satellites. Post processing to reduce the bias in PM2.5 prediction was updated using the Kalman filter analog (KFAN) technique. Dust related aerosol species at the CMAQ domain lateral boundaries now come from the NEMS Global Aerosol Component (NGAC) v2 predictions. Further development of NAQFC includes testing of CMAQ predictions to 72 hours, Canadian fire emissions data from Environment and Climate Change Canada (ECCC) and the KFAN technique to reduce bias in ozone predictions. NOAA is developing the Next Generation Global Predictions System (NGGPS) with an aerosol and gaseous atmospheric composition component to improve and integrate aerosol and ozone predictions and evaluate their impacts on physics, data assimilation and weather prediction. Efforts are underway to improve cloud microphysics, investigate aerosol effects and include representations of atmospheric composition of varying complexity into NGGPS: from the operational ozone parameterization, GOCART aerosols, with simplified ozone chemistry, to CMAQ chemistry with aerosol modules. We will present progress on community building, planning and development of NGGPS.

  17. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
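The probabilistic attainment test described, Monte Carlo sampling of a reduced-form pollutant-response model under parametric uncertainty, can be sketched as follows (every number below is invented for illustration and is not an Atlanta modeling value):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reduced-form response (hypothetical): ozone improvement is linear in the
# emission cut, with multiplicative uncertainty in the model sensitivity.
base_ozone = 82.0            # ppb, baseline design value (assumed)
target = 75.0                # ppb, attainment target (assumed)
nominal_sensitivity = 0.9    # ppb reduction per 10% NOx cut (assumed)
control = 10.0               # planned cut, in units of 10% NOx (assumed)

n_draws = 100_000
# Uncertain sensitivity: lognormal scatter around the nominal value
sens = nominal_sensitivity * rng.lognormal(mean=0.0, sigma=0.3, size=n_draws)
future = base_ozone - sens * control

prob_attain = np.mean(future <= target)
print(round(prob_attain, 2))
```

Instead of the bright-line pass/fail of a single deterministic run, the output is a probability of attainment, and rerunning the loop for competing control strategies allows the probabilistic ranking the paper finds can differ from the deterministic one.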

  18. Evaluation of ride quality prediction methods for operational military helicopters

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.

    1984-01-01

    The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.

  19. Meteorological Processes Affecting Air Quality – Research and Model Development Needs

    EPA Science Inventory

    Meteorology modeling is an important component of air quality modeling systems that defines the physical and dynamical environment for atmospheric chemistry. The meteorology models used for air quality applications are based on numerical weather prediction models that were devel...

  20. PredictABEL: an R package for the assessment of risk prediction models.

    PubMed

    Kundu, Suman; Aulchenko, Yurii S; van Duijn, Cornelia M; Janssens, A Cecile J W

    2011-04-01

    The rapid identification of genetic markers for multifactorial diseases from genome-wide association studies is fuelling interest in investigating the predictive ability and health care utility of genetic risk models. Various measures are available for the assessment of risk prediction models, each addressing a different aspect of performance and utility. We developed PredictABEL, a package in R that covers descriptive tables, measures and figures that are used in the analysis of risk prediction studies such as measures of model fit, predictive ability and clinical utility, and risk distributions, calibration plot and the receiver operating characteristic plot. Tables and figures are saved as separate files in a user-specified format, which include publication-quality EPS and TIFF formats. All figures are available in a ready-made layout, but they can be customized to the preferences of the user. The package has been developed for the analysis of genetic risk prediction studies, but can also be used for studies that only include non-genetic risk factors. PredictABEL is freely available at the websites of GenABEL ( http://www.genabel.org ) and CRAN ( http://cran.r-project.org/).

  1. Prediction of harmful water quality parameters combining weather, air quality and ecosystem models with in situ measurement

    EPA Science Inventory

    The ability to predict water quality in lakes is important since lakes are sources of water for agriculture, drinking, and recreational uses. Lakes are also home to a dynamic ecosystem of lacustrine wetlands and deep waters. They are sensitive to pH changes and are dependent on d...

  2. Effect of horizontal resolution on meteorology and air-quality prediction with a regional scale model

    NASA Astrophysics Data System (ADS)

    Varghese, Saji; Langmann, Baerbel; Ceburnis, Darius; O'Dowd, Colin D.

    2011-08-01

    Horizontal resolution sensitivity can significantly contribute to the uncertainty in predictions of meteorology and air-quality from a regional climate model. In the study presented here, a state-of-the-art regional scale atmospheric climate-chemistry-aerosol model REMOTE is used to understand the influence of spatial model resolutions of 1.0°, 0.5° and 0.25° on predicted meteorological and aerosol parameters for June 2003 for the European domain comprising North-east Atlantic and Western Europe. Model precipitation appears to improve with resolution while wind speed has shown best results for 0.25° resolution for most of the stations compared with ECAD data. Low root mean square error and spatial bias for surface pressure, precipitation and surface temperature show that the model is very reliable. Spatial and temporal variation in black carbon, primary organic carbon, sea-salt and sulphate concentrations and their burden are presented. In most cases, chemical species concentrations at the surface show no particular trend or improvement with increase in resolution. There has been a pronounced influence of horizontal resolution on the vertical distribution pattern of some aerosol species. Some of these effects are due to the improvement in topographical details, flow characteristics and associated vertical and horizontal dynamic processes. The different sink processes have contributed very differently to the various aerosol species in terms of deposition (wet and dry) and sedimentation which are strongly linked to the meteorological processes. Overall, considering the performance of meteorological parameters and chemical species concentrations, a horizontal model resolution of 0.5° is suggested to achieve reasonable results within the limitations of this model.

  3. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complimentary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques of Protein Structure prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods of identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671

  4. Development of visibility forecasting modeling framework for the Lower Fraser Valley of British Columbia using Canada's Regional Air Quality Deterministic Prediction System.

    PubMed

    So, Rita; Teakles, Andrew; Baik, Jonathan; Vingarzan, Roxanne; Jones, Keith

    2018-05-01

    Visibility degradation, one of the most noticeable indicators of poor air quality, can occur despite relatively low levels of particulate matter when the risk to human health is low. The availability of timely and reliable visibility forecasts can provide a more comprehensive understanding of the anticipated air quality conditions to better inform local jurisdictions and the public. This paper describes the development of a visibility forecasting modeling framework, which leverages the existing air quality and meteorological forecasts from Canada's operational Regional Air Quality Deterministic Prediction System (RAQDPS) for the Lower Fraser Valley of British Columbia. A baseline model (GM-IMPROVE) was constructed using the revised IMPROVE algorithm based on unprocessed forecasts from the RAQDPS. Three additional prototypes (UMOS-HYB, GM-MLR, GM-RF) were also developed and assessed for forecast performance of up to 48 hr lead time during various air quality and meteorological conditions. Forecast performance was assessed by examining their ability to provide both numerical and categorical forecasts in the form of 1-hr total extinction and Visual Air Quality Ratings (VAQR), respectively. While GM-IMPROVE generally overestimated extinction more than twofold, it had skill in forecasting the relative species contribution to visibility impairment, including ammonium sulfate and ammonium nitrate. Both statistical prototypes, GM-MLR and GM-RF, performed well in forecasting 1-hr extinction during daylight hours, with correlation coefficients (R) ranging from 0.59 to 0.77. UMOS-HYB, a prototype based on postprocessed air quality forecasts without additional statistical modeling, provided reasonable forecasts during most daylight hours. In terms of categorical forecasts, the best prototype was approximately 75 to 87% correct, when forecasting for a condensed three-category VAQR. A case study, focusing on a poor visual air quality yet low Air Quality Health Index episode
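As background for the GM-IMPROVE baseline, here is a minimal sketch of the original IMPROVE light-extinction equation (the paper uses the revised algorithm, which additionally splits sulfate, nitrate, and organics into small and large size modes). Coefficients are the standard published dry-extinction efficiencies in m²/g; inputs are species concentrations in µg/m³ and f_rh is the relative-humidity growth factor for the hygroscopic species:

```python
def improve_extinction(amm_so4, amm_no3, om, ec, soil, cm, f_rh=1.0, rayleigh=10.0):
    """Total light extinction in inverse megametres (Mm^-1), original IMPROVE form."""
    return (3.0 * f_rh * (amm_so4 + amm_no3)   # hygroscopic sulfate + nitrate
            + 4.0 * om                          # organic mass
            + 10.0 * ec                         # elemental carbon
            + 1.0 * soil                        # fine soil
            + 0.6 * cm                          # coarse mass
            + rayleigh)                         # Rayleigh scattering
```

With all concentrations at zero the function returns the Rayleigh term alone, the clean-air floor against which visibility ratings are judged.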

  5. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    NASA Technical Reports Server (NTRS)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily

    2013-01-01

    The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single-model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how the multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011), a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and discusses the complementary skill associated with individual models.
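A multi-model ensemble forecast at a grid point is commonly summarised by the equally weighted mean across member models, with the cross-model spread as one measure of forecast uncertainty. A minimal sketch with invented member values:

```python
from statistics import mean, pstdev

def ensemble_summary(member_forecasts):
    """Equal-weight ensemble mean and cross-model spread (population std. dev.)."""
    return mean(member_forecasts), pstdev(member_forecasts)

# Hypothetical seasonal anomaly forecasts from four member models
members = [0.4, 0.7, 0.5, 0.6]
print(ensemble_summary(members))
```

Averaging cancels partially independent model-formulation errors, which is why the multi-model mean tends to beat any single-model ensemble on average.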

  6. Model design for predicting extreme precipitation event impacts on water quality in a water supply reservoir

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Jeznach, L. C.; Park, M. H.; Tobiason, J. E.

    2016-12-01

    Extreme precipitation events such as tropical storms and hurricanes are by their nature rare, yet have disproportionate and adverse effects on surface water quality. In the context of drinking water reservoirs, common concerns during such events include increased erosion and sediment transport and an influx of natural organic matter and nutrients. As part of an effort to model the effects of an extreme precipitation event on water quality at the reservoir intake of a major municipal water system, this study sought to estimate extreme-event watershed responses, including streamflow and exports of nutrients and organic matter, for use as inputs to a 2-D hydrodynamic and water quality reservoir model. Since extreme-event watershed exports are highly uncertain, we characterized and propagated predictive uncertainty using a quasi-Monte Carlo approach to generate reservoir model inputs. Three storm precipitation depths, corresponding to recurrence intervals of 5, 50, and 100 years, were converted to streamflow in each of 9 tributaries by volumetrically scaling 2 storm hydrographs from the historical record. Rating-curve models for concentration, calibrated using 10 years of data for each of 5 constituents, were then used to estimate the parameters of a multivariate lognormal probability model of constituent concentrations, conditional on each scenario's storm date and streamflow. A quasi-random Halton sequence (n = 100) was drawn from the conditional distribution for each event scenario, and used to generate input files to a calibrated CE-QUAL-W2 reservoir model. The resulting simulated concentrations at the reservoir's drinking water intake constitute a low-discrepancy sample from the estimated uncertainty space of extreme-event source water quality. Limiting factors to the suitability of this approach include poorly constrained relationships between hydrology and constituent concentrations, a high-dimensional space from which to generate inputs, and relatively long run
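The quasi-Monte Carlo sampling step can be sketched as follows (assumed simplifications: a univariate lognormal with unit parameters, where the study uses a multivariate lognormal conditional on storm date and streamflow). A base-2 van der Corput sequence, the one-dimensional building block of a Halton sequence, is mapped through the lognormal inverse CDF:

```python
from math import exp
from statistics import NormalDist

def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence, in (0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def lognormal_quasi_sample(size, mu=0.0, sigma=1.0):
    """Low-discrepancy lognormal draws via inverse-CDF transform of uniform points."""
    nd = NormalDist(mu, sigma)
    return [exp(nd.inv_cdf(van_der_corput(i))) for i in range(1, size + 1)]

draws = lognormal_quasi_sample(100)   # n = 100, matching the abstract
```

Each draw would become one set of constituent concentrations feeding a CE-QUAL-W2 input file; the low-discrepancy design covers the uncertainty space more evenly than pseudorandom sampling of the same size.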

  7. A neighborhood statistics model for predicting stream pathogen indicator levels.

    PubMed

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

    Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty, or simpler statistical models. To assess spatio-temporal patterns of in-stream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently exploited the Markov Random Field model to develop a neighborhood statistics model for predicting in-stream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66% of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one case did the difference between prediction and observation exceed one order of magnitude. The mean of all predicted values at the 16 locations was approximately 1% higher than the mean of the observed values. The approach presented here will be useful for assessing in-stream contamination, such as pathogen/pathogen indicator levels, at the watershed scale.
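The "within a factor of 2" check quoted above is a standard model-evaluation statistic, often called FAC2: the fraction of predictions P with 0.5 ≤ P/O ≤ 2 relative to the paired observation O. A minimal sketch with invented concentrations:

```python
def fac2(pred, obs):
    """Fraction of predictions within a factor of two of the observations."""
    pairs = [(p, o) for p, o in zip(pred, obs) if o > 0]
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

# Toy E. coli levels (e.g. CFU/100 mL): middle prediction misses by > 2x
print(fac2([100, 30, 900], [120, 100, 800]))
```

FAC2 is robust to the large dynamic range of microbial counts, which is why it is preferred over raw error metrics for pathogen-indicator data.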

  8. Quality of life among people with multiple sclerosis: Replication of a three-factor prediction model.

    PubMed

    Bishop, Malachy; Rumrill, Phillip D; Roessler, Richard T

    2015-01-01

    This article presents a replication of Rumrill, Roessler, and Fitzgerald's 2004 analysis of a three-factor model of the impact of multiple sclerosis (MS) on quality of life (QOL). The three factors in the original model included illness-related, employment-related, and psychosocial adjustment factors. To test hypothesized relationships between QOL and illness-related, employment-related, and psychosocial variables using data from a survey of the employment concerns of Americans with MS (N = 1,839). An ex post facto, multiple correlational design was employed incorporating correlational and multiple regression analyses. QOL was positively related to educational level, employment status, job satisfaction, and job-match, and negatively related to number of symptoms, severity of symptoms, and perceived stress level. The three-factor model explained approximately 37 percent of the variance in QOL scores. The results of this replication confirm the continuing value of the three-factor model for predicting the QOL of adults with MS, and demonstrate the importance of medical, mental health, and vocational rehabilitation interventions and services in promoting QOL.

  9. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2016-09-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM), based on the massive integration of 14 diverse complementary quality assessment methods, that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side-chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improving the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.

  10. Improved protein model quality assessments by changing the target function.

    PubMed

    Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne

    2018-06-01

    Protein model quality assessment is an important part of protein structure prediction. For more than a decade we have developed a set of methods for this problem, using various types of protein description and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model, and in all versions of ProQ it has been the S-score. Other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as the S-score, are based on a rigid-body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based measures are easier to predict, and that predictors trained on them provide some advantages when it comes to identifying the best model. One possible reason is that contact-based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score global model quality. To take advantage of both features, we provide an updated version of ProQ3D that predicts local and global model quality based on these different quality estimates. © 2018 Wiley Periodicals, Inc.
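The S-score target function mentioned above has a standard per-residue form: S_i = 1 / (1 + (d_i/d0)²), where d_i is the distance between a residue in the superposed model and in the native structure, and d0 (commonly 3 Å) sets the distance scale. A minimal sketch:

```python
def s_score(d, d0=3.0):
    """Per-residue S-score: 1 at perfect overlap, 0.5 at d = d0, -> 0 as d grows."""
    return 1.0 / (1.0 + (d / d0) ** 2)

print(s_score(0.0), s_score(3.0))  # 1.0 at d = 0, 0.5 at d = d0
```

Averaging S_i over all residues gives a superposition-based global quality estimate, which is the quantity the superposition-trained ProQ variants learn to predict.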

  11. Health-Related Quality of Life in a Predictive Model for Mortality in Older Breast Cancer Survivors.

    PubMed

    DuMontier, Clark; Clough-Gorr, Kerri M; Silliman, Rebecca A; Stuck, Andreas E; Moser, André

    2018-03-13

    To develop a predictive model and risk score for 10-year mortality using health-related quality of life (HRQOL) in a cohort of older women with early-stage breast cancer. Prospective cohort. Community. U.S. women aged 65 and older diagnosed with Stage I to IIIA primary breast cancer (N=660). We used medical variables (age, comorbidity), HRQOL measures (10-item Physical Function Index and 5-item Mental Health Index from the Medical Outcomes Study (MOS) 36-item Short-Form Survey; 8-item Modified MOS Social Support Survey), and breast cancer variables (stage, surgery, chemotherapy, endocrine therapy) to develop a 10-year mortality risk score using penalized logistic regression models. We assessed model discriminative performance using the area under the receiver operating characteristic curve (AUC), calibration performance using the Hosmer-Lemeshow test, and overall model performance using Nagelkerke R² (NR). Compared to a model including only age, comorbidity, and cancer stage and treatment variables, adding HRQOL variables improved discrimination (AUC 0.742 from 0.715) and overall performance (NR 0.221 from 0.190) with good calibration (p=0.96, Hosmer-Lemeshow test). In a cohort of older women with early-stage breast cancer, HRQOL measures predict 10-year mortality independently of traditional breast cancer prognostic variables. These findings suggest that interventions aimed at improving physical function, mental health, and social support might improve both HRQOL and survival. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
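The discrimination metric used above, AUC, has a useful rank interpretation: it is the probability that a randomly chosen case (death within 10 years) receives a higher risk score than a randomly chosen non-case, with ties counting one half. A minimal sketch with invented scores and labels:

```python
def auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation; labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # → 0.75
```

On this reading, the reported improvement from 0.715 to 0.742 means the HRQOL-augmented score orders case/non-case pairs correctly about 3 percentage points more often.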

  12. Predicting the Effect of Changing Precipitation Extremes and Land Cover Change on Urban Water Quality

    NASA Astrophysics Data System (ADS)

    SUN, N.; Yearsley, J. R.; Lettenmaier, D. P.

    2013-12-01

    Recent research shows that precipitation extremes in many of the largest U.S. urban areas have increased over the last 60 years. These changes have important implications for stormwater runoff and water quality, which in urban areas are dominated by the most extreme precipitation events. We assess the potential implications of changes in extreme precipitation and changing land cover in urban and urbanizing watersheds at the regional scale using a combination of hydrology and water quality models. Specifically, we describe the integration of a spatially distributed hydrological model - the Distributed Hydrology Soil Vegetation Model (DHSVM), the urban water quality model in EPA's Storm Water Management Model (SWMM), the semi-Lagrangian stream temperature model RBM10, and dynamical and statistical downscaling methods applied to global climate predictions. Key output water quality parameters include total suspended solids (TSS), total nitrogen, total phosphorus, fecal coliform bacteria and stream temperature. We have evaluated the performance of the modeling system in the highly urbanized Mercer Creek watershed in the rapidly growing Bellevue urban area in WA, USA. The results suggest that the model is able to (1) produce reasonable streamflow predictions at fine temporal and spatial scales; (2) provide spatially distributed water temperature predictions that mostly agree with observations throughout a complex stream network, and characterize the impacts of climate, landscape, and near-stream vegetation change on stream temperature at local and regional scales; and (3) plausibly capture the response of water quality constituents to precipitation events of varying magnitude in urban environments. Next we will extend the scope of the study from the Mercer Creek watershed to the entire Puget Sound Basin, WA, USA.

  13. Quantitative Prediction of Beef Quality Using Visible and NIR Spectroscopy with Large Data Samples Under Industry Conditions

    NASA Astrophysics Data System (ADS)

    Qiao, T.; Ren, J.; Craigie, C.; Zabalza, J.; Maltin, Ch.; Marshall, S.

    2015-03-01

    It is well known that the eating quality of beef has a significant influence on the repurchase behavior of consumers. Several key factors affect the perception of quality, including color, tenderness, juiciness, and flavor. To support consumer repurchase choices, there is a need for an objective measurement of quality that could be applied to meat prior to its sale. Objective approaches such as those offered by spectral technologies may be useful, but the analytical algorithms used remain to be optimized. For visible and near-infrared (VISNIR) spectroscopy, Partial Least Squares Regression (PLSR) is a widely used technique for meat-related quality modeling and prediction. In this paper, a Support Vector Machine (SVM) based machine learning approach is presented to predict beef eating quality traits. Although SVM has been successfully used in various disciplines, it has not been applied extensively to the analysis of meat quality parameters. To this end, the performance of PLSR and SVM as tools for the analysis of meat tenderness is evaluated, using a large dataset acquired under industrial conditions. The spectral dataset was collected using VISNIR spectroscopy over the wavelength range from 350 to 1800 nm on 234 beef M. longissimus thoracis steaks from heifers, steers, and young bulls. As the dimensionality of the VISNIR data is very high (over 1600 spectral bands), Principal Component Analysis (PCA) was applied for feature extraction and data reduction. The extracted principal components (fewer than 100) were then used for data modeling and prediction. The prediction results showed that SVM has greater potential than PLSR to predict beef eating quality, especially tenderness. The influence of animal gender on beef quality prediction was also investigated, and it was found that beef quality traits were predicted most accurately in beef from young bulls.
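The dimensionality-reduction step described above can be sketched as PCA via SVD on mean-centred spectra (assumed shapes; the downstream SVM/PLSR regression is omitted, and the spectra below are random stand-ins for real VISNIR data):

```python
import numpy as np

def pca_reduce(X, n_components):
    """X: (samples, bands) spectra. Returns (samples, n_components) PCA scores."""
    Xc = X - X.mean(axis=0)                       # centre each band
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # project onto top components

rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 1600))   # toy stand-in: 20 steaks x 1600 bands
scores = pca_reduce(spectra, 5)
print(scores.shape)  # (20, 5)
```

The regression model (SVM or PLSR) is then trained on the score matrix rather than on the 1600 raw bands, which both speeds up training and reduces overfitting.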

  14. The statistical evaluation and comparison of ADMS-Urban model for the prediction of nitrogen dioxide with air quality monitoring network.

    PubMed

    Dėdelė, Audrius; Miškinytė, Auksė

    2015-09-01

    In many countries, road traffic is one of the main sources of air pollution, associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads, and in city centers; exceedances are mainly observed at measurement stations located close to traffic. Air quality models are used to assess air quality in the city and the impact of air pollution on public health. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling, one of the most widely used methods. Monitoring and dispersion modelling are the two components of an air quality monitoring system (AQMS) that were statistically compared in this research. The Atmospheric Dispersion Modelling System (ADMS-Urban) was evaluated by comparing monthly modelled NO2 concentrations with data from continuous air quality monitoring stations in Kaunas city. Statistical measures of model performance were calculated for annual and monthly NO2 concentrations for each monitoring station site. Spatial analysis was performed using geographic information systems (GIS). The calculated statistical parameters indicated good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that agreement between modelled values and observations was better for traffic monitoring stations than for the background and residential stations.

  15. Watershed Models for Predicting Nitrogen Loads from Artificially Drained Lands

    Treesearch

    R. Wayne Skaggs; George M. Chescheir; Glenn Fernandez; Devendra M. Amatya

    2003-01-01

    Non-point sources of pollutants originate at the field scale but water quality problems usually occur at the watershed or basin scale. This paper describes a series of models developed for poorly drained watersheds. The models use DRAINMOD to predict hydrology at the field scale and a range of methods to predict channel hydraulics and nitrogen transport. In-stream...

  16. Testing and analysis of internal hardwood log defect prediction models

    Treesearch

    R. Edward Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  17. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates quality scores (Qscore) for predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the participating QA methods. This result indicates that United3D was the best of the QA methods tested in CASP9 at identifying high-quality models among those predicted by the CASP9 servers on 116 targets. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from the many candidate structures provided by various modeling methods, and should improve the accuracy of protein structure prediction.
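The consensus idea behind clustering-based QA methods like United3D can be sketched simply (this is an illustration, not the United3D algorithm itself): a model's quality estimate is its average structural similarity to all other models in the pool, on the reasoning that a model resembling many independent predictions is likely good. The similarity matrix below is invented; real methods use GDT_TS-style superposition scores:

```python
def consensus_scores(sim):
    """sim: symmetric pairwise similarity matrix. Returns per-model consensus score."""
    n = len(sim)
    return [sum(sim[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

sim = [[1.0, 0.8, 0.7],
       [0.8, 1.0, 0.6],
       [0.7, 0.6, 1.0]]
print(consensus_scores(sim))  # ≈ [0.75, 0.70, 0.65]
```

Optimised clustering variants weight or prune the pool before averaging, and combine the consensus score with single-model terms such as a contact-based potential.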

  18. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of food stuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development, for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential use of GA method to improve speed and accuracy of fruit quality prediction.
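The GA-based wavelength selection described above can be sketched as a minimal genetic algorithm (toy fitness and invented parameters): chromosomes are binary masks over spectral bands, and selection, crossover, and point mutation evolve the population toward an "informative" subset. In real use, each mask would be scored by the cross-validated prediction error of a PLS model fitted on the selected bands:

```python
import random

random.seed(1)
N_BANDS, POP, GENS = 30, 20, 40
TARGET = {2, 5, 11, 17, 23}          # toy stand-in for the informative bands

def fitness(mask):
    """Reward selecting target bands, lightly penalise extra bands (toy objective)."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & TARGET) - 0.1 * len(chosen - TARGET)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_BANDS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                  # truncation selection (elitist)
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BANDS)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_BANDS)] ^= 1 # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Replacing the toy fitness with negative RMSECV from a PLS fit gives the wavelength-selection scheme the abstract evaluates.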

  19. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
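A Hammerstein-Wiener model has a fixed structural shape: a static input nonlinearity feeds a linear time-invariant (IIR) block, whose output passes through a static output nonlinearity. The sketch below shows that cascade with invented component functions and coefficients (the paper fits the components to subjective-quality data; this is only the structure):

```python
import math

def hammerstein_wiener(u, b=(0.5, 0.3), a=(0.2,)):
    """Toy H-W cascade: tanh input block, IIR linear block, sigmoid output block."""
    v = [math.tanh(x) for x in u]                 # static input nonlinearity
    w = []
    for t in range(len(v)):                       # linear block: w[t] = sum b*v - sum a*w
        acc = sum(bk * v[t - k] for k, bk in enumerate(b) if t - k >= 0)
        acc -= sum(ak * w[t - 1 - k] for k, ak in enumerate(a) if t - 1 - k >= 0)
        w.append(acc)
    return [1.0 / (1.0 + math.exp(-x)) for x in w]  # static output nonlinearity

y = hammerstein_wiener([0.0, 1.0, 1.0, 0.5])      # per-frame quality -> TVSQ estimate
```

The linear block captures the hysteresis (memory of recent quality), while the two static nonlinearities model saturating human responses; this simple structure is what makes online prediction cheap.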

  20. Air pollution dispersion models for human exposure predictions in London.

    PubMed

    Beevers, Sean D; Kitwiroon, Nutthida; Williams, Martin L; Kelly, Frank J; Ross Anderson, H; Carslaw, David C

    2013-01-01

    The London household survey has shown that people travel, and are exposed to air pollutants, differently. This argues for human exposure assessment based upon space-time-activity data and spatio-temporal air quality predictions. For the latter, we have demonstrated the role that dispersion models can play by using two complementary models: KCLurban, which gives source apportionment information, and the Community Multi-scale Air Quality Model (CMAQ)-urban, which predicts hourly air quality. The KCLurban model is in close agreement with observations of NO(X), NO(2) and particulate matter (PM)(10/2.5), having a small normalised mean bias (-6% to 4%) and a large Index of Agreement (0.71-0.88). The temporal trends of NO(X) from the CMAQ-urban model are also in reasonable agreement with observations. Spatially, NO(2) predictions show that within tens of metres of major roads, concentrations can range from approximately 10-20 p.p.b. up to 70 p.p.b., and that for PM(10/2.5) central London roadside concentrations are approximately double the suburban background concentrations. Exposure to different PM sources is important, and we predict that brake wear-related PM(10) concentrations are approximately eight times greater near major roads than at suburban background locations. Temporally, we have shown that average NO(X) concentrations close to roads can vary by a factor of approximately six between the early-morning minimum and the morning rush-hour maximum. These results present strong arguments for the hybrid exposure model under development at King's and, in future, for in-building models and a model for the London Underground.
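The two evaluation statistics quoted above, normalised mean bias (NMB) and Willmott's Index of Agreement (IOA), can be sketched directly; the concentration values below are invented:

```python
def nmb(pred, obs):
    """Normalised mean bias: total over/under-prediction relative to observed total."""
    return sum(p - o for p, o in zip(pred, obs)) / sum(obs)

def ioa(pred, obs):
    """Willmott's Index of Agreement: 1 is perfect, 0 is no agreement."""
    ob = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - ob) + abs(o - ob)) ** 2 for p, o in zip(pred, obs))
    return 1.0 - num / den

pred, obs = [42.0, 55.0, 30.0], [40.0, 60.0, 32.0]   # toy NO2 values, p.p.b.
print(round(nmb(pred, obs), 3), round(ioa(pred, obs), 3))
```

An NMB between -6% and 4% with IOA of 0.71-0.88, as reported for KCLurban, indicates small systematic bias together with good pointwise agreement.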

  1. Predictive risk models for proximal aortic surgery

    PubMed Central

    Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César

    2017-01-01

    Predictive risk models help improve decision making, the information given to our patients, and quality control by comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, it is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, meaning aortic root replacement, supracoronary ascending aortic replacement, or aortic arch surgery. PMID:28616348

  2. Predicting fire effects on water quality: a perspective and future needs

    NASA Astrophysics Data System (ADS)

    Smith, Hugh; Sheridan, Gary; Nyman, Petter; Langhans, Christoph; Noske, Philip; Lane, Patrick

    2017-04-01

    Forest environments are a globally significant source of drinking water. Fire presents a credible threat to the supply of high quality water in many forested regions. The post-fire risk to water supplies depends on storm event characteristics, vegetation cover and fire-related changes in soil infiltration and erodibility modulated by landscape position. The resulting magnitude of runoff generation, erosion and constituent flux to streams and reservoirs determines the severity of water quality impacts in combination with the physical and chemical composition of the entrained material. Research to date suggests that most post-fire water quality impacts are due to large increases in the supply of particulates (fine-grained sediment and ash) and particle-associated chemical constituents. The largest water quality impacts result from high magnitude erosion events, including debris flow processes, which typically occur in response to short duration, high intensity storm events during the recovery period. Most research to date focuses on impacts on water quality after fire. However, information on potential water quality impacts is required prior to fire events for risk planning. Moreover, changes in climate and forest management (e.g. prescribed burning) that affect fire regimes may alter water quality risks. Therefore, prediction requires spatial-temporal representation of fire and rainfall regimes coupled with information on fire-related changes to soil hydrologic parameters. Recent work has applied such an approach by combining a fire spread model with historic fire weather data in a Monte Carlo simulation to quantify probabilities associated with fire and storm events generating debris flows and fine sediment influx to a reservoir located in Victoria, Australia. Prediction of fire effects on water quality would benefit from further research in several areas. First, more work on regional-scale stochastic modelling of intersecting fire and storm events with landscape

  3. Development of a multi-ensemble Prediction Model for China

    NASA Astrophysics Data System (ADS)

    Brasseur, G. P.; Bouarar, I.; Petersen, A. K.

    2016-12-01

    As part of the EU-sponsored PANDA and MarcoPolo projects, a multi-model prediction system comprising seven models has been developed. Most regional models use global air quality predictions provided by the Copernicus Atmosphere Monitoring Service and downscale the forecasts to relatively high spatial resolution over eastern China. The paper will describe the forecast system and show examples of forecasts produced for several Chinese urban areas and displayed on a web site developed by the Dutch meteorological service. A discussion of the accuracy of the predictions, based on a detailed validation against surface measurements from the Chinese monitoring network, will be presented.

  4. Evaluation of multivariate linear regression and artificial neural networks in prediction of water quality parameters

    PubMed Central

    2014-01-01

    This paper examined the efficiency of multivariate linear regression (MLR) and artificial neural network (ANN) models in predicting two major water quality parameters in a wastewater treatment plant. Biochemical oxygen demand (BOD) and chemical oxygen demand (COD), indirect indicators of organic matter, are representative parameters of sewage quality. Performance of the models was evaluated using the coefficient of correlation (r), root mean square error (RMSE) and bias values. The BOD and COD values computed by the ANN and regression models were in close agreement with their respective measured values. Results showed that the ANN model performed better than the MLR model. For the optimized ANN with input values of temperature (T), pH, total suspended solids (TSS) and total solids (TS), the comparative indices were RMSE = 25.1 mg/L and r = 0.83 for prediction of BOD, and RMSE = 49.4 mg/L and r = 0.81 for prediction of COD. It was found that the ANN model could be employed successfully to estimate BOD and COD at the inlet of wastewater biochemical treatment plants. Moreover, sensitivity analysis showed that pH had a greater effect than the other parameters on the prediction of BOD and COD. Both models predicted BOD better than COD. PMID:24456676
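
The three performance indices named in this abstract (r, RMSE and bias) are simple to compute; a minimal NumPy sketch, using made-up BOD values rather than data from the study:

```python
import numpy as np

def evaluate(measured, predicted):
    """Return correlation coefficient (r), root mean square error (RMSE),
    and bias (mean of prediction minus measurement)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r = np.corrcoef(measured, predicted)[0, 1]
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    bias = np.mean(predicted - measured)
    return r, rmse, bias

# Hypothetical BOD values (mg/L) for illustration only.
measured = [120.0, 150.0, 180.0, 200.0, 240.0]
predicted = [130.0, 145.0, 190.0, 195.0, 250.0]
r, rmse, bias = evaluate(measured, predicted)
```

A positive bias indicates systematic over-prediction; r and RMSE capture, respectively, linear association and typical error magnitude.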

  5. Water Quality, Cyanobacteria, and Environmental Factors and Their Relations to Microcystin Concentrations for Use in Predictive Models at Ohio Lake Erie and Inland Lake Recreational Sites, 2013-14

    USGS Publications Warehouse

    Francy, Donna S.; Graham, Jennifer L.; Stelzer, Erin A.; Ecker, Christopher D.; Brady, Amie M. G.; Struffolino, Pam; Loftin, Keith A.

    2015-11-06

    The results of this study showed that water-quality and environmental variables are promising for use in site-specific daily or long-term predictive models. In order to develop more accurate models to predict toxin concentrations at freshwater lake sites, data need to be collected more frequently and for consecutive days in future studies.

  6. Klang River water quality modelling using music

    NASA Astrophysics Data System (ADS)

    Zahari, Nazirul Mubin; Zawawi, Mohd Hafiz; Muda, Zakaria Che; Sidek, Lariyah Mohd; Fauzi, Nurfazila Mohd; Othman, Mohd Edzham Fareez; Ahmad, Zulkepply

    2017-09-01

    Water is an essential resource that sustains life on earth; changes in the natural quality and distribution of water have ecological impacts that can sometimes be devastating. Malaysia has recently faced many environmental issues regarding water pollution. The main causes of river pollution are rapid urbanization arising from the development of residential, commercial and industrial sites, infrastructural facilities and others. The purpose of the study was to predict the water quality of the Klang River at the Connaught Bridge Power Station (CBPS), accounting for the effects of low and high tide, and to forecast the pollutant concentrations of biochemical oxygen demand (BOD) and total suspended solids (TSS) for the existing land use of the catchment area through water quality modelling with the MUSIC software. A further aim was to identify an integrated urban stormwater treatment system (Best Management Practices, or BMPs) that achieves optimal performance in improving the water quality of catchments in tropical climates. The MUSIC model results show that at station 1 the BOD5 concentration can be reduced from Class IV to Class III, and the TSS concentration from Class III to Class II. The model predicted a mean TSS reduction of 0.17%, TP reduction of 0.14%, TN reduction of 0.48% and BOD5 reduction of 0.31% for station 1. Thus, with the proposed BMPs the water quality is safe to use; water quality monitoring remains important because polluting activities are harmful to aquatic organisms and public health.

  7. Using Intel's Knight Landing Processor to Accelerate Global Nested Air Quality Prediction Modeling System (GNAQPMS) Model

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.

    2016-12-01

    The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled with a mercury transport module to investigate mercury pollution. In this study, we present our work porting the GNAQPMS model to the Intel Xeon Phi processor, Knights Landing (KNL), to accelerate the model. KNL is the second-generation product of the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL adds new hardware features and can be used as a standalone processor as well as a coprocessor alongside other CPUs. Using the Intel VTune tool, the high-overhead modules in the GNAQPMS model were identified: the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These modules were accelerated by optimizing the code and exploiting new features of KNL. The following optimization measures were taken: 1) changing the pure MPI parallel mode to a hybrid MPI/OpenMP parallel mode; 2) vectorizing the code to use the 512-bit-wide vector computation units; 3) reducing unnecessary memory accesses and computation; 4) reducing Thread Local Storage (TLS) for common variables in each OpenMP thread in CBMZ; 5) changing global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS increased greatly on both the CPU and KNL platforms: single-node tests showed that the optimized version achieved a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a one-socket KNL platform compared with the baseline code, i.e. a 1.29x speedup for KNL over the two-socket CPU platform.

  8. Modeling hydrodynamics, water quality, and benthic processes to predict ecological effects in Narragansett Bay

    EPA Science Inventory

    The environmental fluid dynamics code (EFDC) was used to study the three dimensional (3D) circulation, water quality, and ecology in Narragansett Bay, RI. Predictions of the Bay hydrodynamics included the behavior of the water surface elevation, currents, salinity, and temperatur...

  9. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods is investigated, in comparison also with spatial interpolation methods, for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water, and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict whether the chloride concentration in a water well will exceed the allowable concentration, making the water unfit for the intended use. A statistical classification algorithm achieved the best predictive performance, and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems in hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
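
As a hedged illustration of this kind of approach (not the authors' actual algorithm or data), a logistic-regression classifier can be fitted to predict whether a well's chloride concentration exceeds a limit. The well coordinates, the chloride field, and the 250 mg/L threshold below are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wells: coordinates in km; chloride rises toward the origin,
# standing in for a saline zone in this toy setup.
X = rng.uniform(0.0, 10.0, size=(200, 2))
chloride = 400.0 - 30.0 * X[:, 0] - 20.0 * X[:, 1] + rng.normal(0.0, 20.0, 200)
y = (chloride > 250.0).astype(float)  # 250 mg/L: illustrative limit only

# Standardize features, then fit logistic regression by gradient descent.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(len(Xs)), Xs])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
    w -= 0.5 * Xb.T @ (p - y) / len(y)

pred = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5
accuracy = float((pred == (y > 0.5)).mean())
```

Because chloride falls with distance from the saline zone here, the fitted coordinate coefficients come out negative; a tree-based or kernel classifier could be swapped in without changing the exceedance framing.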

  10. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large pool consisting mostly of low-quality models, is still a largely unsolved problem. We introduce DeepQA, a novel single-model quality assessment method based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physico-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance than Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .

  11. Key Questions in Building Defect Prediction Models in Practice

    NASA Astrophysics Data System (ADS)

    Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas

    The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to identify these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.

  12. Predicting soil quality indices with near infrared analysis in a wildfire chronosequence.

    PubMed

    Cécillon, Lauric; Cassagne, Nathalie; Czarnes, Sonia; Gros, Raphaël; Vennetier, Michel; Brun, Jean-Jacques

    2009-01-15

    We investigated the power of near infrared (NIR) analysis for the quantitative assessment of soil quality in a wildfire chronosequence. The effects of wildfire disturbance and of the soil engineering activity of earthworms on soil organic matter quality were first assessed with principal component analysis of NIR spectra. Three soil quality indices were then calculated using an adaptation of the method proposed by Velasquez et al. [Velasquez, E., Lavelle, P., Andrade, M. GISQ, a multifunctional indicator of soil quality. Soil Biol Biochem 2007; 39: 3066-3080.], each addressing an ecosystem service provided by soils: organic matter storage, nutrient supply and biological activity. Partial least squares regression models were developed to test the predictive ability of NIR analysis for these soil quality indices. All models reached coefficients of determination above 0.90 and ratios of performance to deviation above 2.8. This finding provides new opportunities for the monitoring of soil quality using NIR scanning of soil samples.
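
The ratio of performance to deviation (RPD) quoted in this abstract is the standard deviation of the reference values divided by the RMSE of the predictions; a minimal sketch with invented index values, not the study's data:

```python
import numpy as np

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of the reference values
    divided by the RMSE of the predictions. RPD above roughly 2 is
    commonly read as adequate for quantitative prediction."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    return reference.std(ddof=1) / rmse

# Invented soil-quality index values on a 0-1 scale.
reference = [0.20, 0.40, 0.50, 0.70, 0.90]
predicted = [0.25, 0.38, 0.52, 0.68, 0.88]
value = rpd(reference, predicted)
```

Intuitively, RPD asks how much smaller the model's typical error is than the natural spread of the property being predicted.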

  13. Measuring and predicting prostate cancer related quality of life changes using EPIC for clinical practice.

    PubMed

    Chipman, Jonathan J; Sanda, Martin G; Dunn, Rodney L; Wei, John T; Litwin, Mark S; Crociani, Catrina M; Regan, Meredith M; Chang, Peter

    2014-03-01

    We expanded the clinical usefulness of EPIC-CP (Expanded Prostate Cancer Index Composite for Clinical Practice) by evaluating its responsiveness to health related quality of life changes, defining the minimally important differences for an individual patient change in each domain and applying it to a sexual outcome prediction model. In 1,201 subjects from a previously described multicenter longitudinal cohort we modeled the EPIC-CP domain scores of each treatment group before treatment, and at short-term and long-term followup. We considered a posttreatment domain score change from pretreatment of 0.5 SD or greater clinically significant and p ≤ 0.01 statistically significant. We determined the domain minimally important differences using the pooled 0.5 SD of the 2, 6, 12 and 24-month posttreatment changes from pretreatment values. We then recalibrated an EPIC-CP based nomogram model predicting 2-year post-prostatectomy functional erection from that developed using EPIC-26. For each health related quality of life domain EPIC-CP was sensitive to similar posttreatment health related quality of life changes with time, as was observed using EPIC-26. The EPIC-CP minimally important differences in changes in the urinary incontinence, urinary irritation/obstruction, bowel, sexual and vitality/hormonal domains were 1.0, 1.3, 1.2, 1.6 and 1.0, respectively. The EPIC-CP based sexual prediction model performed well (AUC 0.76). It showed robust agreement with its EPIC-26 based counterpart with 10% or less predicted probability differences between models in 95% of individuals and a mean ± SD difference of 0.0 ± 0.05 across all individuals. EPIC-CP is responsive to health related quality of life changes during convalescence and it can be used to predict 2-year post-prostatectomy sexual outcomes. It can facilitate shared medical decision making and patient centered care. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc

  14. Development of VIS/NIR spectroscopic system for real-time prediction of fresh pork quality

    NASA Astrophysics Data System (ADS)

    Zhang, Haiyun; Peng, Yankun; Zhao, Songwei; Sasao, Akira

    2013-05-01

    Quality attributes of fresh meat influence its nutritional value and consumers' purchasing decisions. The aim of the research was to develop a prototype for real-time detection of meat quality, consisting of a hardware system and a software system. A VIS/NIR spectrograph in the range of 350 to 1100 nm was used to collect the spectral data. In order to acquire more potential information from the sample, an optical fiber multiplexer was used, and a portable cylindrical device was designed and fabricated to hold the optical fibers from the multiplexer. A high-power halogen tungsten lamp was selected as the light source. Spectral data were acquired from the surface of the sample with an exposure time of 2.17 ms by pressing the trigger switch on the self-developed system, which could automatically acquire, process, display and save the data; moreover, quality could be predicted on-line. A total of 55 fresh pork samples were used to develop the prediction model for real-time detection. The spectral data were pretreated with the standard normal variate (SNV) transformation, and partial least squares regression (PLSR) was used to develop the prediction model. For the validation set, the correlation coefficient and root mean square error were 0.810 and 0.653 for water content, and 0.803 and 0.098 for pH, respectively. The research shows that a real-time non-destructive detection system based on VIS/NIR spectroscopy can efficiently predict the quality of fresh meat.
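
The SNV pretreatment mentioned above normalizes each spectrum by its own mean and standard deviation, removing additive baseline offsets and multiplicative scatter effects before regression; a minimal sketch on synthetic spectra, not the study's data:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre each spectrum (row) on its own mean
    and scale by its own standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two made-up reflectance spectra; the second is the first distorted by a
# gain of 2 and an offset of 0.05, as scatter effects typically do.
raw = np.array([[0.10, 0.20, 0.30, 0.40],
                [0.25, 0.45, 0.65, 0.85]])
corrected = snv(raw)  # after SNV the two rows coincide
```

Because SNV operates per spectrum, it needs no reference spectrum, which makes it convenient for an on-line instrument like the one described.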

  15. Odor Emotional Quality Predicts Odor Identification.

    PubMed

    Bestgen, Anne-Kathrin; Schulze, Patrick; Kuchinke, Lars

    2015-09-01

    It is commonly agreed that there is a strong link between emotion and olfaction. Odor-evoked memories are experienced as more emotional than memories evoked by verbal, visual, and tactile stimuli. Moreover, the emotional quality of odor cues increases memory performance; contrary to this, odors are poor retrieval cues for verbal labels. To examine the relation between the emotional quality of an odor and its likelihood of identification, this study evaluates how normative emotion ratings based on the 3-dimensional affective space model (which includes valence, arousal, and dominance), using the Self-Assessment Manikin by Bradley and Lang (Bradley MM, Lang PJ. 1994. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. J Behav Ther Exp Psychiatry. 25(1):49-59.) and the Positive and Negative Affect Schedule (Watson D, Clark LA, Tellegen A. 1988. Development and validation of brief measures of positive and negative affect: the PANAS scales. J Pers Soc Psychol. 54(6):1063-1070.), predict the identification of odors in a multiple choice condition. The best fitting logistic regression model includes squared valence and dominance and thus points to a significant role of specific emotional features of odors as a main clue for odor identification. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. COMPARISONS OF SPATIAL PATTERNS OF WET DEPOSITION TO MODEL PREDICTIONS

    EPA Science Inventory

    The Community Multiscale Air Quality model, (CMAQ), is a "one-atmosphere" model, in that it uses a consistent set of chemical reactions and physical principles to predict concentrations of primary pollutants, photochemical smog, and fine aerosols, as well as wet and dry depositi...

  17. An Overview of Atmospheric Chemistry and Air Quality Modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.

    2017-01-01

    This presentation will include my personal research experience and an overview of atmospheric chemistry and air quality modeling for the participants of the NASA Student Airborne Research Program (SARP 2017). The presentation will also provide examples of ways to apply airborne observations to chemical transport model (CTM) and air quality (AQ) model evaluation. CTM and AQ models are important tools for understanding tropospheric-stratospheric composition, atmospheric chemistry processes, meteorology, and air quality. This presentation will focus on how NASA scientists currently apply CTM and AQ models to better understand these topics. Finally, the importance of airborne observations in evaluating these topics, and how in situ and remote sensing observations can be used to evaluate and improve CTM and AQ model predictions, will be highlighted.

  18. The Level of Quality of Work Life to Predict Work Alienation

    ERIC Educational Resources Information Center

    Erdem, Mustafa

    2014-01-01

    The current research aims to determine the level of elementary school teachers' quality of work life (QWL) to predict work alienation. The study was designed using the relational survey model. The research population consisted of 1096 teachers employed at 25 elementary schools within the city of Van in the academic year 2010- 2011, and 346…

  19. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    PubMed

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for postoperative pancreatic fistula. We conducted a systematic search of the PubMed and EMBASE databases to identify articles published before January 1, 2015, that described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on model development, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42%), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71%) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86%) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could endanger model development, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and the accuracy of the probability estimates for predicting postoperative pancreatic fistula.

  20. A state of the art regarding urban air quality prediction models

    NASA Astrophysics Data System (ADS)

    Croitoru, Cristiana; Nastase, Ilinca

    2018-02-01

    Urban pollution represents an increasing risk to residents of urban regions, particularly in large, heavily industrialized cities, given that traffic is responsible for more than 25% of gaseous air pollutants and dust particles. Air quality modelling plays an important role in air pollution control and management by providing guidelines for better and more efficient air quality forecasting, along with smart monitoring sensor networks. Advances in simulation, forecasting and monitoring technology are part of the new smart cities, which offer a healthy environment for their occupants.

  1. LINKING ETA MODEL WITH THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM: OZONE BOUNDARY CONDITIONS

    EPA Science Inventory

    A prototype surface ozone concentration forecasting model system for the Eastern U.S. has been developed. The model system consists of a regional meteorological model and a regional air quality model. It demonstrated a strong dependence of predictions on its ozone boundary conditions....

  2. Modeling Benthic Sediment Processes to Predict Water ...

    EPA Pesticide Factsheets

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to the loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal benthic fluxes of nutrients and chemicals into the water column in Narragansett Bay. Benthic flux is essential to properly model water quality and ecology in estuarine and coastal systems.

  3. Implications of Modeling Uncertainty for Water Quality Decision Making

    NASA Astrophysics Data System (ADS)

    Shabman, L.

    2002-05-01

    The National Academy of Sciences report, "Assessing the TMDL Approach to Water Quality Management," endorsed the watershed-based, ambient-water-quality-focused approach to water quality management called for in the TMDL program. The committee felt that available data and models were adequate to move such a program forward, if the EPA and all stakeholders better understood the nature of the scientific enterprise and its application to the TMDL program. Specifically, the report called for a greater acknowledgement of model prediction uncertainty in making and implementing TMDL plans. To ensure that such uncertainty is addressed in water quality decision making, the committee called for a commitment to "adaptive implementation" of water quality management plans. The committee found that the number and complexity of the interactions of multiple stressors, combined with model prediction uncertainty, mean that we need to avoid the temptation to make assurances that specific actions will result in attainment of particular water quality standards. Until the work on solving a water quality problem begins, analysts and decision makers cannot be sure what the correct solutions are, or even what water quality goals a community should be seeking. In complex systems we need to act in order to learn; adaptive implementation is a concurrent process of action and learning. Learning requires (1) continued monitoring of the waterbody to determine how it responds to the actions taken and (2) carefully designed experiments in the watershed. If we do not design learning into what we attempt, we are not doing adaptive implementation. Therefore, there needs to be an increased commitment to monitoring and experiments in watersheds that will lead to learning.
This presentation will 1) explain the logic for adaptive implementation; 2) discuss the ways that water quality modelers could characterize and explain model uncertainty to decision makers; and 3) speculate on the implications

  4. The Effect of Data Quality on Short-term Growth Model Projections

    Treesearch

    David Gartner

    2005-01-01

    This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used with the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...

  5. External validation of Vascular Study Group of New England risk predictive model of mortality after elective abdominal aorta aneurysm repair in the Vascular Quality Initiative and comparison against established models.

    PubMed

    Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik

    2018-01-01

    The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patient risk categories and against established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of a recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. This model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare the model fit between the models. Model discrimination was assessed in different risk group VQI quintiles. Data from 4431 cases from the VSGNE sample with an overall mortality rate of 1.4% were used to develop the model. The internally validated VSGNE model showed very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) in the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust predictive ability for mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE model over the Medicare model (C = 0.780), the Vascular Governance Northwest model (0.774), and the Glasgow Aneurysm Score (0

  6. Assessment of air quality benefits from national air pollution control policies in China. Part II: Evaluation of air quality predictions and air quality benefits assessment

    NASA Astrophysics Data System (ADS)

    Wang, Litao; Jang, Carey; Zhang, Yang; Wang, Kai; Zhang, Qiang; Streets, David; Fu, Joshua; Lei, Yu; Schreifels, Jeremy; He, Kebin; Hao, Jiming; Lam, Yun-Fat; Lin, Jerry; Meskhidze, Nicholas; Voorhees, Scott; Evarts, Dale; Phillips, Sharon

    2010-09-01

    Following the meteorological evaluation in Part I, this Part II paper presents the statistical evaluation of air quality predictions by the U.S. Environmental Protection Agency (U.S. EPA)'s Community Multi-Scale Air Quality (Models-3/CMAQ) model for the four simulated months in the base year 2005. The surface predictions were evaluated using the Air Pollution Index (API) data published by the China Ministry of Environmental Protection (MEP) for 31 capital cities and daily fine particulate matter (PM2.5, particles with aerodynamic diameter less than or equal to 2.5 μm) observations at an individual site at Tsinghua University (THU). To overcome the shortage of surface observations, satellite data are used to assess the column predictions, including tropospheric nitrogen dioxide (NO2) column abundance and aerosol optical depth (AOD). The results show that CMAQ gives reasonably good predictions of air quality. The air quality improvement that would result from the targeted sulfur dioxide (SO2) and nitrogen oxide (NOx) emission controls in China was assessed for the objective year 2010. The results show that the emission controls can lead to significant air quality benefits. SO2 concentrations in highly polluted areas of East China in 2010 are estimated to decrease by 30-60% compared to the levels in the 2010 Business-As-Usual (BAU) case. The annual PM2.5 can also decline by 3-15 μg m-3 (4-25%) owing to the lower SO2 and sulfate concentrations. If similar controls are implemented for NOx emissions, NOx concentrations are estimated to decrease by 30-60% compared with the 2010 BAU scenario. The annual mean PM2.5 concentrations would also decline by 2-14 μg m-3 (3-12%). In addition, the number of ozone (O3) non-attainment areas in northern China is projected to be much lower, with the maximum 1-h average O3 concentrations in summer reduced by 8-30 ppb.

  7. Performance evaluation of air quality models for predicting PM10 and PM2.5 concentrations at urban traffic intersection during winter period.

    PubMed

    Gokhale, Sharad; Raokhande, Namita

    2008-05-01

    There are several models that can be used to evaluate roadside air quality. Comparing the operational performance of different models under local conditions is desirable so that the best-performing model can be identified. Three air quality models, namely the 'modified General Finite Line Source Model' (M-GFLSM) for particulates, the 'California Line Source' (CALINE3) model, and the 'California Line Source for Queuing & Hot Spot Calculations' (CAL3QHC) model, were identified for evaluating the air quality at one of the busiest traffic intersections in the city of Guwahati. These models were evaluated statistically against vehicle-derived airborne particulate mass emissions in two sizes, PM10 and PM2.5, the prevailing meteorology, and the temporal distribution of the measured daily average PM10 and PM2.5 concentrations in wintertime. The study showed that the CAL3QHC model makes better predictions than the other models under varied meteorology and traffic conditions. The detailed analysis reveals that agreement between the measured and modeled PM10 and PM2.5 concentrations was reasonably good for the CALINE3 and CAL3QHC models, with CAL3QHC performing better than CALINE3; the monthly performance measures led to similar results. These two models also performed well across wind speed classes except for low winds (<1 m s(-1)), for which the M-GFLSM model showed a tendency toward better performance for PM10. Nevertheless, the CAL3QHC model outperformed the others for both particulate sizes and all wind classes, and can therefore be recommended for air quality assessment at urban traffic intersections.
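
Statistical evaluations of line-source models like these commonly report, alongside RMSE and bias, Willmott's index of agreement d (the same d quoted in the next record). A minimal sketch with invented concentration values:

```python
import numpy as np

def index_of_agreement(observed, modeled):
    """Willmott's index of agreement d: 1 = perfect match, 0 = none.
    d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2)."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(modeled, dtype=float)
    denom = np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return 1.0 - np.sum((p - o) ** 2) / denom

# Hypothetical measured vs. modeled daily PM10 concentrations (ug/m3).
d = index_of_agreement([80, 95, 110, 130], [85, 90, 118, 125])
```

Unlike the correlation coefficient, d is bounded in [0, 1] and penalizes systematic offsets, which is why it is popular for dispersion-model evaluation.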

  8. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but are, in general, incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by extremes and by sustained average concentrations of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is a technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict carbon monoxide (CO) concentration distributions at a traffic intersection, the Income Tax Office (ITO) in Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'; the traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance criterion is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India.
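
    The statistical component of such a hybrid can be illustrated by fitting a log-logistic distribution to a deterministic model's output and reading extreme percentiles off the fitted curve. A rough sketch with hypothetical hourly CO values and a crude quantile-based fit (not the estimation procedure used in the paper):

```python
import math

def llq(p, alpha, beta):
    """Log-logistic quantile: Q(p) = alpha * (p/(1-p))**(1/beta)."""
    return alpha * (p / (1 - p)) ** (1 / beta)

def fit_loglogistic(data):
    """Crude fit: alpha (scale) from the sample median, beta (shape)
    from the 84th/50th percentile ratio. A quick illustrative estimator,
    not a maximum-likelihood fit."""
    xs = sorted(data)
    n = len(xs)
    med = xs[n // 2]
    q84 = xs[int(0.84 * n)]
    beta = math.log(0.84 / 0.16) / math.log(q84 / med)
    return med, beta

# Hypothetical hourly CO concentrations (ppm) from a deterministic
# line-source model run; the statistical component extends the tails.
co = [1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.1, 3.6, 4.2, 5.0, 6.1, 7.5]
alpha, beta = fit_loglogistic(co)
p95 = llq(0.95, alpha, beta)  # hybrid estimate of an extreme percentile
```

    The fitted quantile function can then be compared percentile-by-percentile against observations, which is how agreement over a 5-99 percentile range would be checked.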

  9. Assessing chemistry schemes and constraints in air quality models used to predict ozone in London against the detailed Master Chemical Mechanism.

    PubMed

    Malkin, Tamsin L; Heard, Dwayne E; Hood, Christina; Stocker, Jenny; Carruthers, David; MacKenzie, Ian A; Doherty, Ruth M; Vieno, Massimo; Lee, James; Kleffmann, Jörg; Laufs, Sebastian; Whalley, Lisa K

    2016-07-18

    Air pollution is the environmental factor with the greatest impact on human health in Europe. Understanding the key processes driving air quality across the relevant spatial scales, especially during pollution exceedances and episodes, is essential to provide effective predictions for both policymakers and the public. It is particularly important for policy regulators to understand the drivers of local air quality that can be regulated by national policies versus the contribution from regional pollution transported from mainland Europe or elsewhere. One of the main objectives of the Coupled Urban and Regional processes: Effects on AIR quality (CUREAIR) project is to determine local and regional contributions to ozone events. A detailed zero-dimensional (0-D) box model run with the Master Chemical Mechanism (MCMv3.2) is used as the benchmark against which the less explicit chemistry mechanisms of the Generic Reaction Set (GRS) and the Common Representative Intermediates (CRIv2-R5) schemes are evaluated. GRS and CRI are used by the Atmospheric Dispersion Modelling System (ADMS-Urban) and the regional chemistry transport model EMEP4UK, respectively. The MCM model uses a near-explicit chemical scheme for the oxidation of volatile organic compounds (VOCs) and is constrained to observations of VOCs, NOx, CO, HONO (nitrous acid), photolysis frequencies and meteorological parameters measured during the ClearfLo (Clean Air for London) campaign. The sensitivity of the less explicit chemistry schemes to different model inputs has been investigated: constraining GRS to the total VOC observed during ClearfLo, as opposed to VOC derived from ADMS-Urban dispersion calculations (including emissions and background concentrations), led to a significant increase (674% during winter) in modelled ozone. The inclusion of HONO chemistry in this mechanism, particularly during wintertime when other radical sources are limited, led to substantial increases in the predicted ozone levels.

  10. Evaluation of ride quality prediction methods for helicopter interior noise and vibration environments

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.

    1984-01-01

    The results of a simulator study conducted to compare and validate various ride quality prediction methods for assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, an assessment of various ride quality metrics including the NASA ride comfort model, and an examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort in the combined noise and vibration environment was the NASA discomfort index.

  11. An examination of data quality on QSAR Modeling in regards ...

    EPA Pesticide Factsheets

    The development of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms that provide access to predictive models, we have attempted to discriminate the influence of the quality versus the quantity of data available to develop and validate QSAR models. We have focused our efforts on the widely used EPISuite software, initially developed over two decades ago, and specifically on the PHYSPROP dataset used to train the EPISuite prediction models. This presentation will review our approaches to examining key datasets, the delivery of curated data, and the development of machine-learning models for thirteen separate property endpoints of interest to environmental science. We will also review how these data will be made freely accessible to the community via a new "chemistry dashboard". This abstract does not reflect U.S. EPA policy. Presented at UNC-CH.

  12. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools for estimating the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: Can SWAT determine suspended sediment sources with confidence when end-of-basin data are used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite a reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical reliance on a single best-fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
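
    Bayesian model averaging, as in point (iv), weights each calibration's source apportionment by its posterior model probability. A minimal sketch under equal prior model probabilities; the likelihood values and urban-source fractions below are hypothetical, not from the study:

```python
import math

def bma_weights(log_likelihoods):
    """Posterior model probabilities under equal priors, computed
    stably via the log-sum-exp trick."""
    m = max(log_likelihoods)
    w = [math.exp(ll - m) for ll in log_likelihoods]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical: three calibrations attribute different fractions of the
# sediment load to urban sources, with different fits to the data.
urban_fraction = [0.55, 0.40, 0.70]
log_lik = [-120.3, -121.0, -123.5]

w = bma_weights(log_lik)
bma_urban = sum(wi * f for wi, f in zip(w, urban_fraction))
```

    The averaged apportionment down-weights poorly fitting calibrations instead of discarding them, which is the point of synthesizing divergent but adequate models.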

  13. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance video presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with existing coding standards, which were developed mainly for generic video, surveillance video coding should be designed to exploit the special characteristics of surveillance video (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve background and foreground prediction efficiency in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter-prediction modes are selectively utilized: the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, BRP effectively improves prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, BDP provides a better reference after subtracting the background pixels. Experimental results show that BMAP achieves at least twice the compression ratio of the AVC (MPEG-4 Advanced Video Coding) high profile on surveillance video, with only a slight increase in encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance video, BMAP also obtains remarkable gains over several state-of-the-art methods.
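
    The block classification step can be illustrated with a toy running-average background model and a mean-absolute-difference rule. The thresholds, data, and update rule below are illustrative assumptions, not the actual BMAP classifier:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model over grayscale pixel rows."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def classify_block(bg_block, cur_block, tau=10.0):
    """Label a block by the fraction of pixels that differ from the
    modeled background: 'background', 'foreground', or 'hybrid'."""
    diffs = [abs(b - c) for b, c in zip(bg_block, cur_block)]
    moving = sum(d > tau for d in diffs) / len(diffs)
    if moving < 0.1:
        return "background"
    if moving > 0.9:
        return "foreground"
    return "hybrid"

# Hypothetical 1-D "blocks" of pixel intensities
bg = [100.0] * 8
static = [101.0, 99.0, 100.0, 102.0, 98.0, 100.0, 101.0, 99.0]
moving_obj = [180.0, 185.0, 190.0, 200.0, 178.0, 182.0, 188.0, 195.0]
mixed = [100.0, 100.0, 100.0, 100.0, 180.0, 185.0, 190.0, 200.0]
```

    In the BMAP scheme, blocks labelled background would use the modeled background as a long-term reference (BRP), while hybrid blocks would be predicted after background subtraction (BDP).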

  14. A mathematical model of reservoir sediment quality prediction based on land-use and erosion processes in watershed

    NASA Astrophysics Data System (ADS)

    Junakova, N.; Balintova, M.; Junak, J.

    2017-10-01

    The aim of this paper is to propose a mathematical model for determining the total nitrogen (N) and phosphorus (P) content of eroded soil particles, with emphasis on predicting bottom sediment quality in reservoirs. The adsorbed nutrient concentrations are calculated using the Universal Soil Loss Equation (USLE) extended by the determination of the average soil nutrient concentration in topsoils. The average annual vegetation and management factor is divided into five periods of the cropping cycle. For selected plants, the average plant nutrient uptake, divided into the same five cropping periods, is also proposed. The average nutrient concentrations in eroded soil particles in adsorbed form are modified by a sediment enrichment ratio to obtain the total nutrient content in transported soil particles. The model was designed for the conditions of north-eastern Slovakia. The study was carried out in the agricultural basin of the small water reservoir Klusov.
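
    The structure of such a model can be sketched as USLE soil loss scaled by the topsoil nutrient concentration and a sediment enrichment ratio. The functional form and values below are assumptions for illustration, not the paper's calibrated model:

```python
def usle_soil_loss(R, K, LS, C, P):
    """USLE: A = R*K*LS*C*P, annual soil loss (t/ha/yr)."""
    return R * K * LS * C * P

def nutrient_load(A, conc_mg_per_kg, enrichment_ratio):
    """Nutrient mass carried by eroded sediment (kg/ha/yr).
    t/ha * mg/kg = g/ha; divide by 1000 for kg/ha."""
    return A * conc_mg_per_kg * enrichment_ratio / 1000.0

# Hypothetical values for an arable plot
A = usle_soil_loss(R=500, K=0.30, LS=1.2, C=0.25, P=1.0)   # t/ha/yr
p_load = nutrient_load(A, conc_mg_per_kg=800, enrichment_ratio=1.5)
```

    In the paper's approach the C factor (and the plant nutrient uptake) would additionally vary over the five cropping periods rather than being a single annual value.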

  15. An Artificial Intelligence System to Predict Quality of Service in Banking Organizations

    PubMed Central

    Castelli, Mauro; Manzoni, Luca; Popovič, Aleš

    2016-01-01

    Quality of service, that is, the waiting time that customers must endure in order to receive a service, is a critical performance aspect in private and public service organizations. Providing good service quality is particularly important in highly competitive sectors where similar services exist. In this paper, focusing on the banking sector, we propose an artificial intelligence system for building a model for the prediction of service quality. While the traditional approach to building analytical models relies on theories and assumptions about the problem at hand, we propose a novel approach for learning models from actual data. Thus, the proposed approach is not biased by the knowledge that experts may have about the problem, but is based entirely on the available data. The system is built on a recently defined variant of genetic programming that allows practitioners to include the concept of semantics in the search process, which has beneficial effects on the search and produces analytical models that are based only on the data and not on domain-dependent knowledge. PMID:27313604

  17. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been slow. Effective methods are therefore urgently needed to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. Together, these methods significantly refine low-quality models without any knowledge of the solution. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, a refinement test on two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round PCST-SBM refinement protocol improves the model quality of a protein to a level sufficient for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.

  18. Predicting the Accuracy of Protein–Ligand Docking on Homology Models

    PubMed Central

    BORDOGNA, ANNALISA; PANDINI, ALESSANDRO; BONATI, LAURA

    2011-01-01

    Ligand–protein docking is increasingly used in Drug Discovery. The initial limitations imposed by a reduced availability of target protein structures have been overcome by the use of theoretical models, especially those derived by homology modeling techniques. While this greatly extended the use of docking simulations, it also introduced the need for general and robust criteria to estimate the reliability of docking results given the model quality. To this end, a large-scale experiment was performed on a diverse set including experimental structures and homology models for a group of representative ligand–protein complexes. A wide spectrum of model quality was sampled using templates at different evolutionary distances and different strategies for target–template alignment and modeling. The obtained models were scored by a selection of the most used model quality indices. The binding geometries were generated using AutoDock, one of the most common docking programs. An important result of this study is that indeed quantitative and robust correlations exist between the accuracy of docking results and the model quality, especially in the binding site. Moreover, state-of-the-art indices for model quality assessment are already an effective tool for an a priori prediction of the accuracy of docking experiments in the context of groups of proteins with conserved structural characteristics. PMID:20607693

  19. Prediction models for successful external cephalic version: a systematic review.

    PubMed

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful external cephalic version (ECV), and to assess their quality, development and performance, we searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized performance in terms of discrimination, calibration and clinical usefulness. We collected the predictor variables together with their reported significance in order to identify important predictors of successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation, but only one was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation; this model also completed the phase of external validation. For none of the models was the impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and whether the fetal head was palpable. One model was assessed for discrimination and calibration at both internal (AUC 0.71) and external (AUC 0.64) validation, while two other models were assessed with discrimination or calibration only. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
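
    The discrimination figures quoted here (AUC 0.71 internal, 0.64 external) are concordance probabilities, which can be computed directly from predicted probabilities and observed outcomes. A small sketch with hypothetical data (labels and scores are invented for illustration):

```python
def auc(labels, scores):
    """Concordance (c-statistic): probability that a randomly chosen
    positive case scores higher than a randomly chosen negative one;
    ties count as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical: 1 = successful ECV, score = model-predicted probability
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.8, 0.4, 0.7, 0.6, 0.65, 0.2, 0.55, 0.45]
c = auc(labels, scores)
```

    A drop from internal to external AUC, as reported for the validated model, is the usual signature of optimism in internally validated models.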

  20. Can Predictive Modeling Identify Head and Neck Oncology Patients at Risk for Readmission?

    PubMed

    Manning, Amy M; Casper, Keith A; St Peter, Kay; Wilson, Keith M; Mark, Jonathan R; Collar, Ryan M

    2018-05-01

    Objective: Unplanned readmission within 30 days is a contributor to health care costs in the United States. The use of predictive modeling during hospitalization to identify patients at risk for readmission offers a novel approach to quality improvement and cost reduction. Study Design: Two-phase study including retrospective analysis of prospectively collected data followed by a prospective longitudinal study. Setting: Tertiary academic medical center. Subjects and Methods: Prospectively collected data for patients undergoing surgical treatment for head and neck cancer from January 2013 to January 2015 were used to build predictive models for readmission within 30 days of discharge using logistic regression, classification and regression tree (CART) analysis, and random forests. One model (logistic regression) was then placed prospectively into the discharge workflow from March 2016 to May 2016 to determine the model's ability to predict which patients would be readmitted within 30 days. Results: In total, 174 admissions had descriptive data; 32 were excluded due to incomplete data. Logistic regression, CART, and random forest predictive models were constructed using the remaining 142 admissions. When applied to 106 consecutive prospective head and neck oncology patients at the time of discharge, the logistic regression model predicted readmissions with a specificity of 94%, a sensitivity of 47%, a negative predictive value of 90%, and a positive predictive value of 62% (odds ratio, 14.9; 95% confidence interval, 4.02-55.45). Conclusion: Prospectively collected head and neck cancer databases can be used to develop predictive models that can accurately predict which patients will be readmitted. This offers valuable support for quality improvement initiatives and readmission-related cost reduction in head and neck cancer care.
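
    The reported screening metrics follow directly from confusion-matrix counts. As a sketch, the counts below are one hypothetical 106-patient split that is consistent with the reported values; they are not the study's actual table:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # readmitted patients flagged
        "specificity": tn / (tn + fp),   # non-readmitted not flagged
        "ppv": tp / (tp + fp),           # flagged patients readmitted
        "npv": tn / (tn + fn),           # unflagged patients not readmitted
    }

# Hypothetical counts for a 106-patient cohort flagged at discharge
m = screening_metrics(tp=8, fp=5, tn=84, fn=9)
```

    These assumed counts also reproduce an odds ratio of (8*84)/(5*9) ≈ 14.9, matching the abstract, which suggests the split is plausible but it remains a reconstruction.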

  1. Short and long term improvements in quality of chronic care delivery predict program sustainability.

    PubMed

    Cramm, Jane Murray; Nieboer, Anna Petra

    2014-01-01

    Empirical evidence on the sustainability of programs that improve the quality of care delivery over time is lacking. This study therefore aims to identify the predictive role of short- and long-term improvements in the quality of chronic care delivery on program sustainability. In this longitudinal study, professionals [2010 (T0): n=218, 55% response rate; 2011 (T1): n=300, 68% response rate; 2012 (T2): n=265, 63% response rate] from 22 Dutch disease-management programs completed surveys assessing quality of care and program sustainability. Our findings indicated that the quality of chronic care delivery improved significantly in the first 2 years after implementation of the disease-management programs. At T1, overall quality, self-management support, delivery system design, and integration of chronic care components, as well as health care delivery and clinical information systems and decision support, had improved. At T2, overall quality again improved significantly, as did community linkages, delivery system design, clinical information systems, decision support and integration of chronic care components, and self-management support. Multilevel regression analysis revealed that quality of chronic care delivery at T0 (p<0.001) and quality changes in the first (p<0.001) and second (p<0.01) years predicted program sustainability. In conclusion, this study showed that disease-management programs based on the chronic care model improved the quality of chronic care delivery over time, and that short- and long-term changes in the quality of chronic care delivery predicted the sustainability of the projects. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    EPA Science Inventory

    Methods are needed to improve the timeliness and accuracy of recreational water-quality assessments. Traditional culture methods require 18–24 h to obtain results and may not reflect current conditions. Predictive models, based on environmental and water quality variables, have been...

  3. Predicting Recreational Water Quality Using Turbidity in the Cuyahoga River, Cuyahoga Valley National Park, Ohio, 2004-7

    USGS Publications Warehouse

    Brady, Amie M.G.; Bushon, Rebecca N.; Plona, Meg B.

    2009-01-01

    The Cuyahoga River within Cuyahoga Valley National Park (CVNP) in Ohio is often impaired for recreational use because of elevated concentrations of bacteria that are indicators of fecal contamination. During the recreational seasons (May through August) of 2004 through 2007, samples were collected at two river sites, one upstream of and one centrally located within CVNP. Bacterial concentrations and turbidity were determined, and streamflow at the time of sampling and rainfall totals for the 24 hours prior to sampling were ascertained. Statistical models to predict Escherichia coli (E. coli) concentrations were developed for each site (with data from 2004 through 2006) and tested during an independent year (2007). At Jaite, a sampling site near the center of CVNP, the predictive model performed better than the traditional method of determining the current day's water quality from the previous day's E. coli concentration. During 2007, the Jaite model, based on turbidity, produced more correct responses (81 percent) and fewer false negatives (3.2 percent) than the traditional method (68 and 26 percent, respectively). At Old Portage, a sampling site just upstream from CVNP, a predictive model with turbidity and rainfall as explanatory variables did not perform as well as the traditional method. The Jaite model was also used to estimate water quality at three other sites in the park; although it did not perform as well as the traditional method there, it still performed well, yielding between 68 and 91 percent correct responses. Further research would be necessary to determine whether using the Jaite model to predict recreational water quality elsewhere on the river would provide accurate results.
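
    The comparison criteria used here, percent correct responses and percent false negatives, can be sketched for a simple threshold-type turbidity model. The threshold and data below are illustrative assumptions, not the fitted Jaite model:

```python
def evaluate(predicted_exceed, actual_exceed):
    """Percent correct responses and percent false negatives, as used
    to compare a predictive model against the persistence method."""
    n = len(actual_exceed)
    correct = sum(p == a for p, a in zip(predicted_exceed, actual_exceed))
    false_neg = sum((not p) and a
                    for p, a in zip(predicted_exceed, actual_exceed))
    return 100 * correct / n, 100 * false_neg / n

# Hypothetical season: exceedance = E. coli above the recreational standard.
turbidity = [12, 45, 80, 30, 95, 20, 60, 110, 15, 70]   # NTU
threshold = 50        # assumed fitted decision threshold, for illustration
actual = [False, False, True, False, True, False, True, True, False, False]
pred = [t > threshold for t in turbidity]
pct_correct, pct_fn = evaluate(pred, actual)
```

    False negatives are weighted heavily in this setting because they leave swimmers exposed on days the water is actually contaminated.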

  4. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2013-07-01

    The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families, and their modelling experience differed widely. The prediction exercise was organized in three steps: (1) for the 1st prediction, modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than is usually available for a priori predictions in ungauged catchments); they did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd, improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data, charged pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modellers' decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3, when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd steps, the progress in prediction quality could be evaluated in relation to individual modelling experience and the cost of added information. We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing

  5. Strategies to predict and improve eating quality of cooked beef using carcass and meat composition traits in Angus cattle.

    PubMed

    Mateescu, R G; Oltenacu, P A; Garmyn, A J; Mafi, G G; VanOverbeke, D L

    2016-05-01

    Product quality is a high priority for the beef industry because of its importance as a major driver of consumer demand for beef and because of the industry's ability to improve it. A two-pronged approach is outlined, based on implementing a genetic program to improve eating quality and a system to communicate eating quality and increase the probability that consumers' eating quality expectations are met. The objectives of this study were 1) to identify the best carcass and meat composition traits to use in a selection program to improve eating quality and 2) to develop a relatively small number of classes that reflect real and perceptible differences in eating quality that can be communicated to consumers, and to identify a subset of carcass and meat composition traits with the highest predictive accuracy across all eating quality classes. Carcass and meat composition traits, including Warner-Bratzler shear force (WBSF), intramuscular fat content (IMFC), trained sensory panel scores, and mineral composition, of 1,666 Angus cattle were used in this study. Three eating quality indexes, EATQ1, EATQ2, and EATQ3, were generated by using different weights for the sensory traits (emphasis on tenderness, flavor, and juiciness, respectively). The best model for predicting eating quality explained 37%, 9%, and 19% of the variability of EATQ1, EATQ2, and EATQ3, respectively, and two traits, WBSF and IMFC, accounted for most of the variability explained by the best models. EATQ1, which combines tenderness, juiciness, and flavor assessed by trained panels with weights of 0.60, 0.15, and 0.25, best describes North American consumers and has a moderate heritability (0.18 ± 0.06). A selection index (I = -0.5[WBSF] + 0.3[IMFC]) based on phenotypic and genetic variances and covariances can be used to improve eating quality as a correlated trait. The three indexes (EATQ1, EATQ2, and EATQ3) were used to generate three equal (33.3%) low, medium, and high eating quality classes, and linear combinations of traits that
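
    The selection index given in the abstract can be applied directly to rank candidate animals; lower shear force (more tender meat) and higher intramuscular fat both raise the index. A sketch with hypothetical phenotypes:

```python
def eating_quality_index(wbsf, imfc):
    """Selection index from the abstract: I = -0.5*WBSF + 0.3*IMFC.
    WBSF = Warner-Bratzler shear force, IMFC = intramuscular fat content."""
    return -0.5 * wbsf + 0.3 * imfc

# Hypothetical animals: (WBSF in kg, IMFC in %) -- invented values
animals = {"A": (3.2, 6.0), "B": (4.5, 7.0), "C": (2.8, 4.0)}
ranked = sorted(animals,
                key=lambda k: eating_quality_index(*animals[k]),
                reverse=True)
```

    In practice the index weights come from the phenotypic and genetic (co)variances mentioned in the abstract, so the units of the two traits matter when reusing the coefficients.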

  6. A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model

    PubMed Central

    Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat

    2013-01-01

    Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied on the patient cohort and then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. 
The ACC plans to continue its partnership to
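
    As a rough illustration of the workflow this abstract describes (a derivation/validation split, logistic regression, and discrimination assessed via the C-statistic), the sketch below uses entirely synthetic data and placeholder features; it is not the ACC model or its variables.

```python
# Hedged sketch: a readmission-style risk model assessed by the C-statistic.
# All features and coefficients are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                  # stand-ins for EHR-derived risk factors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Split the cohort for derivation and internal validation, as in the abstract
X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)

# For a binary outcome, the C-statistic equals the ROC AUC
c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
```

    A real pipeline would add the stepwise variable selection, calibration checks, and external validation steps the abstract lists.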

  7. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex, and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inadequacy of air quality models in responding to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the State Environmental Protection Agency to evaluate how these transportation plans will affect future air quality.

  8. Predicting Air Quality in Smart Environments

    PubMed Central

    Deleawe, Seun; Kusznir, Jim; Lamb, Brian; Cook, Diane J.

    2011-01-01

    The pervasive sensing technologies found in smart environments offer unprecedented opportunities for monitoring and assisting the individuals who live and work in these spaces. An aspect of daily life that is often overlooked in maintaining a healthy lifestyle is the air quality of the environment. In this paper we investigate the use of machine learning technologies to predict CO2 levels as an indicator of air quality in smart environments. We introduce techniques for collecting and analyzing sensor information in smart environments and analyze the correlation between resident activities and air quality levels. The effectiveness of our techniques is evaluated using three physical smart environment testbeds. PMID:21617739

  9. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, with additional information provided prior to each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. 
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  10. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    NASA Astrophysics Data System (ADS)

    Rider, W. J.

    2012-12-01

    The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provide an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high-consequence applications. Perhaps the first of these frameworks was known as CSAU [Boyack] (code scaling, applicability and uncertainty), used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  11. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The Chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.

  12. Operation quality assessment model for video conference system

    NASA Astrophysics Data System (ADS)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with the regularized BP neural network alone, and its generalization ability is superior to that of the LM-BP and Bayesian BP neural networks.

  13. Relationship between soybean yield/quality and soil quality in a major soybean-producing area based on a 2D-QSAR model

    NASA Astrophysics Data System (ADS)

    Gao, Ming; Li, Shiwei

    2017-05-01

    Based on experimental data of soybean yield and quality from 30 sampling points, a quantitative structure-activity relationship (2D-QSAR) model was established using soil quality measures (elements, pH, organic matter content and cation exchange capacity) as independent variables and soybean yield or quality as the dependent variable, with SPSS software. During the modeling, the full data sets (30 and 14 compounds) were divided into training sets (24 and 11 compounds) for model generation and test sets (6 and 3 compounds) for model validation. The R2 values of the resulting models were 0.826 and 0.808 for soybean yield and quality, respectively, and all regression coefficients were significant (P < 0.05). The correlation coefficients R2pred between observed and predicted values of soybean yield and quality in the test sets were 0.961 and 0.956, respectively, indicating that the models had good predictive ability. Moreover, the Mo, Se, K, N and organic matter contents and the cation exchange capacity of the soil had a positive effect on soybean production, and the B, Mo, Se, K and N contents and cation exchange capacity had a positive effect on soybean quality. The results are instructive for enhancing soils to improve the yield and quality of soybean, and this method can also be used to study other crops or regions, providing a theoretical basis for improving the yield and quality of crops.
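
    A minimal sketch of the train/test workflow this abstract describes (a 24/6 split, a linear regression on soil predictors, training R2 and a predictive R2 on the held-out set). The predictors and coefficients below are synthetic stand-ins, not the study's soil data.

```python
# Hedged sketch of a 2D-QSAR-style regression with an external test split.
# Six placeholder columns stand in for Mo, Se, K, N, organic matter, CEC.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 6))                 # 30 sampling points, 6 soil variables
y = X @ np.array([0.5, 0.4, 0.3, 0.6, 0.5, 0.2]) + rng.normal(0, 0.3, 30)

# 24 compounds for model generation, 6 for validation, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=6, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
r2_train = model.score(X_tr, y_tr)           # model-generation fit
r2_pred = model.score(X_te, y_te)            # predictive ability on the test set
```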

  14. Three-model ensemble wind prediction in southern Italy

    NASA Astrophysics Data System (ADS)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
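
    The core ensemble idea, averaging several forecasts so that partially independent errors cancel, can be sketched as follows. The three synthetic "forecasts" below stand in for RAMS, BOLAM and MOLOCH output; the data and error levels are invented for illustration.

```python
# Hedged sketch: an equal-weight three-model ensemble vs. each single model,
# scored by RMSE against synthetic observations.
import numpy as np

rng = np.random.default_rng(1)
truth = 5 + 2 * np.sin(np.linspace(0, 10, 400))      # "observed" wind speed

# Three imperfect forecasts: small biases plus independent random errors
forecasts = [truth + rng.normal(b, 1.0, truth.size) for b in (0.3, -0.2, 0.1)]

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

single = [rmse(f, truth) for f in forecasts]
ensemble = rmse(np.mean(forecasts, axis=0), truth)   # simple TME average
```

    With independent errors, the ensemble RMSE drops roughly as 1/sqrt(3) relative to a single model, which is in the spirit of the 22-30 % improvement reported above.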

  15. Volatile profile analysis and quality prediction of Longjing tea (Camellia sinensis) by HS-SPME/GC-MS

    PubMed Central

    Lin, Jie; Dai, Yi; Guo, Ya-nan; Xu, Hai-rong; Wang, Xiao-chang

    2012-01-01

    This study aimed to analyze the volatile chemical profile of Longjing tea, and further develop a prediction model for aroma quality of Longjing tea based on potent odorants. A total of 21 Longjing samples were analyzed by headspace solid phase microextraction (HS-SPME) coupled with gas chromatography-mass spectrometry (GC-MS). Pearson’s linear correlation analysis and partial least square (PLS) regression were applied to investigate the relationship between sensory aroma scores and the volatile compounds. Results showed that 60 volatile compounds could be commonly detected in this famous green tea. Terpenes and esters were two major groups characterized, representing 33.89% and 15.53% of the total peak area respectively. Ten compounds were determined to contribute significantly to the perceived aroma quality of Longjing tea, especially linalool (0.701), nonanal (0.738), (Z)-3-hexenyl hexanoate (−0.785), and β-ionone (−0.763). On the basis of these 10 compounds, a model (correlation coefficient of 89.4% and cross-validated correlation coefficient of 80.4%) was constructed to predict the aroma quality of Longjing tea. In summary, this study provides a novel option for quality prediction of green tea based on the HS-SPME/GC-MS technique. PMID:23225852

  16. Predictive Models for Escherichia coli Concentrations at Inland Lake Beaches and Relationship of Model Variables to Pathogen Detection

    PubMed Central

    Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M. G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.

    2013-01-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public. PMID:23291550

  17. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M.G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.

    2013-01-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.

  18. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection.

    PubMed

    Francy, Donna S; Stelzer, Erin A; Duris, Joseph W; Brady, Amie M G; Harrison, John H; Johnson, Heather E; Ware, Michael W

    2013-03-01

    Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.

  19. Artificial neural network modeling of the water quality index using land use areas as predictors.

    PubMed

    Gazzaz, Nabeel M; Yusoff, Mohd Kamil; Ramli, Mohammad Firuz; Juahir, Hafizan; Aris, Ahmad Zaharin

    2015-02-01

    This paper describes the design of an artificial neural network (ANN) model to predict the water quality index (WQI) using land use areas as predictors. Ten-year records of land use statistics and water quality data for Kinta River (Malaysia) were employed in the modeling process. The most accurate WQI predictions were obtained with the network architecture 7-23-1; the back propagation training algorithm; and a learning rate of 0.02. The WQI forecasts of this model had significant (p < 0.01), positive, very high correlation (ρs = 0.882) with the measured WQI values. Sensitivity analysis revealed that the relative importance of the land use classes to WQI predictions followed the order: mining > rubber > forest > logging > urban areas > agriculture > oil palm. These findings show that ANNs are a highly reliable means of relating water quality to land use, thus integrating land use development with river water quality management.
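
    A sketch mirroring the 7-23-1 architecture (seven land-use inputs, one hidden layer of 23 units, one WQI output) is shown below. The inputs and the WQI signal are synthetic stand-ins for the Kinta River data, and an L-BFGS solver is substituted for the study's back-propagation training (learning rate 0.02) purely for a stable small-data fit.

```python
# Hedged sketch: a 7-23-1 feed-forward network predicting a synthetic WQI.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((300, 7))           # stand-ins for mining, rubber, forest, ... areas
y = 60 + 25 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2]   # invented WQI relationship

# hidden_layer_sizes=(23,) gives the 7-23-1 topology once X has 7 columns
net = MLPRegressor(hidden_layer_sizes=(23,), solver='lbfgs',
                   max_iter=2000, random_state=0).fit(X, y)
r2 = net.score(X, y)
```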

  20. Towards the Next Generation Air Quality Modeling System ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model of the U.S. Environmental Protection Agency is one of the most widely used air quality models worldwide; it is employed for both research and regulatory applications at major universities and government agencies to improve understanding of the formation and transport of air pollutants. Air quality issues and climate change assessments, however, need to be addressed globally, recognizing the linkages and interactions between meteorology and atmospheric chemistry across a wide range of scales. Therefore, an effort is currently underway to develop the next-generation air quality modeling system (NGAQM), which will be based on a global integrated meteorology and chemistry system. The Model for Prediction Across Scales - Atmosphere (MPAS-A), a global fully compressible non-hydrostatic model with seamlessly refined centroidal Voronoi grids, has been chosen as the meteorological driver of this modeling system. The initial step of adapting MPAS-A for the NGAQM was to implement and test the physics parameterizations and options that are preferred for retrospective air quality simulations (see the work presented by R. Gilliam, R. Bullock, and J. Herwehe at this workshop). The next step, presented herein, is to link the chemistry from CMAQ to MPAS-A to build a prototype for the NGAQM. Furthermore, the techniques to harmonize transport processes between CMAQ and MPAS-A, methodologies to connect the chemis

  1. A diagnostic model for studying daytime urban air quality trends

    NASA Technical Reports Server (NTRS)

    Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.

    1981-01-01

    A single cell Eulerian photochemical air quality simulation model was developed and validated for selected days of the 1976 St. Louis Regional Air Pollution Study (RAPS) data sets; parameterizations of variables in the model and validation studies using the model are discussed. Good agreement was obtained between measured and modeled concentrations of NO, CO, and NO2 for all days simulated. The maximum concentration of O3 was also predicted well. Predicted species concentrations were relatively insensitive to small variations in CO and NOx emissions and to the concentrations of species which are entrained as the mixed layer rises.

  2. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    PubMed

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with model structures of varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte-Carlo simulations to obtain behavior parameter sets that ensure predictive accuracy, and then estimates the CV values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free-water-surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
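
    The GLUE-style two-stage idea, Monte-Carlo sampling to retain "behavioral" parameter sets, then summarizing predictive spread as a coefficient of variation, can be sketched with a toy first-order decay model. Everything here (model, observations, threshold) is an invented stand-in for the wetland models in the study.

```python
# Hedged sketch of the CV-GLUE idea on a toy first-order decay model.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 3, 15)
obs = 10 * np.exp(-0.5 * t)                 # synthetic observations (true k = 0.5)

def model(k):
    return 10 * np.exp(-k * t)              # first-order decay, K-C-style

# Stage 1: Monte-Carlo sampling; keep behavioral sets that fit the observations
ks = rng.uniform(0.1, 1.0, 5000)
errs = np.array([np.sqrt(np.mean((model(k) - obs) ** 2)) for k in ks])
behavioural = ks[errs < 0.5]                # arbitrary illustrative threshold

# Stage 2: characteristic CV of predictions across the behavioral sets
preds = np.array([model(k) for k in behavioural])
cv = float(np.mean(preds.std(axis=0) / preds.mean(axis=0)))
```

    A wider behavioral parameter spread (e.g. from a less identifiable model structure) would raise the characteristic CV, which is how the procedure ranks predictive uncertainty across models.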

  3. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
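
    The gamma mixture of Poisson processes mentioned here is the negative binomial model: each title circulates as a Poisson process whose rate is itself gamma-distributed across the collection. A sketch with illustrative (not the paper's) parameters:

```python
# Hedged sketch of the gamma-Poisson circulation model. The marginal count
# distribution is negative binomial with
#   E[N] = alpha/beta,  Var[N] = (alpha/beta) * (1 + 1/beta).
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 2.0, 1.5                        # illustrative gamma shape/rate

# Simulate one year of circulations for 10,000 titles
rates = rng.gamma(alpha, 1 / beta, 10_000)    # per-title loan rates (scale = 1/rate)
counts = rng.poisson(rates)

mean_theory = alpha / beta
var_theory = (alpha / beta) * (1 + 1 / beta)
```

    Fitting alpha and beta to a library's observed counts then yields predicted circulation distributions for future years, which is the management use the abstract describes.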

  4. Predicting quality of life in pediatric asthma: the role of emotional competence and personality.

    PubMed

    Lahaye, Magali; Van Broeck, Nady; Bodart, Eddy; Luminet, Olivier

    2013-05-01

    The present study examined the predictive value of emotional competence and the five-factor model of personality on the quality of life of children with asthma. Participants were 90 children (M age = 11.73, SD = 2.60) having controlled and partly controlled asthma, undergoing everyday treatment. Children filled in questionnaires assessing emotional competence and quality of life. Parents completed questionnaires assessing the personality of their child. Results showed that two emotional competences, bodily awareness and verbal sharing of emotions, were related to the quality of life of children with asthma. Moreover, one personality trait, benevolence, was associated with children's quality of life. Regression analyses showed that the predictive value of these three dimensions remained significant over and above asthma control and socio-demographic variables frequently associated with the quality of life of children with asthma (age, gender, and educational level of parents). These findings emphasize the importance of alerting the clinician who works with children with asthma to observe and assess the child's expression of emotions, attention to bodily sensations, and benevolence.

  5. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy

  6. The Atlanta Urban Heat Island Mitigation and Air Quality Modeling Project: How High-Resolution Remote Sensing Data Can Improve Air Quality Models

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William L.; Khan, Maudood N.

    2006-01-01

    The Atlanta Urban Heat Island and Air Quality Project had its genesis in Project ATLANTA (ATlanta Land use Analysis: Temperature and Air quality), which began in 1996. Project ATLANTA examined how high-spatial-resolution thermal remote sensing data could be used to derive better measurements of the Urban Heat Island effect over Atlanta. We have explored how these thermal remote sensing data, as well as other image datasets, can be used to better characterize the urban landscape for improved air quality modeling over the Atlanta area. For the air quality modeling project, the National Land Cover Dataset and the local-scale Landpro99 dataset, both at 30 m spatial resolution, have been used to derive land use/land cover characteristics for input into the MM5 mesoscale meteorological model, one of the foundations for the Community Multiscale Air Quality (CMAQ) model, to assess how these data can improve output from CMAQ. Additionally, land use changes to 2030 have been predicted using a Spatial Growth Model (SGM). SGM simulates growth around a region using population, employment and travel demand forecasts. Air quality modeling simulations were conducted using both current and future land cover. Meteorological modeling simulations indicate a 0.5 °C increase in daily maximum air temperatures by 2030. Air quality modeling simulations show substantial differences in the relative contributions of individual atmospheric pollutant constituents as a result of land cover change. Enhanced boundary layer mixing over the city tends to offset the increase in ozone concentration expected due to higher surface temperatures as a result of urbanization.

  7. Prediction models of health-related quality of life in different neck pain conditions: a cross-sectional study.

    PubMed

    Beltran-Alacreu, Hector; López-de-Uralde-Villanueva, Ibai; Calvo-Lobo, César; La Touche, Roy; Cano-de-la-Cuerda, Roberto; Gil-Martínez, Alfonso; Fernández-Ayuso, David; Fernández-Carnero, Josué

    2018-01-01

    The main aim of the study was to predict the health-related quality of life (HRQoL) based on physical, functional, and psychological measures in patients with different types of neck pain (NP). This cross-sectional study included 202 patients from a primary health center and the physiotherapy outpatient department of a hospital. Patients were divided into four groups according to their NP characteristics: chronic (CNP), acute whiplash (WHIP), chronic NP associated with temporomandibular dysfunction (NP-TMD), or chronic NP associated with chronic primary headache (NP-PH). The following measures were performed: Short Form-12 Health Survey (SF-12), Neck Disability Index (NDI), visual analog scale (VAS), State-Trait Anxiety Inventory (STAI), Beck Depression Inventory (BECK), and cervical range of movement (CROM). The regression models based on the SF-12 total HRQoL for CNP and NP-TMD groups showed that only NDI was a significant predictor of the worst HRQoL (48.9% and 48.4% of the variance, respectively). In the WHIP group, the regression model showed that BECK was the only significant predictor variable for the worst HRQoL (31.7% of the variance). Finally, in the NP-PH group, the regression showed that the BECK, STAI, and VAS model predicted the worst HRQoL (75.1% of the variance). Chronic nonspecific NP and chronic NP associated with temporomandibular dysfunction were the main predictors of neck disability. In addition, depression, anxiety, and pain were the main predictors of WHIP or primary headache associated with CNP.

  8. Surface mine planning and design implications and theory of a visual environmental quality predictive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burley, J.B.

    1999-07-01

    Surface mine planners and designers are searching for scientifically based tools to assist in the pre-mine planning and post-mine development of surface mine sites. In this study, the author presents a science-based visual and environmental quality predictive model useful in preparing and assessing landscape treatments for surface mine sites. The equation explains 67 percent of respondent preference, with an overall p-value for the equation <0.0001 and a p-value <0.05 for each regressor. Regressors employed in the equation include an environmental quality index, foreground vegetation, distant nonvegetation, people, vehicles, utilities, foreground flowers, foreground erosion, wildlife, landscape openness, landscape mystery, and noosphericness (a measure of human disturbance). The equation can be explained with an Intrusion/Neutral Modifier/Temporal Enhancement Theory, which suggests that human intrusions upon other humans result in landscapes of low preference, and that landscapes containing natural and special temporal features such as wildlife and flowers can gain enhanced scenic value. This research supports the importance of visual barriers such as berms and vegetation screens during mining operations and supports public perceptions concerning many types of industrial activities. In addition, the equation can be applied to study post-mining landscape development plans to maximize the efficiency and effectiveness of landscape treatments.

  9. Development of an analytical-numerical model to predict radiant emission or absorption

    NASA Technical Reports Server (NTRS)

    Wallace, Tim L.

    1994-01-01

    The development of an analytical-numerical model to predict radiant emission or absorption is discussed. A Voigt profile is assumed to predict the spectral qualities of a singlet atomic transition line for atomic species of interest to the OPAD program. The present state of this model is described in each progress report required under contract. Model and code development is guided by experimental data where available. When completed, the model will be used to provide estimates of species erosion rates from spectral data collected from rocket exhaust plumes or other sources.
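    The Voigt profile mentioned above (a convolution of Gaussian and Lorentzian broadening) is often approximated in practice by a pseudo-Voigt: a weighted sum of the two components. The sketch below is a minimal illustration of that approximation, not the OPAD model's actual code; the mixing parameter `eta` and all widths are assumed values.

    ```python
    import math

    def gaussian(x, x0, sigma):
        """Normalized Gaussian line shape centered at x0."""
        return math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    def lorentzian(x, x0, gamma):
        """Normalized Lorentzian line shape centered at x0."""
        return gamma / (math.pi * ((x - x0) ** 2 + gamma ** 2))

    def pseudo_voigt(x, x0, sigma, gamma, eta=0.5):
        """Pseudo-Voigt approximation: eta-weighted mix of the two shapes."""
        return eta * lorentzian(x, x0, gamma) + (1 - eta) * gaussian(x, x0, sigma)
    ```

    A full Voigt evaluation would instead use the Faddeeva function (e.g. `scipy.special.wofz`); the pseudo-Voigt trades a little accuracy for simplicity.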

  10. Implementing subgrid-scale cloudiness into the Model for Prediction Across Scales-Atmosphere (MPAS-A) for next generation global air quality modeling

    EPA Science Inventory

    A next generation air quality modeling system is being developed at the U.S. EPA to enable seamless modeling of air quality from global to regional to (eventually) local scales. State-of-the-science chemistry and aerosol modules from the Community Multiscale Air Quality (CMAQ) mo...

  11. New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.

    2009-12-01

    Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA’s partners at the USFS on wildfire emissions information, and with EPA, in coordinating with state/local air quality forecasters. Required operational deployment criteria, in categories of objective verification, subjective feedback, and production readiness, have been demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA’s smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12 km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight of the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, the intense wildfire activity in AK is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating required prediction accuracy. A new satellite observation product

  12. Operational prediction of air quality for the United States: applications of satellite observations

    NASA Astrophysics Data System (ADS)

    Stajner, Ivanka; Lee, Pius; Tong, Daniel; Pan, Li; McQueen, Jeff; Huang, Jianping; Huang, Ho-Chun; Draxler, Roland; Kondragunta, Shobha; Upadhayay, Sikchya

    2015-04-01

    Operational predictions of ozone and wildfire smoke over the United States (U.S.) and predictions of airborne dust over the contiguous 48 states are provided by NOAA at http://airquality.weather.gov/. North American Mesoscale (NAM) weather predictions are combined with inventory-based emissions estimates from the U.S. Environmental Protection Agency (EPA) and chemical processes within the Community Multiscale Air Quality (CMAQ) model to produce ozone predictions. The Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model is used to produce the wildfire smoke and dust storm predictions. Routine verification of ozone predictions relies on the AIRNow compilation of observations from surface monitors. Retrievals of smoke column integrals from GOES satellites and dust column integrals from MODIS satellite instruments are used for verification of smoke and dust predictions. Recent updates of NOAA's operational air quality predictions have focused on mobile emissions using the projections of mobile sources for 2012. Because emission inventories are complex and take years to assemble and evaluate, causing a lag in the available information, we recently began combining inventory information with projections of mobile sources. In order to evaluate this emission update, the changes in projected NOx emissions from 2005 to 2012 were compared with observed changes in Ozone Monitoring Instrument (OMI) NO2 observations and NOx measured by surface monitors over large U.S. cities over the same period. Comparisons indicate that projected decreases in NOx emissions from 2005 to 2012 are similar to, but not as strong as, the decreases in the observed NOx concentrations and in OMI NO2 retrievals. Nevertheless, the use of projected mobile NOx emissions in the predictions reduced biases in predicted NOx concentrations, with the largest improvement in urban areas. Ozone biases are reduced as well, with the largest improvement seen in rural areas. Recent testing of PM2.5 predictions is relying on

  13. Applying Risk Prediction Models to Optimize Lung Cancer Screening: Current Knowledge, Challenges, and Future Directions.

    PubMed

    Sakoda, Lori C; Henderson, Louise M; Caverly, Tanner J; Wernli, Karen J; Katki, Hormuzd A

    2017-12-01

    Risk prediction models may be useful for facilitating effective and high-quality decision-making at critical steps in the lung cancer screening process. This review provides a current overview of published lung cancer risk prediction models and their applications to lung cancer screening and highlights both challenges and strategies for improving their predictive performance and use in clinical practice. Since the 2011 publication of the National Lung Screening Trial results, numerous prediction models have been proposed to estimate the probability of developing or dying from lung cancer or the probability that a pulmonary nodule is malignant. Respective models appear to exhibit high discriminatory accuracy in identifying individuals at highest risk of lung cancer or differentiating malignant from benign pulmonary nodules. However, validation and critical comparison of the performance of these models in independent populations are limited. Little is also known about the extent to which risk prediction models are being applied in clinical practice and influencing decision-making processes and outcomes related to lung cancer screening. Current evidence is insufficient to determine which lung cancer risk prediction models are most clinically useful and how to best implement their use to optimize screening effectiveness and quality. To address these knowledge gaps, future research should be directed toward validating and enhancing existing risk prediction models for lung cancer and evaluating the application of model-based risk calculators and their corresponding impact on screening processes and outcomes.

  14. Linked Hydrologic-Hydrodynamic Model Framework to Forecast Impacts of Rivers on Beach Water Quality

    NASA Astrophysics Data System (ADS)

    Anderson, E. J.; Fry, L. M.; Kramer, E.; Ritzenthaler, A.

    2014-12-01

    The goal of NOAA's beach quality forecasting program is to use a multi-faceted approach to aid in detection and prediction of bacteria in recreational waters. In particular, our focus has been on the connection between tributary loads and bacteria concentrations at nearby beaches. While there is a clear link between stormwater runoff and beach water quality, quantifying the contribution of river loadings to nearshore bacterial concentrations is complicated due to multiple processes that drive bacterial concentrations in rivers as well as those processes affecting the fate and transport of bacteria upon exiting the rivers. In order to forecast potential impacts of rivers on beach water quality, we developed a linked hydrologic-hydrodynamic water quality framework that simulates accumulation and washoff of bacteria from the landscape, and then predicts the fate and transport of washed off bacteria from the watershed to the coastal zone. The framework includes a watershed model (IHACRES) to predict fecal indicator bacteria (FIB) loadings to the coastal environment (accumulation, wash-off, die-off) as a function of effective rainfall. These loadings are input into a coastal hydrodynamic model (FVCOM), including a bacteria transport model (Lagrangian particle), to simulate 3D bacteria transport within the coastal environment. This modeling system provides predictive tools to assist local managers in decision-making to reduce human health threats.
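    The accumulation/wash-off/die-off cycle described above can be sketched as a simple daily bookkeeping loop. This is a minimal illustration of the general build-up/wash-off idea, not the IHACRES watershed model itself; the rate constants `k_accum`, `k_wash`, and `k_die` are assumed values chosen for demonstration.

    ```python
    import math

    def simulate_fib_load(rainfall, k_accum=1.0, k_wash=0.5, k_die=0.1, b0=0.0):
        """Daily loop: accumulate fecal indicator bacteria (FIB) on the landscape,
        apply first-order die-off, then wash off a rainfall-dependent fraction.
        Returns the daily load exported toward the coastal model."""
        b = b0                               # bacteria mass stored on the landscape
        loads = []
        for r in rainfall:                   # r = effective rainfall for the day
            b += k_accum                     # dry-weather accumulation
            b *= (1.0 - k_die)               # first-order die-off
            washed = b * (1.0 - math.exp(-k_wash * r))  # wash-off grows with rain
            b -= washed
            loads.append(washed)
        return loads
    ```

    In the framework above, a series like this would then feed the hydrodynamic (FVCOM) and Lagrangian particle transport stages as a boundary loading.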

  15. Use of watershed factors to predict consumer surfactant risk, water quality, and habitat quality in the upper Trinity River, Texas.

    PubMed

    Atkinson, S F; Johnson, D R; Venables, B J; Slye, J L; Kennedy, J R; Dyer, S D; Price, B B; Ciarlo, M; Stanton, K; Sanderson, H; Nielsen, A

    2009-06-15

    Surfactants are high production volume chemicals that are used in a wide assortment of "down-the-drain" consumer products. Wastewater treatment plants (WWTPs) generally remove 85 to more than 99% of all surfactants from influents, but residual concentrations are discharged into receiving waters via wastewater treatment plant effluents. The Trinity River, which flows through the Dallas-Fort Worth metropolitan area, Texas, is an ideal study site for surfactants due to the high ratio of wastewater treatment plant effluent to river flow (>95%) during late summer months, providing an interesting scenario for surfactant loading into the environment. The objective of this project was to determine whether surfactant concentrations, expressed as toxic units, in-stream water quality, and aquatic habitat in the upper Trinity River could be predicted based on easily accessible watershed characteristics. Surface water and pore water samples were collected in late summer 2005 at 11 sites on the Trinity River in and around the Dallas-Fort Worth metropolitan area. Effluents of four major wastewater treatment plants that discharge into the Trinity River were also sampled. General chemistries and individual surfactant concentrations were determined, and total surfactant toxic units were calculated. Geospatial data on anthropogenic factors (e.g., population density) and natural factors (e.g., soil organic matter) were collected and analyzed by subwatershed. Multiple regression analyses using the stepwise maximum R² improvement method were performed to develop prediction models of surfactant risk, water quality, and aquatic habitat (dependent variables) using the geospatial parameters (independent variables) that characterized the upper Trinity River watershed. We show that GIS modeling has the potential to be a reliable and inexpensive method of predicting water and habitat quality in the upper Trinity River watershed and perhaps other highly urbanized
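    The first step of the stepwise (forward) selection procedure used above is simply: among all candidate watershed variables, keep the one with the highest single-variable R². A minimal sketch of that step, with hypothetical variable names standing in for the study's actual GIS parameters:

    ```python
    def r_squared(x, y):
        """R^2 of a simple least-squares fit y ~ a + b*x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        return (sxy * sxy) / (sxx * syy)

    def best_single_predictor(candidates, y):
        """Forward-selection step 1: pick the candidate variable (by name)
        with the highest single-variable R^2 against the response."""
        return max(candidates, key=lambda name: r_squared(candidates[name], y))

    # Hypothetical subwatershed data, not values from the study:
    candidates = {"pop_density": [1, 2, 3, 4], "soil_om": [1, 1, 2, 1]}
    toxic_units = [2, 4, 6, 8]
    ```

    Subsequent steps would re-fit with the chosen variable fixed and test each remaining variable for the largest R² improvement, stopping when no addition helps.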

  16. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  17. [Hyperspectral Remote Sensing Estimation Models for Pasture Quality].

    PubMed

    Ma, Wei-wei; Gong, Cai-lan; Hu, Yong; Wei, Yong-lin; Li, Long; Liu, Feng-yi; Meng, Peng

    2015-10-01

    Crude protein (CP), crude fat (CFA) and crude fiber (CFI) are key indicators for evaluation of the quality and feeding value of pasture. Hence, identification of these biological contents is an essential practice for animal husbandry. As current approaches to pasture quality estimation are time-consuming and costly, and even generate hazardous waste, a real-time and non-destructive method is therefore developed in this study using pasture canopy hyperspectral data. A field campaign was carried out in August 2013 around Qinghai Lake in order to obtain field spectral properties of 19 types of natural pasture using the ASD Field Spec 3, a field spectrometer that works in the optical region (350-2500 nm) of the electromagnetic spectrum. In addition to the spectral data, pasture samples were also collected from the field and examined in the laboratory to measure the relative concentration of CP (%), CFA (%) and CFI (%). After spectral denoising and smoothing, the relationship of the pasture quality parameters with the reflectance spectrum, the first derivatives of reflectance (FDR), band ratios and the wavelet coefficients (WCs) was analyzed. The concentrations of CP, CFA and CFI of pasture were found closely correlated with FDR at wavebands centered at 424, 1668, and 918 nm as well as with the low-scale (scale = 2, 4) Morlet, Coiflets and Gaussian WCs. Accordingly, linear, exponential, and polynomial equations between each pasture variable and FDR or WCs were developed. Validation of the developed equations indicated that the polynomial model with an independent variable of Coiflets WCs (scale = 4, wavelength = 1209 nm), the polynomial model with an independent variable of FDR, and the exponential model with an independent variable of FDR were the optimal models for prediction of the concentrations of CP, CFA and CFI of pasture, respectively. The R² of the pasture quality estimation models was between 0.646 and 0.762 at the 0.01 significance level.
Results suggest
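    The first derivative of reflectance (FDR) used as a predictor above is just the slope of the reflectance spectrum with respect to wavelength. A minimal central-difference sketch (the study's actual preprocessing pipeline is not specified beyond denoising and smoothing):

    ```python
    def first_derivative_reflectance(wavelengths, reflectance):
        """Central-difference first derivative of a reflectance spectrum (FDR).
        Returns one value per interior waveband."""
        fdr = []
        for i in range(1, len(wavelengths) - 1):
            d = (reflectance[i + 1] - reflectance[i - 1]) / (wavelengths[i + 1] - wavelengths[i - 1])
            fdr.append(d)
        return fdr
    ```

    The FDR value at a diagnostic waveband (e.g. near 424 nm for CP) would then enter the fitted linear, exponential, or polynomial regression equation.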

  18. Quality assessment of protein model-structures based on structural and functional similarities

    PubMed Central

    2012-01-01

    Background Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. Results GOBA - Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. Conclusions The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best

  19. Quality assessment of protein model-structures based on structural and functional similarities.

    PubMed

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA--Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets.
GOBA also obtained the best result for two targets of CASP8, and

  20. A time-varying subjective quality model for mobile streaming videos with stalling events

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
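    The hysteresis (recency) effect described above can be sketched as an impairment term that jumps when a stall occurs and decays geometrically once playback resumes. This is an illustrative toy, not the model proposed in the paper; `stall_penalty` and `recovery` are assumed parameters.

    ```python
    def predict_qoe(stall_mask, base_qoe=100.0, stall_penalty=40.0, recovery=0.9):
        """Time-varying QoE with a recency/hysteresis term: each stall adds an
        impairment that decays geometrically after playback resumes.
        stall_mask: per-time-step booleans (True = video is stalled)."""
        impairment = 0.0
        qoe = []
        for stalled in stall_mask:
            if stalled:
                impairment += stall_penalty   # dissatisfaction accumulates during stalls
            else:
                impairment *= recovery        # satisfaction gradually recovers
            qoe.append(max(0.0, base_qoe - impairment))
        return qoe
    ```

    In this sketch, two stalls close together depress QoE more than the same stalls far apart, which is the qualitative behavior the recency effect is meant to capture.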

  1. Development and application of new quality model for software projects.

    PubMed

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.

  2. Prediction of beef carcass and meat quality traits from factors characterising the rearing management system applied during the whole life of heifers.

    PubMed

    Soulat, J; Picard, B; Léger, S; Monteils, V

    2018-06-01

    In this study, four prediction models were developed by logistic regression using individual data from 96 heifers. Carcass and sensory rectus abdominis quality clusters were identified and then predicted using the rearing factors data. The models obtained from rearing factors applied during the fattening period were compared to those characterising the heifers' whole life. The highest prediction power for the carcass and meat quality clusters was obtained from the models considering the whole life, with success rates of 62.8% and 54.9%, respectively. Rearing factors applied during both the pre-weaning and fattening periods influenced carcass and meat quality. According to the models, carcass traits were improved when the heifer's mother was older at first calving, when calves ingested concentrates at pasture before weaning, and when heifers were slaughtered older. Meat traits were improved by the genetics of the heifers' parents (i.e., calving ease and early muscularity) and when heifers were slaughtered older. Management of carcass and meat quality traits is possible at different periods of the heifers' life. Copyright © 2018 Elsevier Ltd. All rights reserved.
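    A logistic regression model of the kind fitted above maps a weighted sum of rearing factors to a cluster-membership probability through the sigmoid function. The sketch below is purely illustrative: the feature (slaughter age) and all coefficient values are assumptions, not the paper's fitted estimates.

    ```python
    import math

    def predict_quality_cluster(features, coefs, intercept):
        """Logistic-regression probability of membership in a quality cluster.
        features/coefs are parallel lists; values here are illustrative only."""
        z = intercept + sum(c * x for c, x in zip(coefs, features))
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical single-feature model: slaughter age (months), positive coefficient.
    p_young = predict_quality_cluster([30.0], [0.1], -2.0)
    p_old = predict_quality_cluster([40.0], [0.1], -2.0)
    ```

    With a positive coefficient, an older slaughter age raises the predicted probability, mirroring the direction of the effect reported in the abstract.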

  3. Using Empirical Models for Communication Prediction of Spacecraft

    NASA Technical Reports Server (NTRS)

    Quasny, Todd

    2015-01-01

    A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, the behavior of a radio-frequency signal during high-energy solar events, or as it passes through a spacecraft's solar array, can be difficult to model, and thus to predict. This presentation covers the use of empirical methods for communication link predictions, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.

  4. Reliability prediction of ontology-based service compositions using Petri net and time series models.

    PubMed

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets their quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into a NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
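    The time-series step above (fit the historical response times, then forecast forward) can be illustrated with the simplest member of the ARMA family, an AR(1) model fitted by least squares. This stand-in omits the moving-average part and the Petri-net mapping entirely; a real implementation would use a full ARMA fit (e.g. `statsmodels`).

    ```python
    def fit_ar1(series):
        """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t,
        a minimal stand-in for fitting an ARMA model to response times."""
        num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
        den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
        return num / den

    def forecast_ar1(last_value, phi, steps):
        """Iterated one-step-ahead forecasts of future response times."""
        out, x = [], last_value
        for _ in range(steps):
            x = phi * x
            out.append(x)
        return out
    ```

    The forecast response times would then be inverted into predicted firing rates for the NMSPN transitions.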

  5. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets their quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into a NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429

  6. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART II--OZONE PREDICTIONS. (R825260)

    EPA Science Inventory

    In this paper, the concept of scale analysis is applied to evaluate ozone predictions from two regional-scale air quality models. To this end, seasonal time series of observations and predictions from the RAMS3b/UAM-V and MM5/MAQSIP (SMRAQ) modeling systems for ozone were spectra...

  7. Predictive Models of the Hydrological Regime of Unregulated Streams in Arizona

    USGS Publications Warehouse

    Anning, David W.; Parker, John T.C.

    2009-01-01

    Three statistical models were developed by the U.S. Geological Survey in cooperation with the Arizona Department of Environmental Quality to improve the predictability of flow occurrence in unregulated streams throughout Arizona. The models can be used to predict the probabilities of the hydrological regime being one of four categories developed by this investigation: perennial, which has streamflow year-round; nearly perennial, which has streamflow 90 to 99.9 percent of the year; weakly perennial, which has streamflow 80 to 90 percent of the year; or nonperennial, which has streamflow less than 80 percent of the year. The models were developed to assist the Arizona Department of Environmental Quality in selecting sites for participation in the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program. One model was developed for each of the three hydrologic provinces in Arizona - the Plateau Uplands, the Central Highlands, and the Basin and Range Lowlands. The models for predicting the hydrological regime were calibrated using statistical methods and explanatory variables of discharge, drainage-area, altitude, and location data for selected U.S. Geological Survey streamflow-gaging stations and a climate index derived from annual precipitation data. Models were calibrated on the basis of streamflow data from 46 stations for the Plateau Uplands province, 82 stations for the Central Highlands province, and 90 stations for the Basin and Range Lowlands province. The models were developed using classification trees that facilitated the analysis of mixed numeric and factor variables. In all three models, a threshold stream discharge was the initial variable to be considered within the classification tree and was the single most important explanatory variable. If a stream discharge value at a station was below the threshold, then the station was classified as nonperennial.
If, however, the stream discharge was above the threshold ...
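The threshold-first classification-tree logic described in this record can be sketched in a few lines. This is a hedged illustration only: the discharge and climate-index thresholds below are hypothetical values, not the calibrated USGS estimates, and the "nearly perennial" split is omitted for brevity.

```python
# Minimal sketch of a threshold-first classification tree for hydrological
# regime. Threshold values are hypothetical, for illustration only.

def classify_regime(discharge_cfs, climate_index,
                    discharge_threshold=1.0, climate_threshold=0.5):
    """Assign a hydrological-regime category to a gaging station."""
    if discharge_cfs < discharge_threshold:
        # Below the discharge threshold, the record is nonperennial outright,
        # mirroring the initial split described in the abstract.
        return "nonperennial"
    # Above the threshold, further splits (here on a climate index)
    # separate the perennial categories.
    if climate_index >= climate_threshold:
        return "perennial"
    return "weakly perennial"

stations = [
    {"discharge_cfs": 0.2, "climate_index": 0.8},
    {"discharge_cfs": 5.0, "climate_index": 0.9},
    {"discharge_cfs": 3.0, "climate_index": 0.3},
]
regimes = [classify_regime(s["discharge_cfs"], s["climate_index"])
           for s in stations]
```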

  8. Anxiety, social skills, friendship quality, and peer victimization: an integrated model.

    PubMed

    Crawford, A Melissa; Manassis, Katharina

    2011-10-01

    This cross-sectional study investigated whether anxiety and social functioning interact in their prediction of peer victimization. A structural equation model linking anxiety, social skills, and friendship quality to victimization was tested separately for children with anxiety disorders and normal comparison children to explore whether the processes involved in victimization differ for these groups. Participants were 8-14 year old children: 55 (34 boys, 21 girls) diagnosed with an anxiety disorder and 85 (37 boys, 48 girls) normal comparison children. The final models for both groups yielded two independent pathways to victimization: (a) anxiety independently predicted being victimized; and (b) poor social skills predicted lower friendship quality, which in turn, placed a child at risk for victimization. These findings have important implications for the treatment of childhood anxiety disorders and for school-based anti-bullying interventions, but replication with larger samples is indicated. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. A risk factor-based predictive model of outcomes in carotid endarterectomy: the National Surgical Quality Improvement Program 2005-2010.

    PubMed

    Bekelis, Kimon; Bakhoum, Samuel F; Desai, Atman; Mackenzie, Todd A; Goodney, Philip; Labropoulos, Nicos

    2013-04-01

Accurate knowledge of individualized risks and benefits is crucial to the surgical management of patients undergoing carotid endarterectomy (CEA). Although large randomized trials have determined specific cutoffs for the degree of stenosis, precise delineation of patient-level risks remains a topic of debate, especially in real-world practice. We attempted to create a risk factor-based predictive model of outcomes in CEA. We performed a retrospective cohort study involving patients who underwent CEAs from 2005 to 2010 and were registered in the American College of Surgeons National Surgical Quality Improvement Program database. Of the 35,698 patients, 20,015 were asymptomatic (56.1%) and 15,683 were symptomatic (43.9%). These patients demonstrated a 1.64% risk of stroke, 0.69% risk of myocardial infarction, and 0.75% risk of death within 30 days after CEA. Multivariate analysis demonstrated that increasing age, male sex, history of chronic obstructive pulmonary disease, myocardial infarction, angina, congestive heart failure, peripheral vascular disease, previous stroke or transient ischemic attack, and dialysis were independent risk factors associated with an increased risk of the combined outcome of postoperative stroke, myocardial infarction, or death. A validated model for outcome prediction based on individual patient characteristics was developed. There was a steep effect of age on the risk of myocardial infarction and death. This national study confirms that the risks of CEA vary dramatically based on patient-level characteristics. Because of its limited discrimination, the model cannot be used for individual patient risk assessment. However, it can be used as a baseline for improvement and development of more accurate predictive models based on other databases or prospective studies.
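A multivariable risk model of the kind described in this record is typically a logistic regression over patient-level covariates. The sketch below is a hedged stand-in: the coefficients are hypothetical illustrations chosen only to show the steep age effect, not the NSQIP estimates.

```python
import math

# Hedged sketch of a multivariable logistic risk model for a combined
# 30-day stroke/MI/death outcome. All coefficients are hypothetical.

def predicted_risk(age, male, copd, prior_stroke_tia, dialysis,
                   intercept=-6.0, b_age=0.05, b_male=0.2,
                   b_copd=0.3, b_stroke=0.5, b_dialysis=0.9):
    """Return predicted probability of the combined outcome."""
    logit = (intercept + b_age * age + b_male * male + b_copd * copd
             + b_stroke * prior_stroke_tia + b_dialysis * dialysis)
    return 1.0 / (1.0 + math.exp(-logit))

# A low-risk and a high-risk patient profile for illustration.
low = predicted_risk(age=60, male=0, copd=0, prior_stroke_tia=0, dialysis=0)
high = predicted_risk(age=85, male=1, copd=1, prior_stroke_tia=1, dialysis=1)
```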

  10. Modelling postharvest quality of blueberry affected by biological variability using image and spectral data.

    PubMed

    Hu, Meng-Han; Dong, Qing-Li; Liu, Bao-Lin

    2016-08-01

Hyperspectral reflectance and transmittance sensing as well as near-infrared (NIR) spectroscopy were investigated as non-destructive tools for estimating blueberry firmness, elastic modulus and soluble solid content (SSC). Least squares-support vector machine models were established from these three types of spectra based on samples from three cultivars (Bluecrop, Duke and M2) and two harvest years (2014 and 2015) for predicting blueberry postharvest quality. One-cultivar reflectance models (models established using one cultivar) yielded better results than the corresponding transmittance and NIR models for predicting blueberry firmness, with few cultivar effects. Two-cultivar NIR models (models established using two cultivars) proved to be suitable for estimating blueberry SSC, with correlations over 0.83. Rp (RMSEp) values of the three-cultivar reflectance models (models established using 75% of samples from the three cultivars) were 0.73 (0.094) and 0.73 (0.186), respectively, for predicting blueberry firmness and elastic modulus. For SSC prediction, the three-cultivar NIR model achieved an Rp (RMSEp) value of 0.85 (0.090). Adding Bluecrop samples harvested in 2014 could enhance the three-cultivar model robustness for firmness and elastic modulus. These results indicate the potential of spatial and spectral techniques for developing robust models for predicting blueberry postharvest quality in the presence of biological variability. © 2015 Society of Chemical Industry.

  11. Near infrared spectroscopy based monitoring of extraction processes of raw material with the help of dynamic predictive modeling

    NASA Astrophysics Data System (ADS)

    Wang, Haixia; Suo, Tongchuan; Wu, Xiaolin; Zhang, Yue; Wang, Chunhua; Yu, Heshui; Li, Zheng

    2018-03-01

The control of batch-to-batch quality variations remains a challenging task for pharmaceutical industries, e.g., traditional Chinese medicine (TCM) manufacturing. One difficult problem is to produce pharmaceutical products with consistent quality from raw material of large quality variations. In this paper, an integrated methodology combining near infrared spectroscopy (NIRS) and dynamic predictive modeling is developed for the monitoring and control of the batch extraction process of licorice. With the spectral data in hand, the initial state of the process is first estimated with a state-space model to construct a process monitoring strategy for the early detection of variations induced by the initial process inputs such as raw materials. Secondly, the quality property of the end product is predicted at mid-course during the extraction process with a partial least squares (PLS) model. The batch-end-time (BET) is then adjusted accordingly to minimize the quality variations. In conclusion, our study shows that with the help of dynamic predictive modeling, NIRS can offer past and future information about the process, which enables more accurate monitoring and control of process performance and product quality.
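The PLS step in records like this one projects high-dimensional spectra onto a few latent variables before regressing the quality property. A minimal single-component PLS1 (NIPALS form) can be sketched as follows; the "spectra" and quality values are synthetic stand-ins, constructed so that a single latent variable fits exactly.

```python
import numpy as np

# One-component PLS1 regression (NIPALS form): a minimal stand-in for the
# spectra-to-quality PLS prediction step. Data here are synthetic.

def pls1_fit(X, y):
    """Fit a single-latent-variable PLS1 model; returns (w, q)."""
    w = X.T @ y
    w = w / np.linalg.norm(w)   # weight vector (spectral direction)
    t = X @ w                   # scores of each sample on that direction
    q = (y @ t) / (t @ t)       # y-loading (regression of y on scores)
    return w, q

def pls1_predict(X, w, q):
    return q * (X @ w)

rng = np.random.default_rng(0)
# Orthonormal synthetic "spectra" so the single-component fit is exact here.
X, _ = np.linalg.qr(rng.standard_normal((40, 8)))
y = 3.0 * X[:, 0]               # quality property driven by one direction
w, q = pls1_fit(X, y)
y_pred = pls1_predict(X, w, q)
```

Real applications use several latent variables and cross-validation to choose their number; this sketch shows only the core projection-then-regression step.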

  12. A comparative study of kinetic and connectionist modeling for shelf-life prediction of Basundi mix.

    PubMed

    Ruhil, A P; Singh, R R B; Jain, D K; Patel, A A; Patil, G R

    2011-04-01

A ready-to-reconstitute formulation of Basundi, a popular Indian dairy dessert, was subjected to storage at various temperatures (10, 25 and 40 °C), and deteriorative changes in the Basundi mix were monitored using quality indices such as pH, hydroxymethylfurfural (HMF), bulk density (BD) and insolubility index (II). The multiple regression equations and the Arrhenius functions that describe the temperature dependence of the four physico-chemical parameters were integrated to develop mathematical models for predicting the sensory quality of Basundi mix. A connectionist model using a multilayer feed-forward neural network with the back-propagation algorithm was also developed for predicting the storage life of the product, employing the artificial neural network (ANN) toolbox of MATLAB software. The quality indices served as the input parameters, whereas the output parameters were the sensorily evaluated flavour and total sensory score. A total of 140 observations were used, and prediction performance was judged on the basis of percent root mean square error. The results obtained from the two approaches were compared. Relatively lower magnitudes of percent root mean square error for both sensory parameters indicated that the connectionist models fitted better than the kinetic models for predicting storage life.
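The kinetic half of the comparison in this record rests on Arrhenius temperature dependence of a degradation rate. The sketch below shows a first-order quality-decay model with an Arrhenius rate constant; the activation energy and reference rate are hypothetical values, not the fitted Basundi-mix parameters.

```python
import math

# Sketch of first-order quality decay with an Arrhenius rate constant.
# Ea and k_ref are hypothetical illustration values.

R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(T_celsius, k_ref=0.01, T_ref_celsius=25.0, Ea=60000.0):
    """Arrhenius: k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref))."""
    T, T_ref = T_celsius + 273.15, T_ref_celsius + 273.15
    return k_ref * math.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))

def quality_index(t_days, T_celsius, q0=1.0):
    """First-order decay of a quality index at constant storage temperature."""
    return q0 * math.exp(-rate_constant(T_celsius) * t_days)

q_10 = quality_index(30, 10.0)   # cool storage, 30 days
q_40 = quality_index(30, 40.0)   # accelerated storage, 30 days
```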

  13. Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.

    PubMed

    Kanso, A; Chebbo, G; Tassin, B

    2005-01-01

Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov chain Monte Carlo method based on Bayesian theory for the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low.
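The Bayesian MCMC calibration described in this record can be illustrated with a Metropolis random walk on a toy one-parameter model. Everything below is an illustrative stand-in: the exponential "washoff" model, the synthetic observations, the flat prior and the noise level are all assumptions, not the paper's setup.

```python
import math
import random

# Metropolis sketch: calibrate the rate parameter k of a toy exponential
# pollutant-washoff model against synthetic observations.

random.seed(42)

def model(k, t):
    """Toy washoff model: mass remaining after time t."""
    return 100.0 * math.exp(-k * t)

times = [0.0, 1.0, 2.0, 4.0]
k_true = 0.5
obs = [model(k_true, t) for t in times]   # noise-free synthetic data

def log_likelihood(k, sigma=5.0):
    sse = sum((model(k, t) - o) ** 2 for t, o in zip(times, obs))
    return -sse / (2.0 * sigma ** 2)

# Random-walk Metropolis over k with a flat prior on (0, 2).
k, chain = 1.0, []
ll = log_likelihood(k)
for _ in range(5000):
    k_prop = k + random.gauss(0.0, 0.1)
    if 0.0 < k_prop < 2.0:                        # flat prior bounds
        ll_prop = log_likelihood(k_prop)
        if math.log(random.random()) < ll_prop - ll:
            k, ll = k_prop, ll_prop               # accept the proposal
    chain.append(k)                               # keep current state

k_post_mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
```

The posterior spread of `chain` (after burn-in) is the uncertainty estimate; in the paper's setting, this spread varied strongly with the assumed in-sewer initial conditions.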

  14. Predicting sleep quality from stress and prior sleep--a study of day-to-day covariation across six weeks.

    PubMed

    Åkerstedt, Torbjörn; Orsini, Nicola; Petersen, Helena; Axelsson, John; Lekander, Mats; Kecklund, Göran

    2012-06-01

    The connection between stress and sleep is well established in cross-sectional questionnaire studies and in a few prospective studies. Here, the intention was to study the link between stress and sleep on a day-to-day basis across 42 days. Fifty participants kept a sleep/wake diary across 42 days and responded to daily questions on sleep and stress. The results were analyzed with a mixed model approach using stress during the prior day to predict morning ratings of sleep quality. The results showed that bedtime stress and worries were the main predictors of sleep quality, but that, also, late awakening, short prior sleep, high quality of prior sleep, and good health the prior day predicted higher sleep quality. Stress during the day predicts subsequent sleep quality on a day-to-day basis across 42 days. The observed range of variation in stress/worries was modest, which is why it is suggested that the present data underestimates the impact of stress on subsequent sleep quality. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. AN INTERDISCIPLINARY APPROACH TO ADDRESSING NEIGHBORHOOD SCALE AIR QUALITY CONCERNS: THE INTEGRATION OF GIS, URBAN MORPHOLOGY, PREDICTIVE METEOROLOGY, AND AIR QUALITY MONITORING TOOLS

    EPA Science Inventory

    The paper describes a project that combines the capabilities of urban geography, raster-based GIS, predictive meteorological and air pollutant diffusion modeling, to support a neighborhood-scale air quality monitoring pilot study under the U.S. EPA EMPACT Program. The study ha...

  16. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of years of historical monitoring data, and number of input water quality parameters used. A Box-Behnken three-factor, three-level experimental design was applied for simultaneous spatial, temporal, and input-variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines chi-square ranking in the first step with correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were used to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites upstream of Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) that covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO4(3-), which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), which is well known for agricultural production and extensive use of fertilizers. Both models showed very good agreement between measured and predicted DO (R² ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
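The two-stage input filter described in this record (rank, then eliminate redundant inputs) can be sketched as below. As a hedge: the relevance score here uses absolute correlation with the target as a simple stand-in for the paper's chi-square ranking, and the data are synthetic.

```python
import numpy as np

# Two-stage filter: rank candidate inputs by relevance, then walk the
# ranking and drop any input too correlated with one already kept.
# Relevance uses |corr(x, y)| as a stand-in for chi-square ranking.

def two_stage_filter(X, y, corr_threshold=0.9):
    n_features = X.shape[1]
    relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    order = np.argsort(relevance)[::-1]          # best-ranked first
    kept = []
    for j in order:
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_threshold
                        for k in kept)
        if not redundant:
            kept.append(int(j))
    return kept

rng = np.random.default_rng(3)
x1 = rng.standard_normal(200)
x2 = x1 + 0.01 * rng.standard_normal(200)   # near-duplicate of x1
x3 = rng.standard_normal(200)               # unrelated input
y = 2.0 * x1 + 0.1 * rng.standard_normal(200)
X = np.column_stack([x1, x2, x3])
kept = two_stage_filter(X, y)               # the near-duplicate is dropped
```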

  17. Dynamic Evaluation of a Regional Air Quality Model: Assessing the Emissions-Induced Weekly Ozone Cycle

    EPA Science Inventory

    Air quality models are used to predict changes in pollutant concentrations resulting from envisioned emission control policies. Recognizing the need to assess the credibility of air quality models in a policy-relevant context, we perform a dynamic evaluation of the community Mult...

  18. Cultural Resource Predictive Modeling

    DTIC Science & Technology

    2017-10-01

property to manage? a. Yes 2) Do you use CRPM (Cultural Resource Predictive Modeling)? No, but I use predictive modelling informally. For example...resource program and provide support to the test ranges for their missions. This document will provide information such as lessons learned, points...of contact, and resources to the range cultural resource managers. Objective/Scope: Identify existing cultural resource predictive models and

  19. Development and Application of New Quality Model for Software Projects

    PubMed Central

    Karnavel, K.; Dillibabu, R.

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects. PMID:25478594

  20. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  1. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (ozone observations only) in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the percentage increase in R² is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.

  2. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    NASA Astrophysics Data System (ADS)

    Dedes, I.; Dudek, J.

    2018-03-01

We examine the effects of parametric correlations on the predictive capacities of theoretical modelling, keeping in mind nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model, as well as an algorithm of elimination by substitution (see text) of parametric correlations. We further examine the effects of eliminating the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose to work with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian: the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form, and a more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions ('predictive power') under realistic parameter adjustment conditions.

  3. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
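The EPPES idea of drawing each ensemble member's parameters from a proposal distribution and feeding verification scores back into that distribution can be illustrated schematically. This is a loose sketch under strong assumptions: the "forecast model" is a toy quadratic score with noise, not an NWP model, and the proposal update (re-fit to the better-verifying half) is a simplification of the actual EPPES inference.

```python
import random
import statistics

# Schematic EPPES-like loop: ensemble members run with parameters drawn
# from a Gaussian proposal; the proposal is re-centred on the members
# that verify best. Toy stand-in for an NWP ensemble system.

random.seed(1)
theta_true = 2.0                      # unknown "physics" parameter

def forecast_error(theta):
    """Toy verification score against observations: smaller is better."""
    return (theta - theta_true) ** 2 + random.gauss(0.0, 0.05)

mu, sigma = 0.0, 1.0                  # initial proposal distribution
for cycle in range(30):               # forecast/verification cycles
    members = [random.gauss(mu, sigma) for _ in range(20)]
    scores = [forecast_error(th) for th in members]
    # Keep the better-verifying half and re-fit the proposal to it.
    best = [th for _, th in sorted(zip(scores, members))][:10]
    mu = statistics.fmean(best)
    sigma = max(0.05, statistics.stdev(best))
```

Over the cycles, the proposal mean drifts toward the parameter value that verifies best, which is the sense in which the ensemble itself performs the parameter estimation.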

  4. Breast cancer treatment decision making among Latinas and non-Latina Whites: a communication model predicting decisional outcomes and quality of life.

    PubMed

    Yanez, Betina; Stanton, Annette L; Maly, Rose C

    2012-09-01

    Deciding among medical treatment options is a pivotal event following cancer diagnosis, a task that can be particularly daunting for individuals uncomfortable with communication in a medical context. Few studies have explored the surgical decision-making process and associated outcomes among Latinas. We propose a model to elucidate pathways through which acculturation (indicated by language use) and reports of communication effectiveness specific to medical decision making contribute to decisional outcomes (i.e., congruency between preferred and actual involvement in decision making, treatment satisfaction) and quality of life among Latinas and non-Latina White women with breast cancer. Latinas (N = 326) and non-Latina Whites (N = 168) completed measures six months after breast cancer diagnosis, and quality of life was assessed 18 months after diagnosis. Structural equation modeling was used to examine relationships between language use, communication effectiveness, and outcomes. Among Latinas, 63% reported congruency in decision making, whereas 76% of non-Latina Whites reported congruency. In Latinas, greater use of English was related to better reported communication effectiveness. Effectiveness in communication was not related to congruency in decision making, but several indicators of effectiveness in communication were related to greater treatment satisfaction, as was greater congruency in decision making. Greater treatment satisfaction predicted more favorable quality of life. The final model fit the data well only for Latinas. Differences in quality of life and effectiveness in communication were observed between racial/ethnic groups. Findings underscore the importance of developing targeted interventions for physicians and Latinas with breast cancer to enhance communication in decision making. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  5. A manufacturing quality assessment model based-on two stages interval type-2 fuzzy logic

    NASA Astrophysics Data System (ADS)

    Purnomo, Muhammad Ridwan Andi; Helmi Shintya Dewi, Intan

    2016-01-01

This paper presents the development of an assessment model for manufacturing quality using two-stage interval type-2 fuzzy logic (IT2-FL). The proposed model is developed based on one of the building blocks of sustainable supply chain management (SSCM), namely the benefits of SCM, and focuses on quality. The proposed model can be used to predict the quality level of the production chain in a company. The quality of production will affect the quality of the product. In practice, the quality of production is unique for every type of production system; hence, expert opinion plays a major role in developing the assessment model. The model becomes more complicated when the data contain ambiguity and uncertainty, and in this study IT2-FL is used to model that ambiguity and uncertainty. A case study taken from a company in Yogyakarta shows that the proposed manufacturing quality assessment model can work well in determining the quality level of production.

  6. Automatic evidence quality prediction to support evidence-based decision making.

    PubMed

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. 
The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance.
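The average error distance (AED) metric introduced in this record rewards predictions that are close to the true grade on the ordinal evidence scale. The sketch below shows one plausible formulation; the encoding of grades A-C as ranks 0-2 is an assumption for illustration and may differ from the paper's exact definition.

```python
# Hedged sketch of an average-error-distance (AED) style metric for
# ordinal evidence grades: mean absolute distance between predicted and
# true grades on the ordinal scale. Grade encoding is an assumption.

GRADE_RANK = {"A": 0, "B": 1, "C": 2}

def average_error_distance(true_grades, predicted_grades):
    distances = [abs(GRADE_RANK[t] - GRADE_RANK[p])
                 for t, p in zip(true_grades, predicted_grades)]
    return sum(distances) / len(distances)

# Two of the four predictions are off by one grade step.
aed = average_error_distance(["A", "B", "C", "B"], ["A", "C", "C", "A"])
```

Unlike plain accuracy, a prediction one grade away is penalized less than one two grades away, which simplifies comparing systems on ordinal quality scales.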

  7. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  8. FSO and quality of service software prediction

    NASA Astrophysics Data System (ADS)

    Bouchet, O.; Marquis, T.; Chabane, M.; Alnaboulsi, M.; Sizun, H.

    2005-08-01

Free-space optical (FSO) communication links constitute an alternative to radio relay links and optical cables in the face of growing needs in high-speed telecommunications (abundance of unregulated bandwidth, rapid installation, availability of low-cost optical components offering a high data rate, etc.). Making them operational requires a good knowledge of the atmospheric effects which can degrade propagation and the availability of the link, and thus the quality of service (QoS). Better control of these phenomena will allow for the evaluation of system performance and thus assist with improving reliability. The first aim of this paper is to compare the behavior of an FSO link located in the south of France (Toulouse), with the following parameters: around 270 meters (0.2 mile) long, 34 Mbps data rate, 850 nm wavelength and PDH frame, with airport meteorological data. The second aim is to assess in-house FSO quality of service prediction software by comparing simulations with the optical link data and the weather data. The analysis uses in-house FSO quality of service prediction software ("FSO Prediction") developed by France Telecom Research & Development, which integrates new fog-fading equations (compared with those of Kim et al.) and includes multiple effects (geometrical attenuation, atmospheric fading, rain, snow, scintillation and refraction attenuation due to atmospheric turbulence, and optical mispointing attenuation). The FSO link field trial, intended to enable the demonstration and evaluation of these different effects, is described, and preliminary results of the field trial, from December 2004 to May 2005, are then presented.

  9. Feeding habitat quality and behavioral trade-offs in chimpanzees: a case for species distribution models.

    PubMed

    Foerster, Steffen; Zhong, Ying; Pintea, Lilian; Murray, Carson M; Wilson, Michael L; Mjungu, Deus C; Pusey, Anne E

    2016-01-01

    The distribution and abundance of food resources are among the most important factors that influence animal behavioral strategies. Yet, spatial variation in feeding habitat quality is often difficult to assess with traditional methods that rely on extrapolation from plot survey data or remote sensing. Here, we show that maximum entropy species distribution modeling can be used to successfully predict small-scale variation in the distribution of 24 important plant food species for chimpanzees at Gombe National Park, Tanzania. We combined model predictions with behavioral observations to quantify feeding habitat quality as the cumulative dietary proportion of the species predicted to occur in a given location. This measure exhibited considerable spatial heterogeneity with elevation and latitude, both within and across main habitat types. We used model results to assess individual variation in habitat selection among adult chimpanzees during a 10-year period, testing predictions about trade-offs between foraging and reproductive effort. We found that nonswollen females selected the highest-quality habitats compared with swollen females or males, in line with predictions based on their energetic needs. Swollen females appeared to compromise feeding in favor of mating opportunities, suggesting that females rather than males change their ranging patterns in search of mates. Males generally occupied feeding habitats of lower quality, which may exacerbate energetic challenges of aggression and territory defense. Finally, we documented an increase in feeding habitat quality with community residence time in both sexes during the dry season, suggesting an influence of familiarity on foraging decisions in a highly heterogeneous landscape.

  10. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model just from the patient's own data. To address these problems we propose, develop and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models, and consequently combines the advantages of population, patient-specific and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach for personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
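The adaptive model-switching idea described in this record can be sketched with a toy pool of three predictors (population, patient-specific, short-term) scored on recent error. All three "models" and the synthetic vital-sign series below are illustrative stand-ins for the paper's learned time series models.

```python
# Sketch of adaptive model switching: at each step, pick the model from
# the pool with the lowest squared error on the most recent observation,
# then forecast the next value with it. Data and models are toy stand-ins.

series = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]

def population_model(history):
    return 10.0                          # fixed population-level estimate

def patient_model(history):
    return sum(history) / len(history)   # patient-specific running mean

def short_term_model(history):
    # Local linear extrapolation from the last two observations.
    return history[-1] + (history[-1] - history[-2])

pool = [population_model, patient_model, short_term_model]
chosen, predictions = [], []
for t in range(3, len(series)):
    history = series[:t]
    # Score each model by squared error on the latest observed point.
    errs = [(m(history[:-1]) - history[-1]) ** 2 for m in pool]
    best = min(range(len(pool)), key=errs.__getitem__)
    chosen.append(best)
    predictions.append(pool[best](history))
```

On this steadily trending series the short-term extrapolator always wins the switch; on a flat noisy series the pooled estimates would win instead, which is the advantage-combining behavior the abstract describes.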

  11. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation. 
As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a
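
    The second step, translating resistivity into hydraulic conductivity through a spatially dependent petrophysical relationship, can be sketched as a per-cell power law. The functional form and all constants below are illustrative assumptions, not the calibrated relationship from the study:

```python
def hydraulic_conductivity(resistivity, shape_factor, k_ref=1e-4, rho_ref=50.0):
    """Translate an inverted resistivity (ohm-m) into hydraulic conductivity
    (m/s). The cell-specific shape factor acts as the translator and can
    also absorb structural defects of the geophysical model."""
    return k_ref * (resistivity / rho_ref) ** shape_factor

# Apply cell by cell; each cell carries its own calibrated shape factor.
cells = [(25.0, 1.2), (50.0, 1.2), (120.0, 0.8)]   # (resistivity, shape factor)
k_field = [hydraulic_conductivity(rho, m) for rho, m in cells]
```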

  12. Conjunctively optimizing flash flood control and water quality in urban water reservoirs by model predictive control and dynamic emulation

    NASA Astrophysics Data System (ADS)

    Galelli, Stefano; Goedbloed, Albert; Schmitter, Petra; Castelletti, Andrea

    2014-05-01

    Urban water reservoirs are a viable adaptation option to meet the increasing drinking water demand of urbanized areas, as they allow storage and re-use of water that is normally lost. In addition, the direct availability of freshwater reduces pumping costs and diversifies the portfolios of drinking water supply. Yet, these benefits have an associated twofold cost. Firstly, the presence of large, impervious areas increases the hydraulic efficiency of urban catchments, with short times of concentration, increased runoff rates, losses of infiltration and baseflow, and higher risk of flash floods. Secondly, the high concentration of nutrients and sediments characterizing urban discharges is likely to cause water quality problems. In this study we propose a new control scheme combining Model Predictive Control (MPC), hydro-meteorological forecasts and dynamic model emulation to design real-time operating policies that conjunctively optimize water quantity and quality targets. The main advantage of this scheme lies in its capability to exploit real-time hydro-meteorological forecasts, which are crucial in such fast-varying systems. In addition, the reduced computational requirements of the MPC scheme allow coupling it with dynamic emulators of water quality processes. The approach is demonstrated on Marina Reservoir, a multi-purpose reservoir located in the heart of Singapore and characterized by a large, highly urbanized catchment with a short (i.e. approximately one hour) time of concentration. Results show that the MPC scheme, coupled with a water quality emulator, provides a good compromise between different operating objectives, namely flood risk reduction, drinking water supply and salinity control. Finally, the scheme is used to assess the effect of source control measures (e.g. green roofs) aimed at restoring the natural hydrological regime of the Marina Reservoir catchment.
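
    The receding-horizon logic of MPC — optimize a release sequence over the forecast horizon, apply only the first decision, then re-optimize at the next step — can be sketched with brute-force enumeration in place of a real optimizer. The action set, penalties and weights are invented for illustration and are not the Marina Reservoir formulation:

```python
import itertools

def mpc_release(storage, inflow_forecast, actions=(0.0, 5.0, 10.0),
                s_max=100.0, s_target=60.0):
    """Return the first release of the action sequence minimizing a combined
    flood-risk + storage-target cost over the forecast horizon."""
    best_cost, best_first = float("inf"), actions[0]
    for seq in itertools.product(actions, repeat=len(inflow_forecast)):
        s, cost = storage, 0.0
        for release, inflow in zip(seq, inflow_forecast):
            s = s + inflow - release
            cost += 10.0 * max(0.0, s - s_max) ** 2  # flood penalty
            cost += (s - s_target) ** 2              # deviation from target storage
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

    At each time step only the returned first action is applied before the horizon rolls forward with updated forecasts.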

  13. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
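
    A model-adjustment procedure of the simplest kind (single-factor adjustment) scales the regional prediction by a factor fitted to local paired data. The sketch below uses a geometric-mean ratio, one common choice, and is a simplification of the USGS procedures applied in the study:

```python
import math

def single_factor_adjustment(regional_preds, local_obs):
    """Fit a multiplicative correction from local storm data, then return
    a function that adjusts any regional-model prediction."""
    logs = [math.log(obs / pred) for obs, pred in zip(local_obs, regional_preds)]
    factor = math.exp(sum(logs) / len(logs))  # geometric-mean bias factor
    return lambda pred: factor * pred

# Hypothetical local data: observations run twice the regional predictions.
adjust = single_factor_adjustment([2.0, 8.0], [4.0, 16.0])
```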

  14. A model to predict stream water temperature across the conterminous USA

    Treesearch

    Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang

    2014-01-01

    Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...

  15. Low-Quality Structural and Interaction Data Improves Binding Affinity Prediction via Random Forest.

    PubMed

    Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

    2015-06-12

    Docking scoring functions can be used to predict the strength of protein-ligand binding. It is widely believed that training a scoring function with low-quality data is detrimental to its predictive performance. Nevertheless, there is a surprising lack of systematic validation experiments in support of this hypothesis. In this study, we investigated to what extent training a scoring function with low-quality structural and binding data is detrimental to predictive performance. We actually found that low-quality data is not only non-detrimental but beneficial for the predictive performance of machine-learning scoring functions, though the improvement is smaller than that coming from high-quality data. Furthermore, we observed that classical scoring functions are not able to effectively exploit data beyond an early threshold, regardless of its quality. This demonstrates that exploiting a larger data volume is more important for the performance of machine-learning scoring functions than restricting to a smaller set of higher data quality.

  16. Predictive analysis of beer quality by correlating sensory evaluation with higher alcohol and ester production using multivariate statistics methods.

    PubMed

    Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru

    2014-10-15

    Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP) and support vector machine (SVM). It was shown that an SVM with a radial basis function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: the prediction accuracy of an SVM with a polynomial kernel was only 32.9%. As a powerful multivariate statistics method, SVM holds great potential for assessing beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
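
    The kernel comparison the study highlights (RBF vs. polynomial) comes down to how similarity between flavour-compound vectors is computed. The two kernels themselves are standard; the parameter values below are illustrative, not the study's tuned settings:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian radial basis function kernel: similarity decays with
    squared Euclidean distance between feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly_kernel(x, z, degree=3, coef0=1.0):
    """Polynomial kernel: similarity built from the inner product."""
    return (sum(a * b for a, b in zip(x, z)) + coef0) ** degree
```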

  17. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts onto a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
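
    A no-reference metric of this family pools measured impairments (blockiness, blur, jerkiness) into a single quality score. The linear pooling and the weights below are illustrative placeholders, not the fitted model of the proposal:

```python
def perceptual_quality(blockiness, blur, jerkiness,
                       weights=(0.4, 0.35, 0.25), q_max=5.0):
    """Pool impairments (each normalized to [0, 1]) into a MOS-like score
    on a 0..q_max scale; higher means better perceived quality."""
    impairment = (weights[0] * blockiness + weights[1] * blur
                  + weights[2] * jerkiness)
    return q_max * (1.0 - impairment)
```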

  18. Tracing the influence of land-use change on water quality and coral reefs using a Bayesian model.

    PubMed

    Brown, Christopher J; Jupiter, Stacy D; Albert, Simon; Klein, Carissa J; Mangubhai, Sangeeta; Maina, Joseph M; Mumby, Peter; Olley, Jon; Stewart-Koster, Ben; Tulloch, Vivitskaia; Wenger, Amelia

    2017-07-06

    Coastal ecosystems can be degraded by poor water quality. Tracing the causes of poor water quality back to land-use change is necessary to target catchment management for coastal zone management. However, existing models for tracing the sources of pollution require extensive data-sets which are not available for many of the world's coral reef regions that may have severe water quality issues. Here we develop a hierarchical Bayesian model that uses freely available satellite data to infer the connection between land-uses in catchments and water clarity in coastal oceans. We apply the model to estimate the influence of land-use change on water clarity in Fiji. We tested the model's predictions against underwater surveys, finding that predictions of poor water quality are consistent with observations of high siltation and low coverage of sediment-sensitive coral genera. The model thus provides a means to link land-use change to declines in coastal water quality.

  19. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of both biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
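
    The optimization half of such a framework can be sketched with a minimal particle swarm optimizer searching the input space of a surrogate model. Here a hand-written quadratic stands in for the trained multi-layer perceptron, and the variable ranges and coefficients are invented:

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=60, seed=1):
    """Minimal PSO: particles track personal and global bests while
    exploring the box defined by `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical surrogate for biogas output as a function of (temperature, pH),
# peaking at 37 degC and pH 7.2; it replaces the trained neural network here.
surrogate = lambda x: -((x[0] - 37.0) ** 2) - 10.0 * (x[1] - 7.2) ** 2
best_inputs, best_output = pso_maximize(surrogate, [(20.0, 55.0), (5.0, 9.0)])
```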

  20. Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.

    PubMed

    Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam

    2015-06-22

    A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.

  1. Multiple Sensitivity Testing for Regional Air Quality Model in summer 2014

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Lee, P.; Pan, L.; Tong, D.; Kim, H. C.; Huang, M.; Wang, J.; McQueen, J.; Lu, C. H.; Artz, R. S.

    2015-12-01

    The NOAA Air Resources Laboratory leads efforts to improve the performance of the U.S. National Air Quality Forecasting Capability (NAQFC), which runs operationally at the NOAA National Centers for Environmental Prediction (NCEP) and focuses on predicting surface ozone and PM2.5. To improve its performance, we tested several approaches, including ozone and aerosol lateral boundary conditions (LBC) derived from NOAA Environmental Modeling System Global Aerosol Component (NGAC) simulations, bidirectional NH3 emissions, and HMS (Hazard Mapping System)-BlueSky fire emissions, with the latest U.S. EPA Community Multiscale Air Quality model (CMAQ) version and the U.S. EPA National Emission Inventory (NEI)-2011 anthropogenic emissions. The operational NAQFC uses static profiles for its LBC, which does not pose a severe issue for near-surface air quality prediction. However, its degraded performance in the upper layers (e.g. above 3 km) is evident when compared with aircraft-measured ozone. NCEP's Global Forecast System (GFS) treats tracer O3 as a 3-D prognostic variable (Moorthi and Iredell, 1998) after initialization with Solar Backscatter Ultraviolet-2 (SBUV-2) satellite data. We applied that ozone LBC to CMAQ's upper layers and obtained more reasonable O3 predictions than with the static LBC when compared with aircraft data from the DISCOVER-AQ Colorado campaign. NGAC's aerosol LBC also improved PM2.5 predictions with more realistic background aerosols. The bidirectional NH3 emissions used in CMAQ also helped reduce the NH3 and nitrate under-prediction issue. During summer 2014, strong wildfires occurred in the northwestern USA; we used the U.S. Forest Service's BlueSky fire emissions with HMS fire counts to drive CMAQ and tested the difference between day-1 and day-2 fire emission estimates. Other related issues are also discussed.

  2. Modeling Benthic Sediment Processes to Predict Water Quality and Ecology in Narragansett Bay

    EPA Science Inventory

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal ...

  3. Predictive Models for Carcinogenicity and Mutagenicity ...

    EPA Pesticide Factsheets

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  4. Modelling the effect of wildfire on forested catchment water quality using the SWAT model

    NASA Astrophysics Data System (ADS)

    Yu, M.; Bishop, T.; van Ogtrop, F. F.; Bell, T.

    2016-12-01

    Wildfire removes surface vegetation, releases ash, and increases erosion and runoff, and therefore affects the hydrological cycle of a forested water catchment. It is important to understand this change and how the catchment recovers. These processes are spatially sensitive and affected by interactions between fire severity and hillslope, soil type, and surface vegetation conditions; thus, a distributed hydrological modelling approach is required. In this study, the Soil and Water Assessment Tool (SWAT) is used to predict the effect of the 2001/02 Sydney wildfire on catchment water quality. Ten years of pre-fire data are used to create and calibrate the SWAT model. The calibrated model was then used to simulate water quality for the 10-year post-fire period without fire effects. The simulated water quality data are compared with recorded water quality data provided by the Sydney Catchment Authority. The mean changes in flow, total suspended solids, total nitrate and total phosphate are compared on monthly, three-month, six-month and annual bases. Two control catchments and three burnt catchments were analysed.

  5. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  6. Ovary transcriptome profiling via artificial intelligence reveals a transcriptomic fingerprint predicting egg quality in striped bass, Morone saxatilis.

    PubMed

    Chapman, Robert W; Reading, Benjamin J; Sullivan, Craig V

    2014-01-01

    Inherited gene transcripts deposited in oocytes direct early embryonic development in all vertebrates, but transcript profiles indicative of embryo developmental competence have not previously been identified. We employed artificial intelligence to model profiles of maternal ovary gene expression and their relationship to egg quality, evaluated as production of viable mid-blastula stage embryos, in the striped bass (Morone saxatilis), a farmed species with serious egg quality problems. In models developed using artificial neural networks (ANNs) and supervised machine learning, collective changes in the expression of a limited suite of genes (233) representing <2% of the queried ovary transcriptome explained >90% of the eventual variance in embryo survival. Egg quality related to minor changes in gene expression (<0.2-fold), with most individual transcripts making a small contribution (<1%) to the overall prediction of egg quality. These findings indicate that the predictive power of the transcriptome as regards egg quality resides not in levels of individual genes, but rather in the collective, coordinated expression of a suite of transcripts constituting a transcriptomic "fingerprint". Correlation analyses of the corresponding candidate genes indicated that dysfunction of the ubiquitin-26S proteasome, COP9 signalosome, and subsequent control of the cell cycle engenders embryonic developmental incompetence. The affected gene networks are centrally involved in regulation of early development in all vertebrates, including humans. By assessing collective levels of the relevant ovarian transcripts via ANNs we were able, for the first time in any vertebrate, to accurately predict the subsequent embryo developmental potential of eggs from individual females. 
Our results show that the transcriptomic fingerprint evidencing developmental dysfunction is highly predictive of, and therefore likely to regulate, egg quality, a biologically complex trait crucial to reproductive

  8. DockQ: A Quality Measure for Protein-Protein Docking Models

    PubMed Central

    Basu, Sankar

    2016-01-01

    The state-of-the-art for assessing the structural quality of docking models is currently based on three related yet independent quality measures: Fnat, LRMS, and iRMS, as proposed and standardized by CAPRI. These quality measures quantify different aspects of the quality of a particular docking model and need to be viewed together to reveal its true quality; e.g. a model with relatively poor LRMS (>10Å) might still qualify as 'acceptable' with a decent Fnat (>0.50) and iRMS (<3.0Å). This is also the reason why the so-called CAPRI criteria for assessing the quality of docking models are defined by applying various ad hoc cutoffs on these measures to classify a docking model into one of four classes: Incorrect, Acceptable, Medium, or High quality. This classification has been useful in CAPRI, but since models are grouped into only four bins it is also rather limiting, making it difficult to rank models, correlate with scoring functions or use the classification as a target function in machine learning algorithms. Here, we present DockQ, a continuous protein-protein docking model quality measure derived by combining Fnat, LRMS, and iRMS into a single score in the range [0, 1] that can be used to assess the quality of protein docking models. By using DockQ on CAPRI models it is possible to almost completely reproduce the original CAPRI classification into Incorrect, Acceptable, Medium and High quality, with an average PPV of 94% at 90% recall, demonstrating that there is no need to apply predefined ad hoc cutoffs to classify docking models. Since DockQ recapitulates the CAPRI classification almost perfectly, it can be viewed as a higher-resolution version of the CAPRI classification, making it possible to estimate model quality in a more quantitative way using Z-scores or sums of top-ranked models, which has been so valuable for the CASP community. The possibility to directly correlate a quality measure to a scoring function has been crucial for the development of scoring functions for protein structure
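
    The DockQ combination is explicit in the original paper: each RMSD measure is mapped into [0, 1] with a scaling function and averaged with Fnat. A direct transcription, using the paper's scaling constants (d1 = 8.5 Å for LRMS, d2 = 1.5 Å for iRMS):

```python
def dockq(fnat, lrms, irms, d1=8.5, d2=1.5):
    """DockQ = (Fnat + scaled(LRMS) + scaled(iRMS)) / 3, each term in [0, 1]."""
    def scaled(rms, d):
        return 1.0 / (1.0 + (rms / d) ** 2)
    return (fnat + scaled(lrms, d1) + scaled(irms, d2)) / 3.0
```

    A perfect model (Fnat = 1, LRMS = iRMS = 0) scores 1.0, while a model with no native contacts and large RMSDs scores near 0.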

  9. Modeling the Effects of Conservation Tillage on Water Quality at the Field Scale

    USDA-ARS?s Scientific Manuscript database

    The development and application of predictive tools to quantitatively assess the effects of tillage and related management activities should be carefully tested against high quality field data. This study reports on: 1) the calibration and validation of the Root Zone Water Quality Model (RZWQM) to a...

  10. Global Environmental Multiscale model - a platform for integrated environmental predictions

    NASA Astrophysics Data System (ADS)

    Kaminski, Jacek W.; Struzewska, Joanna; Neary, Lori; Dearden, Frank

    2017-04-01

    The Global Environmental Multiscale (GEM) model was developed by the Government of Canada as an operational weather prediction model in the mid-1990s. Subsequently, it was used as the host meteorological model for an on-line implementation of air quality chemistry and aerosols from the global to the meso-gamma scale. Further model developments led to the vertical extension of the modelling domain to include stratospheric chemistry, aerosols, and the formation of polar stratospheric clouds. In parallel, the modelling platform was used for planetary applications, where dynamical, radiative transfer and chemical processes in the atmosphere of Mars were successfully simulated. The developed platform is thus capable of seamless, coupled modelling of the dynamics and chemistry of planetary atmospheres. We will present modelling results for global, regional, and local air quality episodes and long-term air quality trends. Upper troposphere and lower stratosphere modelling results will be presented in terms of climate change and subsonic aviation emissions modelling. Model results for the atmosphere of Mars will be presented in the context of the 2016 ExoMars mission and the anticipated observations from the NOMAD instrument. Also, we will present plans and the design to extend the GEM model to the F region, with further coupling to a magnetospheric model that extends to 15 Re.

  11. An Innovative Model to Predict Pediatric Emergency Department Return Visits.

    PubMed

    Bergese, Ilaria; Frigerio, Simona; Clari, Marco; Castagno, Emanuele; De Clemente, Antonietta; Ponticelli, Elena; Scavino, Enrica; Berchialla, Paola

    2016-10-06

    Return visit (RV) to the emergency department (ED) is considered a benchmark clinical indicator of health care quality. The purpose of this study was to develop a predictive model for early readmission risk in pediatric EDs, comparing the performance of two machine-learning algorithms. A retrospective study based on all children younger than 15 years spontaneously returning within 120 hours after discharge was conducted in an Italian university children's hospital between October 2012 and April 2013. Two predictive models, an artificial neural network (ANN) and a classification tree (CT), were used. Accuracy, specificity, and sensitivity were assessed. A total of 28,341 patient records were evaluated. Among them, 626 patients returned to the ED within 120 hours after their initial visit. Comparing the ANN and the CT, our analysis has shown that the CT is the better model to predict RVs. The CT model showed an overall accuracy of 81%, slightly lower than that achieved by the ANN (91.3%), but the CT outperformed the ANN with regard to sensitivity (79.8% vs 6.9%, respectively). The specificity was similar for the two models (CT, 97% vs ANN, 98.3%). In addition, the times of arrival and discharge, along with the priority code assigned in triage, age, and diagnosis, play a pivotal role in identifying patients at high risk of RVs. These models provide a promising predictive tool for supporting ED staff in preventing unnecessary RVs.
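
    The trade-off reported between the two classifiers (higher overall accuracy for the ANN, far higher sensitivity for the CT) follows directly from the standard confusion-matrix formulas; the counts below are hypothetical, not the study's:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on returns) and specificity from counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts: with a rare positive class, a model can post high
# accuracy while still missing most of the rare class (low sensitivity).
acc, sens, spec = classification_metrics(tp=8, fp=3, tn=97, fn=2)
```

    With return visits being rare (626 of 28,341 records), accuracy alone is a weak yardstick, which is why sensitivity separated the two models.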

  12. Mathematical model for prediction of efficiency indicators of educational activity in high school

    NASA Astrophysics Data System (ADS)

    Tikhonova, O. M.; Kushnikov, V. A.; Fominykh, D. S.; Rezchikov, A. F.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

The quality of higher education is a pressing problem worldwide. The paper presents a system for predicting the accreditation indicators of technical universities based on J. Forrester's system dynamics approach. A mathematical model is developed for predicting efficiency indicators of educational activity, based on the apparatus of nonlinear differential equations.
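A system-dynamics model of this kind integrates rates of change of indicator levels over time. A one-variable sketch using an explicit Euler step; the rate equation and constants are illustrative assumptions, not the authors' equations:

```python
# Forrester-style system dynamics in miniature: one accreditation
# indicator x relaxes toward a target level at rate k. Real models
# couple many such equations nonlinearly; this is a minimal sketch.
def simulate(x0, target, k, dt, steps):
    x = x0
    for _ in range(steps):
        dxdt = k * (target - x)   # rate equation (illustrative)
        x += dt * dxdt            # explicit Euler step
    return x
```

Running `simulate(0.0, 1.0, 0.5, 0.1, 1000)` drives the indicator essentially all the way to the target, as expected for a stable relaxation.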

  13. Prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy: a systematic review and external validation study.

    PubMed

    Hilkens, N A; Algra, A; Greving, J P

    2016-01-01

    ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64 and poor calibration. A limited number of prediction models is available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor. 
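The c-statistic used above to quantify discrimination is, for a binary outcome, the probability that a randomly chosen patient who bled received a higher predicted risk than one who did not. A stdlib-only sketch by pairwise concordance (O(n²), with made-up risks):

```python
# c-statistic (ROC AUC for a binary outcome) by pairwise concordance:
# count event/non-event pairs in which the event case received the
# higher predicted risk; ties contribute one half.
def c_statistic(risks, events):
    pairs = concordant = 0.0
    for r1, e1 in zip(risks, events):
        for r0, e0 in zip(risks, events):
            if e1 == 1 and e0 == 0:
                pairs += 1
                if r1 > r0:
                    concordant += 1.0
                elif r1 == r0:
                    concordant += 0.5
    return concordant / pairs
```

A value of 0.5 is no better than chance, which puts the 0.53-0.64 range reported above in perspective.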

  14. Big Data, Predictive Analytics, and Quality Improvement in Kidney Transplantation: A Proof of Concept.

    PubMed

    Srinivas, T R; Taber, D J; Su, Z; Zhang, J; Mour, G; Northrup, D; Tripathi, A; Marsden, J E; Moran, W P; Mauldin, P D

    2017-03-01

We sought proof of concept of a Big Data solution incorporating longitudinal structured and unstructured patient-level data from electronic health records (EHR) to predict graft loss (GL) and mortality. For a quality improvement initiative, GL and mortality prediction models were constructed using baseline and follow-up data (0-90 days posttransplant; structured and unstructured for 1-year models; data up to 1 year for 3-year models) on adult solitary kidney transplant recipients transplanted during 2007-2015, as follows: Model 1: United Network for Organ Sharing (UNOS) data; Model 2: UNOS & Transplant Database (Tx Database) data; Model 3: UNOS, Tx Database & EHR comorbidity data; and Model 4: UNOS, Tx Database, EHR data, posttransplant trajectory data, and unstructured data. A 10% 3-year GL rate was observed among 891 patients (2007-2015). Layering of data sources improved model performance: Model 1: area under the curve (AUC), 0.66 (95% confidence interval [CI]: 0.60-0.72); Model 2: AUC, 0.68 (95% CI: 0.61-0.74); Model 3: AUC, 0.72 (95% CI: 0.66-0.77); Model 4: AUC, 0.84 (95% CI: 0.79-0.89). One-year GL (AUC, 0.87; Model 4) and 3-year mortality (AUC, 0.84; Model 4) models performed similarly. A Big Data approach significantly adds efficacy to GL and mortality prediction models and is EHR deployable to optimize outcomes. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  15. A sampling approach for predicting the eating quality of apples using visible-near infrared spectroscopy.

    PubMed

    Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B

    2013-12-01

Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to depend on using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity, in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apple (Malus domestica Borkh.) samples, cvs 'Aroma' and 'Holsteiner Cox', were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) were used to build prediction models. Furthermore, we compared three sub-sampling arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CV(SSC) = 13%. The model showed consistently low errors and bias (PLS/EN: R(2)cal = 0.60/0.60; SEC = 0.88/0.88 °Brix; Bias(cal) = 0.00/0.00; R(2)val = 0.33/0.44; SEP = 1.14/1.03; Bias(val) = 0.04/0.03). However, prediction of acidity and of SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance on these data sets also depends on proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models.
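Ridge regression, one of the three methods compared, stabilizes calibration by shrinking coefficients through a penalty term. A closed-form single-predictor sketch; the study's models were multivariate over spectral bands, and the data and penalty here are illustrative only:

```python
# Single-predictor ridge regression in closed form: minimise
# sum (y - a*x - b)^2 + lam * a^2, with an unpenalised intercept b.
def ridge_fit(xs, ys, lam):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / (sxx + lam)   # slope shrunk toward zero by the penalty
    b = my - a * mx         # intercept recovered from the centred fit
    return a, b
```

With `lam=0` this reduces to ordinary least squares; increasing `lam` trades a little bias for lower variance, which is the point of RR (and of the elastic net, which adds an L1 term) on noisy spectra.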

  16. Models to predict length of stay in the Intensive Care Unit after coronary artery bypass grafting: a systematic review.

    PubMed

    Atashi, Alireza; Verburg, Ilona W; Karim, Hesam; Miri, Mirmohammad; Abu-Hanna, Ameen; de Jonge, Evert; de Keizer, Nicolette F; Eslami, Saeid

    2018-06-01

Intensive Care Unit (ICU) length of stay (LoS) prediction models are used to compare institutions and surgeons on their performance, and are useful as an efficiency indicator for quality control. There is little consensus about which prediction methods are most suitable for predicting ICU length of stay. The aim of this study was to systematically review models for predicting ICU LoS after coronary artery bypass grafting (CABG) and to assess the reporting and methodological quality of these models for application to benchmarking. A general search was conducted in Medline and Embase up to 31-12-2016. Three authors classified the papers for inclusion by reading their title, abstract and full text. All original papers describing development and/or validation of a prediction model for LoS in the ICU after CABG surgery were included. We used a checklist developed for critical appraisal and data extraction for systematic reviews of prediction modeling, and extended it to the handling of specific patient subgroups. We also defined other items and scores to assess the methodological and reporting quality of the models. Of 5181 uniquely identified articles, fifteen studies were included, of which twelve concerned development of new models and three validation of existing models. All studies used linear or logistic regression as the method for model development, and reported various performance measures based on the difference between predicted and observed ICU LoS. Most used a prospective (46.6%) or retrospective (40%) study design. We found heterogeneity in patient inclusion/exclusion criteria, sample size, reported accuracy rates, and methods of candidate predictor selection. Most (60%) studies did not mention the handling of missing values, and none compared the model outcome measure of survivors with that of non-survivors. For model development and validation studies respectively, the maximum reporting (methodological) scores were 66/78 and 62/62 (14/22 and 12/22).

  17. Calibration and verification of a rainfall-runoff model and a runoff-quality model for several urban basins in the Denver metropolitan area, Colorado

    USGS Publications Warehouse

    Lindner-Lunsford, J. B.; Ellis, S.R.

    1984-01-01

The U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model--Version II was calibrated and verified for five urban basins in the Denver metropolitan area. Land-use types in the basins were light commercial, multifamily housing, single-family housing, and a shopping center. The overall accuracy of model predictions of peak flows and runoff volumes was about 15 percent for storms with rainfall intensities of less than 1 inch per hour and runoff volumes of greater than 0.01 inch. Predictions generally were unsatisfactory for storms having a rainfall intensity of more than 1 inch per hour, or runoff of 0.01 inch or less. The Distributed Routing Rainfall-Runoff Model-Quality, a multievent runoff-quality model developed by the U.S. Geological Survey, was calibrated and verified on four basins. The model was found to be most useful in the prediction of seasonal loads of constituents in runoff resulting from rainfall. The model was not very accurate in the prediction of runoff loads of individual constituents. (USGS)

  18. Evaluating the capability of regional-scale air quality models to capture the vertical distribution of pollutants

    EPA Science Inventory

    This study is conducted in the framework of the Air Quality Modelling Evaluation International Initiative (AQMEII) and aims at the operational evaluation of an ensemble of 12 regional-scale chemical transport models used to predict air quality over the North American (NA) and Eur...

  19. Predicting the Benefits of Percutaneous Coronary Intervention on 1-Year Angina and Quality of Life in Stable Ischemic Heart Disease: Risk Models From the COURAGE Trial (Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation).

    PubMed

    Zhang, Zugui; Jones, Philip; Weintraub, William S; Mancini, G B John; Sedlis, Steven; Maron, David J; Teo, Koon; Hartigan, Pamela; Kostuk, William; Berman, Daniel; Boden, William E; Spertus, John A

    2018-05-01

Percutaneous coronary intervention (PCI) is a therapy to reduce angina and improve quality of life in patients with stable ischemic heart disease. However, it is unclear whether quality of life after PCI is more dependent on the PCI or on other patient-related factors. To address this question, we created models to predict angina and quality of life 1 year after PCI and medical therapy. Using data from the 2287 stable ischemic heart disease patients randomized in the COURAGE trial (Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation) to PCI plus optimal medical therapy (OMT) versus OMT alone, we built prediction models for 1-year Seattle Angina Questionnaire angina frequency, physical limitation, and quality of life scores, both as continuous outcomes and categorized by clinically desirable states, using multivariable techniques. Although most patients improved regardless of treatment, marked variability was observed in Seattle Angina Questionnaire scores 1 year after randomization. Adding PCI conferred a greater mean improvement (about 2 points) in Seattle Angina Questionnaire scores that was not affected by patient characteristics (P values for all interactions >0.05). The proportion of patients free of angina or having very good/excellent physical limitation (physical function) or quality of life at 1 year was 57%, 58%, and 66% with PCI+OMT and 50%, 55%, and 59% with OMT alone, respectively. However, other characteristics, such as baseline symptoms, age, diabetes mellitus, and the magnitude of myocardium subtended by narrowed coronary arteries, were as, or more, important than revascularization in predicting symptoms (partial R2=0.07 versus 0.29, 0.03 versus 0.22, and 0.05 versus 0.24 in the domains of angina frequency, physical limitation, and quality of life, respectively). There was modest/good discrimination of the models (C statistic=0.72-0.82) and excellent calibration.

  20. Water quality modeling in the systems impact assessment model for the Klamath River basin - Keno, Oregon to Seiad Valley, California

    USGS Publications Warehouse

    Hanna, R. Blair; Campbell, Sharon G.

    2000-01-01

This report describes the water quality model developed for the Klamath River System Impact Assessment Model (SIAM). The Klamath River SIAM is a decision support system developed by the authors and other US Geological Survey (USGS) Midcontinent Ecological Science Center staff to study the effects of basin-wide water management decisions on anadromous fish in the Klamath River. The Army Corps of Engineers' HEC5Q water quality modeling software was used to simulate water temperature, dissolved oxygen and conductivity in 100 miles of the Klamath River Basin in Oregon and California. The water quality model simulated three reservoirs and the mainstem Klamath River influenced by the Shasta and Scott River tributaries. Model development, calibration and two validation exercises are described, as well as the integration of the water quality model into the SIAM decision support system software. Within SIAM, data are exchanged between the water quantity model (MODSIM), the water quality model (HEC5Q), the salmon population model (SALMOD) and methods for evaluating ecosystem health. The overall predictive ability of the water quality model is described in the context of calibration and validation error statistics. Applications of SIAM and the water quality model are described.

  1. Urban Air Quality Modelling with AURORA: Prague and Bratislava

    NASA Astrophysics Data System (ADS)

    Veldeman, N.; Viaene, P.; De Ridder, K.; Peelaerts, W.; Lauwaet, D.; Muhammad, N.; Blyth, L.

    2012-04-01

The European Commission, in its strategy to protect the health of European citizens, states that in order to assess the impact of air pollution on public health, information on long-term exposure to air pollution should be available. Currently, indicators of air quality are often generated from measured pollutant concentrations. While data from air quality monitoring stations provide accurate time series at specific locations, air quality models have the advantage of being able to assess the spatial variability of air quality (at different resolutions) and to predict future air quality under different scenarios. When running such air quality models at high spatial and temporal resolution, one can simulate the actual situation as closely as possible, allowing a detailed assessment of citizens' risk of exposure to different pollutants. AURORA (Air quality modelling in Urban Regions using an Optimal Resolution Approach), a prognostic 3-dimensional Eulerian chemistry-transport model, is designed to simulate urban- to regional-scale atmospheric pollutant concentration and exposure fields. The AURORA model can also calculate the impact of changes in land use (e.g. planting of trees) or of emission reduction scenarios on air quality. AURORA is currently being applied within the ESA atmospheric GMES service PASODOBLE (http://www.myair-eu.org), which delivers information on air quality, greenhouse gases, stratospheric ozone, and related quantities. At present there are two operational AURORA services within PASODOBLE. Within the "Air quality forecast service", VITO delivers daily air quality forecasts for Belgium at a resolution of 5 km and for the major Belgian cities: Brussels, Ghent, Antwerp, Liege and Charleroi. Furthermore, forecast services are provided for Prague, Czech Republic and Bratislava, Slovakia, both at a resolution of 1 km. 
The "Urban/regional air quality assessment service" provides urban- and regional-scale maps at hourly resolution.

  2. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
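The quadratic meta-model at the heart of this calibration approximates forecast skill as a parabola in each free parameter, so the optimum can be located analytically rather than by exhaustive model runs. A one-parameter sketch; the three (parameter, score) pairs are made-up stand-ins for simulated forecast-error scores:

```python
# Fit an exact quadratic s(p) = a + b*p + c*p^2 through three
# (parameter, score) pairs via divided differences, then return the
# parameter value at the parabola's vertex (the meta-model optimum).
def quadratic_vertex(p0, s0, p1, s1, p2, s2):
    c = ((s2 - s1) / (p2 - p1) - (s1 - s0) / (p1 - p0)) / (p2 - p0)
    b = (s1 - s0) / (p1 - p0) - c * (p0 + p1)
    return -b / (2 * c)

# Illustrative forecast-error scores as one turbulence parameter varies;
# the fitted parabola bottoms out between the sampled values.
best_p = quadratic_vertex(0, 5.0, 1, 2.0, 3, 2.0)
```

The real method fits a multivariate quadratic across several parameters jointly, but the principle is the same: a handful of tuning runs constrain the surface, and the minimum is read off the fit.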

  3. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  4. Archaeological predictive model set.

    DOT National Transportation Integrated Search

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  5. A systematic review of predictive models for asthma development in children.

    PubMed

    Luo, Gang; Nkoy, Flory L; Stone, Bryan L; Schmick, Darell; Johnson, Michael D

    2015-11-28

    Asthma is the most common pediatric chronic disease affecting 9.6 % of American children. Delay in asthma diagnosis is prevalent, resulting in suboptimal asthma management. To help avoid delay in asthma diagnosis and advance asthma prevention research, researchers have proposed various models to predict asthma development in children. This paper reviews these models. A systematic review was conducted through searching in PubMed, EMBASE, CINAHL, Scopus, the Cochrane Library, the ACM Digital Library, IEEE Xplore, and OpenGrey up to June 3, 2015. The literature on predictive models for asthma development in children was retrieved, with search results limited to human subjects and children (birth to 18 years). Two independent reviewers screened the literature, performed data extraction, and assessed article quality. The literature search returned 13,101 references in total. After manual review, 32 of these references were determined to be relevant and are discussed in the paper. We identify several limitations of existing predictive models for asthma development in children, and provide preliminary thoughts on how to address these limitations. Existing predictive models for asthma development in children have inadequate accuracy. Efforts to improve these models' performance are needed, but are limited by a lack of a gold standard for asthma development in children.

  6. Body odor quality predicts behavioral attractiveness in humans.

    PubMed

    Roberts, S Craig; Kralevich, Alexandra; Ferdenzi, Camille; Saxton, Tamsin K; Jones, Benedict C; DeBruine, Lisa M; Little, Anthony C; Havlicek, Jan

    2011-12-01

    Growing effort is being made to understand how different attractive physical traits co-vary within individuals, partly because this might indicate an underlying index of genetic quality. In humans, attention has focused on potential markers of quality such as facial attractiveness, axillary odor quality, the second-to-fourth digit (2D:4D) ratio and body mass index (BMI). Here we extend this approach to include visually-assessed kinesic cues (nonverbal behavior linked to movement) which are statistically independent of structural physical traits. The utility of such kinesic cues in mate assessment is controversial, particularly during everyday conversational contexts, as they could be unreliable and susceptible to deception. However, we show here that the attractiveness of nonverbal behavior, in 20 male participants, is predicted by perceived quality of their axillary body odor. This finding indicates covariation between two desirable traits in different sensory modalities. Depending on two different rating contexts (either a simple attractiveness rating or a rating for long-term partners by 10 female raters not using hormonal contraception), we also found significant relationships between perceived attractiveness of nonverbal behavior and BMI, and between axillary odor ratings and 2D:4D ratio. Axillary odor pleasantness was the single attribute that consistently predicted attractiveness of nonverbal behavior. Our results demonstrate that nonverbal kinesic cues could reliably reveal mate quality, at least in males, and could corroborate and contribute to mate assessment based on other physical traits.

  7. Predicting nucleic acid binding interfaces from structural models of proteins.

    PubMed

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  8. Predicting nucleic acid binding interfaces from structural models of proteins

    PubMed Central

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2011-01-01

The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. PMID:22086767

  9. Comprehensive and critical review of the predictive properties of the various mass models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.

    1984-01-01

Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada), which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface that had been used to refine its adjustable parameters and that model's ability to predict the new masses correctly. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models.

  10. Acute Brain Dysfunction: Development and Validation of a Daily Prediction Model.

    PubMed

    Marra, Annachiara; Pandharipande, Pratik P; Shotwell, Matthew S; Chandrasekhar, Rameela; Girard, Timothy D; Shintani, Ayumi K; Peelen, Linda M; Moons, Karl G M; Dittus, Robert S; Ely, E Wesley; Vasilevskis, Eduard E

    2018-03-24

    The goal of this study was to develop and validate a dynamic risk model to predict daily changes in acute brain dysfunction (ie, delirium and coma), discharge, and mortality in ICU patients. Using data from a multicenter prospective ICU cohort, a daily acute brain dysfunction-prediction model (ABD-pm) was developed by using multinomial logistic regression that estimated 15 transition probabilities (from one of three brain function states [normal, delirious, or comatose] to one of five possible outcomes [normal, delirious, comatose, ICU discharge, or died]) using baseline and daily risk factors. Model discrimination was assessed by using predictive characteristics such as negative predictive value (NPV). Calibration was assessed by plotting empirical vs model-estimated probabilities. Internal validation was performed by using a bootstrap procedure. Data were analyzed from 810 patients (6,711 daily transitions). The ABD-pm included individual risk factors: mental status, age, preexisting cognitive impairment, baseline and daily severity of illness, and daily administration of sedatives. The model yielded very high NPVs for "next day" delirium (NPV: 0.823), coma (NPV: 0.892), normal cognitive state (NPV: 0.875), ICU discharge (NPV: 0.905), and mortality (NPV: 0.981). The model demonstrated outstanding calibration when predicting the total number of patients expected to be in any given state across predicted risk. We developed and internally validated a dynamic risk model that predicts the daily risk for one of three cognitive states, ICU discharge, or mortality. The ABD-pm may be useful for predicting the proportion of patients for each outcome state across entire ICU populations to guide quality, safety, and care delivery activities. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
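A multinomial logistic model of this kind turns one linear-predictor score per candidate next-day state into transition probabilities via the softmax function. A minimal sketch; the five scores below are illustrative, not the ABD-pm's fitted coefficients:

```python
import math

# Softmax: map one linear-predictor score per next-day state
# (normal, delirious, comatose, ICU discharge, died) to probabilities
# that are all positive and sum to one.
def transition_probs(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

probs = transition_probs([1.2, 0.4, -0.3, 0.8, -2.0])
```

Because the probabilities are normalized jointly, a patient's daily risks across all five outcomes always form a proper distribution, which is what lets the model predict population-level counts per state.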

  11. When a gold standard isn't so golden: Lack of prediction of subjective sleep quality from sleep polysomnography.

    PubMed

    Kaplan, Katherine A; Hirshman, Jason; Hernandez, Beatriz; Stefanick, Marcia L; Hoffman, Andrew R; Redline, Susan; Ancoli-Israel, Sonia; Stone, Katie; Friedman, Leah; Zeitzer, Jamie M

    2017-02-01

    Reports of subjective sleep quality are frequently collected in research and clinical practice. It is unclear, however, how well polysomnographic measures of sleep correlate with subjective reports of prior-night sleep quality in elderly men and women. Furthermore, the relative importance of various polysomnographic, demographic and clinical characteristics in predicting subjective sleep quality is not known. We sought to determine the correlates of subjective sleep quality in older adults using more recently developed machine learning algorithms that are suitable for selecting and ranking important variables. Community-dwelling older men (n=1024) and women (n=459), a subset of those participating in the Osteoporotic Fractures in Men study and the Study of Osteoporotic Fractures study, respectively, completed a single night of at-home polysomnographic recording of sleep followed by a set of morning questions concerning the prior night's sleep quality. Questionnaires concerning demographics and psychological characteristics were also collected prior to the overnight recording and entered into multivariable models. Two machine learning algorithms, lasso penalized regression and random forests, determined variable selection and the ordering of variable importance separately for men and women. Thirty-eight sleep, demographic and clinical correlates of sleep quality were considered. Together, these multivariable models explained only 11-17% of the variance in predicting subjective sleep quality. Objective sleep efficiency emerged as the strongest correlate of subjective sleep quality across all models, and across both sexes. Greater total sleep time and sleep stage transitions were also significant objective correlates of subjective sleep quality. The amount of slow wave sleep obtained was not determined to be important. Overall, the commonly obtained measures of polysomnographically-defined sleep contributed little to subjective ratings of prior-night sleep quality
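The "11-17% of the variance" explained by these multivariable models is the coefficient of determination, R². A minimal stdlib computation; the observed/predicted ratings passed in would be illustrative, not the study's data:

```python
# Coefficient of determination R^2: the fraction of variance in
# observed ratings captured by a model's predictions.
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot
```

R² = 1 means perfect prediction and R² = 0 means no better than predicting the mean, so values of 0.11-0.17 indicate that polysomnographic measures leave most of the variation in subjective sleep quality unexplained.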

  12. When a gold standard isn't so golden: Lack of prediction of subjective sleep quality from sleep polysomnography

    PubMed Central

    Kaplan, Katherine A.; Hirshman, Jason; Hernandez, Beatriz; Stefanick, Marcia L.; Hoffman, Andrew R.; Redline, Susan; Ancoli-Israel, Sonia; Stone, Katie; Friedman, Leah; Zeitzer, Jamie M.

    2016-01-01

    Background Reports of subjective sleep quality are frequently collected in research and clinical practice. It is unclear, however, how well polysomnographic measures of sleep correlate with subjective reports of prior-night sleep quality in elderly men and women. Furthermore, the relative importance of various polysomnographic, demographic and clinical characteristics in predicting subjective sleep quality is not known. We sought to determine the correlates of subjective sleep quality in older adults using more recently developed machine learning algorithms that are suitable for selecting and ranking important variables. Methods Community-dwelling older men (n=1024) and women (n=459), a subset of those participating in the Osteoporotic Fractures in Men study and the Study of Osteoporotic Fractures study, respectively, completed a single night of at-home polysomnographic recording of sleep followed by a set of morning questions concerning the prior night's sleep quality. Questionnaires concerning demographics and psychological characteristics were also collected prior to the overnight recording and entered into multivariable models. Two machine learning algorithms, lasso penalized regression and random forests, determined variable selection and the ordering of variable importance separately for men and women. Results Thirty-eight sleep, demographic and clinical correlates of sleep quality were considered. Together, these multivariable models explained only 11-17% of the variance in predicting subjective sleep quality. Objective sleep efficiency emerged as the strongest correlate of subjective sleep quality across all models, and across both sexes. Greater total sleep time and sleep stage transitions were also significant objective correlates of subjective sleep quality. The amount of slow wave sleep obtained was not determined to be important. Conclusions Overall, the commonly obtained measures of polysomnographically-defined sleep contributed little to subjective ratings of prior-night sleep quality.
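    The two-algorithm approach the abstract describes (lasso for variable selection, random forests for importance ranking) can be sketched on synthetic data. Everything below is an invented stand-in: the sample size, the deliberately weak signal, and the role of column 0 as "objective sleep efficiency" are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Synthetic stand-in: 38 candidate correlates; column 0 plays the role of
# objective sleep efficiency and carries most of a deliberately weak signal.
n, p = 500, 38
X = rng.normal(size=(n, p))
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)

# Lasso with a cross-validated penalty performs variable selection ...
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# ... while a random forest orders variables by impurity-based importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]

print("lasso kept columns:", selected)
print("top-ranked variable:", ranking[0])
```

    Running the two methods side by side, as the study did separately for men and women, gives both a sparse selected subset and a full importance ordering for the same candidate set.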

  13. Quality control of the RMS US flood model

    NASA Astrophysics Data System (ADS)

    Jankowfsky, Sonja; Hilberts, Arno; Mortgat, Chris; Li, Shuangcai; Rafique, Farhat; Rajesh, Edida; Xu, Na; Mei, Yi; Tillmanns, Stephan; Yang, Yang; Tian, Ye; Mathur, Prince; Kulkarni, Anand; Kumaresh, Bharadwaj Anna; Chaudhuri, Chiranjib; Saini, Vishal

    2016-04-01

    The RMS US flood model predicts the flood risk in the US with a 30 m resolution for different return periods. The model is designed for the insurance industry to estimate the cost of flood risk for a given location. Different statistical, hydrological and hydraulic models are combined to develop the flood maps for different return periods. A rainfall-runoff and routing model, calibrated with observed discharge data, is run with 10 000 years of stochastic simulated precipitation to create time series of discharge and surface runoff. The 100, 250 and 500 year events are extracted from these time series as forcing for a two-dimensional pluvial and fluvial inundation model. The coupling of all these different models, run over the large area of the US, implies a certain amount of uncertainty. Therefore, special attention is paid to the final quality control of the flood maps. First of all, a thorough quality analysis of the Digital Terrain Model (DTM) and the river network was carried out, as the final quality of the flood maps depends heavily on the DTM quality. Secondly, the simulated 100 year discharge in the major river network (600 000 km) is compared to the 100 year discharge derived using extreme value distribution of all USGS gauges with more than 20 years of peak values (around 11 000 gauges). Thirdly, for each gauge the modelled flood depth is compared to the depth derived from the USGS rating curves. Fourthly, the modelled flood depth is compared to the base flood elevation given in the FEMA flood maps. Fifthly, the flood extent is compared to the FEMA flood extent. Then, for historic events, flood extents and flood depths at given locations are compared. Finally, all the data and spatial layers are uploaded to GeoServer to facilitate the manual investigation of outliers. The feedback from the quality control is used to improve the model and estimate its uncertainty.
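    The second quality check, deriving a 100-year discharge from gauges with 20+ years of annual peaks via an extreme value distribution, can be sketched as below. The Gumbel family and the synthetic peak series are illustrative assumptions (flood-frequency practice often uses other distributions, e.g. log-Pearson III); the T-year event is simply the (1 - 1/T) quantile of the fitted distribution.

```python
import numpy as np
from scipy import stats

# Stand-in for a USGS gauge record: 40 annual peak discharges (m^3/s).
annual_peaks = stats.gumbel_r.rvs(loc=800.0, scale=150.0, size=40,
                                  random_state=42)

# Fit an extreme value (Gumbel) distribution to the peaks, then read the
# T-year event off the fitted distribution as its (1 - 1/T) quantile.
loc, scale = stats.gumbel_r.fit(annual_peaks)

def return_level(T):
    return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

q100, q500 = return_level(100), return_level(500)
print(f"100-year: {q100:.0f} m^3/s, 500-year: {q500:.0f} m^3/s")
```

    Repeating this per gauge gives the independent benchmark against which the simulated 100-year discharge can be compared.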

  14. Seizure threshold increases can be predicted by EEG quality in right unilateral ultrabrief ECT.

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Waite, Susan; Loo, Colleen K

    2017-12-01

    Increases in seizure threshold (ST) over a course of brief pulse ECT can be predicted by decreases in EEG quality, informing ECT dose adjustment to maintain adequate supra-threshold dosing. ST increases also occur over a course of right unilateral ultrabrief (RUL UB) ECT, but no data exist on the relationship between ST increases and EEG indices. This study (n = 35) investigated if increases in ST over RUL UB ECT treatments could be predicted by a decline in seizure quality. ST titration was performed at ECT session one and seven, with treatment dosing maintained stable (at 6-8 times ST) in intervening sessions. Seizure quality indices (slow-wave onset, mid-ictal amplitude, regularity, stereotypy, and post-ictal suppression) were manually rated at the first supra-threshold treatment, and last supra-threshold treatment before re-titration, using a structured rating scale, by a single trained rater blinded to the ECT session being rated. Twenty-one subjects (60%) had a ST increase. The association between ST changes and EEG quality indices was analysed by logistic regression, yielding a significant model (p < 0.001). Initial ST (p < 0.05) and percentage change in mid-ictal amplitude (p < 0.05) were significant predictors of change in ST. Percentage change in post-ictal suppression reached trend level significance (p = 0.065). Increases in ST over a RUL UB ECT course may be predicted by decreases in seizure quality, specifically decline in mid-ictal amplitude and potentially in post-ictal suppression. Such EEG indices may be able to inform when dose adjustments are necessary to maintain adequate supra-threshold dosing in RUL UB ECT.
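    The logistic model the abstract reports (seizure-threshold increase regressed on initial ST and percentage change in mid-ictal amplitude) can be sketched on invented data. The sample size (n=200 rather than the study's 35, for a stable fit), the effect sizes, and the units are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented data: initial titrated threshold and % change in mid-ictal
# amplitude, generating ST increases in roughly 60% of subjects.
n = 200
initial_st = rng.normal(50, 15, n)      # initial ST, assumed charge units
amp_change = rng.normal(-10, 20, n)     # % change; negative = decline
logit = 0.03 * initial_st - 0.08 * amp_change - 2.0
st_increased = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([initial_st, amp_change])
model = LogisticRegression().fit(X, st_increased)
coef_st, coef_amp = model.coef_[0]

# A decline in amplitude (more negative change) should raise the predicted
# odds of a threshold increase, i.e. the amplitude coefficient is negative.
print("initial-ST coef:", round(coef_st, 3),
      "amplitude-change coef:", round(coef_amp, 3))
```

    The sign and size of the fitted coefficients are what would inform dose-adjustment decisions in practice.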

  15. Advanced Water Quality Modelling in Marine Systems: Application to the Wadden Sea, the Netherlands

    NASA Astrophysics Data System (ADS)

    Boon, J.; Smits, J. G.

    2006-12-01

    There is an increasing demand for knowledge and models arising from water management in relation to water quality, sediment quality (ecology) and sediment accumulation (ecomorphology). Models for sediment diagenesis and erosion recently developed or incorporated by Delft Hydraulics integrate the relevant physical, (bio)chemical and biological processes for the sediment-water exchange of substances. The aim of the diagenesis models is the prediction of both sediment quality and the return fluxes of substances such as nutrients and micropollutants to the overlying water. The resulting so-called DELWAQ-G model is a new, generic version of the water and sediment quality model of the DELFT3D framework. One set of generic water quality process formulations is used to calculate process rates in both water and sediment compartments. DELWAQ-G involves the explicit simulation of sediment layers in the water quality model with state-of-the-art process kinetics. The local conditions in a water layer or sediment layer, such as the dissolved oxygen concentration, determine if and how individual processes are expressed. New processes were added for sulphate, sulphide, methane and the distribution of the electron-acceptor demand over dissolved oxygen, nitrate, sulphate and carbon dioxide. DELWAQ-G also includes the dispersive and advective transport processes in the sediment and across the sediment-water interface. DELWAQ-G has been applied to the Wadden Sea, a very dynamic tidal and ecologically active estuary with complex hydrodynamic behaviour located in the north of the Netherlands. The predicted profiles in the sediment reflect the typical interactions of diagenesis processes.

  16. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

    We evaluate whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
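    A toy channelized Hotelling observer (CHO) illustrates the basic mechanics behind the model observers compared above: a channel bank reduces each image to a few responses, a Hotelling template is built in channel space, and detectability is summarized by d'. Real CHOs use Laguerre-Gauss or Gabor channels over 2D/3D image data; the random channels and 1D "images" here are purely illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

# Random channel bank over 1D "images"; everything here is illustrative.
dim, n_ch, n_train = 256, 10, 500
channels = rng.normal(size=(dim, n_ch)) / np.sqrt(dim)
signal = np.zeros(dim)
signal[100:110] = 1.0                  # known, fixed signal profile

v_n = rng.normal(size=(n_train, dim)) @ channels             # noise-only
v_s = (rng.normal(size=(n_train, dim)) + signal) @ channels  # signal-present

# Hotelling template in channel space: w = S^-1 (mean_signal - mean_noise)
S = 0.5 * (np.cov(v_n, rowvar=False) + np.cov(v_s, rowvar=False))
w = np.linalg.solve(S, v_s.mean(axis=0) - v_n.mean(axis=0))

# Detectability index d' from template responses on fresh test images.
t_n = (rng.normal(size=(1000, dim)) @ channels) @ w
t_s = ((rng.normal(size=(1000, dim)) + signal) @ channels) @ w
d_prime = (t_s.mean() - t_n.mean()) / np.sqrt(0.5 * (t_s.var() + t_n.var()))
print("channelized Hotelling d':", round(float(d_prime), 2))
```

    The foveated variant proposed in the abstract additionally makes the channel responses depend on the signal's eccentricity relative to fixation, which this sketch does not attempt.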

  17. Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data

    NASA Astrophysics Data System (ADS)

    Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.

    2011-11-01

    Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space by time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM 2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.

  18. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    NASA Astrophysics Data System (ADS)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution mainly due to its role in the greenhouse gas emission. Ozone is produced by photochemical processes which contain nitrogen oxides and volatile organic compounds in the lower atmospheric level. Therefore, monitoring and controlling the quality of air in the urban environment is very important due to the public health care. However, air quality prediction is a highly complex and non-linear process; usually several attributes have to be considered. Artificial intelligent (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy inference approach (ANFIS) to determine the influence of peripheral factors on air quality and pollution which is an arising problem due to ozone level in Jeddah city. The concentration of ozone level was considered as a factor to predict the Air Quality (AQ) under the atmospheric conditions. Using Air Quality Standards of Saudi Arabia, ozone concentration level was modelled by employing certain factors such as; nitrogen oxide (NOx), atmospheric pressure, temperature, and relative humidity. Hence, an ANFIS model was developed to observe the ozone concentration level and the model performance was assessed by testing data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of Kingdom of Saudi Arabia. The outcomes of ANFIS model were re-assessed by fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone level and is a reliable approach to produce more genuine outcomes.

  19. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
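    The type I / type II bookkeeping against a single-sample standard reduces to a simple comparison. Here the "current protocol" is represented by a persistence rule (yesterday's culture result manages today's beach), which mirrors why culture-based monitoring produces both error types; the simulated concentrations are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
STANDARD = 235  # E. coli CFU/100 ml, single-sample standard

# Invented season of daily FIB concentrations; culturing takes ~a day,
# so yesterday's result effectively drives today's open/close decision.
true_fib = rng.lognormal(mean=4.5, sigma=1.0, size=365)
persistence = np.roll(true_fib, 1)

exceed = true_fib > STANDARD           # water actually unacceptable
closed = persistence > STANDARD        # management decision

type_i = int(np.sum(closed & ~exceed))   # closed despite acceptable water
type_ii = int(np.sum(~closed & exceed))  # open despite high FIB (exposure)
print(f"type I errors: {type_i}, type II errors: {type_ii}")
```

    Swapping the persistence rule for a predictive model's output, and the default standard for a site-specific one, is exactly the comparison the study carries out across its four monitoring approaches.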

  20. Predicting readmission risk with institution-specific prediction models.

    PubMed

    Yu, Shipeng; Farooq, Faisal; van Esbroeck, Alexander; Fung, Glenn; Anand, Vikram; Krishnapuram, Balaji

    2015-10-01

    The ability to predict patient readmission risk is extremely valuable for hospitals, especially under the Hospital Readmission Reduction Program of the Center for Medicare and Medicaid Services which went into effect starting October 1, 2012. There is a plethora of work in the literature that deals with developing readmission risk prediction models, but most of them do not have sufficient prediction accuracy to be deployed in a clinical setting, partly because different hospitals may have different characteristics in their patient populations. We propose a generic framework for institution-specific readmission risk prediction, which takes patient data from a single institution and produces a statistical risk prediction model optimized for that particular institution and, optionally, for a specific condition. This provides great flexibility in model building, and is also able to provide institution-specific insights in its readmitted patient population. We have experimented with classification methods such as support vector machines, and prognosis methods such as the Cox regression. We compared our methods with industry-standard methods such as the LACE model, and showed the proposed framework is not only more flexible but also more effective. We applied our framework to patient data from three hospitals, and obtained some initial results for heart failure (HF), acute myocardial infarction (AMI), pneumonia (PN) patients as well as patients with all conditions. On Hospital 2, the LACE model yielded AUC 0.57, 0.56, 0.53 and 0.55 for AMI, HF, PN and All Cause readmission prediction, respectively, while the proposed model yielded 0.66, 0.65, 0.63, 0.74 for the corresponding conditions, all significantly better than the LACE counterpart. The proposed models that leverage all features at discharge time are more accurate than the models that only leverage features at admission time (0.66 vs. 0.61 for AMI, 0.65 vs. 0.61 for HF, 0.63 vs. 0.56 for PN, 0.74 vs. 0.60 for All Cause).
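    The comparison pattern above, a generic index versus an institution-specific model scored by AUC on held-out patients, can be sketched as follows. The single noisy feature standing in for a LACE-style score, the random-forest stand-in for the institution-specific model, and all data are synthetic assumptions, not the paper's methods or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)

# Invented cohort: 20 features, 5 of which drive readmission risk.
n = 2000
X = rng.normal(size=(n, 20))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single noisy feature stands in for a generic LACE-style index.
auc_generic = roc_auc_score(y_te, X_te[:, 0])

# The institution-specific model learns from all available features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc_local = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

print(f"generic AUC: {auc_generic:.2f}, "
      f"institution-specific AUC: {auc_local:.2f}")
```

    The gap between the two AUCs is the kind of improvement the paper reports when a model is trained on the deploying institution's own data.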

  1. Linear regression models for solvent accessibility prediction in proteins.

    PubMed

    Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2005-04-01

    The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple
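    The regression framing above, including the two SVR metaparameters the abstract singles out, can be sketched with synthetic features. Real predictors derive features from evolutionary profiles over a sliding sequence window; everything below, including the 25% buried/exposed cutoff, is an invented stand-in.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR

rng = np.random.default_rng(11)

# Invented per-residue feature windows with a sparse linear signal;
# RSA is a real value clipped to [0, 1].
n, p = 1000, 40
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p) * (rng.random(p) < 0.3)
rsa = np.clip(0.1 * (X @ w_true) + 0.3 + rng.normal(scale=0.1, size=n), 0, 1)

# epsilon sets the error-insensitive zone; C sets the error penalization:
# the two metaparameters the abstract highlights for tuning.
svr = LinearSVR(epsilon=0.05, C=1.0, max_iter=10000).fit(X, rsa)
ls = LinearRegression().fit(X, rsa)

# Thresholding the real-valued prediction recovers the conventional
# two-class buried/exposed projection (25% RSA cutoff assumed here).
buried = svr.predict(X) < 0.25
print("SVR R^2:", round(svr.score(X, rsa), 2),
      "LS R^2:", round(ls.score(X, rsa), 2),
      "fraction predicted buried:", round(float(buried.mean()), 2))
```

    Both linear models involve only p + 1 parameters each, which is the orders-of-magnitude reduction relative to NN-based predictors that the abstract emphasizes.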

  2. Predictive models for fish assemblages in eastern USA streams: implications for assessing biodiversity

    USGS Publications Warehouse

    Meador, Michael R.; Carlisle, Daren M.

    2009-01-01

    Management and conservation of aquatic systems require the ability to assess biological conditions and identify changes in biodiversity. Predictive models for fish assemblages were constructed to assess biological condition and changes in biodiversity for streams sampled in the eastern United States as part of the U.S. Geological Survey's National Water Quality Assessment Program. Separate predictive models were developed for northern and southern regions. Reference sites were designated using land cover and local professional judgment. Taxonomic completeness was quantified as the ratio of the number of expected native fish species actually observed to the total number of expected native fish species. Models for both regions accurately predicted fish species composition at reference sites with relatively high precision and low bias. In general, species that occurred less frequently than expected (decreasers) tended to prefer riffle areas and larger substrates, such as gravel and cobble, whereas increaser species (occurring more frequently than expected) tended to prefer pools, backwater areas, and vegetated and sand substrates. In the north, the percentage of species identified as increasers and the percentage identified as decreasers were equal, whereas in the south nearly two-thirds of the species examined were identified as decreasers. Predictive models of fish species can provide a standardized indicator for consistent assessments of biological condition at varying spatial scales and critical information for an improved understanding of fish species that are potentially at risk of loss with changing water quality conditions.
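    Taxonomic completeness as an observed-to-expected (O/E) ratio reduces to a small calculation: the model assigns each native species a capture probability at a site, species above a probability cutoff define the expected list, and O counts how many of those were actually observed. The 0.5 cutoff (a common convention, assumed here) and the species names are illustrative.

```python
# O/E taxonomic completeness from per-species capture probabilities.
def taxonomic_completeness(capture_prob, observed, threshold=0.5):
    expected = {sp for sp, p in capture_prob.items() if p >= threshold}
    if not expected:
        return None  # no species expected at this site
    return len(expected & observed) / len(expected)

probs = {"brook trout": 0.9, "white sucker": 0.8,
         "creek chub": 0.6, "longnose dace": 0.3}
seen = {"brook trout", "creek chub", "green sunfish"}

oe = taxonomic_completeness(probs, seen)
print("O/E =", oe)  # 2 of the 3 expected species were observed
```

    Species observed but not expected (here, green sunfish) do not enter the ratio; they are the "increasers" the abstract analyses separately.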

  3. An examination of data quality on QSAR Modeling in regards to the environmental sciences (UNC-CH talk)

    EPA Science Inventory

    The development of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available to...

  4. Rocket exhaust effluent modeling for tropospheric air quality and environmental assessments

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.; Stewart, R. B.

    1977-01-01

    The various techniques for diffusion predictions to support air quality predictions and environmental assessments for aerospace applications are discussed in terms of limitations imposed by atmospheric data. This affords an introduction to the rationale behind the selection of the National Aeronautics and Space Administration (NASA)/Marshall Space Flight Center (MSFC) Rocket Exhaust Effluent Diffusion (REED) program. The models utilized in the NASA/MSFC REED program are explained. This program is then evaluated in terms of some results from a joint MSFC/Langley Research Center/Kennedy Space Center Titan Exhaust Effluent Prediction and Monitoring Program.

  5. Autoregressive spatially varying coefficients model for predicting daily PM2.5 using VIIRS satellite AOT

    NASA Astrophysics Data System (ADS)

    Schliep, E. M.; Gelfand, A. E.; Holland, D. M.

    2015-12-01

    There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the United States motivates the need for advanced statistical models to predict air quality metrics, such as PM2.5, at unobserved sites. Remote sensing technologies have the potential to expand our knowledge of PM2.5 spatial patterns beyond what we can predict from current PM2.5 monitoring networks. Data from satellites have an additional advantage in not requiring extensive emission inventories necessary for most atmospheric models that have been used in earlier data fusion models for air pollution. Statistical models combining monitoring station data with satellite-obtained aerosol optical thickness (AOT), also referred to as aerosol optical depth (AOD), have been proposed in the literature with varying levels of success in predicting PM2.5. The benefit of using AOT is that satellites provide complete gridded spatial coverage. However, the challenges involved with using it in fusion models are (1) the correlation between the two data sources varies both in time and in space, (2) the data sources are temporally and spatially misaligned, and (3) there is extensive missingness in the monitoring data and also in the satellite data due to cloud cover. We propose a hierarchical autoregressive spatially varying coefficients model to jointly model the two data sources, which addresses the foregoing challenges. Additionally, we offer formal model comparison for competing models in terms of model fit and out of sample prediction of PM2.5. The models are applied to daily observations of PM2.5 and AOT in the summer months of 2013 across the conterminous United States. Most notably, during this time period, we find small in-sample improvement incorporating AOT into our autoregressive model but little out-of-sample predictive improvement.

  6. Analysis of Free Modeling Predictions by RBO Aleph in CASP11

    PubMed Central

    Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver

    2015-01-01

    The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue–residue contact prediction by EPC-map and contact–guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: Improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. PMID:26492194

  7. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
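    The single-factor adjustment idea (regress local observations on the regional model's prediction P, then apply the fitted line to regional predictions at unmonitored local sites) can be sketched as below; the bias factor and the storm data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(13)

# Regional model predictions P for 30 locally monitored storms, and local
# observations that are systematically biased relative to the regional model.
P = rng.lognormal(2.0, 0.5, size=30)
local_obs = 1.4 * P + rng.normal(scale=1.0, size=30)

# Single-factor adjustment: fit local_obs = a + b * P by least squares,
# then use the fitted line to adjust predictions at unmonitored sites.
b, a = np.polyfit(P, local_obs, 1)

def adjusted(p):
    return a + b * p

print(f"adjusted prediction = {a:.2f} + {b:.2f} * P")
```

    The other procedures extend this idea by adding local explanatory variables to the regression or by weighting the regional and local predictions.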

  8. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  9. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  10. Predictive Modeling of Risk Factors and Complications of Cataract Surgery

    PubMed Central

    Gaskin, Gregory L; Pershing, Suzann; Cole, Tyler S; Shah, Nigam H

    2016-01-01

    Purpose: To quantify the relationship between aggregated preoperative risk factors and cataract surgery complications, as well as to build a model predicting outcomes at the individual level, given a constellation of demographic, baseline, preoperative, and intraoperative patient characteristics. Setting: Stanford Hospital and Clinics between 1994 and 2013. Design: Retrospective cohort study. Methods: Patients aged 40 or older who received cataract surgery between 1994 and 2013. Risk factors, complications, and demographic information were extracted from the Electronic Health Record (EHR), based on International Classification of Diseases, 9th edition (ICD-9) codes, Current Procedural Terminology (CPT) codes, drug prescription information, and text data mining using natural language processing. We used a bootstrapped least absolute shrinkage and selection operator (LASSO) model to identify highly predictive variables. We built random forest classifiers for each complication to create predictive models. Results: Our data corroborated existing literature on postoperative complications, including the association of intraoperative complications, complex cataract surgery, black race, and/or prior eye surgery with an increased risk of any postoperative complication. We also found a number of other, less well-described risk factors, including systemic diabetes mellitus, young age (<60 years old), and hyperopia, as risk factors for complex cataract surgery and intra- and postoperative complications. Our predictive models based on aggregated risk factors outperformed existing published models. Conclusions: The constellations of risk factors and complications described here can guide new avenues of research and provide specific, personalized risk assessment for a patient considering cataract surgery. The predictive capacity of our models can enable risk stratification of patients, which has utility as a teaching tool as well as for informing quality/value-based reimbursements. PMID:26692059
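    The study's variable-selection step pairs bootstrap resampling with LASSO to find variables that are selected consistently across resamples. The sketch below shows only the resampling-stability idea, with a simple univariate correlation screen standing in for LASSO; the data, threshold, and features are all hypothetical assumptions, not the study's:

```python
import random

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a constant column."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def bootstrap_stability(X, y, threshold=0.6, n_boot=200, seed=1):
    """For each feature, the fraction of bootstrap resamples in which its
    absolute correlation with the outcome exceeds `threshold`."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample rows with replacement
        for j in range(p):
            col = [X[i][j] for i in idx]
            out = [y[i] for i in idx]
            if abs(pearson(col, out)) > threshold:
                counts[j] += 1
    return [c / n_boot for c in counts]

# Hypothetical data: feature 0 tracks the outcome, feature 1 is unrelated.
X = [[0.10, 0.5], [1.00, 0.5], [0.20, 0.9], [0.90, 0.9],
     [0.00, 0.1], [1.10, 0.1], [0.15, 0.7], [0.95, 0.7]]
y = [0, 1, 0, 1, 0, 1, 0, 1]
print(bootstrap_stability(X, y))   # feature 0 is selected far more consistently
```

    Features that survive a large fraction of resamples are the "highly predictive variables" then passed on to the downstream classifiers.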

  11. ProQ3: Improved model quality assessments using Rosetta energy terms

    PubMed Central

    Uziela, Karolis; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne

    2016-01-01

    Quality assessment of protein models using no other information than the structure of the model itself has been shown to be useful for structure prediction. Here, we introduce two novel methods, ProQRosFA and ProQRosCen, inspired by the state-of-the-art method ProQ2 but using a completely different description of a protein model. ProQ2 uses contacts and other features calculated from a model, while the new predictors are based on Rosetta energies: ProQRosFA uses the full-atom energy function that takes into account all atoms, while ProQRosCen uses the coarse-grained centroid energy function. The two new predictors also include residue conservation and terms corresponding to the agreement of a model with predicted secondary structure and surface area, as in ProQ2. We show that the performance of these predictors is on par with ProQ2 and significantly better than that of all other model quality assessment programs. Furthermore, we show that when the input features from all three predictors are combined, the resulting predictor ProQ3 performs better than any of the individual methods. ProQ3, ProQRosFA and ProQRosCen are freely available both as a webserver and as stand-alone programs at http://proq3.bioinfo.se/. PMID:27698390

  12. CodingQuarry: highly accurate hidden Markov model gene prediction in fungal genomes using RNA-seq transcripts.

    PubMed

    Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P

    2015-03-11

    The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or significantly improve ab initio predictions. Generalised hidden Markov models (GHMM) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources is of benefit to fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data informs annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq driven gene predictor tested. 

  13. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD)

    PubMed Central

    Reitsma, Johannes B.; Altman, Douglas G.; Moons, Karel G.M.

    2015-01-01

    Background— Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. Methods— The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results— The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions— To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25561516

  14. Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.

    PubMed

    Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J

    2015-02-01

    The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) an LR model that assumed linearity and additivity (simple LR model), (2) an LR model incorporating restricted cubic splines and interactions (flexible LR model), (3) a support vector machine, (4) a random forest, and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures, or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
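    The comparison metrics named above (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix. A small self-contained sketch, with hypothetical counts for a surgical-morbidity classifier:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall of the morbidity class
        "specificity": tn / (tn + fp),   # recall of the no-morbidity class
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts: 40 true positives, 10 false positives,
# 930 true negatives, 20 false negatives.
m = classification_metrics(tp=40, fp=10, tn=930, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```

    Note that with rare outcomes (as surgical morbidity typically is), accuracy alone can look high for a useless model, which is why the study also reports calibration and discrimination.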

  15. Implementation of Cyber-Physical Production Systems for Quality Prediction and Operation Control in Metal Casting.

    PubMed

    Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin

    2018-05-04

    The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of input into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) fulfill these requirements well. This study deals with the implementation of CPPS in a real factory to predict metal-casting quality and to control operations. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor affecting casting quality, and thus temperature sensors and IoT communication devices were attached to the casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and for data pre-processing, respectively. Several machine learning algorithms, such as decision trees, random forests, artificial neural networks, and support vector machines, were used for quality prediction and compared using the R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating its quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.

  16. Implementation of Cyber-Physical Production Systems for Quality Prediction and Operation Control in Metal Casting

    PubMed Central

    Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin

    2018-01-01

    The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of input into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) fulfill these requirements well. This study deals with the implementation of CPPS in a real factory to predict metal-casting quality and to control operations. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor affecting casting quality, and thus temperature sensors and IoT communication devices were attached to the casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and for data pre-processing, respectively. Several machine learning algorithms, such as decision trees, random forests, artificial neural networks, and support vector machines, were used for quality prediction and compared using the R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating its quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry. PMID:29734699

  17. Acid-base accounting to predict post-mining drainage quality on surface mines.

    PubMed

    Skousen, J; Simmons, J; McDonald, L M; Ziemkiewicz, P

    2002-01-01

    Acid-base accounting (ABA) is an analytical procedure that provides values to help assess the acid-producing and acid-neutralizing potential of overburden rocks prior to coal mining and other large-scale excavations. This procedure was developed by West Virginia University scientists during the 1960s. After the passage of laws requiring an assessment of surface mining on water quality, ABA became a preferred method to predict post-mining water quality, and permitting decisions for surface mines are largely based on the values determined by ABA. To predict the post-mining water quality, the amount of acid-producing rock is compared with the amount of acid-neutralizing rock, and a prediction of the water quality at the site (whether acid or alkaline) is obtained. We gathered geologic and geographic data for 56 mined sites in West Virginia, which allowed us to estimate total overburden amounts, and values were determined for maximum potential acidity (MPA), neutralization potential (NP), net neutralization potential (NNP), and NP to MPA ratios for each site based on ABA. These values were correlated to post-mining water quality from springs or seeps on the mined property. Overburden mass was determined by three methods, with the method used by Pennsylvania researchers showing the most accurate results for overburden mass. A poor relationship existed between MPA and post-mining water quality, NP was intermediate, and NNP and the NP to MPA ratio showed the best prediction accuracy. In this study, NNP and the NP to MPA ratio gave identical water quality prediction results. Therefore, with NP to MPA ratios, values were separated into categories: <1 should produce acid drainage, between 1 and 2 can produce either acid or alkaline water conditions, and >2 should produce alkaline water. On our 56 surface mined sites, NP to MPA ratios varied from 0.1 to 31, and six sites (11%) did not fit the expected pattern using this category approach. Two sites with ratios <1 did not
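    The NP:MPA ratio categories described above lend themselves to a direct sketch. NP and MPA must be in the same units (ABA values are commonly expressed as tons of CaCO3 equivalent per thousand tons of overburden, though the abstract does not state the units); the boundary assignment at exactly 1 and 2 is an assumption, as are the example values:

```python
def predicted_drainage(np_val, mpa):
    """Classify expected post-mining drainage quality from the NP:MPA ratio,
    using the categories given in the abstract. NP and MPA must share units.
    (Assignment of the exact boundary values 1 and 2 is a choice made here.)"""
    ratio = np_val / mpa
    if ratio < 1:
        return "acid"
    elif ratio <= 2:
        return "acid or alkaline"   # indeterminate band
    return "alkaline"

def nnp(np_val, mpa):
    """Net neutralization potential: NP minus MPA."""
    return np_val - mpa

# Hypothetical site: NP = 30, MPA = 10, so NP:MPA = 3 and NNP = 20.
print(predicted_drainage(30, 10), nnp(30, 10))   # → alkaline 20
```

    The study found that this ratio-based categorisation mis-classified 6 of 56 sites (11%), so the bands are a screening tool, not a guarantee.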

  18. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    PubMed

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care likewise draw on new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories, evaluated using traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI dataset was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in accuracy, sensitivity, and specificity.

  19. Models that predict standing crop of stream fish from habitat variables: 1950-85.

    Treesearch

    K.D. Fausch; C.L. Hawkes; M.G. Parsons

    1988-01-01

    We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...

  20. Remote Sensing Characterization of the Urban Landscape for Improvement of Air Quality Modeling

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex and this complexity is not adequately captured in air quality models, particularly the Community Multiscale Air Quality (CMAQ) model that is used to assess whether urban areas are in attainment of EPA air quality standards, primarily for ground-level ozone. This inadequacy of the CMAQ model in responding to the heterogeneous nature of the urban landscape can affect how well the model predicts ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to the meteorology component of the CMAQ model, focusing on the Atlanta, Georgia metropolitan area as a case study. These growth projections include "business as usual" and "smart growth" scenarios out to 2030. The growth projections illustrate the effects of employing urban heat island mitigation strategies, such as increasing tree canopy and albedo across the Atlanta metro area, in moderating ground-level ozone and air temperature, compared to "business as usual" simulations in which heat island mitigation strategies are not applied. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the CMAQ modeling schemes. Use of these data has been found to better characterize low-density suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission, the regional planning agency for the area. This allows the state Environmental Protection agency to evaluate how these

  1. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    PubMed

    Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo

    2017-01-01

    Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationships and thus, obtain higher-quality contact predictions regardless of how many sequence homologs are available for the proteins in question. Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively.
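    The top L and top L/10 long-range accuracies reported above are precision measures over the highest-scoring predicted residue pairs, where L is the protein length. A sketch under the common CASP convention that "long-range" means a sequence separation of at least 24 residues (an assumption; the abstract does not define it), with entirely hypothetical scores and contacts:

```python
def top_k_precision(scores, true_contacts, L, k_frac=1.0, min_sep=24):
    """Precision among the top (L * k_frac) predicted long-range pairs.
    `scores` is a list of (score, (i, j)) tuples; `true_contacts` a set of (i, j)."""
    long_range = [(s, p) for s, p in scores if abs(p[0] - p[1]) >= min_sep]
    long_range.sort(key=lambda t: -t[0])            # highest-scoring pairs first
    k = max(1, int(L * k_frac))
    top = long_range[:k]
    return sum(1 for _, p in top if p in true_contacts) / len(top)

# Hypothetical predicted contact scores; the (4, 10) pair is filtered out
# because its sequence separation is below min_sep.
scores = [(0.9, (1, 30)), (0.8, (2, 40)), (0.7, (3, 50)),
          (0.6, (4, 10)), (0.2, (5, 60))]
true_contacts = {(1, 30), (3, 50)}
print(top_k_precision(scores, true_contacts, L=40, k_frac=0.1))   # → 0.5
```

    With `k_frac=1.0` this gives top-L accuracy; with `k_frac=0.1`, top-L/10.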

  2. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    PubMed Central

    Li, Zhen; Zhang, Renyu

    2017-01-01

    Motivation: Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationships and thus, obtain higher-quality contact predictions regardless of how many sequence homologs are available for the proteins in question. Results: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively.

  3. Development of models to predict early post-transplant recurrence of hepatocellular carcinoma that also integrate the quality and characteristics of the liver graft: A national registry study in China.

    PubMed

    Ling, Qi; Liu, Jimin; Zhuo, Jianyong; Zhuang, Runzhou; Huang, Haitao; He, Xiangxiang; Xu, Xiao; Zheng, Shusen

    2018-04-27

    Donor characteristics and graft quality were recently reported to play an important role in the recurrence of hepatocellular carcinoma after liver transplantation. Our aim was to establish a prognostic model by using both donor and recipient variables. Data of 1,010 adult patients (training/validation: 2/1) undergoing primary liver transplantation for hepatocellular carcinoma were extracted from the China Liver Transplant Registry database and analyzed retrospectively. A multivariate competing risk regression model was developed and used to generate a nomogram predicting the likelihood of post-transplant hepatocellular carcinoma recurrence. Of 673 patients in the training cohort, 70 (10.4%) had hepatocellular carcinoma recurrence with a median recurrence time of 6 months (interquartile range: 4-25 months). Cold ischemia time was the only independent donor prognostic factor for predicting hepatocellular carcinoma recurrence (hazard ratio = 2.234, P = .007). The optimal cutoff value was 12 hours when patients were grouped according to cold ischemia time at 2-hour intervals. Integrating cold ischemia time into the Milan criteria (liver transplantation candidate selection criteria) improved the accuracy for predicting hepatocellular carcinoma recurrence in both training and validation sets (P < .05). A nomogram composed of cold ischemia time, tumor burden, differentiation, and α-fetoprotein level proved to be accurate and reliable in predicting the likelihood of 1-year hepatocellular carcinoma recurrence after liver transplantation. Additionally, donor anti-hepatitis B core antibody positivity, prolonged cold ischemia time, and anhepatic time were linked to the intrahepatic recurrence, whereas older donor age, prolonged donor warm ischemia time, cold ischemia time, and ABO incompatibility were relevant to the extrahepatic recurrence. The graft quality integrated models exhibited considerable predictive accuracy in early hepatocellular carcinoma recurrence risk

  4. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  5. Highway runoff quality models for the protection of environmentally sensitive areas

    NASA Astrophysics Data System (ADS)

    Trenouth, William R.; Gharabaghi, Bahram

    2016-11-01

    This paper presents novel highway runoff quality models using artificial neural networks (ANNs) that take into account site-specific highway traffic and seasonal storm event meteorological factors to predict the event mean concentration (EMC) statistics and mean daily unit area load (MDUAL) statistics of common highway pollutants for the design of roadside ditch treatment systems (RDTS) to protect sensitive receiving environs. A dataset of 940 monitored highway runoff events from fourteen sites located in five countries (Canada, USA, Australia, New Zealand, and China) was compiled and used to develop ANN models for the prediction of highway runoff total suspended solids (TSS) seasonal EMC statistical distribution parameters, as well as the MDUAL statistics for four different heavy metal species (Cu, Zn, Cr and Pb). TSS EMCs are needed to estimate the minimum removal efficiency required of the RDTS to improve highway runoff quality to meet applicable standards, and MDUALs are needed to calculate the minimum required capacity of the RDTS to ensure performance longevity.

  6. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of the Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better-performing predictions receiving higher weights than the worse-performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble.
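    At prediction time, the BMA consensus described above is a weighted average of the member-model predictions, with the weights acting as posterior model probabilities. A minimal sketch with hypothetical streamflow values and fixed illustrative weights (in the study the weights are inferred from likelihood measures, and flows are Box-Cox transformed first; both steps are omitted here):

```python
def bma_mean(predictions, weights):
    """BMA consensus prediction: weighted average of member predictions.
    `predictions` is a list of per-model prediction series; `weights` are the
    posterior model probabilities and must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [sum(w * p for w, p in zip(weights, step))
            for step in zip(*predictions)]

# Three hypothetical hydrologic models predicting streamflow at two time steps.
predictions = [[10.0, 20.0],   # model A
               [12.0, 18.0],   # model B
               [8.0,  25.0]]   # model C
weights = [0.5, 0.3, 0.2]      # illustrative posterior model probabilities
print(bma_mean(predictions, weights))
```

    Better-performing models get larger weights, so they dominate the consensus; the full BMA PDF is the same weighted mixture applied to the member predictive densities rather than to point predictions.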

  7. DRAINMOD-GIS: a lumped parameter watershed scale drainage and water quality model

    Treesearch

    G.P. Fernandez; G.M. Chescheir; R.W. Skaggs; D.M. Amatya

    2006-01-01

    A watershed scale lumped parameter hydrology and water quality model that includes an uncertainty analysis component was developed and tested on a lower coastal plain watershed in North Carolina. Uncertainty analysis was used to determine the impacts of uncertainty in field and network parameters of the model on the predicted outflows and nitrate-nitrogen loads at the...

  8. Computational intelligence models to predict porosity of tablets using minimum features.

    PubMed

    Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander

    2017-01-01

    The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hints at the most important variables within this factor space.
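    The RMSE/NRMSE screening the authors describe can be sketched as follows. This assumes NRMSE is RMSE normalized by the observed range (normalization conventions vary), and the candidate model names are illustrative:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the observed range, as a percentage."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

def screen(models, X_val, y_val):
    """Score each candidate model on held-out data; lowest NRMSE wins."""
    scores = {name: nrmse(y_val, predict(X_val)) for name, predict in models.items()}
    return min(scores, key=scores.get), scores
```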

  9. Cost-effective water quality assessment through the integration of monitoring data and modeling results

    NASA Astrophysics Data System (ADS)

    Lobuglio, Joseph N.; Characklis, Gregory W.; Serre, Marc L.

    2007-03-01

    Sparse monitoring data and error inherent in water quality models make the identification of waters not meeting regulatory standards uncertain. Additional monitoring can be implemented to reduce this uncertainty, but it is often expensive. These costs are currently a major concern, since developing total maximum daily loads, as mandated by the Clean Water Act, will require assessing tens of thousands of water bodies across the United States. This work uses the Bayesian maximum entropy (BME) method of modern geostatistics to integrate water quality monitoring data together with model predictions to provide improved estimates of water quality in a cost-effective manner. This information includes estimates of uncertainty and can be used to aid probabilistic-based decisions concerning the status of a water (i.e., impaired or not impaired) and the level of monitoring needed to characterize the water for regulatory purposes. This approach is applied to the Catawba River reservoir system in western North Carolina as a means of estimating seasonal chlorophyll a concentration. Mean concentration and confidence intervals for chlorophyll a are estimated for 66 reservoir segments over an 11-year period (726 values) based on 219 measured seasonal averages and 54 model predictions. Although the model predictions had a high degree of uncertainty, integration of modeling results via BME methods reduced the uncertainty associated with chlorophyll estimates compared with estimates made solely with information from monitoring efforts. Probabilistic predictions of future chlorophyll levels on one reservoir are used to illustrate the cost savings that can be achieved by less extensive and rigorous monitoring methods within the BME framework. 
While BME methods have been applied in several environmental contexts, employing these methods as a means of integrating monitoring and modeling results, as well as application of this approach to the assessment of surface water monitoring networks
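    BME is a full space/time geostatistical framework, but the central effect reported here, uncertain model output still tightening a monitoring-based estimate, can be illustrated with a much simpler precision-weighted fusion of two independent Gaussian estimates (illustrative only, not the BME equations):

```python
def fuse_gaussian(mu_model, var_model, mu_data, var_data):
    """Precision-weighted combination of two independent Gaussian estimates."""
    precision = 1.0 / var_model + 1.0 / var_data
    var = 1.0 / precision  # fused variance is below both inputs
    mu = var * (mu_model / var_model + mu_data / var_data)
    return mu, var
```

    Even a high-variance model prediction reduces the variance of the combined chlorophyll estimate, which is the intuition behind substituting modeling effort for some monitoring cost.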

  10. A Load-Based Temperature Prediction Model for Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Sobhani, Masoud

    Electric load forecasting, as a basic requirement for the decision-making in power utilities, has been improved in various aspects in the past decades. Many factors may affect the accuracy of the load forecasts, such as data quality, goodness of the underlying model and load composition. Due to the strong correlation between the input variables (e.g., weather and calendar variables) and the load, the quality of input data plays a vital role in forecasting practices. Even if the forecasting model were able to capture most of the salient features of the load, low-quality input data may result in inaccurate forecasts. Most of the data cleansing efforts in the load forecasting literature have been devoted to the load data. Few studies focused on weather data cleansing for load forecasting. This research proposes an anomaly detection method for the temperature data. The method consists of two components: a load-based temperature prediction model and a detection technique. The effectiveness of the proposed method is demonstrated through two case studies: one based on the data from the Global Energy Forecasting Competition 2014, and the other based on the data published by ISO New England. The results show that by removing the detected observations from the original input data, the final load forecast accuracy is enhanced.
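    A minimal sketch of the two-component idea, assuming a simple linear load-temperature relation and a residual z-score rule in place of the thesis's actual prediction model and detection technique:

```python
import numpy as np

def detect_temperature_anomalies(load, temp, z_thresh=3.0):
    """Fit temp ~ load, then flag observations with unusually large residuals."""
    slope, intercept = np.polyfit(load, temp, 1)   # load-based temperature prediction
    residuals = temp - (slope * load + intercept)
    z = (residuals - residuals.mean()) / residuals.std()
    return np.abs(z) > z_thresh                    # boolean mask of suspect temperatures
```

    Flagged observations would be removed (or corrected) before the temperature series is fed to the load forecasting model.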

  11. Application of empirical predictive modeling using conventional and alternative fecal indicator bacteria in eastern North Carolina waters

    USGS Publications Warehouse

    Gonzalez, Raul; Conn, Kathleen E.; Crosswell, Joey; Noble, Rachel

    2012-01-01

    Coastal and estuarine waters are the site of intense anthropogenic influence with concomitant use for recreation and seafood harvesting. Therefore, coastal and estuarine water quality has a direct impact on human health. In eastern North Carolina (NC) there are over 240 recreational and 1025 shellfish harvesting water quality monitoring sites that are regularly assessed. Because of the large number of sites, sampling frequency is often only on a weekly basis. This frequency, along with an 18–24 h incubation time for fecal indicator bacteria (FIB) enumeration via culture-based methods, reduces the efficiency of the public notification process. In states like NC where beach monitoring resources are limited but historical data are plentiful, predictive models may offer an improvement for monitoring and notification by providing real-time FIB estimates. In this study, water samples were collected during 12 dry (n = 88) and 13 wet (n = 66) weather events at up to 10 sites. Statistical predictive models for Escherichia coli (EC), enterococci (ENT), and members of the Bacteroidales group were created and subsequently validated. Our results showed that models for EC and ENT (adjusted R2 were 0.61 and 0.64, respectively) incorporated a range of antecedent rainfall, climate, and environmental variables. The most important variables for EC and ENT models were 5-day antecedent rainfall, dissolved oxygen, and salinity. These models successfully predicted FIB levels over a wide range of conditions with a 3% (EC model) and 9% (ENT model) overall error rate for recreational threshold values and a 0% (EC model) overall error rate for shellfish threshold values. Though modeling of members of the Bacteroidales group had less predictive ability (adjusted R2 were 0.56 and 0.53 for fecal Bacteroides spp. and human Bacteroides spp., respectively), the modeling approach and testing provided information on Bacteroidales ecology. 
This is the first example of a set of successful statistical
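    The general modeling approach, regressing log-transformed FIB concentrations on environmental predictors and checking predictions against a regulatory threshold, can be sketched as follows; the predictor matrix and threshold here are illustrative, not the study's fitted models:

```python
import numpy as np

def fit_fib_model(X, log_fib):
    """Ordinary least squares on log10 FIB concentrations, with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, log_fib, rcond=None)
    return coef

def predict_exceedance(coef, X, log_threshold):
    """Flag samples whose predicted log10 concentration exceeds a regulatory threshold."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef > log_threshold
```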

  12. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning

    PubMed Central

    Jo, ByungWan

    2018-01-01

    The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons were determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R2 and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality. PMID:29561777
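    The PCA step that screens the eight sensed parameters before the ANN can be sketched with an SVD-based projection; this is an illustrative numpy re-implementation, not the AML Studio component the study used:

```python
import numpy as np

def pca_transform(X, n_components):
    """Project centered data onto its leading principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)        # variance fraction per component
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

    The reduced component scores (rather than the raw eight channels) would then serve as inputs to the ANN predicting the mine environment index.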

  13. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning.

    PubMed

    Jo, ByungWan; Khan, Rana Muhammad Asad

    2018-03-21

    The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH₄, CO, SO₂, and H₂S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons were determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R² and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality.

  14. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
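    Steps three and four, fitting models over different input combinations and comparing them with a metric, can be sketched with ordinary least squares and AIC. This is a generic illustration of exhaustive variable selection, not the paper's reliability-specific procedure:

```python
import itertools
import numpy as np

def aic_ols(X, y):
    """AIC of an OLS fit with an intercept, under a Gaussian likelihood."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ coef) ** 2)
    n, k = len(y), A.shape[1]
    return n * np.log(rss / n) + 2 * k  # lower is better

def rank_input_subsets(X, y, names):
    """Fit every non-empty subset of inputs and rank the models by AIC."""
    results = []
    for r in range(1, len(names) + 1):
        for idx in itertools.combinations(range(len(names)), r):
            results.append((aic_ols(X[:, idx], y), [names[i] for i in idx]))
    return sorted(results)
```

    The sorted list plays the role of the prioritized set of model contenders handed to subject matter experts.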

  15. Predictive time-series modeling using artificial neural networks for Linac beam symmetry: an empirical study.

    PubMed

    Li, Qiongge; Chan, Maria F

    2017-01-01

    Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural networks (ANNs) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5-year daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling has advantages over ARMA techniques for accurate and effective applicability in the dosimetry and QA field. © 2016 New York Academy of Sciences.
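    The autoregressive baseline can be sketched as a lagged least-squares fit with one-step-ahead prediction; this is a bare AR(p) illustration rather than the full ARMA or ANN models the study evaluated:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p): regress each value on its p predecessors."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [intercept, lag-p coefficient, ..., lag-1 coefficient]

def forecast_ar(series, coef, p):
    """One-step-ahead prediction from the last p observations."""
    return coef[0] + series[-p:] @ coef[1:]
```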

  16. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking*.

    PubMed

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    To examine the accuracy of the original Mortality Probability Admission Model III, ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy for each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior with area under receiver operating curve (0.88) compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that for Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. Acute
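    The comparison metrics named in the abstract, discrimination via the area under the ROC curve and accuracy via the Brier score, follow standard definitions (the paper's Brier scores are adjusted variants); a compact sketch ignoring rank ties:

```python
import numpy as np

def auc(y_true, p_pred):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(p_pred)
    ranks = np.empty(len(p_pred))
    ranks[order] = np.arange(1, len(p_pred) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def brier(y_true, p_pred):
    """Mean squared difference between predicted probability and observed outcome."""
    return np.mean((p_pred - y_true) ** 2)
```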

  17. The influence of data curation on QSAR Modeling – examining issues of quality versus quantity of data (SOT)

    EPA Science Inventory

    The construction of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available ...

  18. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325

  19. Predictive Model for the Meniscus-Guided Coating of High-Quality Organic Single-Crystalline Thin Films.

    PubMed

    Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric

    2016-09-01

    A model that describes solvent evaporation dynamics in meniscus-guided coating techniques is developed. In combination with a single fitting parameter, it is shown that this formula can accurately predict a processing window for various coating conditions. Organic thin-film transistors (OTFTs), fabricated by a zone-casting setup, indeed show the best performance at the predicted coating speeds with mobilities reaching 7 cm² V⁻¹ s⁻¹. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. PREDICTIVE UNCERTAINTY IN HYDROLOGIC AND WATER QUALITY MODELING: APPROACHES, APPLICATION TO ENVIRONMENTAL MANAGEMENT, AND FUTURE CHALLENGES

    EPA Science Inventory

    Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...

  1. Predicting human chronically paralyzed muscle force: a comparison of three mathematical models.

    PubMed

    Frey Law, Laura A; Shields, Richard K

    2006-03-01

    Chronic spinal cord injury (SCI) induces detrimental musculoskeletal adaptations that adversely affect health status, ranging from muscle paralysis and skin ulcerations to osteoporosis. SCI rehabilitative efforts may increasingly focus on preserving the integrity of paralyzed extremities to maximize health quality using electrical stimulation for isometric training and/or functional activities. Subject-specific mathematical muscle models could prove valuable for predicting the forces necessary to achieve therapeutic loading conditions in individuals with paralyzed limbs. Although numerous muscle models are available, three modeling approaches were chosen that can accommodate a variety of stimulation input patterns. To our knowledge, no direct comparisons between models using paralyzed muscle have been reported. The three models include 1) a simple second-order linear model with three parameters and 2) two six-parameter nonlinear models (a second-order nonlinear model and a Hill-derived nonlinear model). Soleus muscle forces from four individuals with complete, chronic SCI were used to optimize each model's parameters (using an increasing and decreasing frequency ramp) and to assess the models' predictive accuracies for constant and variable (doublet) stimulation trains at 5, 10, and 20 Hz in each individual. Despite the large differences in modeling approaches, the mean predicted force errors differed only moderately (8-15% error; P=0.0042), suggesting physiological force can be adequately represented by multiple mathematical constructs. The two nonlinear models predicted specific force characteristics better than the linear model in nearly all stimulation conditions, with minimal differences between the two nonlinear models. Either nonlinear mathematical model can provide reasonable force estimates; individual application needs may dictate the preferred modeling strategy.

  2. Human-model hybrid Korean air quality forecasting system.

    PubMed

    Chang, Lim-Seok; Cho, Ara; Park, Hyunju; Nam, Kipyo; Kim, Deokrae; Hong, Ji-Hyoung; Song, Chang-Keun

    2016-09-01

    The Korean national air quality forecasting system, consisting of the Weather Research and Forecasting, the Sparse Matrix Operator Kernel Emissions, and the Community Modeling and Analysis (CMAQ), commenced on August 31, 2013 with target pollutants of particulate matters (PM) and ozone. Factors contributing to PM forecasting accuracy include CMAQ inputs of meteorological field and emissions, forecasters' capacity, and inherent CMAQ limits. Four numerical experiments were conducted including two global meteorological inputs from the Global Forecast System (GFS) and the Unified Model (UM), two emissions from the Model Intercomparison Study Asia (MICS-Asia) and the Intercontinental Chemical Transport Experiment (INTEX-B) for the Northeast Asia with Clear Air Policy Support System (CAPSS) for South Korea, and data assimilation of the Monitoring Atmospheric Composition and Climate (MACC). Significant PM underpredictions by using both emissions were found for PM mass and major components (sulfate and organic carbon). CMAQ predicts PM2.5 much better than PM10 (NMB of PM2.5: -20~-25%, PM10: -43~-47%). Forecasters' error usually occurred on the day after a high PM event. Once CMAQ fails to predict a high PM event the day before, forecasters are likely to dismiss the model predictions on the next day, which turn out to be true. The best combination of CMAQ inputs is the set of UM global meteorological field, MICS-Asia and CAPSS 2010 emissions with the NMB of -12.3%, the RMSE of 16.6 μg/m(3) and the R(2) of 0.68. By using MACC data as an initial and boundary condition, the performance skill of CMAQ would be improved, especially in the case of undefined coarse emission. A variety of methods such as ensemble and data assimilation are considered to improve further the accuracy of air quality forecasting, especially for high PM events, to be comparable to that for all cases. 
The growing utilization of the air quality forecast induced the public strongly to demand that the accuracy of the
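    The evaluation statistics quoted above (NMB, RMSE, R(2)) follow standard definitions; a compact sketch:

```python
import numpy as np

def nmb(obs, pred):
    """Normalized mean bias, as a percentage of the observed total."""
    return 100.0 * np.sum(pred - obs) / np.sum(obs)

def rmse(obs, pred):
    """Root-mean-square error, in the units of the observations."""
    return np.sqrt(np.mean((pred - obs) ** 2))

def r2(obs, pred):
    """Squared Pearson correlation between observations and predictions."""
    return np.corrcoef(obs, pred)[0, 1] ** 2
```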

  3. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    Van Hooidonk, R. J.

    2011-12-01

    Future widespread coral bleaching and subsequent mortality have been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from GCMs' 20th century simulations to be included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and it will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate model derived predictions of coral reef bleaching and it can be used to better characterize the errors and uncertainty in predictions.
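    The Peirce Skill Score used as the objective measure of forecast quality reduces to hit rate minus false-alarm rate over a 2×2 contingency table of predicted versus observed bleaching events; a minimal sketch:

```python
import numpy as np

def peirce_skill_score(observed, predicted):
    """PSS = hit rate - false alarm rate for binary event forecasts."""
    observed = np.asarray(observed, bool)
    predicted = np.asarray(predicted, bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    correct_negatives = np.sum(~predicted & ~observed)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate
```

    A perfect forecast scores 1, while a constant or random forecast scores 0, which is why PSS is a useful objective yardstick across frequency-band experiments.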

  4. Predicting rheological behavior and baking quality of wheat flour using a GlutoPeak test.

    PubMed

    Rakita, Slađana; Dokić, Ljubica; Dapčević Hadnađev, Tamara; Hadnađev, Miroslav; Torbica, Aleksandra

    2018-06-01

    The purpose of this research was to gain an insight into the ability of the GlutoPeak instrument to predict flour functionality for bread making, as well as to determine which of the GlutoPeak parameters show the best potential in predicting dough rheological behavior and baking performance. Obtained results showed that GlutoPeak parameters correlated better with the indices of extensional rheological tests which consider constant dough hydration than with those which were performed at constant dough consistency. The GlutoPeak test showed that it is suitable for discriminating wheat varieties of good quality from those of poor quality, while the most discriminating index was maximum torque (MT). Moreover, an MT value of 50 BU and an aggregation energy value of 1,300 GPU were set as limits of wheat flour quality. The backward stepwise regression analysis revealed that a high-level prediction of indices which are highly affected by protein content (gluten content, flour water absorption, and dough tenacity) was achieved by using the GlutoPeak indices. Concerning bread quality, a moderate prediction of specific loaf volume and a strong prediction of breadcrumb textural properties were accomplished by using the GlutoPeak parameters. The presented results indicated that the application of this quick test in the wheat transformation chain for the assessment of baking quality would be useful. The baking test is considered the most reliable method for assessing wheat-baking quality. However, the baking test requires trained staff, time, and a large sample amount. These disadvantages have led to a growing demand to develop new rapid tests which would enable prediction of baked product quality with a limited flour sample size. Therefore, we tested the possibility of using a GlutoPeak tester to predict loaf volume and breadcrumb textural properties. Discrimination of wheat varieties according to quality with a restricted flour amount was also examined. Furthermore, we proposed the limit

  5. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
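    The hybrid idea, a log-log rating curve corrected by an autoregressive model of its own residuals, can be sketched as below; the AR(1) carry-over term is a simplified stand-in for the ARMA(1,2) error process identified in the study, and the coefficient is illustrative:

```python
import numpy as np

def fit_rating_curve(flow, turbidity):
    """Linear regression of log-turbidity on log-flow."""
    slope, intercept = np.polyfit(np.log(flow), np.log(turbidity), 1)
    return slope, intercept

def predict_turbidity(flow_now, slope, intercept, last_residual=0.0, phi=0.5):
    """Rating-curve estimate plus an AR(1) carry-over of the most recent log-residual."""
    log_pred = slope * np.log(flow_now) + intercept + phi * last_residual
    return np.exp(log_pred)
```

    Carrying yesterday's residual forward is what lets the hybrid model track inter-event variability that a static rating curve misses.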

  6. Use of predictive models and rapid methods to nowcast bacteria levels at coastal beaches

    USGS Publications Warehouse

    Francy, Donna S.

    2009-01-01

    The need for rapid assessments of recreational water quality to better protect public health is well accepted throughout the research and regulatory communities. Rapid analytical methods, such as quantitative polymerase chain reaction (qPCR) and immunomagnetic separation/adenosine triphosphate (ATP) analysis, are being tested but are not yet ready for widespread use.Another solution is the use of predictive models, wherein variable(s) that are easily and quickly measured are surrogates for concentrations of fecal-indicator bacteria. Rainfall-based alerts, the simplest type of model, have been used by several communities for a number of years. Deterministic models use mathematical representations of the processes that affect bacteria concentrations; this type of model is being used for beach-closure decisions at one location in the USA. Multivariable statistical models are being developed and tested in many areas of the USA; however, they are only used in three areas of the Great Lakes to aid in notifications of beach advisories or closings. These “operational” statistical models can result in more accurate assessments of recreational water quality than use of the previous day's Escherichia coli (E. coli)concentration as determined by traditional culture methods. The Ohio Nowcast, at Huntington Beach, Bay Village, Ohio, is described in this paper as an example of an operational statistical model. Because predictive modeling is a dynamic process, water-resource managers continue to collect additional data to improve the predictive ability of the nowcast and expand the nowcast to other Ohio beaches and a recreational river. Although predictive models have been shown to work well at some beaches and are becoming more widely accepted, implementation in many areas is limited by funding, lack of coordinated technical leadership, and lack of supporting epidemiological data.

  7. Model-based predictions for dopamine.

    PubMed

    Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael

    2018-04-01

    Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.

  8. Three-dimensional numerical modeling of water quality and sediment-associated processes in natural lakes

    USDA-ARS?s Scientific Manuscript database

    This chapter presents the development and application of a three-dimensional water quality model for predicting the distributions of nutrients, phytoplankton, dissolved oxygen, etc., in natural lakes. In this model, the computational domain was divided into two parts: the water column and the bed se...

  9. Diet Quality Scores and Prediction of All-Cause, Cardiovascular and Cancer Mortality in a Pan-European Cohort Study

    PubMed Central

    Lassale, Camille; Gunter, Marc J.; Romaguera, Dora; Peelen, Linda M.; Van der Schouw, Yvonne T.; Beulens, Joline W. J.; Freisling, Heinz; Muller, David C.; Ferrari, Pietro; Huybrechts, Inge; Fagherazzi, Guy; Boutron-Ruault, Marie-Christine; Affret, Aurélie; Overvad, Kim; Dahm, Christina C.; Olsen, Anja; Roswall, Nina; Tsilidis, Konstantinos K.; Katzke, Verena A.; Kühn, Tilman; Buijsse, Brian; Quirós, José-Ramón; Sánchez-Cantalejo, Emilio; Etxezarreta, Nerea; Huerta, José María; Barricarte, Aurelio; Bonet, Catalina; Khaw, Kay-Tee; Key, Timothy J.; Trichopoulou, Antonia; Bamia, Christina; Lagiou, Pagona; Palli, Domenico; Agnoli, Claudia; Tumino, Rosario; Fasanelli, Francesca; Panico, Salvatore; Bueno-de-Mesquita, H. Bas; Boer, Jolanda M. A.; Sonestedt, Emily; Nilsson, Lena Maria; Renström, Frida; Weiderpass, Elisabete; Skeie, Guri; Lund, Eiliv; Moons, Karel G. M.; Riboli, Elio; Tzoulaki, Ioanna

    2016-01-01

    Scores of overall diet quality have received increasing attention in relation to disease aetiology; however, their value in risk prediction has been little examined. The objective was to assess and compare the association and predictive performance of 10 diet quality scores on 10-year risk of all-cause, CVD and cancer mortality in 451,256 healthy participants in the European Prospective Investigation into Cancer and Nutrition, followed up for a median of 12.8 years. All dietary scores studied showed significant inverse associations with all outcomes. The range of HRs (95% CI) in the top vs. lowest quartile of dietary scores in a composite model including non-invasive factors (age, sex, smoking, body mass index, education, physical activity and study centre) was 0.75 (0.72–0.79) to 0.88 (0.84–0.92) for all-cause, 0.76 (0.69–0.83) to 0.84 (0.76–0.92) for CVD and 0.78 (0.73–0.83) to 0.91 (0.85–0.97) for cancer mortality. Models with dietary scores alone showed low discrimination, but composite models also including age, sex and other non-invasive factors showed good discrimination and calibration, which varied little between the different diet scores examined. The mean C-statistics of the full models were 0.73, 0.80 and 0.71 for all-cause, CVD and cancer mortality, respectively. Dietary scores have poor predictive performance for 10-year mortality risk when used in isolation but display good predictive ability in combination with other non-invasive common risk factors. PMID:27409582

  10. APPLICATION OF A WATER QUALITY ASSESSMENT MODELING SYSTEM AT A SUPERFUND SITE

    EPA Science Inventory

    Water quality modeling and related exposure assessments at a Superfund site, Silver Bow Creek-Clark Fork River in Montana, demonstrate the capability to predict the fate of mining waste pollutants in the environment. A linked assessment system--consisting of hydrology and erosion, r...

  11. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART I--METEOROLOGICAL PREDICTIONS. (R825260)

    EPA Science Inventory

    In this study, the concept of scale analysis is applied to evaluate two state-of-science meteorological models, namely MM5 and RAMS3b, currently being used to drive regional-scale air quality models. To this end, seasonal time series of observations and predictions for temperatur...

  12. Predicting health-related quality of life in cancer patients receiving chemotherapy: a structural equation approach using the self-control model.

    PubMed

    Park, Yu-Ri; Park, Eun-Young; Kim, Jung-Hee

    2017-11-09

    According to the self-control model, self-control works as a protective factor and a psychological resource. Although an understanding of the effects of peripheral neuropathy on quality of life is important to healthcare professionals, previous studies do not facilitate broad comprehension in this regard. The purpose of this cross-sectional study was to test the multidimensional assumptions of quality of life of patients with cancer, with a focus on their self-control. A structural equation model was tested on patients with cancer receiving chemotherapy at the oncology clinic of a university hospital. Structural equation modeling allows the researcher to find empirical evidence by testing both a measurement model and a structural model. The model comprised three variables: self-control, health-related quality of life, and chemotherapy-induced peripheral neuropathy. Among these, self-control was the endogenous and mediating variable. The proposed models showed good fit indices. Self-control partially mediated the relationship between chemotherapy-induced peripheral neuropathy and quality of life. The physical symptoms of peripheral neuropathy influenced health-related quality of life both directly and indirectly. Self-control plays a significant role in the protection and promotion of physical and mental health in various stressful situations, and thus, as a psychological resource, it plays a significant role in quality of life. Our results can be used to develop a quality-of-life model for patients receiving chemotherapy and as a theoretical foundation for the development of appropriate nursing interventions.
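    The partial mediation reported here can be illustrated with a simple regression-based sketch (a simplification of full structural equation modeling, run on synthetic data with invented effect sizes): the total effect of neuropathy on quality of life shrinks, but does not vanish, once self-control enters the model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Synthetic standardized scores; all effect sizes are hypothetical
neuropathy = rng.normal(0.0, 1.0, n)
self_control = -0.5 * neuropathy + rng.normal(0.0, 1.0, n)              # neuropathy erodes self-control
qol = -0.4 * neuropathy + 0.6 * self_control + rng.normal(0.0, 1.0, n)  # both affect quality of life

def ols_coefs(y, *cols):
    """Least-squares coefficients [intercept, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

c_total = ols_coefs(qol, neuropathy)[1]                 # total effect on QoL
c_direct = ols_coefs(qol, neuropathy, self_control)[1]  # direct effect, self-control held fixed
print(f"total={c_total:.2f}  direct={c_direct:.2f}")
```

    Partial mediation shows up as a direct effect that is weaker than the total effect but still nonzero.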

  13. Development of Techniques and Data for Evaluating Ride Quality, Volume III : Guidelines for Development of Ride-Quality Models and Their Applications

    DOT National Transportation Integrated Search

    1978-02-01

    Ride-quality models for city buses and intercity trains are presented and discussed in terms of their ability to predict passenger comfort and ride acceptability. The report, the last of three volumes, contains procedural guidelines to be employed by...

  14. Worldwide multi-model intercomparison of clear-sky solar irradiance predictions

    NASA Astrophysics Data System (ADS)

    Ruiz-Arias, Jose A.; Gueymard, Christian A.; Cebecauer, Tomas

    2017-06-01

    Accurate modeling of solar radiation in the absence of clouds is highly important because solar power production peaks during cloud-free situations. The conventional validation approach for clear-sky solar radiation models relies on the comparison between model predictions and ground observations. Therefore, this approach is limited to locations where high-quality ground observations are available, which are scarce worldwide. As a consequence, many areas of interest for, e.g., solar energy development still remain sub-validated. Here, a worldwide intercomparison of the global horizontal irradiance (GHI) and direct normal irradiance (DNI) calculated by a number of appropriate clear-sky solar radiation models is proposed, without direct intervention of any weather or solar radiation ground-based observations. The model inputs are all gathered from atmospheric reanalyses covering the globe. The model predictions are compared to each other and only their relative disagreements are quantified. The largest differences between model predictions are found over central and northern Africa, the Middle East, and all over Asia. This coincides with areas of high aerosol optical depth and highly varying aerosol size distribution. Overall, the differences in modeled DNI are found to be about twice as large as those for GHI. It is argued that the prevailing weather regimes (most importantly, aerosol conditions) over regions exhibiting substantial divergences are not adequately parameterized by all models. Further validation and scrutiny using conventional methods based on ground observations should be pursued in priority over those specific regions to correctly evaluate the performance of clear-sky models, and to select those that can be recommended for solar concentrating applications in particular.

  15. Seminal quality prediction using data mining methods.

    PubMed

    Sahoo, Anoop J; Kumar, Yugal

    2014-01-01

    Nowadays, new classes of diseases known as lifestyle diseases have come into existence. The main reasons behind these diseases are changes in lifestyle, such as alcohol consumption, smoking, and food habits. Studies of lifestyle diseases have found that fertility rates (sperm quantity) in men have decreased considerably in the last two decades. Lifestyle as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality and fertility rate in men using data mining methods. Five artificial intelligence techniques, Multilayer Perceptron (MLP), Decision Tree (DT), Naive Bayes (Kernel), Support Vector Machine plus Particle Swarm Optimization (SVM+PSO), and Support Vector Machine (SVM), were applied to a fertility dataset to evaluate seminal quality and to predict whether a person has a normal or an altered fertility rate. Eight feature selection techniques, support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), SVM+PSO, principal component analysis (PCA), the chi-square test, correlation, and the T-test, were used to identify the features that most affect seminal quality. These techniques were applied to a fertility dataset containing 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides higher accuracy and area under the curve (AUC) (94% & 0.932) than multilayer perceptron (MLP) (92% & 0.728), SVM (91% & 0.758), Naive Bayes (Kernel) (89% & 0.850), and Decision Tree (89% & 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e. how to select the features which are more important for prediction of
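    As a hedged illustration of the classification step, the sketch below trains an RBF-kernel SVM with 5-fold cross-validation on a synthetic stand-in for the 100-instance, nine-attribute fertility dataset; the features, labels, and effect structure are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 100
# Synthetic stand-in for the nine lifestyle/environmental attributes
X = rng.normal(0.0, 1.0, (n, 9))
# Invented label: "altered" fertility driven mainly by two of the features
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0.0, 0.5, n) > 0.0).astype(int)

clf = SVC(kernel="rbf", C=1.0)  # RBF-kernel support vector machine
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"5-fold CV accuracy: {acc:.2f}")
```

    A PSO-tuned variant, as in the paper, would wrap this in a search over C and the kernel width rather than using fixed values.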

  16. [Risk Prediction Using Routine Data: Development and Validation of Multivariable Models Predicting 30- and 90-day Mortality after Surgical Treatment of Colorectal Cancer].

    PubMed

    Crispin, Alexander; Strahwald, Brigitte; Cheney, Catherine; Mansmann, Ulrich

    2018-06-04

    Quality control, benchmarking, and pay for performance (P4P) require valid indicators and statistical models that allow adjustment for differences in the risk profiles of the patient populations of the respective institutions. Using hospital remuneration data for measuring quality and modelling patient risks has been criticized by clinicians. Here we explore the potential of prediction models for 30- and 90-day mortality after colorectal cancer surgery based on routine data. Full census of a major statutory health insurer. Surgical departments throughout the Federal Republic of Germany. 4283 and 4124 insurants with major surgery for treatment of colorectal cancer during 2013 and 2014, respectively. Age, sex, primary and secondary diagnoses, and tumor locations as recorded in the hospital remuneration data according to §301 SGB V. 30- and 90-day mortality. Elixhauser comorbidities, Charlson conditions, and Charlson scores were generated from the ICD-10 diagnoses. Multivariable prediction models were developed using a penalized logistic regression approach (logistic ridge regression) in a derivation set (patients treated in 2013). Calibration and discrimination of the models were assessed in an internal validation sample (patients treated in 2014) using calibration curves, Brier scores, receiver operating characteristic (ROC) curves, and the areas under the ROC curves (AUC). The 30- and 90-day mortality rates in the derivation set were 5.7 and 8.4%, respectively; the corresponding values in the validation sample were 5.9% and, again, 8.4%. Models based on Elixhauser comorbidities exhibited the highest discriminatory power, with AUC values of 0.804 (95% CI: 0.776-0.832) and 0.805 (95% CI: 0.782-0.828) for 30- and 90-day mortality. The Brier scores for these models were 0.050 (95% CI: 0.044-0.056) and 0.067 (95% CI: 0.060-0.074), similar to the models based on Charlson conditions. Regardless of the model, low predicted probabilities were well calibrated, while
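    The modeling strategy described, ridge-penalized logistic regression developed on one year of data and validated on the next with AUC and Brier score, can be sketched as follows. All data here are simulated, and the two predictors (age and a comorbidity count) are invented stand-ins for the Elixhauser/Charlson variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(3)

def simulate(n):
    """Simulated routine data: age and a comorbidity count drive mortality."""
    age = rng.normal(70.0, 10.0, n)
    comorb = rng.poisson(2.0, n)
    logit = 0.05 * (age - 70.0) + 0.5 * comorb - 3.5
    died = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
    return np.column_stack([age, comorb]), died

X_dev, y_dev = simulate(4000)   # derivation set ("2013 patients")
X_val, y_val = simulate(4000)   # internal validation set ("2014 patients")

# Ridge-penalized (L2) logistic regression; C is the inverse penalty strength
model = LogisticRegression(penalty="l2", C=1.0).fit(X_dev, y_dev)
p = model.predict_proba(X_val)[:, 1]

auc = roc_auc_score(y_val, p)
brier = brier_score_loss(y_val, p)
print(f"AUC = {auc:.3f}, Brier = {brier:.3f}")
```

    Temporal splitting, fitting on one calendar year and validating on the next, mimics how such a model would actually be deployed, which is why it is preferred here over random cross-validation.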

  17. Implementation of a WRF-CMAQ Air Quality Modeling System in Bogotá, Colombia

    NASA Astrophysics Data System (ADS)

    Nedbor-Gross, R.; Henderson, B. H.; Pachon, J. E.; Davis, J. R.; Baublitz, C. B.; Rincón, A.

    2014-12-01

    Due to continuous economic growth, Bogotá, Colombia has experienced air pollution issues in recent years. The local environmental authority has implemented several strategies to curb air pollution that have resulted in a decrease of PM10 concentrations since 2010. However, further measures are necessary to meet international air quality standards in the city. The University of Florida Air Quality and Climate group is collaborating with the Universidad de La Salle to prioritize regulatory strategies for Bogotá using air pollution simulations. To simulate pollution, we developed a modeling platform that combines the Weather Research and Forecasting Model (WRF), local emissions, and the Community Multi-scale Air Quality model (CMAQ). This platform is the first of its kind to be implemented in the megacity of Bogotá, Colombia. The presentation will discuss the development and evaluation of the air quality modeling system, highlight initial results characterizing photochemical conditions in Bogotá, and characterize air pollution under proposed regulatory strategies. The WRF model has been configured and applied to Bogotá, which resides in a tropical climate with complex mountainous topography. Developing the configuration included incorporating local topography and land-use data, a physics sensitivity analysis, review, and systematic evaluation. The performance threshold, however, was set based on a synthesis of model performance under less mountainous conditions. We will evaluate how differences in autocorrelation contribute to the non-ideal performance. Air pollution predictions are currently under way. CMAQ has been configured with WRF meteorology, global boundary conditions from GEOS-Chem, and a locally produced emission inventory. Preliminary results from simulations show promising performance of CMAQ in Bogotá. Anticipated results include a systematic performance evaluation of ozone and PM10, characterization of photochemical sensitivity, and air

  18. Micro Finite Element models of the vertebral body: Validation of local displacement predictions.

    PubMed

    Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco; Dall'Ara, Enrico

    2017-01-01

    The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on how well the bone geometry is captured, reconstructed, and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic, and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured from the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli, defined from microindentation data (12.0GPa) and from a back-calculation procedure (4.6GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87-0.99). However, model predictions of axial forces were largely overestimated (80-369%) for a tissue modulus of 12.0GPa, whereas differences in the range 10-80% were found for the back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that even the simplest microFE models can accurately predict local displacements quantitatively and the strain distribution within the vertebral body qualitatively, independently of the bone type considered.

  19. Analysis of free modeling predictions by RBO aleph in CASP11.

    PubMed

    Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver

    2016-09-01

    The CASP experiment is a biennial benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: improvements in components of a method do not necessarily lead to improvements of the entire method. This suggests that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. Proteins 2016; 84(Suppl 1):87-104. © 2015 Wiley Periodicals, Inc.

  20. Sentinel-2A: Orbit Modelling Improvements and their Impact on the Orbit Prediction

    NASA Astrophysics Data System (ADS)

    Peter, Heike; Otten, Michiel; Fernández Sánchez, Jaime; Fernández Martín, Carlos; Féménias, Pierre

    2016-07-01

    Sentinel-2A is the second satellite of the European Copernicus Programme. The satellite was launched on 23 June 2015 and has been operational since mid-October 2015. This optical mission carries a GPS receiver for precise orbit determination. The Copernicus POD (Precise Orbit Determination) Service is in charge of generating precise orbital products and auxiliary files for Sentinel-2A as well as for the Sentinel-1 and -3 missions. The accuracy requirements for the Sentinel-2A orbit products are not very stringent, at 3 m in 3D (3 sigma) for the near real-time (NRT) orbit and 10 m in 2D (3 sigma) for the predicted orbit. Fulfilling the orbit accuracy requirements is normally not an issue. The Copernicus POD Service aims, however, to provide the best possible orbits for all three Sentinel missions. Therefore, a sophisticated box-wing model is generated for the Sentinel-2 satellite, as is done for the other two missions as well. Additionally, the solar wing of the satellite is rewound during eclipse, which has to be modelled accordingly. The quality of the orbit prediction depends on the results of the orbit estimation performed before it. The value of the last estimate of each parameter is taken for the orbit propagation; i.e., when estimating ten atmospheric drag coefficients per 24 h, the value of the last coefficient is used as a fixed parameter for the subsequent orbit prediction. The question is whether the prediction might be stabilised by, e.g., using an average of all ten coefficients. This paper presents the status and the quality of the Sentinel-2 orbit determination in the operational environment of the Copernicus POD Service. The impact of the orbit model improvements on the NRT and predicted orbits is studied in detail. Changes in the orbit parametrization as well as in the settings for the orbit propagation are investigated. In addition, the impact of the quality of the input GPS orbit and clock product on the Sentinel-2A orbit

  1. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

  2. Tools for beach health data management, data processing, and predictive model implementation

    USGS Publications Warehouse

    ,

    2013-01-01

    This fact sheet describes utilities created for management of recreational waters to provide efficient data management, data aggregation, and predictive modeling as well as a prototype geographic information system (GIS)-based tool for data visualization and summary. All of these utilities were developed to assist beach managers in making decisions to protect public health. The Environmental Data Discovery and Transformation (EnDDaT) Web service identifies, compiles, and sorts environmental data from a variety of sources that help to define climatic, hydrologic, and hydrodynamic characteristics, including multiple data sources within the U.S. Geological Survey and the National Oceanic and Atmospheric Administration. The Great Lakes Beach Health Database (GLBH-DB) and Web application was designed to provide a flexible input, export, and storage platform for beach water quality and sanitary survey monitoring data to complement beach monitoring programs within the Great Lakes. A real-time predictive modeling strategy was implemented by combining the capabilities of EnDDaT and the GLBH-DB for timely, automated prediction of beach water quality. The GIS-based tool was developed to map beaches based on their physical and biological characteristics, which was shared with multiple partners to provide concepts and information for future Web-accessible beach data outlets.

  3. Local environmental quality positively predicts breastfeeding in the UK's Millennium Cohort Study.

    PubMed

    Brown, Laura J; Sear, Rebecca

    2017-01-01

    Background and Objectives: Breastfeeding is an important form of parental investment with clear health benefits. Despite this, rates remain low in the UK; understanding variation can therefore help improve interventions. Life history theory suggests that environmental quality may pattern maternal investment, including breastfeeding. We analyse a nationally representative dataset to test two predictions: (i) higher local environmental quality predicts higher likelihood of breastfeeding initiation and longer duration; (ii) higher socioeconomic status (SES) provides a buffer against the adverse influences of low local environmental quality. Methodology: We ran factor analysis on a wide range of local-level environmental variables. Two summary measures of local environmental quality were generated by this analysis: one 'objective' (based on an independent assessor's neighbourhood scores) and one 'subjective' (based on respondents' scores). We used mixed-effects regression techniques to test our hypotheses. Results: Higher objective, but not subjective, local environmental quality predicts higher likelihood of starting and maintaining breastfeeding over and above individual SES and area-level measures of environmental quality. Higher individual SES is protective, with women from high-income households having relatively high breastfeeding initiation rates and those with high-status jobs being more likely to maintain breastfeeding, even in poor environmental conditions. Conclusions and Implications: Environmental quality is often vaguely measured; here we present a thorough investigation of environmental quality at the local level, controlling for individual- and area-level measures. Our findings support a shift in focus away from individual factors and towards altering the landscape of women's decision-making contexts when considering behaviours relevant to public health.

  4. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART I - OZONE

    EPA Science Inventory

    This study examines ozone (O3) predictions from the Community Multiscale Air Quality (CMAQ) model version 4.5 and discusses potential factors influencing the model results. Daily maximum 8-hr average O3 levels are largely underpredicted when observed O...

  5. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
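    The model-free reward prediction error at the heart of this literature is the temporal-difference (TD) error. A minimal TD(0) sketch on a three-state chain (illustrative only; not the task used in the study) shows how the error delta = r + gamma*V(s') - V(s) drives value learning:

```python
import numpy as np

gamma, alpha = 0.9, 0.1
V = np.zeros(3)  # state values for the chain 0 -> 1 -> 2 (terminal)

# A reward of 1.0 is delivered on the transition into the terminal state.
for _ in range(200):
    for s, r in [(0, 0.0), (1, 1.0)]:
        v_next = V[s + 1] if s + 1 < 2 else 0.0  # terminal state has value 0
        delta = r + gamma * v_next - V[s]        # reward prediction error
        V[s] += alpha * delta                    # model-free value update
print(np.round(V, 2))
```

    After learning, V converges toward the discounted future reward from each state (about 0.9 and 1.0 here), and delta shrinks to zero; a model-based learner would instead derive these values from an explicit model of the transitions.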

  6. Computational intelligence models to predict porosity of tablets using minimum features

    PubMed Central

    Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander

    2017-01-01

    The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hints at the most important variables within this factor space. PMID:28138223
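    A hedged sketch of the modeling setup: an artificial neural network mapping the three reported inputs (MCC proportion, granule size fraction, compaction force) to porosity, scored by normalized RMSE. The data-generating equation below is invented for illustration and does not reflect the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 400
mcc = rng.uniform(0.0, 100.0, n)     # % microcrystalline cellulose
size = rng.uniform(100.0, 800.0, n)  # granule size fraction (micrometers)
force = rng.uniform(5.0, 25.0, n)    # die compaction force (kN)
# Invented relationship: porosity falls with compaction force
porosity = 40.0 - 1.0 * force + 0.02 * mcc + rng.normal(0.0, 1.0, n)

X = np.column_stack([mcc, size, force])
model = make_pipeline(
    StandardScaler(),  # scale features before the ANN
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0),
).fit(X, porosity)

pred = model.predict(X)
rmse = float(np.sqrt(np.mean((porosity - pred) ** 2)))
nrmse = rmse / (porosity.max() - porosity.min())  # normalized RMSE
print(f"NRMSE = {nrmse:.1%}")
```

    A symbolic-regression model, as the paper prefers, would instead search for an explicit formula, trading a little accuracy for transparency.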

  7. Objective Quality and Intelligibility Prediction for Users of Assistive Listening Devices

    PubMed Central

    Falk, Tiago H.; Parsa, Vijay; Santos, João F.; Arehart, Kathryn; Hazrati, Oldooz; Huber, Rainer; Kates, James M.; Scollie, Susan

    2015-01-01

    This article presents an overview of twelve existing objective speech quality and intelligibility prediction tools. Two classes of algorithms are presented, namely intrusive and non-intrusive, with the former requiring the use of a reference signal, while the latter does not. Investigated metrics include both those developed for normal hearing listeners, as well as those tailored particularly for hearing impaired (HI) listeners who are users of assistive listening devices (i.e., hearing aids, HAs, and cochlear implants, CIs). Representative examples of those optimized for HI listeners include the speech-to-reverberation modulation energy ratio, tailored to hearing aids (SRMR-HA) and to cochlear implants (SRMR-CI); the modulation spectrum area (ModA); the hearing aid speech quality (HASQI) and perception indices (HASPI); and the PErception MOdel - hearing impairment quality (PEMO-Q-HI). The objective metrics are tested on three subjectively-rated speech datasets covering reverberation-alone, noise-alone, and reverberation-plus-noise degradation conditions, as well as degradations resultant from nonlinear frequency compression and different speech enhancement strategies. The advantages and limitations of each measure are highlighted and recommendations are given for suggested uses of the different tools under specific environmental and processing conditions. PMID:26052190

  8. The Copernicus Atmosphere Monitoring Service: facilitating the prediction of air quality from global to local scales

    NASA Astrophysics Data System (ADS)

    Engelen, R. J.; Peuch, V. H.

    2017-12-01

    The European Copernicus Atmosphere Monitoring Service (CAMS) operationally provides daily forecasts of global atmospheric composition and regional air quality. The global forecasting system uses ECMWF's Integrated Forecasting System (IFS), which is also used for numerical weather prediction and has been extended with modules for atmospheric chemistry, aerosols and greenhouse gases. The regional forecasts are produced by an ensemble of seven operational European air quality models that take their boundary conditions from the global system and provide an ensemble median, with the ensemble spread, as their main output. Both the global and regional forecasting systems feed their output into air quality models on a variety of scales in various parts of the world. We will introduce the CAMS service chain and provide illustrations of its use in downstream applications. Both the usage of the daily forecasts and the usage of global and regional reanalyses will be addressed.
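    The regional product described above is an ensemble median with an accompanying spread. A minimal NumPy sketch of that aggregation step, assuming seven member forecasts at one grid cell and one time step (all values invented; CAMS may report spread differently, e.g. via percentiles):

```python
import numpy as np

# Hypothetical NO2 forecasts (ug/m3) from seven regional models
# for the same grid cell and time step -- values are invented.
member_forecasts = np.array([21.0, 18.5, 25.2, 19.8, 22.4, 30.1, 20.3])

# Ensemble median is the main product; the spread across members
# (here the standard deviation) indicates forecast uncertainty.
ensemble_median = np.median(member_forecasts)
ensemble_spread = np.std(member_forecasts)

print(ensemble_median, round(ensemble_spread, 2))
```

    The median is robust to a single outlying member (note it ignores the 30.1 value above), which is one reason multi-model air quality ensembles commonly report it rather than the mean.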

  9. Dietary self-efficacy predicts AHEI diet quality in women with previous gestational diabetes.

    PubMed

    Ferranti, Erin Poe; Narayan, K M Venkat; Reilly, Carolyn M; Foster, Jennifer; McCullough, Marjorie; Ziegler, Thomas R; Guo, Ying; Dunbar, Sandra B

    2014-01-01

    The purpose of this study was to examine the association of intrapersonal influences of diet quality as defined by the Health Belief Model constructs in women with recent histories of gestational diabetes. A descriptive, correlational, cross-sectional design was used to analyze relationships between diet quality and intrapersonal variables, including perceptions of threat of type 2 diabetes mellitus development, benefits and barriers of healthy eating, and dietary self-efficacy, in a convenience sample of 75 community-dwelling women (55% minority; mean age, 35.5 years; SD, 5.5 years) with previous gestational diabetes mellitus. Diet quality was defined by the Alternative Healthy Eating Index (AHEI). Multiple regression was used to identify predictors of AHEI diet quality. Women had moderate AHEI diet quality (mean score, 47.6; SD, 14.3). Only higher levels of education and self-efficacy significantly predicted better AHEI diet quality, controlling for other contributing variables. There is a significant opportunity to improve diet quality in women with previous gestational diabetes mellitus. Improving self-efficacy may be an important component to include in nutrition interventions. In addition to identifying other important individual components, future studies of diet quality in women with previous gestational diabetes mellitus are needed to investigate the scope of influence beyond the individual to potential family, social, and environmental factors. © 2014 The Author(s).

  10. Micro Finite Element models of the vertebral body: Validation of local displacement predictions

    PubMed Central

    Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco

    2017-01-01

    The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on how well the bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured by the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli, defined from microindentation data (12.0GPa) and from a back-calculation procedure (4.6GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87–0.99). However, model predictions of axial forces were largely overestimated (80–369%) for a tissue modulus of 12.0GPa, whereas differences in the range 10–80% were found for the back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that even the simplest microFE models can accurately predict the local displacements quantitatively and the strain distribution qualitatively within the vertebral body, independently of the bone type considered. PMID:28700618

  11. Quality models for audiovisual streaming

    NASA Astrophysics Data System (ADS)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality; in this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content in which both the video and audio channels may be strongly degraded, and the audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see how it differs from the semantic quality model.

  12. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), which is a population decision tree method, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized learning of decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570

  13. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables, which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models, to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present an analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA): a straight pool-riffle reach (South Fork Boise River), small sinuous pool-riffle streams in a large meadow (Bear Valley Creek) and a steep-confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the choice between 1D and 2D modeling affects both the spatial distribution of habitat quality and WUA under both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small but depended on stream type. Nevertheless, differences in spatially distributed habitat quality are considerable in all streams. The steep-confined plane-bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models compared to the streams with well-defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches

  14. RAQ–A Random Forest Approach for Predicting Air Quality in Urban Sensing Systems

    PubMed Central

    Yu, Ruiyun; Yang, Yu; Yang, Leyou; Han, Guangjie; Move, Oguti Ann

    2016-01-01

    Air quality information, such as the concentration of PM2.5, is of great significance for human health and city management. It affects ways of traveling, urban planning, government policies and so on. However, major cities typically have only a limited number of air quality monitoring stations, while air quality varies across urban areas and there can be large differences even between closely neighboring regions. In this paper, a random forest approach for predicting air quality (RAQ) is proposed for urban sensing systems. The data generated by urban sensing include meteorology data, road information, real-time traffic status and point of interest (POI) distribution. The random forest algorithm is exploited for data training and prediction. The performance of RAQ is evaluated with real city data. Compared with three other algorithms, this approach achieves better prediction precision. The experiments show that air quality can be inferred with remarkably high accuracy from the data obtained through urban sensing. PMID:26761008
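    The RAQ pipeline itself is not reproduced in the abstract. As an illustration only, a minimal scikit-learn sketch of the core idea: a random-forest regressor trained on features standing in for the meteorology, traffic and POI inputs, then used to infer PM2.5 at unmonitored locations. All feature names, values and the target relationship below are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for urban-sensing features:
# columns = [wind speed, temperature, traffic index, POI density]
X = rng.random((200, 4))
# Invented relationship: PM2.5 falls with wind, rises with traffic.
y = 80 - 30 * X[:, 0] + 25 * X[:, 2] + rng.normal(0, 2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])           # train on "monitored" locations
predictions = model.predict(X[150:])  # infer at unmonitored locations
print(predictions.shape)
```

    Random forests handle heterogeneous feature types and nonlinear interactions without feature scaling, which is one reason they suit mixed urban-sensing data of this kind.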

  15. Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage

    DOE PAGES

    Zheng, Liange; Carroll, Susan; Bianchi, Marco; ...

    2014-12-31

    A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and the subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar but simplified flow and transport conditions and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts to groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system-level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.

  16. Do treatment quality indicators predict cardiovascular outcomes in patients with diabetes?

    PubMed

    Sidorenkov, Grigory; Voorham, Jaco; de Zeeuw, Dick; Haaijer-Ruskamp, Flora M; Denig, Petra

    2013-01-01

    Landmark clinical trials have led to optimal treatment recommendations for patients with diabetes. Whether optimal treatment is actually delivered in practice is even more important than the efficacy of the drugs tested in trials. To this end, treatment quality indicators have been developed and tested against intermediate outcomes. No studies have tested whether these treatment quality indicators also predict hard patient outcomes. A cohort study was conducted using data collected from >10,000 diabetes patients in the Groningen Initiative to Analyze Type 2 Treatment (GIANTT) database and the Dutch Hospital Data register. Included quality indicators measured glucose-, lipid-, blood pressure- and albuminuria-lowering treatment status and treatment intensification. The hard patient outcome was the composite of cardiovascular events and all-cause death. Associations were tested using Cox regression adjusting for confounding, reporting hazard ratios (HR) with 95% confidence intervals. Lipid and albuminuria treatment status, but not blood pressure-lowering treatment status, were associated with the composite outcome (HR = 0.77, 0.67-0.88; HR = 0.75, 0.59-0.94). Glucose-lowering treatment status was associated with the composite outcome only in patients with an elevated HbA1c level (HR = 0.72, 0.56-0.93). Treatment intensification with glucose-lowering but not with lipid-, blood pressure- and albuminuria-lowering drugs was associated with the outcome (HR = 0.73, 0.60-0.89). Treatment quality indicators measuring lipid- and albuminuria-lowering treatment status are valid quality measures, since they predict a lower risk of cardiovascular events and mortality in patients with diabetes. The quality indicators for glucose-lowering treatment should only be used for restricted populations with elevated HbA1c levels. Intriguingly, the tested indicators for blood pressure-lowering treatment did not predict patient outcomes. These results question whether all treatment

  17. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
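    The evaluation protocol described above (correlating each metric's scores with mean human ratings) can be sketched in a few lines. The scores and ratings below are invented purely for illustration; the paper's own correlation measure is not specified in the abstract, so plain Pearson correlation is assumed here:

```python
import numpy as np

# Invented illustration: one metric's scores for six saliency maps
# and the corresponding mean observer ratings on the 5-point scale.
metric_scores = np.array([0.61, 0.72, 0.55, 0.80, 0.47, 0.69])
mean_ratings = np.array([3.1, 3.8, 2.9, 4.4, 2.2, 3.5])

# Pearson linear correlation between metric scores and ratings;
# a higher |r| means the metric better tracks human judgements.
r = np.corrcoef(metric_scores, mean_ratings)[0, 1]
print(round(r, 3))
```

    Ranking the candidate metrics by such a correlation against subjective ratings is what allows the authors to conclude that their locally weighted metric outperforms the existing ones.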

  18. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    PubMed

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    A risk prediction model is a statistical tool for estimating the probability that a currently healthy individual with specific risk factors will develop a condition in the future such as breast cancer. Reliably accurate prediction models can inform future disease burdens, health policies and individual decisions. Breast cancer prediction models containing modifiable risk factors, such as alcohol consumption, BMI or weight, condom use, exogenous hormone use and physical activity, are of particular interest to women who might be considering how to reduce their risk of breast cancer and clinicians developing health policies to reduce population incidence rates. We performed a systematic review to identify and evaluate the performance of prediction models for breast cancer that contain modifiable factors. A protocol was developed and a sensitive search in databases including MEDLINE and EMBASE was conducted in June 2010. Extensive use was made of reference lists. Included were any articles proposing or validating a breast cancer prediction model in a general female population, with no language restrictions. Duplicate data extraction and quality assessment were conducted. Results were summarised qualitatively, and where possible meta-analysis of model performance statistics was undertaken. The systematic review found 17 breast cancer models, each containing a different but often overlapping set of modifiable and other risk factors, combined with an estimated baseline risk that was also often different. Quality of reporting was generally poor, with characteristics of included participants and fitted model results often missing. Only four models received independent validation in external data, most notably the 'Gail 2' model with 12 validations. None of the models demonstrated consistently outstanding ability to accurately discriminate between those who did and those who did not develop breast cancer. For example, random-effects meta-analyses of the performance of the

  19. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  20. The Grand Challenge of Basin-Scale Groundwater Quality Management Modelling

    NASA Astrophysics Data System (ADS)

    Fogg, G. E.

    2017-12-01

    The last 50+ years of agricultural, urban and industrial land and water use practices have accelerated the degradation of groundwater quality in the upper portions of many major aquifer systems upon which much of the world relies for water supply. In the deepest and most extensive systems (e.g., sedimentary basins), which typically have the largest groundwater production rates and hold fresh groundwaters on decadal to millennial time scales, most of the groundwater is not yet contaminated. Predicting the long-term future groundwater quality in such basins is a grand scientific challenge. Moreover, determining what changes in land and water use practices would avert future, irreversible degradation of these massive freshwater stores is a grand challenge both scientifically and societally. It is naïve to think that the problem can be solved by eliminating or reducing enough of the contaminant sources, for human exploitation of land and water resources will likely always result in some contamination. The key lies in both reducing the contaminant sources and more proactively managing recharge, in terms of both quantity and quality, such that the net influx of contaminants is sufficiently moderate and appropriately distributed in space and time to reverse ongoing groundwater quality degradation. Just as sustainable groundwater quantity management is greatly facilitated with groundwater flow management models, sustainable groundwater quality management will require the use of groundwater quality management models. This is a new genre of hydrologic model that does not yet exist, partly because of the lack of modeling tools and of the supporting research needed to model non-reactive as well as reactive transport on large space and time scales. It is essential that the contaminant hydrogeology community, which has heretofore focused almost entirely on point-source, plume-scale problems, direct its efforts toward the development of process-based transport modeling tools and analyses capable

  1. Quality of Education Predicts Performance on the Wide Range Achievement Test-4th Edition Word Reading Subtest

    PubMed Central

    Sayegh, Philip; Arentoft, Alyssa; Thaler, Nicholas S.; Dean, Andy C.; Thames, April D.

    2014-01-01

    The current study examined whether self-rated education quality predicts Wide Range Achievement Test-4th Edition (WRAT-4) Word Reading subtest and neurocognitive performance, and aimed to establish this subtest's construct validity as an educational quality measure. In a community-based adult sample (N = 106), we tested whether education quality both increased the prediction of Word Reading scores beyond demographic variables and predicted global neurocognitive functioning after adjusting for WRAT-4. As expected, race/ethnicity and education predicted WRAT-4 reading performance. Hierarchical regression revealed that when education quality was included, the amount of WRAT-4 variance explained increased significantly, with race/ethnicity and both education quality and years of education as significant predictors. Finally, WRAT-4 scores, but not education quality, predicted neurocognitive performance. Results support WRAT-4 Word Reading as a valid proxy measure for education quality and a key predictor of neurocognitive performance. Future research should examine these findings in larger, more diverse samples to determine their robustness. PMID:25404004

  2. A comparison of two types of neural network for weld quality prediction in small scale resistance spot welding

    NASA Astrophysics Data System (ADS)

    Wan, Xiaodong; Wang, Yuanxun; Zhao, Dawei; Huang, YongAn

    2017-09-01

    Our study aims at developing an effective quality monitoring system for small scale resistance spot welding of titanium alloy. The measured electrical signals were interpreted in combination with the nugget development. Features were extracted from the dynamic resistance and electrode voltage curves. A higher welding current generally indicated a lower overall dynamic resistance level. A larger electrode voltage peak and a higher rate of change of electrode voltage could be detected under a smaller electrode force or higher welding current. Variation of the extracted features and of weld quality was found to be more sensitive to changes in welding current than in electrode force. Different neural network models were proposed for weld quality prediction. The back propagation neural network was more appropriate for failure load estimation, while the probabilistic neural network model was more appropriate for quality level classification. A real-time, on-line weld quality monitoring system may be developed by taking advantage of both methods.

  3. “Transference Ratios” to Predict Total Oxidized Sulfur and Nitrogen Deposition – Part II, Modeling Results

    EPA Science Inventory

    The current study examines predictions of transference ratios and related modeled parameters for oxidized sulfur and oxidized nitrogen using five years (2002-2006) of 12-km grid cell-specific annual estimates from EPA’s Community Air Quality Model (CMAQ) for five selected sub-re...

  4. Dynamics and Predictability of The Eta Regional Model: The Role of Domain Size

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Chomé, F.; Nicolis, C.

    This paper investigates the dynamical properties of the Eta model, a state-of-the-art nested limited-area model, following the approach previously developed by the present authors. It is first shown that the intrinsic dynamics of the model depends crucially on the size of the domain, with a non-chaotic behavior for small domains, supporting earlier findings on the absence of sensitivity to the initial conditions in these models. The quality of the predictions of several Eta model versions differing by their domain size is next evaluated and compared with the Avn analyses on a targeted region, centered on France. Contrary to what is usually taken for granted, a non-trivial relation between predictability and domain size is found, the best model versions being the ones integrated on the smallest and the largest domain sizes. An explanation in connection with the intrinsic dynamics of the model is advanced.

  5. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
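    A fixed-recurrence predictor in the sense the abstract describes simply expects the next event one mean inter-event interval after the most recent event. A minimal sketch, with invented occurrence times standing in for a repeating-earthquake sequence:

```python
import numpy as np

# Invented occurrence times (decimal years) for one hypothetical
# repeating-earthquake sequence -- illustration only.
event_times = np.array([1987.2, 1989.1, 1991.3, 1993.0, 1995.1])

# Fixed-recurrence model: the next event is expected one mean
# inter-event interval after the most recent event.
intervals = np.diff(event_times)
predicted_next = event_times[-1] + intervals.mean()
print(predicted_next)
```

    A fixed-slip predictor would be analogous, forecasting the next seismic moment as the mean of past moments rather than scaling it with the preceding recurrence time as the slip-predictable model does.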

  6. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    NASA Astrophysics Data System (ADS)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both the dynamics and the balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease both parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated against continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, than calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
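    Model performance above is judged by the Nash–Sutcliffe efficiency (NSE > 0.83). A minimal sketch of the standard NSE formula, with invented discharge values for illustration only:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    model is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Invented daily discharge values (m3/s), for illustration only.
obs = [12.0, 15.0, 30.0, 22.0, 18.0]
sim = [11.0, 16.0, 28.0, 23.0, 17.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

    Because the denominator is the variance of the observations, NSE penalizes a model relative to the trivial "always predict the mean" baseline, which is why values above about 0.8 are generally read as good hydrological fits.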

  7. Integrating microbial physiology and enzyme traits in the quality model

    NASA Astrophysics Data System (ADS)

    Sainte-Marie, Julien; Barrandon, Matthieu; Martin, Francis; Saint-André, Laurent; Derrien, Delphine

    2017-04-01

    Microbe activity plays an indisputable role in soil carbon storage, and there have been many calls to integrate microbial ecology into soil carbon (C) models. With regard to this challenge, a few trait-based microbial models of C dynamics have emerged during the past decade. They parameterize specific traits related to decomposer physiology (substrate use efficiency, growth and mortality rates...) and enzyme properties (enzyme production rate, catalytic properties of enzymes…). But these models are built on the premise that organic matter (OM) can be represented as one single entity or divided into a few pools, while organic matter exists as a continuum of many different compounds spanning from intact plant molecules to highly oxidised microbial metabolites. In addition, a given molecule may also exist in different forms, depending on its stage of polymerization or on its interactions with other organic compounds or mineral phases of the soil. Here we develop a general theoretical model relating the evolution of soil organic matter, as a continuum of progressively decomposing compounds, to decomposer activity and enzyme traits. The model is based on the notion of quality developed by Agren and Bosatta (1998), which is a measure of molecule accessibility to degradation. The model integrates three major processes: OM depolymerisation by enzyme action, OM assimilation and OM biotransformation. For any enzyme, the model reports the quality range where this enzyme selectively operates and how the initial quality distribution of the OM subset evolves into another distribution of qualities under the enzyme action. The model also defines the quality range where the OM can be taken up and assimilated by microbes. It finally describes how the quality of the assimilated molecules is transformed into another quality distribution, corresponding to the decomposer metabolites signature. Upon decomposer death, these metabolites return to the substrate. We explore here the how

  8. Joint space-time geostatistical model for air quality surveillance

    NASA Astrophysics Data System (ADS)

    Russo, A.; Soares, A.; Pereira, M. J.

    2009-04-01

    Air pollution and people's generalized concern about air quality are nowadays considered a global problem. Although the introduction of strict air pollution regulations has reduced pollution from industry and power stations, the growing number of cars on the road poses a new pollution problem. Given the characteristics of atmospheric circulation and the residence times of certain pollutants in the atmosphere, a generalized and growing interest in air quality issues has led to intensified research and the publication of several articles of quite different levels of scientific depth. Like most natural phenomena, air quality can be seen as a space-time process, in which spatial and temporal relationships usually have quite different characteristics and levels of uncertainty. As a result, the simultaneous integration of space and time is not an easy task to perform, and the problem has been addressed by a variety of methodologies. The use of stochastic models and neural networks to characterize the space-time dispersion of air quality is becoming common practice. The main objective of this work is to produce an air quality model that allows forecasting of critical concentration episodes of a certain pollutant by means of a hybrid approach, based on the combined use of neural network models and stochastic simulations. A stochastic simulation of the spatial component with a space-time trend model is proposed to characterize critical situations, taking into account data from the past and a space-time trend from the recent past. To identify near-future critical episodes, predicted values from neural networks are used at each monitoring station. In this paper, we describe the design of a hybrid forecasting tool for ambient NO2 concentrations in Lisbon, Portugal.

  9. Predicting Hybrid Performances for Quality Traits through Genomic-Assisted Approaches in Central European Wheat

    PubMed Central

    Liu, Guozheng; Zhao, Yusheng; Gowda, Manje; Longin, C. Friedrich H.; Reif, Jochen C.; Mette, Michael F.

    2016-01-01

    Bread-making quality traits are central targets for wheat breeding. The objectives of our study were to (1) examine the presence of major-effect QTLs for quality traits in a Central European elite wheat population, (2) explore the optimal strategy for predicting hybrid performance for wheat quality traits, and (3) investigate the effects of marker density and the composition and size of the training population on the accuracy of prediction of hybrid performance. In total, 135 inbred lines of Central European bread wheat (Triticum aestivum L.) and 1,604 hybrids derived from them were evaluated for seven quality traits in up to six environments. The 135 parental lines were genotyped using a 90k single-nucleotide polymorphism array. Genome-wide association mapping initially suggested the presence of several quantitative trait loci (QTLs), but cross-validation rather indicated the absence of major-effect QTLs for all quality traits except 1,000-kernel weight. Genomic selection substantially outperformed marker-assisted selection in predicting hybrid performance. A resampling study revealed that increasing the effective population size in the estimation set of hybrids is relevant for boosting the accuracy of prediction for an unrelated test population. PMID:27383841

  10. Looking beyond patients: Can parents' quality of life predict asthma control in children?

    PubMed

    Cano-Garcinuño, Alfredo; Mora-Gandarillas, Isabel; Bercedo-Sanz, Alberto; Callén-Blecua, María Teresa; Castillo-Laita, José Antonio; Casares-Alonso, Irene; Forns-Serrallonga, Dolors; Tauler-Toro, Eulàlia; Alonso-Bernardo, Luz María; García-Merino, Águeda; Moneo-Hernández, Isabel; Cortés-Rico, Olga; Carvajal-Urueña, Ignacio; Morell-Bernabé, Juan José; Martín-Ibáñez, Itziar; Rodríguez-Fernández-Oliva, Carmen Rosa; Asensi-Monzó, María Teresa; Fernández-Carazo, Carmen; Murcia-García, José; Durán-Iglesias, Catalina; Montón-Álvarez, José Luis; Domínguez-Aurrecoechea, Begoña; Praena-Crespo, Manuel

    2016-07-01

    Social and family factors may influence the probability of achieving asthma control in children. Parents' quality of life has been insufficiently explored as a predictive factor linked to the probability of achieving disease control in asthmatic children. Determine whether the parents' quality of life predicts medium-term asthma control in children. Longitudinal study of children between 4 and 14 years of age, with active asthma. The parents' quality of life was evaluated using the specific IFABI-R instrument, in which scores were higher for poorer quality of life. Its association with asthma control measures in the child 16 weeks later was analyzed using multivariate methods, adjusting the effect for disease, child and family factors. The data from 452 children were analyzed (median age 9.6 years, 63.3% males). The parents' quality of life was predictive for asthma control; each point increase on the initial IFABI-R score was associated with an adjusted odds ratio (95% confidence interval) of 0.56 (0.37-0.86) for good control of asthma on the second visit, 2.58 (1.62-4.12) for asthma exacerbation, 2.12 (1.33-3.38) for an unscheduled visit to the doctor, and 2.46 (1.18-5.13) for going to the emergency room. The highest quartile for the IFABI-R score had a sensitivity of 34.5% and a specificity of 82.2% to predict poorly controlled asthma. Parents' poorer quality of life is related to poor, medium-term asthma control in children. Assessing the parents' quality of life could aid disease management decisions. Pediatr Pulmonol. 2016;51:670-677. © 2015 Wiley Periodicals, Inc.

  11. Spatiotemporal models for predicting high pollen concentration level of Corylus, Alnus, and Betula.

    PubMed

    Nowosad, Jakub

    2016-06-01

    Corylus, Alnus, and Betula trees are among the most important sources of allergic pollen in the temperate zone of the Northern Hemisphere and have a large impact on the quality of life and productivity of allergy sufferers. Therefore, it is important to predict high pollen concentrations, both in time and space. The aim of this study was to create and evaluate spatiotemporal models for predicting high Corylus, Alnus, and Betula pollen concentration levels, based on gridded meteorological data. Aerobiological monitoring was carried out in 11 cities in Poland and gathered, depending on the site, between 2 and 16 years of measurements. According to the first allergy symptoms during exposure, a high pollen count level was established for each taxon. An optimizing probability threshold technique was used for mitigation of the problem of imbalance in the pollen concentration levels. For each taxon, the model was built using a random forest method. The study revealed the possibility of moderately reliable prediction of Corylus and highly reliable prediction of Alnus and Betula high pollen concentration levels, using preprocessed gridded meteorological data. Cumulative growing degree days and potential evaporation proved to be two of the most important predictor variables in the models. The final models predicted not only for single locations but also for continuous areas. Furthermore, the proposed modeling framework could be used to predict high pollen concentrations of Corylus, Alnus, Betula, and other taxa, and in other countries.
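
    The approach described above (a random forest classifier with an optimized probability threshold to handle class imbalance) can be sketched as follows. The data, the two predictor variables, and the threshold-selection criterion (true skill statistic) are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical gridded-meteorology predictors:
    # cumulative growing degree days and potential evaporation
    X = np.column_stack([rng.uniform(0, 600, n), rng.uniform(0, 8, n)])
    y = ((X[:, 0] > 400) & (X[:, 1] > 3)).astype(int)  # rare "high level" class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Optimize the decision threshold on held-out data instead of using 0.5,
    # here by maximizing sensitivity + specificity - 1 (an assumed criterion):
    proba = model.predict_proba(X_te)[:, 1]

    def tss(y_true, y_pred):
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        return tp / (tp + fn) + tn / (tn + fp) - 1

    thresholds = np.linspace(0.05, 0.95, 19)
    best = max(thresholds, key=lambda t: tss(y_te, (proba >= t).astype(int)))
    print(f"optimized probability threshold: {best:.2f}")
    ```

    For a rare positive class, the optimized threshold typically falls below 0.5, trading some false alarms for better detection of high-pollen days.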

  12. Spatiotemporal models for predicting high pollen concentration level of Corylus, Alnus, and Betula

    NASA Astrophysics Data System (ADS)

    Nowosad, Jakub

    2016-06-01

    Corylus, Alnus, and Betula trees are among the most important sources of allergic pollen in the temperate zone of the Northern Hemisphere and have a large impact on the quality of life and productivity of allergy sufferers. Therefore, it is important to predict high pollen concentrations, both in time and space. The aim of this study was to create and evaluate spatiotemporal models for predicting high Corylus, Alnus, and Betula pollen concentration levels, based on gridded meteorological data. Aerobiological monitoring was carried out in 11 cities in Poland and gathered, depending on the site, between 2 and 16 years of measurements. According to the first allergy symptoms during exposure, a high pollen count level was established for each taxon. An optimizing probability threshold technique was used for mitigation of the problem of imbalance in the pollen concentration levels. For each taxon, the model was built using a random forest method. The study revealed the possibility of moderately reliable prediction of Corylus and highly reliable prediction of Alnus and Betula high pollen concentration levels, using preprocessed gridded meteorological data. Cumulative growing degree days and potential evaporation proved to be two of the most important predictor variables in the models. The final models predicted not only for single locations but also for continuous areas. Furthermore, the proposed modeling framework could be used to predict high pollen concentrations of Corylus, Alnus, Betula, and other taxa, and in other countries.

  13. 3D QSAR studies on protein tyrosine phosphatase 1B inhibitors: comparison of the quality and predictivity among 3D QSAR models obtained from different conformer-based alignments.

    PubMed

    Pandey, Gyanendra; Saxena, Anil K

    2006-01-01

    A set of 65 flexible peptidomimetic competitive inhibitors (52 in the training set and 13 in the test set) of protein tyrosine phosphatase 1B (PTP1B) has been used to compare the quality and predictive power of 3D quantitative structure-activity relationship (QSAR) comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) models for the three most commonly used conformer-based alignments, namely, cocrystallized conformer-based alignment (CCBA), docked conformer-based alignment (DCBA), and global minima energy conformer-based alignment (GMCBA). These three conformers of 5-[(2S)-2-({(2S)-2-[(tert-butoxycarbonyl)amino]-3-phenylpropanoyl}amino)-3-oxo-3-(pentylamino)propyl]-2-(carboxymethoxy)benzoic acid (compound number 66) were obtained from the X-ray structure of its cocrystallized complex with PTP1B (PDB ID: 1JF7), its docking studies, and its global minima by simulated annealing. Among the 3D QSAR models developed using the above three alignments, the CCBA provided the optimal predictive CoMFA model for the training set with cross-validated r2 (q2)=0.708, non-cross-validated r2=0.902, standard error of estimate (s)=0.165, and F=202.553 and the optimal CoMSIA model with q2=0.440, r2=0.799, s=0.192, and F=117.782. These models also showed the best test set prediction for the 13 compounds with predictive r2 values of 0.706 and 0.683, respectively. Though the QSAR models derived using the other two alignments also produced statistically acceptable models in the order DCBA>GMCBA in terms of the values of q2, r2, and predictive r2, they were inferior to the corresponding models derived using CCBA. Thus, the order of preference for the alignment selection for 3D QSAR model development may be CCBA>DCBA>GMCBA, and the information obtained from the CoMFA and CoMSIA contour maps may be useful in designing specific PTP1B inhibitors.

  14. Who will have Sustainable Employment After a Back Injury? The Development of a Clinical Prediction Model in a Cohort of Injured Workers.

    PubMed

    Shearer, Heather M; Côté, Pierre; Boyle, Eleanor; Hayden, Jill A; Frank, John; Johnson, William G

    2017-09-01

    Purpose Our objective was to develop a clinical prediction model to identify workers with sustainable employment following an episode of work-related low back pain (LBP). Methods We used data from a cohort study of injured workers with incident LBP claims in the USA to predict employment patterns 1 and 6 months following a workers' compensation claim. We developed three sequential models to determine the contribution of three domains of variables: (1) basic demographic/clinical variables; (2) health-related variables; and (3) work-related factors. Multivariable logistic regression was used to develop the predictive models. We constructed receiver operating characteristic curves and used the c-index to measure predictive accuracy. Results Seventy-nine percent and 77% of workers had sustainable employment at 1 and 6 months, respectively. Sustainable employment at 1 month was predicted by initial back pain intensity, mental health-related quality of life, claim litigation and employer type (c-index = 0.77). At 6 months, sustainable employment was predicted by physical and mental health-related quality of life, claim litigation and employer type (c-index = 0.77). Adding health-related and work-related variables to the models improved predictive accuracy by 8.5 and 10% at 1 and 6 months, respectively. Conclusion We developed clinically relevant models to predict sustainable employment in injured workers who made a workers' compensation claim for LBP. Inquiring about back pain intensity, physical and mental health-related quality of life, claim litigation and employer type may be beneficial in developing programs of care. Our models need to be validated in other populations.
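
    The c-index reported above is equivalent to the area under the ROC curve of a fitted logistic model. A minimal sketch of that computation, using synthetic stand-in predictors (pain intensity, a mental-health quality-of-life score, and a litigation flag are assumptions for illustration):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 500
    # Hypothetical predictors: pain intensity, mental-health QoL, litigation flag
    X = np.column_stack([rng.normal(5, 2, n),
                         rng.normal(50, 10, n),
                         rng.integers(0, 2, n)])
    # Synthetic outcome generated from an assumed logistic relationship
    logit = -0.3 * X[:, 0] + 0.05 * X[:, 1] - 0.8 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # c-index = AUC: probability that a randomly chosen positive case gets a
    # higher predicted probability than a randomly chosen negative case
    c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"c-index: {c_index:.2f}")
    ```

    A c-index of 0.5 corresponds to chance discrimination and 1.0 to perfect discrimination; the study's value of 0.77 sits in the commonly cited "acceptable" range.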

  15. Predicting surgical site infection after spine surgery: a validated model using a prospective surgical registry.

    PubMed

    Lee, Michael J; Cizik, Amy M; Hamilton, Deven; Chapman, Jens R

    2014-09-01

    The impact of surgical site infection (SSI) is substantial. Although previous studies have determined relative risk and odds ratio (OR) values to quantify risk factors, these values may be difficult to translate to the patient during counseling of surgical options. Ideally, a model that predicts absolute risk of SSI, rather than relative risk or OR values, would greatly enhance the discussion of the safety of spine surgery. To date, there is no risk stratification model that specifically predicts this risk. The purpose of this study was to create and validate a predictive model for the risk of SSI after spine surgery. This study performs a multivariate analysis of SSI after spine surgery using a large prospective surgical registry. Using the results of this analysis, this study then creates and validates a predictive model for SSI after spine surgery. The patient sample is from a high-quality surgical registry from our two institutions with prospectively collected, detailed demographic, comorbidity, and complication data. An SSI was defined as one that required return to the operating room for surgical debridement. Using a prospectively collected surgical registry of more than 1,532 patients with extensive demographic, comorbidity, surgical, and complication details recorded for 2 years after the surgery, we identified several risk factors for SSI after multivariate analysis. Using the beta coefficients from those regression analyses, we created a model to predict the occurrence of SSI after spine surgery. We split our data into two subsets for internal and cross-validation of our model. The final predictive model for SSI had an area under the receiver operating characteristic curve of 0.72, considered a fair measure. The final model has been uploaded for use on SpineSage.com. We present a validated model for predicting SSI after spine surgery.
The value in this model is that it gives
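
    The step of turning beta coefficients from a multivariate logistic regression into an absolute risk, as emphasized above, can be sketched as follows. The intercept, coefficients, and predictor values are hypothetical placeholders, not the coefficients behind SpineSage.com.

    ```python
    import math

    # Hypothetical logistic-regression coefficients (log-odds scale)
    beta0 = -4.0
    betas = {"diabetes": 0.9, "bmi": 0.05, "fusion_levels": 0.15}

    # Hypothetical patient profile
    patient = {"diabetes": 1, "bmi": 32, "fusion_levels": 3}

    # Linear predictor, then inverse-logit to get an absolute probability
    logit = beta0 + sum(betas[k] * patient[k] for k in betas)
    risk = 1 / (1 + math.exp(-logit))
    print(f"predicted absolute SSI risk: {risk:.1%}")
    ```

    Unlike an odds ratio, this output ("your predicted risk of infection is about X%") is directly interpretable during preoperative counseling.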

  16. Satellite data driven modeling system for predicting air quality and visibility during wildfire and prescribed burn events

    NASA Astrophysics Data System (ADS)

    Nair, U. S.; Keiser, K.; Wu, Y.; Maskey, M.; Berendes, D.; Glass, P.; Dhakal, A.; Christopher, S. A.

    2012-12-01

    The Alabama Forestry Commission (AFC) is responsible for wildfire control and prescribed burn management in the state of Alabama. Visibility and air quality degradation resulting from smoke are two pieces of information crucial for this activity. Currently, the tools available to the AFC are the dispersion index from the National Weather Service and surface smoke concentrations. The former provides broad guidance for prescribed burning activities but does not provide specific information regarding smoke transport, areas affected, or quantification of air quality and visibility degradation. While the NOAA operational air quality guidance includes surface smoke concentrations from existing fire events, it does not account for contributions from background aerosols, which are important for the southeastern region including Alabama. Also lacking is the quantification of visibility. The University of Alabama in Huntsville has developed a state-of-the-art integrated modeling system to address these concerns. This system is based on the Community Multiscale Air Quality (CMAQ) model, ingests satellite-derived smoke emissions, and assimilates NASA MODIS-derived aerosol optical thickness. In addition, this operational modeling system simulates the impact of potential prescribed burn events based on location information derived from the AFC prescribed burn permit database. A Lagrangian model is used to simulate smoke plumes for prescribed burn requests. The combined air quality and visibility degradation resulting from these smoke plumes and background aerosols is computed, and the information is made available through a web-based decision support system built on open-source GIS components. This system provides information regarding intersections between highways and other critical facilities such as nursing homes, hospitals and schools. The system also includes satellite-detected fire locations and other satellite-derived datasets.

  17. Power flow prediction in vibrating systems via model reduction

    NASA Astrophysics Data System (ADS)

    Li, Xianhui

    This dissertation focuses on power flow prediction in vibrating systems. Reduced-order models (ROMs) are built via rational Krylov model reduction and preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
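
    The projection step described above, with stiffness and mass matrices projected onto a basis spanned by forced responses at interpolation frequencies, can be sketched as a small Galerkin reduced-order model. The matrices, force vector, and frequencies are tiny synthetic assumptions for illustration.

    ```python
    import numpy as np

    n = 50
    rng = np.random.default_rng(2)
    K = np.diag(rng.uniform(1.0, 10.0, n))   # stiffness (diagonal SPD for simplicity)
    M = np.eye(n)                            # mass
    f = np.zeros(n); f[0] = 1.0              # point force

    # Forced responses x(w) = (K - w^2 M)^{-1} f at interpolation frequencies
    freqs = [0.5, 1.0, 1.5]
    V = np.column_stack([np.linalg.solve(K - w**2 * M, f) for w in freqs])
    V, _ = np.linalg.qr(V)                   # orthonormal projection basis

    # Reduced stiffness, mass, and force via Galerkin projection
    K_r, M_r, f_r = V.T @ K @ V, V.T @ M @ V, V.T @ f

    # The ROM reproduces the full forced response at an interpolation frequency,
    # because that response lies in the span of V
    w = 1.0
    x_full = np.linalg.solve(K - w**2 * M, f)
    x_rom = V @ np.linalg.solve(K_r - w**2 * M_r, f_r)
    print("ROM matches full model at w=1.0:", np.allclose(x_full, x_rom))
    ```

    Here a 50-degree-of-freedom system is reduced to 3 unknowns while remaining exact at the chosen frequencies; between interpolation points the ROM is approximate, which is what the residual-norm error estimate monitors.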

  18. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  19. Predicting the effects of nanoscale cerium additives in diesel fuel on regional-scale air quality.

    PubMed

    Erdakos, Garnet B; Bhave, Prakash V; Pouliot, George A; Simon, Heather; Mathur, Rohit

    2014-11-04

    Diesel vehicles are a major source of air pollutant emissions. Fuel additives containing nanoparticulate cerium (nCe) are currently being used in some diesel vehicles to improve fuel efficiency. These fuel additives also reduce fine particulate matter (PM2.5) emissions and alter the emissions of carbon monoxide (CO), nitrogen oxides (NOx), and hydrocarbon (HC) species, including several hazardous air pollutants (HAPs). To predict their net effect on regional air quality, we review the emissions literature and develop a multipollutant inventory for a hypothetical scenario in which nCe additives are used in all on-road and nonroad diesel vehicles. We apply the Community Multiscale Air Quality (CMAQ) model to a domain covering the eastern U.S. for a summer and a winter period. Model calculations suggest modest decreases of average PM2.5 concentrations and relatively larger decreases in particulate elemental carbon. The nCe additives also have an effect on 8 h maximum ozone in summer. Variable effects on HAPs are predicted. The total U.S. emissions of fine-particulate cerium are estimated to increase 25-fold and result in elevated levels of airborne cerium (up to 22 ng/m3), which might adversely impact human health and the environment.

  20. Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2008-01-01

    The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters, resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities: the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare its performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006), whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. 
Explanatory variables in the
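
    The two model outputs described above (a predicted concentration and an exceedance probability for the 235 col/100 mL standard) can be sketched with an ordinary least-squares regression on log10 E. coli and a normal assumption on the residuals. The data and the two explanatory variables (turbidity and rainfall) are synthetic assumptions, not the USGS model.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(3)
    n = 120
    # Hypothetical explanatory variables
    turbidity = rng.uniform(1, 50, n)
    rainfall = rng.uniform(0, 30, n)
    # Synthetic response on the log10 scale, as is typical for bacteria counts
    log_ecoli = 1.0 + 0.02 * turbidity + 0.03 * rainfall + rng.normal(0, 0.3, n)

    X = np.column_stack([np.ones(n), turbidity, rainfall])
    coef, res, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)
    sigma = math.sqrt(res[0] / (n - X.shape[1]))   # residual standard error

    # Prediction and exceedance probability for a hypothetical new day
    x_new = np.array([1.0, 35.0, 20.0])
    pred = float(x_new @ coef)
    threshold = math.log10(235)                    # standard on the log10 scale
    z = (threshold - pred) / sigma
    p_exceed = 0.5 * math.erfc(z / math.sqrt(2))   # P(log10 conc. > threshold)
    print(f"predicted log10 E. coli: {pred:.2f}, P(exceedance): {p_exceed:.2f}")
    ```

    Reporting the exceedance probability rather than only the point prediction lets beach managers set an advisory threshold (e.g., post when the probability exceeds some cutoff) that balances missed exceedances against false advisories.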

  1. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    PubMed

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on the I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role in enhancing the quality of the models produced by the new pipelines, particularly for FM targets. The inclusion of NeBcon-predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that of the QUARK simulations without contacts. In particular, seven targets are converted from non-foldable to foldable (TM-score >0.5) by the use of contact restraints in the simulations. Another additional feature in the current pipelines is local structure quality prediction by ResQ, which provides robust residue-level modeling error estimation. Despite these successes, significant challenges remain in ab initio modeling of multi-domain proteins and in folding β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements in domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  2. The Phyre2 web portal for protein modelling, prediction and analysis

    PubMed Central

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE

    2017-01-01

    Summary Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through the results by a simple interface at a level of detail they determine. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described to find a protein structure in a genome, to submit a large number of sequences at once, and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237

  3. Prediction Models for Dynamic Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here, 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of historical training data can be used to make reliable predictions.
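
    The simple averaging baseline the authors found competitive can be sketched as follows: predict consumption for a 15-min slot as the mean of that same slot over the most recent few days. The synthetic consumption series and the choice of three days of history are assumptions for illustration.

    ```python
    import numpy as np

    slots_per_day = 96                       # 15-min intervals per day
    days = 10
    rng = np.random.default_rng(4)
    # Synthetic daily load profile with noise, shaped (days, slots_per_day)
    profile = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, slots_per_day))
    hist = np.tile(profile, (days, 1)) + rng.normal(0, 2, (days, slots_per_day))

    def averaging_forecast(history, slot, n_days=3):
        """Predict a slot's consumption as its mean over the last n_days."""
        return float(history[-n_days:, slot].mean())

    slot = 40
    pred = averaging_forecast(hist[:-1], slot)   # forecast from the first 9 days
    actual = hist[-1, slot]                      # held-out day 10
    print(f"predicted {pred:.1f}, actual {actual:.1f}")
    ```

    The finding that only a few days of history are needed follows directly from this model's structure: the forecast depends on just `n_days` prior observations per slot.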

  4. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    van Hooidonk, R.; Huber, M.

    2012-03-01

    Future widespread coral bleaching and subsequent mortality has been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th-century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th Assessment Report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and inter-annual variability in the ENSO 24-60 month frequency band and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.
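
    The Peirce skill score used above is the hit rate minus the false-alarm rate computed from a 2x2 contingency table of predicted versus observed bleaching events. The counts below are invented for illustration.

    ```python
    def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
        """PSS = hit rate - false-alarm rate, in [-1, 1]; 0 means no skill."""
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        return hit_rate - false_alarm_rate

    pss = peirce_skill_score(hits=30, misses=10,
                             false_alarms=20, correct_negatives=140)
    print(round(pss, 3))  # 0.75 - 0.125 = 0.625
    ```

    Unlike simple accuracy, the PSS is insensitive to how rare bleaching events are, which is why it suits forecast verification for infrequent extremes.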

  5. PREDICTIVE UNCERTAINTY IN HYDROLOGIC AND WATER QUALITY MODELING: APPROACHES, APPLICATION TO ENVIRONMENTAL MANAGEMENT, AND FUTURE CHALLENGES (PRESENTATION)

    EPA Science Inventory

    Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...

  6. Model of Auctioneer Estimation of Swordtip Squid (Loligo edulis) Quality

    NASA Astrophysics Data System (ADS)

    Nakamura, Makoto; Matsumoto, Keisuke; Morimoto, Eiji; Ezoe, Satoru; Maeda, Toshimichi; Hirano, Takayuki

    The knowledge of experienced auctioneers regarding the circulation of marine products is an essential skill, necessary for evaluating product quality and managing aspects such as freshness. In the present study, the ability of an auctioneer to quickly evaluate the freshness of swordtip squid (Loligo edulis) at fish markets was analyzed. The evaluation characteristics used by an auctioneer were analyzed and modeled using fuzzy logic. Forty boxes containing 247 swordtip squid with mantles measuring 220 mm, each evaluated and assigned to one of five quality categories by an auctioneer, were used for the analysis and modeling. The relationships between the evaluations of appearance, body color, and muscle freshness were statistically analyzed. A total of four indexes of epidermis color were found to strongly reflect evaluations of appearance: the dispersion ratio of the head, the chroma of the head-end mantle, and the differences in chroma and brightness of the mantle. The fuzzy logic model used these indexes for the antecedent part of its linguistic rules. The results of both simulations and evaluations demonstrate that the model is robust, with predicted results corresponding to more than 96% of the quality assignments of the auctioneers.

  7. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM's Medical Education Special Issue.

    PubMed

    Reed, Darcy A; Beckman, Thomas J; Wright, Scott M; Levine, Rachel B; Kern, David E; Cook, David A

    2008-07-01

    Deficiencies in medical education research quality are widely acknowledged. Content, internal structure, and criterion validity evidence support the use of the Medical Education Research Study Quality Instrument (MERSQI) to measure education research quality, but predictive validity evidence has not been explored. To describe the quality of manuscripts submitted to the 2008 Journal of General Internal Medicine (JGIM) medical education issue and determine whether MERSQI scores predict editorial decisions. Cross-sectional study of original, quantitative research studies submitted for publication. Study quality measured by MERSQI scores (possible range 5-18). Of 131 submitted manuscripts, 100 met inclusion criteria. The mean (SD) total MERSQI score was 9.6 (2.6), range 5-15.5. Most studies used single-group cross-sectional (54%) or pre-post designs (32%), were conducted at one institution (78%), and reported satisfaction or opinion outcomes (56%). Few (36%) reported validity evidence for evaluation instruments. A one-point increase in MERSQI score was associated with editorial decisions to send manuscripts for peer review versus reject without review (OR 1.31, 95%CI 1.07-1.61, p = 0.009) and to invite revisions after review versus reject after review (OR 1.29, 95%CI 1.05-1.58, p = 0.02). MERSQI scores predicted final acceptance versus rejection (OR 1.32; 95% CI 1.10-1.58, p = 0.003). The mean total MERSQI score of accepted manuscripts was significantly higher than rejected manuscripts (10.7 [2.5] versus 9.0 [2.4], p = 0.003). MERSQI scores predicted editorial decisions and identified areas of methodological strengths and weaknesses in submitted manuscripts. Researchers, reviewers, and editors might use this instrument as a measure of methodological quality.
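
    Because logistic-regression odds ratios multiply, the reported per-point odds ratios compound over larger MERSQI differences. A small sketch of that arithmetic (the helper name is ours; only the 1.32 figure comes from the abstract):

```python
def cumulative_odds_ratio(or_per_point, points):
    """Odds ratios from logistic regression multiply across unit
    increases: a k-point score increase scales the odds by OR**k."""
    return or_per_point ** points

# Reported OR of 1.32 per MERSQI point for final acceptance:
# a 3-point higher score multiplies the odds of acceptance by ~2.3
print(round(cumulative_odds_ratio(1.32, 3), 2))
```

    This is why even a modest per-point odds ratio can separate accepted from rejected manuscripts whose mean scores differ by one to two points.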

  8. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    NASA Astrophysics Data System (ADS)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important at longer forecast lead times. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
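
    MJO skill of this kind is conventionally measured with the bivariate anomaly correlation between the two observed and forecast real-time multivariate MJO (RMM) index components; skill is the lead time at which this correlation drops below 0.5. A sketch under that assumption, with synthetic data and illustrative names:

```python
import numpy as np

def bivariate_correlation(a1, a2, f1, f2):
    """Bivariate anomaly correlation between observed (a1, a2) and
    forecast (f1, f2) MJO index components, accumulated over time."""
    num = np.sum(a1 * f1 + a2 * f2)
    den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(f1**2 + f2**2))
    return num / den

rng = np.random.default_rng(1)
obs1, obs2 = rng.standard_normal(100), rng.standard_normal(100)

# A perfect forecast gives a correlation of 1
print(bivariate_correlation(obs1, obs2, obs1, obs2))
```

    Because the two RMM components jointly encode amplitude and phase, this single score penalizes both kinds of error discussed in the abstract.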

  9. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    NASA Astrophysics Data System (ADS)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as a `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model
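
    The Monte-Carlo step described above can be sketched generically: sample the uncertain parameter, run a simplified model, and compare the predictive spread and mean against the `virtual reality'. Everything below, the stand-in model, the structural-error term, and all numbers, is hypothetical; it only illustrates how predictive variance and predictive bias are separated, and why more data can shrink the former without touching the latter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 'virtual reality': a fixed true transit time
true_transit_time = 12.0  # days

def simple_model(k):
    """Toy surrogate for a simplified subsurface parameterization:
    transit time as a function of an uncertain conductivity factor k.
    The constant offset mimics structural error from a simpler
    hydrofacies representation."""
    structural_error = 1.5
    return true_transit_time + structural_error + 4.0 * (k - 1.0)

# Monte-Carlo sample the uncertain parameter and summarize the predictions
k_samples = rng.lognormal(mean=0.0, sigma=0.1, size=5000)
predictions = simple_model(k_samples)

predictive_variance = predictions.var()                      # shrinks with better data
predictive_bias = predictions.mean() - true_transit_time     # fixed by model structure
print(round(predictive_bias, 1))
```

    Conditioning on more data narrows the distribution of `k` and hence the variance, but the structural offset, the bias, survives any amount of parameter calibration, matching the abstract's finding.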

  10. COMPARISON OF DATA FROM AN IAQ TEST HOUSE WITH PREDICTIONS OF AN IAQ COMPUTER MODEL

    EPA Science Inventory

    The paper describes several experiments to evaluate the impact of indoor air pollutant sources on indoor air quality (IAQ). Measured pollutant concentrations are compared with concentrations predicted by an IAQ model. The measured concentrations are in excellent agreement with th...

  11. Air quality modeling for effective environmental management in the mining region.

    PubMed

    Asif, Zunaira; Chen, Zhi; Han, Yi

    2018-04-18

    Air quality in the mining sector is a serious environmental concern and is associated with many health issues. Air quality management in mining regions has faced many challenges due to a lack of understanding of atmospheric factors and physical removal mechanisms. A modeling approach called the mining air dispersion model (MADM) is developed to predict air pollutant concentrations in the mining region while considering the deposition effect. The model accounts for planetary boundary-layer conditions and assumes that the eddy diffusivity depends on the downwind distance. The developed MADM is applied to a mining site in Canada. The model predicts concentrations of PM10, PM2.5, TSP, NO2, and six heavy metals (As, Pb, Hg, Cd, Zn, Cr) at various receptor locations. The model shows that neutral stability conditions are dominant for the study site. The maximum mixing height (1280 m) is reached during summer evenings, and the minimum mixing height (380 m) during winter evenings. The dust fall (coarse PM) deposition flux peaks during February and March, with a deposition velocity of 4.67 cm/s. The results are evaluated against monitored field values, revealing good agreement for the target air pollutants, with R-squared ranging from 0.72 to 0.96 for PM2.5, 0.71 to 0.82 for PM10, and 0.71 to 0.89 for NO2. The analyses illustrate that the algorithm presented in this model can be used to assess air quality for the mining site in a systematic way. MADM and CALPUFF modeled values are compared for four pollutants (PM2.5, PM10, TSP, and NO2) under three atmospheric stability classes (stable, neutral, and unstable). Further, MADM results are statistically tested against CALPUFF for the air pollutants, and model performance is found to be satisfactory.

  12. Data worth and prediction uncertainty for pesticide transport and fate models in Nebraska and Maryland, United States

    USDA-ARS?s Scientific Manuscript database

    Few studies have attempted to quantify mass balances of both pesticides and degradates in multiple agricultural settings of the United States. We used inverse modeling to calibrate the Root Zone Water Quality Model (RZWQM) for predicting the unsaturated-zone transport and fate of metolachlor, metola...

  13. Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool

    NASA Astrophysics Data System (ADS)

    Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.

    2018-06-01

    Air quality has significantly improved in Europe over the past few decades. Nonetheless, high concentrations are still measured, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA), to quantify uncertainty in the model output, and (2) sensitivity analysis (SA), to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.

  14. Predictive model for CO2 generation and decay in building envelopes

    NASA Astrophysics Data System (ADS)

    Aglan, Heshmat A.

    2003-01-01

    Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for identifying their indoor air quality, air change rates, percent fresh-air makeup, and occupancy pattern, and for determining how a variable-air-volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy-efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
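
    A standard single-zone mass balance of the kind such models build on is V dC/dt = G + Q(C_out - C), whose solution decays exponentially toward a steady state C_out + G/Q. The sketch below uses this textbook form with invented occupancy and ventilation numbers; it is not the paper's own model:

```python
import math

def co2_concentration(t_hours, c0, c_out, G, Q, V):
    """Well-mixed single-zone mass balance  V dC/dt = G + Q*(c_out - C).
    G: CO2 generation rate (m^3/h), Q: ventilation rate (m^3/h),
    V: zone volume (m^3); concentrations as volume fractions."""
    c_ss = c_out + G / Q  # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-Q * t_hours / V)

# Illustrative generation phase: 25 occupants at ~0.02 m^3/h CO2 each
# in a 500 m^3 space ventilated at 250 m^3/h, starting from 400 ppm
c = co2_concentration(t_hours=2.0, c0=400e-6, c_out=400e-6,
                      G=25 * 0.02, Q=250.0, V=500.0)
print(round(c * 1e6), "ppm after 2 hours of occupancy")
```

    Setting G to zero in the same expression gives the decay phase after occupants leave, so one formula covers both halves of the generation-decay cycle the abstract describes.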

  15. Failure of Colorectal Surgical Site Infection Predictive Models Applied to an Independent Dataset: Do They Add Value or Just Confusion?

    PubMed

    Bergquist, John R; Thiels, Cornelius A; Etzioni, David A; Habermann, Elizabeth B; Cima, Robert R

    2016-04-01

    Colorectal surgical site infections (C-SSIs) are a major source of postoperative morbidity. Institutional C-SSI rates are modeled and scrutinized, and there is increasing movement in the direction of public reporting. External validation of C-SSI risk prediction models is lacking. Factors governing C-SSI occurrence are complicated and multifactorial. We hypothesized that existing C-SSI prediction models have limited ability to accurately predict C-SSI in independent data. Colorectal resections identified from our institutional ACS-NSQIP dataset (2006 to 2014) were reviewed. The primary outcome was any C-SSI according to the ACS-NSQIP definition. Emergency cases were excluded. Published C-SSI risk scores: the National Nosocomial Infection Surveillance (NNIS); Contamination, Obesity, Laparotomy, and American Society of Anesthesiologists (ASA) class (COLA); Preventie Ziekenhuisinfecties door Surveillance (PREZIES); and NSQIP-based models were compared using receiver operating characteristic (ROC) analysis to evaluate discriminatory quality. There were 2,376 cases included, with an overall C-SSI rate of 9% (213 cases). None of the models produced reliable, high-quality C-SSI predictions. For any C-SSI, the NNIS c-index was 0.57, vs 0.61 for COLA, 0.58 for PREZIES, and 0.62 for NSQIP: all well below the minimum "reasonably" predictive c-index of 0.7. Predictions for superficial, deep, and organ-space SSI were similarly poor. Published C-SSI risk prediction models do not accurately predict C-SSI in our independent institutional dataset. Application of externally developed prediction models to any individual practice must be validated or modified to account for institution- and case-mix-specific factors. This calls into question the validity of using externally or nationally developed models for "expected" outcomes and interhospital comparisons. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
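
    The c-index reported for each score is the probability that a randomly chosen infected case receives a higher risk score than a randomly chosen uninfected case, with ties counted as half. A small self-contained sketch with invented scores and labels:

```python
from itertools import product

def c_index(scores, labels):
    """Concordance (ROC c-index): the fraction of (positive, negative)
    pairs in which the risk score ranks the positive case higher;
    ties contribute one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p, n in product(pos, neg))
    return concordant / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.55, 0.4, 0.3]   # hypothetical predicted risks
labels = [1,   0,   1,    0,   0]     # 1 = SSI occurred
print(c_index(scores, labels))  # 5 of 6 pairs concordant, ≈ 0.83
```

    A c-index of 0.5 is chance-level ranking, which is why the abstract treats the observed 0.57 to 0.62 values, well short of 0.7, as poor discrimination.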

  16. Using biotic ligand models to predict metal toxicity in mineralized systems

    USGS Publications Warehouse

    Smith, Kathleen S.; Balistrieri, Laurie S.; Todd, Andrew S.

    2015-01-01

    The biotic ligand model (BLM) is a numerical approach that couples chemical speciation calculations with toxicological information to predict the toxicity of aquatic metals. This approach was proposed as an alternative to expensive toxicological testing, and the U.S. Environmental Protection Agency incorporated the BLM into the 2007 revised aquatic life ambient freshwater quality criteria for Cu. Research BLMs for Ag, Ni, Pb, and Zn are also available, and many other BLMs are under development. Current BLMs are limited to ‘one metal, one organism’ considerations. Although the BLM generally is an improvement over previous approaches to determining water quality criteria, there are several challenges in implementing the BLM, particularly at mined and mineralized sites. These challenges include: (1) historically incomplete datasets for BLM input parameters, especially dissolved organic carbon (DOC), (2) several concerns about DOC, such as DOC fractionation in Fe- and Al-rich systems and differences in DOC quality that result in variations in metal-binding affinities, (3) water-quality parameters and resulting metal-toxicity predictions that are temporally and spatially dependent, (4) additional influences on metal bioavailability, such as multiple metal toxicity, dietary metal toxicity, and competition among organisms or metals, (5) potential importance of metal interactions with solid or gas phases and/or kinetically controlled reactions, and (6) tolerance to metal toxicity observed for aquatic organisms living in areas with elevated metal concentrations.

  17. Predicting water quality by relating secchi-disk transparency and chlorophyll a measurements to satellite imagery for Michigan Inland Lakes, August 2002

    USGS Publications Warehouse

    Fuller, L.M.; Aichele, Stephen S.; Minnerick, R.J.

    2004-01-01

    Inland lakes are an important economic and environmental resource for Michigan. The U.S. Geological Survey and the Michigan Department of Environmental Quality have been cooperatively monitoring the quality of selected lakes in Michigan through the Lake Water Quality Assessment program. Through this program, approximately 730 of Michigan's 11,000 inland lakes will be monitored once during the 15-year study. Targeted lakes are sampled during spring turnover and again in late summer to characterize water quality. Because more extensive and more frequent sampling is not economically feasible in the Lake Water Quality Assessment program, the U.S. Geological Survey and the Michigan Department of Environmental Quality investigated the use of satellite imagery as a means of estimating water quality in unsampled lakes. Satellite imagery has been successfully used in Minnesota, Wisconsin, and elsewhere to compute the trophic state of inland lakes from predicted secchi-disk measurements. Previous attempts of this kind in Michigan resulted in a poorer fit between observed and predicted data than was found for Minnesota or Wisconsin. This study tested whether estimates could be improved by using atmospherically corrected satellite imagery, whether a more appropriate regression model could be obtained for Michigan, and whether chlorophyll a concentrations could be reliably predicted from satellite imagery in order to compute the trophic state of inland lakes. Although the atmospheric correction did not significantly improve estimates of lake-water quality, a new regression equation was identified that consistently yielded better results than an equation obtained from the literature. A stepwise regression was used to determine an equation that accurately predicts chlorophyll a concentrations in northern Lower Michigan.

  18. On the Selection of Models for Runtime Prediction of System Resources

    NASA Astrophysics Data System (ADS)

    Casolari, Sara; Colajanni, Michele

    Applications and services delivered through large Internet data centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources, and dynamic migration. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need of systems to evaluate their own global behavior and change it when the evaluation indicates that they are not accomplishing what they were intended to do, or that some relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series coming from samples of monitored system resources, such as disk, CPU, and network utilization. In such environments, we have to address two main issues. First, the original time series offer limited predictability because the measurements are noisy, owing to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, no existing criteria help us choose a suitable prediction model and related parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model in a static or dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
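
    As a concrete instance of how the choice of a prediction-model parameter matters for noisy resource traces, the sketch below compares one-step-ahead exponentially weighted moving average (EWMA) forecasts with two smoothing factors on a synthetic CPU-utilization-like series. All names and numbers are illustrative, not from the chapter:

```python
import numpy as np

def ewma_forecast(series, alpha):
    """One-step-ahead EWMA predictions for a resource-utilization trace."""
    preds = np.empty_like(series)
    preds[0] = series[0]
    for t in range(1, len(series)):
        preds[t] = alpha * series[t - 1] + (1 - alpha) * preds[t - 1]
    return preds

rng = np.random.default_rng(3)
# Slowly drifting utilization plus heavy measurement jitter
trace = np.clip(0.5 + 0.2 * np.sin(np.linspace(0, 6, 300)) +
                rng.normal(0, 0.1, 300), 0, 1)

mse = {alpha: float(np.mean((trace[1:] - ewma_forecast(trace, alpha)[1:]) ** 2))
       for alpha in (0.2, 0.8)}
print(mse)  # the smaller alpha smooths the noise better on this trace
```

    On a jittery trace a small alpha wins, but on a fast-changing trace a large alpha would; this is exactly the model/parameter selection dilemma the chapter addresses, and why a dynamic choice can pay off.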

  19. Hydrologic modeling as a predictive basis for ecological restoration of salt marshes

    USGS Publications Warehouse

    Roman, C.T.; Garvine, R.W.; Portnoy, J.W.

    1995-01-01

    Roads, bridges, causeways, impoundments, and dikes in the coastal zone often restrict tidal flow to salt marsh ecosystems. A dike with tide control structures, located at the mouth of the Herring River salt marsh estuarine system (Wellfleet, Massachusetts) since 1908, has effectively restricted tidal exchange, causing changes in marsh vegetation composition, degraded water quality, and reduced abundance of fish and macroinvertebrate communities. Restoration of this estuary by reintroduction of tidal exchange is a feasible management alternative. However, restoration efforts must proceed with caution as residential dwellings and a golf course are located immediately adjacent to and in places within the tidal wetland. A numerical model was developed to predict tide height levels for numerous alternative openings through the Herring River dike. Given these model predictions and knowledge of elevations of flood-prone areas, it becomes possible to make responsible decisions regarding restoration. Moreover, tidal flooding elevations relative to the wetland surface must be known to predict optimum conditions for ecological recovery. The tide height model has a universal role, as demonstrated by successful application at a nearby salt marsh restoration site in Provincetown, Massachusetts. Salt marsh restoration is a valuable management tool toward maintaining and enhancing coastal zone habitat diversity. The tide height model presented in this paper will enable both scientists and resource professionals to assign a degree of predictability when designing salt marsh restoration programs.

  20. Longitudinal Prediction of Quality-of-Life Scores and Locomotion in Individuals With Traumatic Spinal Cord Injury.

    PubMed

    Hiremath, Shivayogi V; Hogaboom, Nathan S; Roscher, Melissa R; Worobey, Lynn A; Oyster, Michelle L; Boninger, Michael L

    2017-12-01

    To examine (1) differences in quality-of-life scores for groups based on transitions in locomotion status at 1, 5, and 10 years postdischarge in a sample of people with spinal cord injury (SCI); and (2) whether demographic factors and transitions in locomotion status can predict quality-of-life measures at these time points. Retrospective case study of the National SCI Database. Model SCI Systems Centers. Individuals with SCI (N=10,190) from 21 SCI Model Systems Centers, identified through the National SCI Model Systems Centers database between the years 1985 and 2012. Subjects had FIM (locomotion mode) data at discharge and at least 1 of the following: 1, 5, or 10 years postdischarge. Not applicable. FIM-locomotion mode; Severity of Depression Scale; Satisfaction With Life Scale; and Craig Handicap Assessment and Reporting Technique. Participants who transitioned from ambulation to wheelchair use reported lower participation and life satisfaction, and higher depression levels (P<.05) than those who maintained their ambulatory status. Participants who transitioned from ambulation to wheelchair use reported higher depression levels (P<.05) and no difference for participation (P>.05) or life satisfaction (P>.05) compared with those who transitioned from wheelchair to ambulation. Demographic factors and locomotion transitions predicted quality-of-life scores at all time points (P<.05). The results of this study indicate that transitioning from ambulation to wheelchair use can negatively impact psychosocial health 10 years after SCI. Clinicians should be aware of this when deciding on ambulation training. Further work to characterize who may be at risk for these transitions is needed. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
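
    The coupling described above, an emulator standing in for the expensive water-quality model inside a genetic algorithm with a dissolved-oxygen constraint, can be sketched in miniature. The `surrogate' here is a made-up linear stand-in rather than a trained ANN, and every number (DO response, penalty weight, population sizes) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate_do(release):
    """Hypothetical stand-in for an ANN emulating CE-QUAL-W2: maps a
    24-h release schedule to a minimum downstream DO (mg/L); in this
    toy, heavier releases depress DO."""
    return 8.0 - 4.0 * np.mean(release)

def power_value(release):
    """Hypothetical generation value of a 24-h release schedule."""
    return float(np.sum(release))

def fitness(release, do_limit=5.0):
    """Maximize generation, heavily penalizing DO-limit violations."""
    violation = max(0.0, do_limit - surrogate_do(release))
    return power_value(release) - 100.0 * violation

# Minimal genetic algorithm over 24-h schedules scaled to [0, 1]
pop = rng.random((40, 24))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # truncation selection
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.02, (40, 24))
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(round(surrogate_do(best), 2))  # converges near the DO limit
```

    The penalty pushes the population to the most aggressive schedule that still satisfies the surrogate-predicted DO limit, mirroring how the paper trades generation value against the 5-6 mg/L constraints.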

  2. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  3. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  4. Hydrometeorological model for streamflow prediction

    USGS Publications Warehouse

    Tangborn, Wendell V.

    1979-01-01

    The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominantly as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
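A toy version of the storage-based prediction step, with invented numbers in place of the Skokomish, Nisqually, and Cowlitz records: basin storage is approximated as accumulated precipitation minus accumulated gaged runoff, and seasonal streamflow is then predicted from storage by a least-squares line fit over past years.

```python
def storage(precip, runoff):
    """Absolute storage estimate: accumulated precipitation minus runoff."""
    return sum(precip) - sum(runoff)

def fit_line(x, y):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# One (precipitation, runoff) accumulation per historical year, paired with
# that year's observed seasonal flow (synthetic values).
years = [
    (storage([120, 90, 60], [40, 30, 20]), 150),
    (storage([100, 80, 50], [35, 25, 15]), 125),
    (storage([140, 110, 70], [50, 40, 25]), 175),
]
a, b = fit_line([s for s, _ in years], [q for _, q in years])

this_year = storage([130, 100, 65], [45, 35, 22])
print(round(a + b * this_year, 1))   # predicted seasonal streamflow → 163.0
```

The pre-forecast test season described in the manual would revise `this_year`'s storage estimate before the final prediction is issued.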

  5. Survival Regression Modeling Strategies in CVD Prediction.

    PubMed

    Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-04-01

    A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors to prediction models could be directly translated to added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested while comparing the predictive performances of the predictive models with and without novel biomarkers. User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. We have written Stata commands that are intended to help researchers obtain the following: 1, the Nam-D'Agostino χ2 goodness of fit test; 2, cut point-free and cut point-based net reclassification improvement index (NRI), relative and absolute integrated discriminatory improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine if information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve predictive performance of Framingham's general CVD risk algorithm. The command is adpredsurv for survival models. Herein we have described the Stata package "adpredsurv" for calculation of the Nam-D'Agostino χ2 goodness of fit test as well as cut point-free and cut point-based NRI, relative and absolute IDI, and survival-based regression analyses.
We hope this

  6. Prediction of blood-brain partitioning: a model based on molecular electronegativity distance vector descriptors.

    PubMed

    Zhang, Yong-Hong; Xia, Zhi-Ning; Qin, Li-Tang; Liu, Shu-Shen

    2010-09-01

    The objective of this paper is to build a reliable model based on the molecular electronegativity distance vector (MEDV) descriptors for predicting the blood-brain barrier (BBB) permeability and to reveal the effects of the molecular structural segments on the BBB permeability. Using 70 structurally diverse compounds, the partial least squares regression (PLSR) models between the BBB permeability and the MEDV descriptors were developed and validated by the variable selection and modeling based on prediction (VSMP) technique. The estimation ability, stability, and predictive power of a model are evaluated by the estimated correlation coefficient (r), leave-one-out (LOO) cross-validation correlation coefficient (q), and predictive correlation coefficient (R(p)). It has been found that the PLSR model has good quality, r=0.9202, q=0.7956, and R(p)=0.6649 for the M1 model based on the training set of 57 samples. To search for the most important structural factors affecting the BBB permeability of compounds, we performed variable importance in projection (VIP) analysis on the MEDV descriptors. It was found that some structural fragments in compounds, such as -CH(3), -CH(2)-, =CH-, =C, ≡C-, -CH<, =C<, =N-, -NH-, =O, and -OH, are the most important factors affecting the BBB permeability. (c) 2010. Published by Elsevier Inc.
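The three statistics named above can be illustrated with a plain least-squares line standing in for PLSR: r measures the training fit, q comes from leave-one-out (LOO) cross-validation, and R(p) would be computed the same way on a held-out test set. All numbers below are synthetic, not the MEDV/BBB data.

```python
def fit(xs, ys):
    """Least-squares line; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def corr(u, v):
    """Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

train_x = [0.5, 1.1, 1.9, 2.4, 3.0, 3.8, 4.5, 5.1]
train_y = [0.4, 1.3, 1.7, 2.6, 2.9, 4.0, 4.3, 5.2]

a, b = fit(train_x, train_y)
r = corr([a + b * x for x in train_x], train_y)   # estimation ability

# q: refit with each sample left out in turn, predict the held-out sample
loo_preds = []
for i in range(len(train_x)):
    xs = train_x[:i] + train_x[i + 1:]
    ys = train_y[:i] + train_y[i + 1:]
    ai, bi = fit(xs, ys)
    loo_preds.append(ai + bi * train_x[i])
q = corr(loo_preds, train_y)                      # stability (LOO)

print(round(r, 3), round(q, 3))
```

In the paper the same scheme is applied to the multivariate PLSR model, where the gap between r and q signals overfitting.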

  7. Prediction modelling for trauma using comorbidity and 'true' 30-day outcome.

    PubMed

    Bouamra, Omar; Jacques, Richard; Edwards, Antoinette; Yates, David W; Lawrence, Thomas; Jenks, Tom; Woodford, Maralyn; Lecky, Fiona

    2015-12-01

    Prediction models for trauma outcome routinely control for age but there is uncertainty about the need to control for comorbidity and whether the two interact. This paper describes recent revisions to the Trauma Audit and Research Network (TARN) risk adjustment model designed to take account of age and comorbidities. In addition, linkage between TARN and the Office of National Statistics (ONS) database allows patients' outcomes to be accurately identified up to 30 days after injury. Outcome at discharge within 30 days was previously used. Prospectively collected data between 2010 and 2013 from the TARN database were analysed. The data for modelling consisted of 129 786 hospital trauma admissions. Three models were compared using the area under the receiver operating characteristic curve (AuROC) for assessing the ability of the models to predict outcome, and the Akaike information criterion to measure model quality and test for goodness-of-fit and calibration. Model 1 is the current TARN model, Model 2 is Model 1 augmented by a modified Charlson comorbidity index and Model 3 is Model 2 with ONS data on 30 day outcome. The values of the AuROC curve for Model 1 were 0.896 (95% CI 0.893 to 0.899), for Model 2 were 0.904 (0.900 to 0.907) and for Model 3 were 0.897 (0.896 to 0.902). No significant interaction was found between age and comorbidity in Model 2 or in Model 3. The new model includes comorbidity and this has improved outcome prediction. There was no interaction between age and comorbidity, suggesting that both independently increase vulnerability to mortality after injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
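The AuROC comparison used to rank the three models can be illustrated with the Mann-Whitney rank formulation of AUC: the fraction of (positive, negative) pairs the model orders correctly. Scores and labels below are invented, not TARN data; "augmented" plays the role of Model 2 with the comorbidity index added.

```python
def auroc(scores, labels):
    """AUC via the Mann-Whitney statistic (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
base = [0.1, 0.3, 0.2, 0.8, 0.4, 0.35, 0.6, 0.2, 0.9, 0.3]      # "Model 1"
augmented = [0.1, 0.2, 0.2, 0.9, 0.3, 0.8, 0.7, 0.2, 0.9, 0.2]  # "+ comorbidity"

print(auroc(base, labels), auroc(augmented, labels))
```

As in the paper, a small AUC gain (here the augmented scores separate the classes more cleanly) must still be weighed against model complexity, which is what the Akaike criterion adds to the comparison.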

  8. Air Quality Modeling | Air Quality Planning & Standards | US ...

    EPA Pesticide Factsheets

    2016-06-08

    The basic mission of the Office of Air Quality Planning and Standards is to preserve and improve the quality of our nation's air. One facet of accomplishing this goal requires that new and existing air pollution sources be modeled for compliance with the National Ambient Air Quality Standards (NAAQS).

  9. An approach to predict water quality in data-sparse catchments using hydrological catchment similarity

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Glendell, Miriam; Stutter, Marc I.; Helliwell, Rachel C.

    2017-04-01

    An understanding of catchment response to climate and land use change at a regional scale is necessary for the assessment of mitigation and adaptation options addressing diffuse nutrient pollution. It is well documented that the physicochemical properties of a river ecosystem respond to change in a non-linear fashion. This is particularly important when threshold water concentrations, relevant to national and EU legislation, are exceeded. Large scale (regional) model assessments required for regulatory purposes must represent the key processes and mechanisms that are more readily understood in catchments with water quantity and water quality data monitored at high spatial and temporal resolution. While daily discharge data are available for most catchments in Scotland, nitrate and phosphorus are mostly available on a monthly basis only, as typified by regulatory monitoring. However, high resolution (hourly to daily) water quantity and water quality data exist for a limited number of research catchments. To successfully implement adaptation measures across Scotland, an upscaling from data-rich to data-sparse catchments is required. In addition, the widespread availability of spatial datasets affecting hydrological and biogeochemical responses (e.g. soils, topography/geomorphology, land use, vegetation etc.) provide an opportunity to transfer predictions between data-rich and data-sparse areas by linking processes and responses to catchment attributes. Here, we develop a framework of catchment typologies as a prerequisite for transferring information from data-rich to data-sparse catchments by focusing on how hydrological catchment similarity can be used as an indicator of grouped behaviours in water quality response. As indicators of hydrological catchment similarity we use flow indices derived from observed discharge data across Scotland as well as hydrological model parameters. 
For the latter, we calibrated the lumped rainfall-runoff model TUWModel using multiple

  10. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China.

    PubMed

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

    Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentrations of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentrations were determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration against turbidity provided a good fit, with R(2) = 0.86-0.93 for 72 data sets collected in the industrial river and R(2) = 0.60-0.85 for 60 data sets collected in the cleaner river. All the regressions showed good linear relationships, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids and that their concentrations can be approximated using the regression equations. Thus, the linear regression equations were applied to estimate metal concentrations using online turbidity data from January 1 to June 30 in 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fates of total suspended solids; in addition, metal concentrations downstream of the two rivers were predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction process indicated that exploring the relationship between metal concentrations and turbidity might be an effective technique for efficient estimation and prediction of metal concentrations, facilitating long-term monitoring with high temporal and spatial density.
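A hedged sketch of the estimation step: fit metal concentration against turbidity by ordinary least squares, then check the relative error of the estimates against the paper's 30% band. The turbidity and lead values are synthetic; the study reports R(2) of 0.60-0.93 on real data.

```python
def fit(x, y):
    """Least-squares line; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return my - slope * mx, slope

turbidity = [12, 25, 40, 55, 70, 90]       # NTU (invented)
lead = [0.8, 1.6, 2.5, 3.3, 4.2, 5.4]      # ug/L, roughly linear in turbidity

a, b = fit(turbidity, lead)
estimates = [a + b * t for t in turbidity]
rel_errors = [abs(e - m) / m for e, m in zip(estimates, lead)]
print(all(err < 0.30 for err in rel_errors))  # within the paper's 30% band?
```

In the paper this regression is then driven by online turbidity readings, with WASP supplying the suspended-solids transport downstream.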

  11. Predicting Causes of Data Quality Issues in a Clinical Data Research Network.

    PubMed

    Khare, Ritu; Ruth, Byron J; Miller, Matthew; Tucker, Joshua; Utidjian, Levon H; Razzaghi, Hanieh; Patibandla, Nandan; Burrows, Evanette K; Bailey, L Charles

    2018-01-01

    Clinical data research networks (CDRNs) invest substantially in identifying and investigating data quality problems. While identification is largely automated, the investigation and resolution are carried out manually at individual institutions. In the PEDSnet CDRN, we found that only approximately 35% of the identified data quality issues are resolvable as they are caused by errors in the extract-transform-load (ETL) code. Nonetheless, with no prior knowledge of issue causes, partner institutions end up spending significant time investigating issues that represent either inherent data characteristics or false alarms. This work investigates whether the causes (ETL, Characteristic, or False alarm) can be predicted before spending time investigating issues. We trained a classifier on the metadata from 10,281 real-world data quality issues, and achieved a cause prediction F1-measure of up to 90%. While initially tested on PEDSnet, the proposed methodology is applicable to other CDRNs facing similar bottlenecks in handling data quality results.
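A toy version of the cause-prediction task: classify issues as ETL, Characteristic, or False alarm from issue metadata and score with per-class F1. A rule-based function stands in for the paper's trained classifier; the metadata fields and rules are invented for illustration.

```python
def predict(issue):
    # hypothetical rules over issue metadata
    if issue["check_type"] == "value_out_of_range" and issue["rows_affected"] < 5:
        return "False alarm"
    if issue["new_since_last_etl"]:
        return "ETL"
    return "Characteristic"

def f1(gold, pred, label):
    """Per-class F1 from true/false positives and false negatives."""
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

issues = [
    ({"check_type": "value_out_of_range", "rows_affected": 2,   "new_since_last_etl": False}, "False alarm"),
    ({"check_type": "missing_field",      "rows_affected": 900, "new_since_last_etl": True},  "ETL"),
    ({"check_type": "missing_field",      "rows_affected": 40,  "new_since_last_etl": False}, "Characteristic"),
    ({"check_type": "value_out_of_range", "rows_affected": 300, "new_since_last_etl": True},  "ETL"),
]
gold = [label for _, label in issues]
pred = [predict(meta) for meta, _ in issues]
print([round(f1(gold, pred, c), 2) for c in ("ETL", "Characteristic", "False alarm")])
```

The payoff described in the abstract is triage: only issues predicted as ETL need a code investigation at the partner site.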

  12. An expert system for water quality modelling.

    PubMed

    Booty, W G; Lam, D C; Bobba, A G; Wong, I; Kay, D; Kerby, J P; Bowen, G S

    1992-12-01

    The RAISON-micro (Regional Analysis by Intelligent System ON a micro-computer) expert system is being used to predict the effects of mine effluents on receiving waters in Ontario. The potential of this system to assist regulatory agencies and mining industries to define more acceptable effluent limits was shown in an initial study. This system has been further developed so that the expert system helps the model user choose the most appropriate model for a particular application from a hierarchy of models. The system currently contains seven models which range from steady state to time dependent models, for both conservative and nonconservative substances in rivers and lakes. The menu driven expert system prompts the model user for information such as the nature of the receiving water system, the type of effluent being considered, and the range of background data available for use as input to the models. The system can also be used to determine the nature of the environmental conditions at the site which are not available in the textual information database, such as the components of river flow. Applications of the water quality expert system are presented for representative mine sites in the Timmins area of Ontario.

  13. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    ERIC Educational Resources Information Center

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  14. [Improving apple fruit quality predictions by effective correction of Vis-NIR laser diffuse reflecting images].

    PubMed

    Qing, Zhao-shen; Ji, Bao-ping; Shi, Bo-lin; Zhu, Da-zhou; Tu, Zhen-hua; Zude, Manuela

    2008-06-01

    In the present study, improved laser-induced light backscattering imaging was studied regarding its potential for analyzing apple SSC and fruit flesh firmness. Images of the diffuse reflection of light on the fruit surface were obtained from Fuji apples using laser diodes emitting at five wavelength bands (680, 780, 880, 940 and 980 nm). Image processing algorithms were tested to correct for dissimilar equator and shape of fruit, and partial least squares (PLS) regression analysis was applied to calibrate on the fruit quality parameter. In comparison to the models built from raw data, the calibration based on corrected frequency improved r from 0.78 to 0.80 and from 0.87 to 0.89 for predicting SSC and firmness, respectively. Comparing models based on mean value of intensities with results obtained by frequency of intensities, the latter gave higher performance for predicting Fuji SSC and firmness. Comparing the calibration for predicting SSC based on the corrected frequency of intensities with the results obtained from the raw data set, the former improved the root mean square error of prediction (RMSEP) from 1.28 to 0.84 degrees Brix. On the other hand, in comparison to models for analyzing flesh firmness built from the corrected frequency of intensities with the calibrations based on raw data, the former gave an improvement in RMSEP from 8.23 to 6.17 N x cm(-2).

  15. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  16. Calculating Path-Dependent Travel Time Prediction Variance and Covariance from a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
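The final step described above, summing the model covariance along two ray paths, reduces to a quadratic form: with a model covariance matrix C and each ray expressed as a sensitivity vector g (path length through each model cell), the travel-time covariance is g1' C g2, and the single-path uncertainty is sqrt(g' C g). The 3-node matrix and ray vectors below are invented for illustration.

```python
import math

C = [  # toy model covariance for a 3-node model
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
]

def tt_covariance(g1, g2):
    """Travel-time covariance between two rays: g1' C g2."""
    return sum(g1[i] * C[i][j] * g2[j]
               for i in range(len(g1)) for j in range(len(g2)))

ray_a = [1.0, 2.0, 0.5]   # path lengths through nodes 0..2
ray_b = [0.0, 1.5, 2.0]

cov_ab = tt_covariance(ray_a, ray_b)
sigma_a = math.sqrt(tt_covariance(ray_a, ray_a))   # single-path uncertainty
print(round(cov_ab, 4), round(sigma_a, 4))
```

At SALSA3D scale the same quadratic form is evaluated blockwise against the out-of-core covariance matrix rather than in memory.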

  17. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  18. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  19. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
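The cluster-then-predict idea behind these models can be sketched with a deliberately simplified case: EM for a two-component Poisson mixture without covariates (the paper's models additionally regress each component's rate on predictors). The data are synthetic counts drawn from a low-risk and a high-risk group.

```python
import math
import random

random.seed(7)

def sample_poisson(lam):
    """Knuth's method for sampling a Poisson variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# synthetic counts: low-risk (rate 2) and high-risk (rate 10) groups
data = [sample_poisson(2) for _ in range(200)] + \
       [sample_poisson(10) for _ in range(200)]

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# EM for a two-component Poisson mixture
w, lam = [0.5, 0.5], [1.0, 5.0]
for _ in range(50):
    # E-step: responsibility of component 0 for each observation
    resp = []
    for x in data:
        p0 = w[0] * pois_pmf(x, lam[0])
        p1 = w[1] * pois_pmf(x, lam[1])
        resp.append(p0 / (p0 + p1))
    # M-step: update weights and component rates
    n0 = sum(resp)
    n1 = len(data) - n0
    lam = [sum(r * x for r, x in zip(resp, data)) / n0,
           sum((1 - r) * x for r, x in zip(resp, data)) / n1]
    w = [n0 / len(data), n1 / len(data)]

print(sorted(round(l, 1) for l in lam))  # recovered component rates
```

The responsibilities here are exactly the high-/low-risk cluster assignments the abstract refers to; the concomitant-variable models make the weights themselves depend on covariates.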

  20. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  1. A Global Model for Bankruptcy Prediction

    PubMed Central

    Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810

  2. Shelf-life prediction models for ready-to-eat fresh cut salads: Testing in real cold chain.

    PubMed

    Tsironi, Theofania; Dermesonlouoglou, Efimia; Giannoglou, Marianna; Gogou, Eleni; Katsaros, George; Taoukis, Petros

    2017-01-02

    The aim of the study was to develop and test the applicability of predictive models for shelf-life estimation of ready-to-eat (RTE) fresh cut salads in realistic distribution temperature conditions in the food supply chain. A systematic kinetic study of quality loss of RTE mixed salad (lollo rosso lettuce-40%, lollo verde lettuce-45%, rocket-15%) packed under modified atmospheres (3% O2, 10% CO2, 87% N2) was conducted. Microbial population (total viable count, Pseudomonas spp., lactic acid bacteria), vitamin C, colour and texture were the measured quality parameters. Kinetic models for these indices were developed to determine the quality loss and calculate product remaining shelf-life (SLR). Storage experiments were conducted at isothermal (2.5-15°C) and non-isothermal temperature conditions (Teff=7.8°C, defined as the constant temperature that results in the same quality value as the variable temperature distribution) for validation purposes. Pseudomonas dominated spoilage, followed by browning and chemical changes. The end of shelf-life correlated with a Pseudomonas spp. level of 8 log(cfu/g), and 20% loss of the initial vitamin C content. The effect of temperature on these quality parameters was expressed by the Arrhenius equation; the activation energy (Ea) value was 69.1 and 122.6 kJ/mol for Pseudomonas spp. growth and vitamin C loss rates, respectively. Shelf-life prediction models were also validated in real cold chain conditions (including the stages of transport to and storage at retail distribution center, transport to and display at 7 retail stores, transport to and storage in domestic refrigerators). The quality level and SLR estimated after 2-3 days of domestic storage (time of consumption) ranged between 1 and 8 days at 4°C and was predicted within satisfactory statistical error by the kinetic models. Teff in the cold chain ranged between 3.7 and 8.3°C. Using the validated models, SLR of RTE fresh cut salad can be estimated at any point of
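The Arrhenius treatment described above can be sketched as follows: the quality-loss rate at temperature T is k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)), and shelf life scales inversely with the rate. The Ea of 69.1 kJ/mol for Pseudomonas growth is from the study; the reference rate and 8-day shelf life at 4°C are assumed placeholders.

```python
import math

R = 8.314        # J/(mol K)
EA = 69.1e3      # J/mol, Pseudomonas spp. growth (from the study)
K_REF = 0.40     # 1/day at T_ref, illustrative only
T_REF = 277.15   # 4 C in kelvin

def rate(temp_c):
    """Arrhenius rate relative to the 4 C reference."""
    t = temp_c + 273.15
    return K_REF * math.exp(-EA / R * (1.0 / t - 1.0 / T_REF))

# Shelf life scales inversely with rate; compare 4 C storage with the
# effective chain temperature of 7.8 C used in the validation experiments.
sl_4 = 8.0                       # days at 4 C (illustrative)
sl_eff = sl_4 * rate(4.0) / rate(7.8)
print(round(rate(7.8) / rate(4.0), 2), round(sl_eff, 1))
```

Even the modest warming from 4°C to the 7.8°C effective chain temperature cuts the remaining shelf life substantially, which is why the paper validates against real cold-chain temperature histories.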

  3. Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Hoffman, F. E.

    1984-01-01

    This study had two objectives: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of various payload classes on the cost of reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability-enhancing activities.

  4. What Air Quality Models Tell Us About Sources and Sinks of Atmospheric Aldehydes

    NASA Astrophysics Data System (ADS)

    Luecken, D.; Hutzell, W. T.; Phillips, S.

    2010-12-01

    Atmospheric aldehydes play important roles in several aspects of air quality: they are critical radical sources that drive ozone formation, they are hazardous air pollutants that are national drivers for cancer risk, they participate in aqueous chemistry and potentially aerosol formation, and are key species for evaluating the accuracy of isoprene emissions. For these reasons, it is important to accurately understand their sources and sinks, and the sensitivity of their concentrations to emission controls. While both compounds have been included in air quality modeling for many years, current, state-of-the-science chemical mechanisms have difficulty reproducing measured values of aldehydes, which calls into question the robustness of ozone, HAPs and aerosol predictions. In the past, we have attributed discrepancies to measurement errors, inventory errors, or the focus on high-NOx urban regimes. Despite improvements in all of these areas, the measurements still diverge from model predictions, with formaldehyde often underpredicted by 50% and acetaldehyde showing a large degree of scatter - from 20% overprediction to 50% underprediction. To better examine the sources of aldehydes, we implemented the new SAPRC07T mechanism in the Community Multi-Scale Air Quality (CMAQ) model. This mechanism incorporates current recommendations for kinetic data and has the most detailed representation of product formation under a wide variety of conditions of any mechanism used in regional air quality models. We use model simulations to pinpoint where and when aldehyde concentrations tend to deviate from measurements. We demonstrate the role of secondary production versus primary emissions in aldehyde concentrations and find that secondary sources produce the largest deviations from measurements. We identify which VOCs are most responsible for aldehyde secondary production in the areas of the U.S. 
where the largest health effects are seen, and discuss how this affects consideration of

  5. RECURSIVE PROTEIN MODELING: A DIVIDE AND CONQUER STRATEGY FOR PROTEIN STRUCTURE PREDICTION AND ITS CASE STUDY IN CASP9

    PubMed Central

    CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN

    2013-01-01

After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
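The divide-and-conquer recursion described above can be sketched abstractly. In the toy illustration below, substring matching stands in for template identification and alignment, and all names are hypothetical; it is only meant to show the control flow of recursing on template-free flanks around a template-covered core:

```python
# Toy sketch of the recursive divide-and-conquer idea: regions covered by a
# "template" get template-based modeling; uncovered flanks are recursed on
# and fall back to template-free modeling.  Substring matching is only a
# stand-in for template identification and alignment.
def recursive_model(seq, templates):
    for t in templates:
        i = seq.find(t)
        if i >= 0:
            left = recursive_model(seq[:i], templates)            # uncertain
            right = recursive_model(seq[i + len(t):], templates)  # uncertain
            return left + [("template-based", t)] + right         # certain
    return [("template-free", seq)] if seq else []

plan = recursive_model("ABCXYZDEF", ["XYZ"])
print(plan)
```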

  6. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. However, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
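The four protocol steps lend themselves to a compact sketch. The following minimal version uses synthetic data; the correlation threshold and sum-based edge summarization are simplifications chosen for illustration, not the published protocol's exact settings, and steps 1–3 run inside a leave-one-out cross-validation loop:

```python
import numpy as np

def cpm_train(conn, behavior, threshold=0.2):
    """Steps 1-3 of CPM: select edges correlated with behavior,
    summarize them per subject, and fit a linear model."""
    # 1) feature selection: correlate each edge with the behavioral score
    r = np.array([np.corrcoef(conn[:, j], behavior)[0, 1]
                  for j in range(conn.shape[1])])
    selected = r > threshold
    # 2) feature summarization: one number per subject (sum of selected edges)
    summary = conn[:, selected].sum(axis=1)
    # 3) model building: ordinary least-squares line
    slope, intercept = np.polyfit(summary, behavior, 1)
    return selected, slope, intercept

def cpm_predict(conn, selected, slope, intercept):
    return slope * conn[:, selected].sum(axis=1) + intercept

# Leave-one-out cross-validation over synthetic subjects; 5 of 100 "edges"
# genuinely carry signal about the behavioral score.
rng = np.random.default_rng(0)
edges = rng.normal(size=(40, 100))
behavior = edges[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=40)
preds = np.empty(40)
for i in range(40):
    train = np.arange(40) != i
    sel, a, b = cpm_train(edges[train], behavior[train])
    preds[i] = cpm_predict(edges[i:i + 1], sel, a, b)[0]
# 4) assess prediction: correlate predicted with observed scores
r_pred = np.corrcoef(preds, behavior)[0, 1]
print(round(r_pred, 2))
```

In the full protocol, step 4 would compare this prediction-observation correlation against a null distribution built by permuting the behavioral scores.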

  7. Evaluation of a new CNRM-CM6 model version for seasonal climate predictions

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Ardilouze, Constantin; Batté, Lauriane; Dorel, Laurant; Guérémy, Jean-François; Déqué, Michel

    2017-04-01

This work presents the quality assessment of a new version of the Météo-France coupled climate prediction system, which has been developed in the EU COPERNICUS Climate Change Services framework to carry out seasonal forecasts. The system is based on the CNRM-CM6 model, with Arpege-Surfex 6.2.2 as the atmosphere/land component and Nemo 3.2 as the ocean component, which directly embeds the sea-ice component Gelato 6.0. In order to obtain robust diagnostics, the experiment is composed of 60 ensemble members generated with stochastic dynamic perturbations. The experiment was performed over a 37-year re-forecast period from 1979 to 2015, with two start dates per year, on May 1st and November 1st. The predictive skill of the model is evaluated from two perspectives: on the one hand, the ability of the model to faithfully respond to positive or negative ENSO, NAO and QBO events, independently of the predictability of these events. This assessment is carried out through a composite analysis, and shows that the model succeeds in reproducing the main patterns of 2-meter temperature, precipitation and geopotential height at 500 hPa during the winter season. On the other hand, the model's predictive skill for the same events (positive and negative ENSO, NAO and QBO) is evaluated.
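A composite analysis of this kind averages the forecast field over the years falling in each event category and differences the composites. A minimal numpy sketch follows; the field, the index, and the quartile thresholds are all synthetic stand-ins chosen for illustration:

```python
import numpy as np

# Synthetic stand-in: winter 2-m temperature anomalies for 37 re-forecast
# years on a small grid, plus a climate index (e.g. an ENSO index) per year.
rng = np.random.default_rng(1)
t2m = rng.normal(size=(37, 4, 8))            # (year, lat, lon) anomalies
index = rng.normal(size=37)                  # yearly event index
t2m += 0.8 * index[:, None, None]            # impose a known signal

def composite_difference(field, idx, q=0.25):
    """Mean field over high-index years minus mean over low-index years."""
    lo, hi = np.quantile(idx, [q, 1 - q])
    return field[idx >= hi].mean(axis=0) - field[idx <= lo].mean(axis=0)

diff = composite_difference(t2m, index)
print(diff.shape, round(float(diff.mean()), 2))
```

Because the positive-minus-negative composite isolates the imposed signal, the difference map should be clearly positive everywhere the signal was added.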

  8. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.

    1986-01-01

The objectives of this program are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and developing and verifying life prediction model(s) for the predominant failure mode of a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal, life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.

  9. Local environmental quality positively predicts breastfeeding in the UK’s Millennium Cohort Study

    PubMed Central

    Sear, Rebecca

    2017-01-01

Background and Objectives: Breastfeeding is an important form of parental investment with clear health benefits. Despite this, rates remain low in the UK; understanding variation can therefore help improve interventions. Life history theory suggests that environmental quality may pattern maternal investment, including breastfeeding. We analyse a nationally representative dataset to test two predictions: (i) higher local environmental quality predicts higher likelihood of breastfeeding initiation and longer duration; (ii) higher socioeconomic status (SES) provides a buffer against the adverse influences of low local environmental quality. Methodology: We ran factor analysis on a wide range of local-level environmental variables. Two summary measures of local environmental quality were generated by this analysis—one ‘objective’ (based on an independent assessor’s neighbourhood scores) and one ‘subjective’ (based on respondent’s scores). We used mixed-effects regression techniques to test our hypotheses. Results: Higher objective, but not subjective, local environmental quality predicts higher likelihood of starting and maintaining breastfeeding over and above individual SES and area-level measures of environmental quality. Higher individual SES is protective, with women from high-income households having relatively high breastfeeding initiation rates and those with high status jobs being more likely to maintain breastfeeding, even in poor environmental conditions. Conclusions and Implications: Environmental quality is often vaguely measured; here we present a thorough investigation of environmental quality at the local level, controlling for individual- and area-level measures. Our findings support a shift in focus away from individual factors and towards altering the landscape of women’s decision-making contexts when considering behaviours relevant to public health. PMID:29354262
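Deriving a single summary score from many correlated neighbourhood indicators can be sketched with a principal-component score, used here as a simple stand-in for the paper's factor analysis; the indicators and numbers below are synthetic:

```python
import numpy as np

# Synthetic neighbourhood data: four indicators driven by one latent
# "environmental quality" factor (names and numbers are illustrative).
rng = np.random.default_rng(3)
latent = rng.normal(size=200)
indicators = latent[:, None] + rng.normal(scale=0.5, size=(200, 4))

# Standardize, then take the leading principal component as the summary score
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
score = z @ eigvecs[:, -1]        # loadings of the largest eigenvalue

recovered = abs(np.corrcoef(score, latent)[0, 1])
print(round(recovered, 2))        # the score should track the latent factor
```

The resulting score would then enter a mixed-effects regression as a predictor, with area-level random effects, as in the study's analysis.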

  10. Modelling the impacts of agricultural management practices on river water quality in Eastern England.

    PubMed

    Taylor, Sam D; He, Yi; Hiscock, Kevin M

    2016-09-15

Agricultural diffuse water pollution remains a notable global pressure on water quality, posing risks to aquatic ecosystems, human health and water resources and as a result legislation has been introduced in many parts of the world to protect water bodies. Due to their efficiency and cost-effectiveness, water quality models have been increasingly applied to catchments as Decision Support Tools (DSTs) to identify mitigation options that can be introduced to reduce agricultural diffuse water pollution and improve water quality. In this study, the Soil and Water Assessment Tool (SWAT) was applied to the River Wensum catchment in eastern England with the aim of quantifying the long-term impacts of potential changes to agricultural management practices on river water quality. Calibration and validation were successfully performed at a daily time-step against observations of discharge, nitrate and total phosphorus obtained from high-frequency water quality monitoring within the Blackwater sub-catchment, covering an area of 19.6 km². A variety of mitigation options were identified and modelled, both singly and in combination, and their long-term effects on nitrate and total phosphorus losses were quantified together with the 95% uncertainty range of model predictions. Results showed that introducing a red clover cover crop to the crop rotation scheme applied within the catchment reduced nitrate losses by 19.6%. Buffer strips of 2 m and 6 m width represented the most effective options to reduce total phosphorus losses, achieving reductions of 12.2% and 16.9%, respectively. This is one of the first studies to quantify the impacts of agricultural mitigation options on long-term water quality for nitrate and total phosphorus at a daily resolution, in addition to providing an estimate of the uncertainties of those impacts. The results highlighted the need to consider multiple pollutants, the degree of uncertainty associated with model predictions and the risk of

  11. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    NASA Astrophysics Data System (ADS)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.
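Nudging adds a relaxation term proportional to the observation-minus-model difference to the model tendencies. A toy scalar illustration follows; the model tendency, the nudging coefficient g, and the "observed" value are all made up to show the mechanism, not taken from RAMS:

```python
# Newtonian relaxation ("nudging"): add a term g * (obs - x) that pulls the
# model state toward an observed value.  Toy scalar model for illustration.
def integrate(nudge, g=0.5, dt=0.1, steps=100, x0=10.0, x_obs=2.0):
    x = x0
    for _ in range(steps):
        tendency = -0.1 * x               # free model decays toward zero
        if nudge:
            tendency += g * (x_obs - x)   # relaxation toward the observation
        x += dt * tendency
    return x

free = integrate(nudge=False)    # free run drifts away from the observation
nudged = integrate(nudge=True)   # nudged run stays close to it
print(round(free, 2), round(nudged, 2))
```

In a real mesoscale application the same term is applied per grid point and per variable, with g controlling how strongly analyses or station data constrain the solution.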

  12. Identify High-Quality Protein Structural Models by Enhanced K-Means.

    PubMed

    Wu, Hongjie; Li, Haiou; Jiang, Min; Chen, Cheng; Lv, Qiang; Wu, Chuang

    2017-01-01

Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means.
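The two variants differ only in how the starting centroids are chosen. The sketch below shows K-means with K-means++ seeding on toy 2-D points standing in for decoy features (SPICKER-based seeding is not reproduced here, and the data are synthetic):

```python
import numpy as np

def kmeans_pp_init(points, k, rng):
    """K-means++ seeding: each new centroid is drawn with probability
    proportional to its squared distance from the nearest chosen one."""
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(points, k, rng)
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                      # assignment step
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])         # update step
    return centers, labels

# Three well-separated "clusters" of toy 2-D decoy features
rng = np.random.default_rng(2)
pts = np.concatenate([rng.normal(loc=m, scale=0.3, size=(50, 2))
                      for m in ([0, 0], [5, 0], [0, 5])])
centers, labels = kmeans(pts, 3)
print(sorted(np.round(centers.sum(axis=1), 1)))
```

Seeding with spread-out centroids makes the subsequent Lloyd iterations far less likely to merge two true clusters, which is the robustness property the paper exploits.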

  13. Identify High-Quality Protein Structural Models by Enhanced K-Means

    PubMed Central

    Li, Haiou; Chen, Cheng; Lv, Qiang; Wu, Chuang

    2017-01-01

    Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means. PMID:28421198

  14. The Application of Satellite-Derived, High-Resolution Land Use/Land Cover Data to Improve Urban Air Quality Model Forecasts

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.; Lapenta, W. M.; Crosson, W. L.; Estes, M. G., Jr.; Limaye, A.; Kahn, M.

    2006-01-01

Local and state agencies are responsible for developing state implementation plans to meet National Ambient Air Quality Standards. Numerical models used for this purpose simulate the transport and transformation of criteria pollutants and their precursors. The specification of land use/land cover (LULC) plays an important role in controlling modeled surface meteorology and emissions. NASA researchers have worked with partners and Atlanta stakeholders to incorporate an improved high-resolution LULC dataset for the Atlanta area within their modeling system and to assess meteorological and air quality impacts of Urban Heat Island (UHI) mitigation strategies. The new LULC dataset provides a more accurate representation of land use, has the potential to improve model accuracy, and facilitates prediction of LULC changes. Use of the new LULC dataset for two summertime episodes improved meteorological forecasts, with an existing daytime cold bias of approximately 3 °C reduced by 30%. Model performance for ozone prediction did not show improvement. In addition, LULC changes due to Atlanta area urbanization were predicted through 2030, for which model simulations predict higher urban air temperatures. The incorporation of UHI mitigation strategies partially offset this warming trend. The data and modeling methods used are generally applicable to other U.S. cities.

  15. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors

    DTIC Science & Technology

    2015-07-15

Long-term effects on cancer survivors’ quality of life of physical training versus physical training combined with cognitive-behavioral therapy ...

  16. Evaluation of the Community Multiscale Air Quality Model for Simulating Winter Ozone Formation in the Uinta Basin with Intensive Oil and Gas Production

    NASA Astrophysics Data System (ADS)

    Matichuk, R.; Tonnesen, G.; Luecken, D.; Roselle, S. J.; Napelenok, S. L.; Baker, K. R.; Gilliam, R. C.; Misenis, C.; Murphy, B.; Schwede, D. B.

    2015-12-01

    The western United States is an important source of domestic energy resources. One of the primary environmental impacts associated with oil and natural gas production is related to air emission releases of a number of air pollutants. Some of these pollutants are important precursors to the formation of ground-level ozone. To better understand ozone impacts and other air quality issues, photochemical air quality models are used to simulate the changes in pollutant concentrations in the atmosphere on local, regional, and national spatial scales. These models are important for air quality management because they assist in identifying source contributions to air quality problems and designing effective strategies to reduce harmful air pollutants. The success of predicting oil and natural gas air quality impacts depends on the accuracy of the input information, including emissions inventories, meteorological information, and boundary conditions. The treatment of chemical and physical processes within these models is equally important. However, given the limited amount of data collected for oil and natural gas production emissions in the past and the complex terrain and meteorological conditions in western states, the ability of these models to accurately predict pollution concentrations from these sources is uncertain. Therefore, this presentation will focus on understanding the Community Multiscale Air Quality (CMAQ) model's ability to predict air quality impacts associated with oil and natural gas production and its sensitivity to input uncertainties. The results will focus on winter ozone issues in the Uinta Basin, Utah and identify the factors contributing to model performance issues. The results of this study will help support future air quality model development, policy and regulatory decisions for the oil and gas sector.

  17. Classification and regression tree (CART) model to predict pulmonary tuberculosis in hospitalized patients.

    PubMed

    Aguiar, Fabio S; Almeida, Luciana L; Ruffino-Netto, Antonio; Kritski, Afranio Lineu; Mello, Fernanda Cq; Werneck, Guilherme L

    2012-08-07

Tuberculosis (TB) remains a public health issue worldwide. The lack of specific clinical symptoms to diagnose TB makes the correct decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high-prevalence area in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. Cross sectional study of patients admitted to CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. We studied 290 patients admitted with clinical suspicion of TB. Diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with positive sputum smear) and HIV/AIDS was present in 56.9% of patients. The validated CART model showed sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was chest radiograph results. Prospective validation is still necessary, but our model offers an alternative for decision making on whether to isolate patients with clinical suspicion of TB in tertiary health facilities in
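The four performance metrics reported for the CART model all come from a 2x2 confusion matrix. The sketch below shows how they are computed; the counts are invented for illustration and are not the study's data:

```python
# The four reported metrics from a 2x2 confusion matrix.  The counts below
# are invented for illustration, not taken from the study.
def diagnostics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of true cases detected
    specificity = tn / (tn + fp)   # fraction of non-cases correctly cleared
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostics(tp=18, fp=24, fn=12, tn=76)
print(sens, spec, round(ppv, 2), round(npv, 2))
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence in the sample, which is why the model's high NPV is the key property for safely ruling out isolation.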

  18. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-20

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  19. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Stichting European Society for Clinical Investigation Journal Foundation.

  20. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  1. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement

    PubMed Central

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-01-01

    Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25562432

  2. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Royal College of Obstetricians and Gynaecologists.

  3. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-01-06

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

  4. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study, regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Joint copyright. The Authors and Annals of Internal Medicine. Diabetic Medicine published by John Wiley Ltd. on behalf of Diabetes UK.

  5. Transparent reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-02-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement.

    PubMed

    Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M

    2015-06-01

    Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). 

  7. Delirium prediction in the intensive care unit: comparison of two delirium prediction models.

    PubMed

    Wassenaar, Annelies; Schoonhoven, Lisette; Devlin, John W; van Haren, Frank M P; Slooter, Arjen J C; Jorens, Philippe G; van der Jagt, Mathieu; Simons, Koen S; Egerod, Ingrid; Burry, Lisa D; Beishuizen, Albertus; Matos, Joaquim; Donders, A Rogier T; Pickkers, Peter; van den Boogaard, Mark

    2018-05-05

    Accurate prediction of delirium in the intensive care unit (ICU) may facilitate efficient use of early preventive strategies and stratification of ICU patients by delirium risk in clinical research, but the optimal delirium prediction model to use is unclear. We compared the predictive performance and user convenience of the prediction model for delirium (PRE-DELIRIC) and the early prediction model for delirium (E-PRE-DELIRIC) in ICU patients, and determined the value of a two-stage calculation. This 7-country, 11-hospital, prospective cohort study evaluated consecutive adults admitted to the ICU who could be reliably assessed for delirium using the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist. The predictive performance of the models was measured using the area under the receiver operating characteristic curve. Calibration was assessed graphically. A physician questionnaire evaluated user convenience. For the two-stage calculation we used E-PRE-DELIRIC immediately after ICU admission and updated the prediction using PRE-DELIRIC after 24 h. In total, 2178 patients were included. The area under the receiver operating characteristic curve was significantly greater for PRE-DELIRIC (0.74; 95% confidence interval 0.71-0.76) than for E-PRE-DELIRIC (0.68; 95% confidence interval 0.66-0.71) (z-score -2.73, p < 0.01). Both models were well calibrated. The sensitivity improved when using the two-stage calculation in low-risk patients. ICU physicians (n = 68) rated the E-PRE-DELIRIC model as more feasible than PRE-DELIRIC. While both ICU delirium prediction models have moderate-to-good performance, the PRE-DELIRIC model predicts delirium better; however, ICU physicians rated the user convenience of E-PRE-DELIRIC superior to that of PRE-DELIRIC. In low-risk patients the delirium prediction further improves after an update with the PRE-DELIRIC model after 24 h. ClinicalTrials.gov, NCT02518646. Registered on 21 July 2015.
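
    The headline comparison above (0.74 vs. 0.68) rests on the area under the receiver operating characteristic curve. As a minimal sketch of how such an AUC is computed from predicted risks and observed outcomes, the rank-sum (Mann-Whitney) identity can be used; the scores and labels below are made-up illustrations, not study data.

```python
import numpy as np

def tie_ranks(a):
    """1-based ranks, averaging ranks within tied groups."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a)
    ranks = np.empty(len(a))
    sa = a[order]
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sa[j + 1] == sa[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j + 2) / 2.0  # average rank of the tied block
        i = j + 1
    return ranks

def auc(scores, labels):
    """AUC via the Mann-Whitney rank-sum identity (labels: 1 = event, 0 = none)."""
    labels = np.asarray(labels)
    r = tie_ranks(scores)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (r[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

    For example, `auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])` gives 0.75: one of the four positive-negative pairs is ranked incorrectly.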

  8. Regression and multivariate models for predicting particulate matter concentration level.

    PubMed

    Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S

    2018-01-01

    The devastating health effects of particulate matter (PM10) exposure on susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas with multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses used daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010, collected at an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had little influence on PM10 concentration levels, with coefficient of determination (R²) values from 23 to 29% across the seasoned and unseasoned analyses. The prediction analysis showed that the PCR models achieved better R² results than the MLR methods: for both seasoned and unseasoned data, the MLR models had R² values from 0.50 to 0.60, while the PCR models had R² values from 0.66 to 0.89. In addition, validation analysis using 2016 data confirmed that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis giving the best result. These analyses will aid in achieving sustainable air quality management strategies.
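
    To illustrate the distinction between the two regression approaches compared here, the following sketch fits an MLR and a two-component PCR to synthetic data loosely mimicking the setup (a PM10-like response driven by a few collinear meteorological predictors). All data, and the choice of two retained components, are invented for illustration. Note that on the training fit, PCR's R² can never exceed MLR's, because the principal-component scores span a subspace of the original predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical predictors: temperature, humidity, wind speed, wind direction
X = rng.normal(size=(n, 4))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=n)        # make two predictors collinear
y = 50 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2, size=n)  # "PM10"

# Multiple linear regression (MLR): ordinary least squares on the raw predictors
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
yhat_mlr = A @ beta

# Principal component regression (PCR): regress y on the leading PCs of centered X
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                # number of retained components (assumed)
T = Xc @ Vt[:k].T                    # principal-component scores
B = np.column_stack([np.ones(n), T])
gamma, *_ = np.linalg.lstsq(B, y, rcond=None)
yhat_pcr = B @ gamma

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(round(r2(y, yhat_mlr), 3), round(r2(y, yhat_pcr), 3))
```

    In practice PCR's advantage (as reported in the abstract) shows up on held-out data, where discarding low-variance, collinear directions stabilizes the fit.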

  9. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
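
    Written out, the two criteria described above can be sketched as follows. This is a simplified rendering of the decomposition the abstract describes (squared bias plus model variance), not the authors' exact notation.

```latex
\mathrm{MSEP}_{\mathrm{fixed}} = \mathbb{E}\!\left[(y-\hat y)^2\right]
\quad \text{(structure, parameters and inputs held fixed)}

\mathrm{MSEP}_{\mathrm{uncertain}}(X) =
\underbrace{\bigl(\mathbb{E}[y]-\mathbb{E}[\hat y]\bigr)^2}_{\text{squared bias, estimated from hindcasts}}
+ \underbrace{\operatorname{Var}(\hat y)}_{\text{model variance, estimated by simulation}}
```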

  10. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD Statement.

    PubMed

    Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M

    2015-02-01

    Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. A complete checklist is available at http://www.tripod-statement.org. © 2015 American College of Physicians.

  11. Risk terrain modeling predicts child maltreatment.

    PubMed

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  12. Model-based evaluation of subsurface monitoring networks for improved efficiency and predictive certainty of regional groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, M. J.; Wöhling, Th.; Moore, C. R.; Dann, R.; Scott, D. M.; Close, M.

    2012-04-01

    Groundwater resources worldwide are increasingly under pressure. Demands from different local stakeholders add to the challenge of managing this resource. In response, groundwater models have become popular to make predictions about the impact of different management strategies and to estimate possible impacts of changes in climatic conditions. These models can assist in finding optimal management strategies that comply with the various stakeholder needs. Observations of the states of the groundwater system are essential for the calibration and evaluation of groundwater flow models, particularly when they are used to guide the decision making process. On the other hand, installation and maintenance of observation networks are costly. Therefore it is important to design monitoring networks carefully and cost-efficiently. In this study, we analyse the Central Plains groundwater aquifer (~4000 km²) between the Rakaia and Waimakariri rivers on the eastern side of the Southern Alps in New Zealand. The large sedimentary groundwater aquifer is fed by the two alpine rivers and by recharge from the land surface. The area is mainly under agricultural land use and large areas of the land are irrigated. The other major water use is the drinking water supply for the city of Christchurch. The local authority in the region, Environment Canterbury, maintains an extensive groundwater quantity and quality monitoring programme to monitor the effects of land use and discharges on groundwater quality, and the suitability of the groundwater for various uses, especially drinking-water supply. Current and projected irrigation water demand has raised concerns about possible impacts on groundwater-dependent lowland streams. We use predictive uncertainty analysis and the Central Plains steady-state groundwater flow model to evaluate the worth of pressure head observations in the existing groundwater well monitoring network.
The data worth of particular observations is dependent on the problem

  13. Untrained consumer assessment of the eating quality of beef: 1. A single composite score can predict beef quality grades.

    PubMed

    Bonny, S P F; Hocquette, J-F; Pethick, D W; Legrand, I; Wierzbicki, J; Allen, P; Farmer, L J; Polkinghorne, R J; Gardner, G E

    2017-08-01

    Quantifying consumer responses to beef across a broad range of demographics, nationalities and cooking methods is vitally important for any system evaluating beef eating quality. On the basis of previous work, it was expected that consumer scores would be highly accurate in determining quality grades for beef, thereby providing evidence that such a technique could form the basis of an eating quality grading system for beef. Following the Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia tasted cooked beef samples, then allocated them to a quality grade: unsatisfactory, good-every-day, better-than-every-day or premium. The consumers also scored the beef samples for tenderness, juiciness, flavour-liking and overall-liking. The beef was sourced from all countries involved in the study and cooked by four different cooking methods and to three different degrees of doneness, with each experimental group in the study consisting of a single cooking doneness within a cooking method for each country. For each experimental group, and for the data set as a whole, a linear discriminant function was calculated from the four sensory scores and used to predict the quality grade. This process was repeated using two composite scores derived by weighting and combining the consumer sensory scores for tenderness, juiciness, flavour-liking and overall-liking: the original meat quality 4 score (oMQ4) (0.4, 0.1, 0.2, 0.3) and the current meat quality 4 score (cMQ4) (0.3, 0.1, 0.3, 0.3). From the results of these analyses, the optimal weightings of the sensory scores to generate an 'ideal meat quality 4 score (MQ4)' for each country were calculated, and the MQ4 values that reflected the boundaries between the four quality grades were determined.
The oMQ4 weightings were far more accurate in categorising European meat samples than the cMQ4 weightings, highlighting that
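
    The composite scores described above are plain weighted sums of the four consumer sensory scores. A minimal sketch using the oMQ4 and cMQ4 weightings quoted in the abstract follows; the grade boundaries in `grade` are hypothetical placeholders, since the fitted cut-offs are not given here.

```python
import numpy as np

# Sensory scores (0-100) in order: tenderness, juiciness, flavour-liking, overall-liking.
# Weightings quoted in the abstract:
O_MQ4_W = np.array([0.4, 0.1, 0.2, 0.3])   # original MQ4
C_MQ4_W = np.array([0.3, 0.1, 0.3, 0.3])   # current MQ4

def mq4(scores, weights):
    """Composite meat-quality score: weighted sum of the four sensory scores."""
    return float(np.dot(np.asarray(scores, dtype=float), weights))

def grade(mq4_score, bounds=(46, 64, 77)):
    """Map an MQ4 value to a quality grade. The boundary values here are
    hypothetical placeholders, not the study's fitted cut-offs."""
    labels = ["unsatisfactory", "good-every-day", "better-than-every-day", "premium"]
    return labels[int(np.searchsorted(bounds, mq4_score, side="right"))]
```

    For example, scores of (70, 65, 72, 74) give an oMQ4 of 71.1 and, under the placeholder boundaries, the grade "better-than-every-day".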

  14. Information quality-control model

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1971-01-01

    Model serves as graphic tool for estimating complete product objectives from limited input information, and is applied to cost estimations, product-quality evaluations, and effectiveness measurements for manpower resources allocation. Six product quality levels are defined.

  15. NIR spectroscopy for the quality control of Moringa oleifera (Lam.) leaf powders: Prediction of minerals, protein and moisture contents.

    PubMed

    Rébufa, Catherine; Pany, Inès; Bombarda, Isabelle

    2018-09-30

    A rapid methodology was developed to simultaneously predict the water content and water activity values (a_w) of Moringa oleifera leaf powders (MOLP) using near infrared (NIR) signatures and experimental sorption isotherms. NIR spectra of MOLP samples (n = 181) were recorded. A partial least squares regression model (PLS2) was obtained with low standard errors of prediction (SEP of 1.8% and 0.07 for water content and a_w, respectively). Experimental sorption isotherms obtained at 20, 30 and 40 °C showed similar profiles. This result is particularly important for the use of MOLP in the food industry: a temperature variation in the drying process will not affect the available water content (shelf life). Nutrient contents based on protein and selected minerals (Ca, Fe, K) were also predicted from PLS1 models. Protein contents were well predicted (SEP of 2.3%). This methodology allowed for an improvement in MOLP safety, quality control and traceability. Published by Elsevier Ltd.
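
    The SEP values quoted above are a standard figure of merit for NIR calibration models. As a hedged sketch, one common definition is the bias-corrected standard deviation of the prediction errors (conventions vary; some authors divide by n rather than n - 1):

```python
import numpy as np

def sep(y_true, y_pred):
    """Standard error of prediction: the bias-corrected standard deviation
    of the residuals, a common criterion for NIR/PLS calibration models."""
    e = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    bias = e.mean()                     # systematic offset, removed before scatter
    return float(np.sqrt(np.sum((e - bias) ** 2) / (len(e) - 1)))
```

    A constant offset between predictions and reference values gives SEP = 0 under this definition, since the bias term absorbs it; only the scatter around the offset counts.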

  16. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and the meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best agreement between modelled results and measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all air temperatures and the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when these data are not available from air quality monitoring stations or other acquisition means.
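
    A log link with a Poisson distribution means the fitted model predicts PM10 as the exponential of a linear combination of the inputs. The sketch below shows only the prediction step; the coefficient values and the exact predictor set are invented for illustration and are not the fitted Barreiro model.

```python
import numpy as np

# Hypothetical coefficients for a Poisson GLM with a log link:
#   log E[PM10] = b0 + b1*CO + b2*NO2 + b3*temperature + b4*RH + b5*wind_speed
beta = np.array([2.9, 0.35, 0.004, 0.010, -0.002, -0.050])

def predict_pm10(co, no2, temp, rh, wind):
    """Mean PM10 prediction under the log link: E[PM10] = exp(x @ beta)."""
    x = np.array([1.0, co, no2, temp, rh, wind])
    return float(np.exp(x @ beta))
```

    The exponential keeps predictions strictly positive, and each coefficient acts multiplicatively: with the (assumed) negative wind-speed coefficient, stronger wind lowers the predicted concentration.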

  17. EFFECTS OF USING THE CB05 VERSUS THE CB4 CHEMICAL MECHANISMS ON MODEL PREDICTIONS

    EPA Science Inventory

    The Carbon Bond 4 (CB4) chemical mechanism has been widely used for many years in box and air quality models to predict the effect of atmospheric chemistry on pollutant concentrations. Because of the importance of this mechanism and the length of time since its original developm...

  18. EFFECTS OF USING THE CB05 VERSUS THE CB4 CHEMICAL MECHANISM ON MODEL PREDICTIONS

    EPA Science Inventory

    The Carbon Bond 4 (CB4) chemical mechanism has been widely used for many years in box and air quality models to predict the effect of atmospheric chemistry on pollutant concentrations. Because of the importance of this mechanism and the length of time since its original developm...

  20. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    2014-04-01

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  1. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes a prior model covariance constraint) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
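
    The final step described above, summing the model covariance along both ray paths, amounts to a quadratic form in the model covariance matrix C: cov(t_a, t_b) = g_a' C g_b, where g_a and g_b hold each ray's sensitivity to the model nodes, and a single path's uncertainty is sqrt(g' C g). A toy-dimension sketch (the real matrix is on the order of 500,000 x 500,000, hence the out-of-core machinery; all values here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6                                    # toy model size (SALSA3D has ~5e5 nodes)
A = rng.normal(size=(m, m))
C = A @ A.T + m * np.eye(m)              # stand-in symmetric positive-definite model covariance

g_a = rng.random(m)                      # ray-path sensitivities (e.g. path length per node)
g_b = rng.random(m)

cov_ab = g_a @ C @ g_b                   # covariance between the two travel-time predictions
sigma_a = float(np.sqrt(g_a @ C @ g_a))  # single-path prediction uncertainty
sigma_b = float(np.sqrt(g_b @ C @ g_b))
```

    Setting g_b = g_a collapses the covariance to the single-path variance, matching the "setting the paths equal and taking the square root" step in the abstract.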

  2. Using "big data" to optimally model hydrology and water quality across expansive regions

    USGS Publications Warehouse

    Roehl, E.A.; Cook, J.B.; Conrads, P.A.

    2009-01-01

This paper describes a new divide and conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering; nonlinear, multivariate sensitivity analysis; predictive modeling using multi-layer perceptron artificial neural networks; and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.

  3. Genomic predictive model for recurrence and metastasis development in head and neck squamous cell carcinoma patients.

    PubMed

    Ribeiro, Ilda Patrícia; Caramelo, Francisco; Esteves, Luísa; Menoita, Joana; Marques, Francisco; Barroso, Leonor; Miguéis, Jorge; Melo, Joana Barbosa; Carreira, Isabel Marques

    2017-10-24

The head and neck squamous cell carcinoma (HNSCC) population consists mainly of patients at high risk of recurrence and with locally advanced disease. Increased knowledge of the HNSCC genomic profile can improve early diagnosis and treatment outcomes. The development of models to identify consistent genomic patterns that distinguish HNSCC patients who will recur and/or develop metastasis after treatment is of utmost importance to decrease mortality and improve survival rates. In this study, we used array comparative genomic hybridization data from HNSCC patients to implement a robust model to predict HNSCC recurrence/metastasis. This predictive model showed good accuracy (>80%) and was validated in an independent population from the TCGA data portal. This predictive genomic model comprises chromosomal regions from 5p, 6p, 8p, 9p, 11q, 12q, 15q and 17p, where several upstream and downstream members of signaling pathways that lead to an increase in cell proliferation and invasion are mapped. The introduction of genomic predictive models into clinical practice might contribute to a more individualized clinical management of HNSCC patients, reducing recurrences and improving patients' quality of life. The power of this genomic model to predict recurrence and metastasis development should be evaluated in other HNSCC populations.

  4. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China

    PubMed Central

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentrations of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentrations were determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration against turbidity provided a good fit, with R2 = 0.86–0.93 for 72 data sets collected in the industrial river and R2 = 0.60–0.85 for 60 data sets collected in the cleaner river. All the regressions presented good linear relationships, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids and that these metal concentrations could be approximated using the regression equations. Thus, the linear regression equations were applied to estimate metal concentrations using online turbidity data from January 1 to June 30, 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fate of total suspended solids; in addition, metal concentrations downstream of the two rivers were predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction of metal concentrations indicated that exploring the relationship between metals and turbidity values might be one effective technique for efficient estimation and prediction of metal concentrations, facilitating better long-term monitoring with high temporal and spatial density. PMID:27028017
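The site-by-site regressions described above amount to a simple least-squares fit of metal concentration against turbidity. A minimal sketch with hypothetical paired readings (illustrative values, not the study's data):

```python
import numpy as np

# Hypothetical paired readings for one metal at one site: online
# turbidity (NTU) and lab metal concentration (ug/L).
turbidity = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 90.0])
metal = np.array([3.1, 6.0, 9.8, 13.2, 17.1, 21.5])

# Least-squares fit metal = a * turbidity + b, as in the paper's
# site-by-site regressions.
a, b = np.polyfit(turbidity, metal, 1)

# Coefficient of determination, the statistic the paper reports
# (R2 = 0.60-0.93 across sites).
pred = a * turbidity + b
r2 = 1.0 - np.sum((metal - pred) ** 2) / np.sum((metal - metal.mean()) ** 2)

# Estimate the metal concentration from a new online turbidity reading.
estimate = a * 48.0 + b
```

Once fitted, such an equation lets a continuous online turbidity sensor stand in for periodic lab analysis, which is the monitoring gain the paper emphasizes.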

  5. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    PubMed

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
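The two approaches being compared can be sketched as a nearest-neighbour outcome average versus a fitted parametric model, each scored by AUC. The synthetic features and labels below are hypothetical stand-ins (the MIMIC-II data itself is access-controlled), and plain logistic regression stands in for "a predictive model":

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for EMR-derived features and mortality labels.
n = 400
X = rng.normal(size=(n, 5))
w_true = np.array([1.5, -1.0, 0.8, 0.0, 0.0])
y = (X @ w_true + rng.normal(size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

def knn_scores(X_tr, y_tr, X_te, k=15):
    """Patient-similarity approach: risk score = mean outcome of the k
    most similar (Euclidean-nearest) training patients. The pairwise
    distance computation is the scaling issue the paper observes."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_tr[nearest].mean(axis=1)

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Modeling approach: plain logistic regression via gradient ascent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

def auc(y_true, score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc_knn = auc(y_te, knn_scores(X_tr, y_tr, X_te))
auc_model = auc(y_te, X_te @ fit_logreg(X_tr, y_tr))
```

On synthetic data the two AUCs need not separate the way the paper's did (0.68 vs 0.84); the sketch only shows the structural contrast between memorizing similar patients and fitting a global model.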

  6. Evaluating ammonia (NH3) predictions in the NOAA National Air Quality Forecast Capability (NAQFC) using in-situ aircraft and satellite measurements from the CalNex2010 campaign

    NASA Astrophysics Data System (ADS)

    Bray, Casey D.; Battye, William; Aneja, Viney P.; Tong, Daniel; Lee, Pius; Tang, Youhua; Nowak, John B.

    2017-08-01

Atmospheric ammonia (NH3) is not only a major precursor gas for fine particulate matter (PM2.5), but it also negatively impacts the environment through eutrophication and acidification. As the need for agriculture, the largest contributing source of NH3, increases, NH3 emissions will also increase. Therefore, it is crucial to accurately predict ammonia concentrations. The objective of this study is to determine how well the U.S. National Oceanic and Atmospheric Administration (NOAA) National Air Quality Forecast Capability (NAQFC) system predicts ammonia concentrations using the Community Multiscale Air Quality (CMAQ) model (v4.6). Model predictions of atmospheric ammonia are compared against measurements taken during the NOAA California Nexus (CalNex) field campaign that took place between May and July of 2010. Additionally, the model predictions were compared against ammonia measurements obtained from the Tropospheric Emission Spectrometer (TES) on the Aura satellite. The results of this study showed that the CMAQ model tended to under-predict concentrations of NH3. When comparing the CMAQ model with the CalNex measurements, the model under-predicted NH3 by a factor of 2.4 (NMB = -58%). However, the ratio of the median measured NH3 concentration to the median modeled NH3 concentration was 0.8. When compared with the TES measurements, the model under-predicted concentrations of NH3 by a factor of 4.5 (NMB = -77%), with a ratio of the median retrieved NH3 concentration to the median modeled NH3 concentration of 3.1. Because the model was least accurate over agricultural regions, it is likely that the major source of error lies in the agricultural emissions in the National Emissions Inventory. In addition, the lack of bidirectional exchange of NH3 in the model could also contribute to the observed bias.
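The normalized mean bias quoted above follows directly from the under-prediction factor: a model uniformly low by a factor of 2.4 gives NMB = 1/2.4 - 1 ≈ -58%. A minimal sketch with illustrative numbers only:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs)."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return (model - obs).sum() / obs.sum()

# Illustrative NH3 values only: a model uniformly low by a factor of 2.4
# gives NMB = 1/2.4 - 1, i.e. about -58%, matching the quoted statistic.
obs = np.array([4.8, 7.2, 2.4, 9.6])
mod = obs / 2.4
print(f"NMB = {normalized_mean_bias(mod, obs):+.0%}")   # NMB = -58%
```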

  7. A Bayesian Multilevel Model for Microcystin Prediction in ...

    EPA Pesticide Factsheets

The frequency of cyanobacteria blooms in North American lakes is increasing. A major concern with rising cyanobacteria blooms is microcystin, a common cyanobacterial hepatotoxin. To explore the conditions that promote high microcystin concentrations, we analyzed the US EPA National Lake Assessment (NLA) dataset collected in the summer of 2007. The NLA dataset is reported for nine eco-regions. We used the results of random forest modeling as a means of variable selection from which we developed a Bayesian multilevel model of microcystin concentrations. Model parameters under a multilevel modeling framework are eco-region specific, but they are also assumed to be exchangeable across eco-regions for broad continental scaling. The exchangeability assumption ensures that both the common patterns and eco-region specific features will be reflected in the model. Furthermore, the method incorporates appropriate estimates of uncertainty. Our preliminary results show associations between microcystin and turbidity, total nutrients, and N:P ratios. The NLA 2012 will be used for Bayesian updating. The results will help develop management strategies to alleviate microcystin impacts and improve lake quality. This work provides a probabilistic framework for predicting microcystin presence in lakes. It would allow for insights to be made about how changes in nutrient concentrations could potentially change toxin levels.

  8. Development of a three dimensional numerical water quality model for continental shelf applications

    NASA Technical Reports Server (NTRS)

    Spaulding, M.; Hunter, D.

    1975-01-01

    A model to predict the distribution of water quality parameters in three dimensions was developed. The mass transport equation was solved using a non-dimensional vertical axis and an alternating-direction-implicit finite difference technique. The reaction kinetics of the constituents were incorporated into a matrix method which permits computation of the interactions of multiple constituents. Methods for the computation of dispersion coefficients and coliform bacteria decay rates were determined. Numerical investigations of dispersive and dissipative effects showed that the three-dimensional model performs as predicted by analysis of simpler cases. The model was then applied to a two dimensional vertically averaged tidal dynamics model for the Providence River. It was also extended to a steady state application by replacing the time step with an iteration sequence. This modification was verified by comparison to analytical solutions and applied to a river confluence situation.

  9. Development and application of Geobacillus stearothermophilus growth model for predicting spoilage of evaporated milk.

    PubMed

    Kakagianni, Myrsini; Gougouli, Maria; Koutsoumanis, Konstantinos P

    2016-08-01

The presence of Geobacillus stearothermophilus spores in evaporated milk constitutes an important quality problem for the milk industry. This study was undertaken to provide an approach to modelling the effect of temperature on G. stearothermophilus ATCC 7953 growth and to predicting spoilage of evaporated milk. The growth of G. stearothermophilus was monitored in tryptone soy broth at isothermal conditions (35-67 °C). The data derived were used to model the effect of temperature on G. stearothermophilus growth with a cardinal type model. The cardinal values of the model for the maximum specific growth rate were Tmin = 33.76 °C, Tmax = 68.14 °C, Topt = 61.82 °C and μopt = 2.068/h. The growth of G. stearothermophilus was assessed in evaporated milk at Topt in order to adjust the model to milk. The efficiency of the model in predicting G. stearothermophilus growth at non-isothermal conditions was evaluated by comparing predictions with observed growth under dynamic conditions, and the results showed a good performance of the model. The model was further used to predict the time-to-spoilage (tts) of evaporated milk. The spoilage of this product was caused by acid coagulation when the pH approached a level around 5.2, eight generations after G. stearothermophilus reached the maximum population density (Nmax). Based on the above, the tts was predicted from the growth model as the sum of the time required for the microorganism to multiply from the initial to the maximum level ( [Formula: see text] ), plus the time required after the [Formula: see text] to complete eight generations. The observed tts was very close to the predicted one, indicating that the model is able to describe satisfactorily the growth of G. stearothermophilus and to provide realistic predictions of evaporated milk spoilage. Copyright © 2016 Elsevier Ltd. All rights reserved.
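A widely used cardinal-type model is the Rosso cardinal temperature model with inflection; assuming that functional form (the abstract does not name the exact variant), the reported cardinal values reproduce μopt at Topt by construction:

```python
def mu_max(T, Tmin=33.76, Tmax=68.14, Topt=61.82, mu_opt=2.068):
    """Maximum specific growth rate (1/h) from a Rosso-type cardinal
    temperature model with inflection, using the fitted values reported
    in the abstract; zero outside the (Tmin, Tmax) growth range."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return mu_opt * num / den

print(round(mu_max(61.82), 3))   # at Topt the model returns mu_opt: 2.068
```

Integrating μmax(T(t)) over a recorded temperature history is what turns this curve into the dynamic-condition growth predictions, and hence the time-to-spoilage, that the study evaluates.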

  10. Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions

    NASA Astrophysics Data System (ADS)

    Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.

    2017-12-01

Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data, on the other hand, can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote-sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.

  11. Determination of the optimal training principle and input variables in artificial neural network model for the biweekly chlorophyll-a prediction: a case study of the Yuqiao Reservoir, China.

    PubMed

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang

    2015-01-01

Predicting the levels of chlorophyll-a (Chl-a) is a vital component of water quality management, which ensures that urban drinking water is safe from harmful algal blooms. This study developed a model to predict biweekly Chl-a levels in the Yuqiao Reservoir (Tianjin, China) using water quality and meteorological data from 1999-2012. First, six artificial neural networks (ANNs) and two non-ANN methods (principal component analysis and the support vector regression model) were compared to determine the appropriate training principle. Subsequently, three predictors with different input variables were developed to examine the feasibility of incorporating meteorological factors into Chl-a prediction, which usually uses only water quality data. Finally, a sensitivity analysis was performed to examine how the Chl-a predictor reacts to changes in input variables. The results were as follows: first, ANN is a powerful predictive alternative to the traditional modeling techniques used for Chl-a prediction. The back-propagation (BP) model yields slightly better results than all other ANNs, with the normalized mean square error (NMSE), the correlation coefficient (Corr), and the Nash-Sutcliffe coefficient of efficiency (NSE) at 0.003 mg/l, 0.880 and 0.754, respectively, in the testing period. Second, the incorporation of meteorological data greatly improved Chl-a prediction compared to models solely using water quality factors or meteorological data; the correlation coefficient increased from 0.574-0.686 to 0.880 when meteorological data were included. Finally, the Chl-a predictor is more sensitive to air pressure and pH than to other water quality and meteorological variables.
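The Nash-Sutcliffe coefficient reported above compares the squared model error against the variance of the observations. A minimal sketch with illustrative (not Yuqiao) Chl-a values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
    1 is a perfect fit, 0 means no better than predicting the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative biweekly Chl-a series (mg/l); hypothetical values,
# not the study's data.
observed = np.array([0.010, 0.014, 0.022, 0.031, 0.025, 0.016])
simulated = np.array([0.011, 0.013, 0.020, 0.028, 0.027, 0.018])
print(round(nash_sutcliffe(observed, simulated), 3))
```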

  12. Application of visible/near-infrared reflectance spectroscopy for predicting internal and external quality in pepper.

    PubMed

    Toledo-Martín, Eva María; García-García, María Carmen; Font, Rafael; Moreno-Rojas, José Manuel; Gómez, Pedro; Salinas-Navarro, María; Del Río-Celestino, Mercedes

    2016-07-01

The characterization of internal (°Brix, pH, malic acid, total phenolic compounds, ascorbic acid and total carotenoid content) and external (color, firmness and pericarp wall thickness) pepper quality is necessary to better understand its possible applications and increase consumer awareness of its benefits. The main aim of this work was to examine the feasibility of using visible/near-infrared reflectance spectroscopy (VIS-NIRS) to predict quality parameters in different pepper types. Commercially available spectrophotometers were evaluated for this purpose: a Polychromix Phazir spectrometer for intact raw pepper, and a scanning monochromator for freeze-dried pepper. The RPD values (ratio of the standard deviation of the reference data to the standard error of prediction) obtained from the external validation exceeded 3 for chlorophyll a and total carotenoid content; ranged between 2.5 < RPD < 3 for total phenolic compounds; between 1.5 < RPD < 2.5 for °Brix, pH, color parameters a* and h*, and chlorophyll b; and fell below 1.5 for fruit firmness, pericarp wall thickness, color parameters C*, b* and L*, vitamin C and malic acid content. The present work has led to the development of multi-type calibrations for pepper quality parameters in intact and freeze-dried peppers. The majority of NIRS equations obtained were suitable for screening purposes in pepper breeding programs. Components such as pigments (xanthophyll, carotenes and chlorophyll), glucides, lipids, cellulose and water were used by modified partial least-squares regression for modeling the predictive equations. © 2015 Society of Chemical Industry.
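The RPD screening statistic used above is simply the standard deviation of the reference (laboratory) data divided by the standard error of prediction; taking the SEP as the RMSE of prediction is an assumption here (some NIRS workflows bias-correct it first). A sketch with illustrative validation values:

```python
import numpy as np

def rpd(reference, predicted):
    """Ratio of the standard deviation of the reference data to the
    standard error of prediction (here taken as the RMSE of prediction,
    a common simplification)."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    sep = np.sqrt(np.mean((reference - predicted) ** 2))
    return reference.std(ddof=1) / sep

# Illustrative external-validation values for one quality parameter;
# by the thresholds in the text, RPD > 3 would class the calibration
# as suitable for quantification, 1.5-2.5 for screening, < 1.5 as poor.
ref = np.array([10.2, 12.8, 9.5, 14.1, 11.3, 13.0])
pred = np.array([10.0, 12.5, 9.9, 13.6, 11.6, 12.7])
print(round(rpd(ref, pred), 1))
```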

  13. Coastal Water Quality Modeling in Tidal Lake: Revisited with Groundwater Intrusion

    NASA Astrophysics Data System (ADS)

    Kim, C.

    2016-12-01

A new method for predicting the temporal and spatial variation of water quality, accounting for a groundwater effect, has been proposed and applied to a water body partially connected to macro-tidal coastal waters in Korea. The method consists of direct measurement of environmental parameters, and it indirectly incorporates a nutrient budget analysis to estimate the submarine groundwater fluxes. Three-dimensional numerical modeling of water quality has been used with the directly collected data and the indirectly estimated groundwater fluxes. The study area is Saemangeum tidal lake, which is enclosed by a 33 km-long sea dyke with tidal openings at two water gates. Many investigations of groundwater impact reveal that 10-50% of nutrient loading in coastal waters comes from submarine groundwater, particularly in macro-tidal flats such as those on the west coast of Korea. Long-term monitoring of coastal water quality signals the possibility of groundwater influence on salinity reversal and on the excess mass outbalancing the normal budget in Saemangeum tidal lake. In the present study, we analyze the observed data to examine the influence of submarine groundwater, and then a box model is demonstrated for quantifying the influx and efflux. A three-dimensional numerical model has been applied to reproduce the process of groundwater dispersal and its effect on the water quality of Saemangeum tidal lake. The results show that groundwater influx during the summer monsoon contributes significantly to water quality in the tidal lake, about 20% more than during the dry season.

  14. Water-quality models to assess algal community dynamics, water quality, and fish habitat suitability for two agricultural land-use dominated lakes in Minnesota, 2014

    USGS Publications Warehouse

    Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.

    2017-07-20

Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota—Madison Lake and Pearl Lake, which are part of Minnesota’s sentinel lakes monitoring program—to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts the distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of two metrics: mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, they were 0.71 and 0.95 degree Celsius, respectively. Temperature and dissolved oxygen were key calibration targets. These calibrated lake models also simulated algal community dynamics and water quality. The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and summer blue-green algal bloom persistence. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large

  15. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    NASA Astrophysics Data System (ADS)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, although with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts under-predict. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.

  16. Modeling the distribution of white spruce (Picea glauca) for Alaska with high accuracy: an open access role-model for predicting tree species in last remaining wilderness areas

    Treesearch

    Bettina Ohse; Falk Huettmann; Stefanie M. Ickert-Bond; Glenn P. Juday

    2009-01-01

    Most wilderness areas still lack accurate distribution information on tree species. We met this need with a predictive GIS modeling approach, using freely available digital data and computer programs to efficiently obtain high-quality species distribution maps. Here we present a digital map with the predicted distribution of white spruce (Picea glauca...

  17. Modeling the Dynamic Change of Air Quality and its Response to Emission Trends

    NASA Astrophysics Data System (ADS)

    Zhou, Wei

This thesis focuses on evaluating atmospheric chemistry and transport models' capability in simulating the chemistry and dynamics of power plant plumes, evaluating their strengths and weaknesses in predicting air quality trends at regional scales, and exploring air quality trends in an urban area. First, the Community Multiscale Air Quality (CMAQ) model is applied to simulate the physical and chemical evolution of power plant plumes (PPPs) during the second Texas Air Quality Study (TexAQS) in 2006. SO2 and NOy were observed to be rapidly removed from PPPs on cloudy days but not on cloud-free days, indicating efficient aqueous processing of these compounds in clouds, while the model fails to capture the rapid loss of SO2 and NOy in some plumes on cloudy days. Adjustments to cloud liquid water content (QC) and the default metal concentrations in the cloud module could explain some of the SO2 loss, while NOy in the model was insensitive to QC. Second, CMAQ is applied to simulate the ozone (O3) change after the NOx SIP Call and mobile emission controls in the eastern U.S. from 2002 to 2006. Observed downward changes in 8-hour O3 concentrations in the NOx SIP Call region were under-predicted by 26-66%. The under-prediction of O3 improvements could be alleviated by 5-31% by constraining NOx emissions in each year based on observed NOx concentrations, while temperature biases or uncertainties in chemical reactions had minor impacts on simulated O3 trends. Third, changes in ozone production in the Houston area are assessed with airborne measurements from TexAQS 2000 and 2006. Simultaneous declines in nitrogen oxides (NOx = NO + NO2) and highly reactive volatile organic compounds (HRVOCs) were observed in the Houston Ship Channel (HSC). The reduction in HRVOCs led to a decline in total radical concentration of 20-50%. Rapid ozone production rates in the Houston area declined by 40-50% from 2000 to 2006, to which the reduction in NOx and HRVOCs had the similar

  18. Predicting climate-induced range shifts: model differences and model reliability.

    Treesearch

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  19. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  20. Progress on Implementing Additional Physics Schemes into MPAS-A v5.1 for Next Generation Air Quality Modeling

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...

  1. An Interoceptive Predictive Coding Model of Conscious Presence

    PubMed Central

    Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.

    2011-01-01

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673

  2. Application and evaluation of two air quality models for particulate matter for a southeastern U.S. episode.

    PubMed

    Zhang, Yang; Pun, Betty; Wu, Shiang-Yuh; Vijayaraghavan, Krish; Seigneur, Christian

    2004-12-01

The Models-3 Community Multiscale Air Quality (CMAQ) Modeling System and the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAMx) were applied to simulate the period June 29-July 10, 1999, of the Southern Oxidants Study episode with two nested horizontal grid sizes: a coarse resolution of 32 km and a fine resolution of 8 km. The predicted spatial variations of ozone (O3), particulate matter with an aerodynamic diameter less than or equal to 2.5 microm (PM2.5), and particulate matter with an aerodynamic diameter less than or equal to 10 microm (PM10) by both models are similar in rural areas but differ significantly over some urban/suburban areas in the eastern and southern United States, where PMCAMx tends to predict higher values of O3 and PM than CMAQ. Both models tend to predict O3 values that are higher than those observed. For observed O3 values above 60 ppb, O3 performance meets the U.S. Environmental Protection Agency's criteria for CMAQ with both grids and for PMCAMx with the fine grid only. It becomes unsatisfactory for PMCAMx and marginally satisfactory for CMAQ for observed O3 values above 40 ppb. Both models predict similar amounts of sulfate (SO4(2-)) and organic matter, and both predict SO4(2-) to be the largest contributor to PM2.5. PMCAMx generally predicts higher amounts of ammonium (NH4+), nitrate (NO3-), and black carbon (BC) than does CMAQ. PM performance for CMAQ is generally consistent with that of other PM models, whereas PMCAMx predicts higher concentrations of NO3-, NH4+, and BC than observed, which degrades its performance. For PM10 and PM2.5 predictions over the southeastern U.S. domain, the ranges of mean normalized gross errors (MNGEs) and mean normalized bias are 37-43% and -33% to 4% for CMAQ, and 50-59% and 7-30% for PMCAMx. Both models predict the largest MNGEs for NO3- (98-104% for CMAQ; 138-338% for PMCAMx). The inaccurate NO3- predictions by both models may be caused by the inaccuracies in the

  3. Epoxide pathways improve model predictions of isoprene markers and reveal key role of acidity in aerosol formation

    EPA Science Inventory

    Isoprene significantly contributes to organic aerosol in the southeastern United States where biogenic hydrocarbons mix with anthropogenic emissions. In this work, the Community Multiscale Air Quality model is updated to predict isoprene aerosol from epoxides produced under both ...

  4. Perceptions of Parent-Child Attachment Relationships and Friendship Qualities: Predictors of Romantic Relationship Involvement and Quality in Adolescence.

    PubMed

    Kochendorfer, Logan B; Kerns, Kathryn A

    2017-05-01

    Relationships with parents and friends are important contexts for developing romantic relationship skills. Parents and friends may influence both the timing of involvement and the quality of romantic relationships. Three models of the joint influence of parents and friends (direct effects model, mediation model, and moderator model) have been proposed. The present study uses data from a longitudinal study (n = 1012; 49.8% female; 81.1% Caucasian) to examine how attachment and friendship quality at age 10 years predict romantic relationship involvement and quality at ages 12 and 15 years. The results supported the direct effects model, with attachment and friendship quality uniquely predicting different romantic relationship outcomes. The findings provide further support for the important influence of family and friends on early romantic relationships.

  5. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted to determine whether any substantial differences exist between the two options. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different sources, i.e. observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  6. A no-reference bitstream-based perceptual model for video quality estimation of videos affected by coding artifacts and packet losses

    NASA Astrophysics Data System (ADS)

    Pandremmenou, K.; Shahid, M.; Kondi, L. P.; Lövström, B.

    2015-03-01

    In this work, we propose a No-Reference (NR) bitstream-based model for predicting the quality of H.264/AVC video sequences affected by both compression artifacts and transmission impairments. The proposed model is based on a feature extraction procedure, where a large number of features are calculated from the packet-loss-impaired bitstream. Many of the features are proposed for the first time in this work, and the feature set as a whole is applied here for the first time to NR video quality prediction. All feature observations are taken as input to the Least Absolute Shrinkage and Selection Operator (LASSO) regression method. LASSO indicates the most important features, and using only them, it is possible to estimate the Mean Opinion Score (MOS) with high accuracy. Indicatively, we point out that only 13 features are able to produce a Pearson Correlation Coefficient of 0.92 with the MOS. Interestingly, the performance statistics we computed in order to assess our method for predicting the Structural Similarity Index and the Video Quality Metric are equally good. Thus, the obtained experimental results verified the suitability of the features selected by LASSO as well as the ability of LASSO to make accurate predictions through sparse modeling.
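
    The LASSO step described above drives most coefficients exactly to zero, which is how it singles out a small feature subset. A generic coordinate-descent sketch (not the authors' code; the data, penalty `alpha`, and iteration count are illustrative, and features are assumed roughly standardized):

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """LASSO fit by cyclic coordinate descent with soft-thresholding.
    X: (n_samples, n_features) design matrix; y: targets;
    alpha: L1 penalty. Returns the sparse weight vector."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's current contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            # soft-threshold: small correlations are zeroed out
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w
```

    On synthetic data where only two of five features matter, the fit recovers those two coefficients (slightly shrunk by the penalty) and leaves the rest at zero, mirroring how only 13 of the many bitstream features survive selection in the abstract.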

  7. Bullying Predicts Reported Dating Violence and Observed Qualities in Adolescent Dating Relationships.

    PubMed

    Ellis, Wendy E; Wolfe, David A

    2015-10-01

    The relationship between reported bullying, reported dating violence, and dating relationship quality measured through couple observations was examined. Given past research demonstrating similarity between peer and dating contexts, we expected that bullying would predict negative dating experiences. Participants with dating experience (n = 585; 238 males, M(age) = 15.06) completed self-report assessments of bullying and dating violence perpetration and victimization. One month later, 44 opposite-sex dyads (M(age) = 15.19) participated in behavioral observations. In 10-min sessions, couples were asked to rank and discuss areas of relationship conflict while being video-recorded. Qualities of the relationship were later coded by trained observers. Regression analysis revealed that bullying positively predicted dating violence perpetration and victimization. Self-reported bullying also predicted observations of lower relationship support and higher withdrawal. Age and gender interactions further qualified these findings. The bullying of boys, but not girls, was significantly related to dating violence perpetration. Age interactions showed that bullying was positively predictive of dating violence perpetration and victimization for older, but not younger adolescents. Positive affect was also negatively predicted by bullying, but only for girls. These findings add to the growing body of evidence that adolescents carry forward strategies learned in the peer context to their dating relationships. © The Author(s) 2014.

  8. A prediction model based on artificial neural network for surface temperature simulation of nickel-metal hydride battery during charging

    NASA Astrophysics Data System (ADS)

    Fang, Kaizheng; Mu, Daobin; Chen, Shi; Wu, Borong; Wu, Feng

    2012-06-01

    In this study, a prediction model based on an artificial neural network is constructed for surface temperature simulation of a nickel-metal hydride battery. The model is a back-propagation network trained by the Levenberg-Marquardt algorithm. At each ambient temperature of 10 °C, 20 °C, 30 °C and 40 °C, an 8 Ah cylindrical Ni-MH battery is charged at rates of 1 C, 3 C and 5 C to an SOC of 110% in order to provide data for the model training. Linear regression, together with mean square error and absolute error, is used to check the quality of the model training. The constructed model is shown to train to excellent quality, ensuring prediction accuracy. The surface temperature of the battery during charging is then predicted by the model at ambient temperatures of 50 °C, 60 °C and 70 °C. The results agree well with experimental data. The battery surface temperature is calculated to exceed 90 °C at an ambient temperature of 60 °C if the battery is overcharged at 5 C, which might cause battery safety issues.
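
    The training-quality check described (linear regression of predicted against measured values, plus MSE and absolute error) is straightforward to reproduce. A hedged sketch, not the authors' implementation; a well-trained model should give a slope near 1, an intercept near 0, and small errors:

```python
import numpy as np

def training_quality(measured, predicted):
    """Assess model-training quality: fit a line predicted = slope *
    measured + intercept, and report mean square error and mean
    absolute error between the two series."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    slope, intercept = np.polyfit(measured, predicted, 1)
    mse = float(np.mean((predicted - measured) ** 2))
    mae = float(np.mean(np.abs(predicted - measured)))
    return {"slope": slope, "intercept": intercept, "mse": mse, "mae": mae}
```

    A perfect network (predictions equal to measurements) returns slope 1, intercept 0, and zero error; deviations from those values quantify how far training fell short.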

  9. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models across numerous fields was conducted. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  10. Predictive models of moth development

    USDA-ARS?s Scientific Manuscript database

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
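
    Degree-day models of the kind described accumulate heat units whenever the daily mean temperature exceeds a developmental base threshold. A minimal sketch using the simple averaging method (the 10 °C base temperature is an assumption for illustration, not a value from this database entry):

```python
def degree_days(t_min, t_max, base=10.0):
    """Accumulate growing degree-days over a run of days using the
    simple averaging method: max(daily mean - base, 0), summed.
    t_min, t_max: daily minimum and maximum temperatures (°C)."""
    total = 0.0
    for lo, hi in zip(t_min, t_max):
        total += max((lo + hi) / 2.0 - base, 0.0)
    return total
```

    Phenology predictions then follow by comparing the accumulated total against stage-specific thresholds, e.g. flagging a scouting window once enough degree-days have accrued after biofix.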

  11. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. To reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data from days on which bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on the 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
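
    One way to turn the 1,000-run ensemble into a percent uncertainty per observation is to express the width of a central prediction interval relative to the ensemble median. The abstract does not give its exact formula, so the 95% interval and median normalization below are assumptions, and the numbers are illustrative:

```python
import numpy as np

def percent_uncertainty(ensemble, lo=2.5, hi=97.5):
    """Percent uncertainty per observation from a Monte Carlo
    ensemble: width of the central (hi - lo)% prediction interval,
    as a percentage of the ensemble median. ensemble has shape
    (n_runs, n_obs); flows are assumed positive."""
    ensemble = np.asarray(ensemble, dtype=float)
    p_lo, med, p_hi = np.percentile(ensemble, [lo, 50.0, hi], axis=0)
    return 100.0 * (p_hi - p_lo) / med
```

    An observation on which all parameter sets agree gets 0% uncertainty; the more the runs spread out, the larger the percentage, which is what makes the metric readable by non-technical audiences.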

  12. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosing camera performance, in arriving at a preflight imaging strategy, and in revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  13. Use of Air Quality Observations by the National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Kondragunta, S.; Ruminski, M.; Tong, D.; Pan, L.; Huang, J. P.; Shafran, P.; Huang, H. C.; Dickerson, P.; Upadhayay, S.

    2015-12-01

    The National Air Quality Forecast Capability (NAQFC) operational predictions of ozone and wildfire smoke for the United States (U.S.) and predictions of airborne dust for the continental U.S. are available at http://airquality.weather.gov/. NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) weather predictions are combined with the Community Multiscale Air Quality (CMAQ) model to produce the ozone predictions and test fine particulate matter (PM2.5) predictions. The Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model provides smoke and dust predictions. Air quality observations constrain the emissions used by NAQFC predictions. NAQFC NOx emissions from mobile sources were updated using National Emissions Inventory (NEI) projections for year 2012. These updates were evaluated over large U.S. cities by comparing observed changes in OMI NO2 observations and NOx measured by surface monitors. The rate of decrease in NOx emission projections from year 2005 to year 2012 is in good agreement with the observed changes over the same period. Smoke emissions rely on fire locations detected from satellite observations obtained from the NESDIS Hazard Mapping System (HMS). Dust emissions rely on a climatology of areas with a potential for dust emissions based on MODIS Deep Blue aerosol retrievals. Verification of NAQFC predictions uses the AIRNow compilation of surface measurements for ozone and PM2.5. Retrievals of smoke from GOES satellites are used for verification of smoke predictions, and retrievals of dust from MODIS for verification of dust predictions. In summary, observations are the basis for the emissions inputs for NAQFC; they are critical for evaluating the performance of NAQFC predictions; and they are used in real-time testing of bias correction of PM2.5 predictions as we continue to work on improving the modeling and emissions important for representing PM2.5.

  14. A simple metric to predict stream water quality from storm runoff in an urban watershed.

    PubMed

    Easton, Zachary M; Sullivan, Patrick J; Walter, M Todd; Fuka, Daniel R; Petrovic, A Martin; Steenhuis, Tammo S

    2010-01-01

    The contribution of runoff from various land uses to stream channels in a watershed is often speculated and used to underpin many model predictions. However, these contributions, often based on little or no measurements in the watershed, fail to appropriately consider the influence of the hydrologic location of a particular landscape unit in relation to the stream network. A simple model was developed to predict storm runoff and the phosphorus (P) status of a perennial stream in an urban watershed in New York State using the covariance structure of runoff from different landscape units in the watershed to predict runoff in time. One hundred and twenty-seven storm events were divided into parameterization (n = 85) and forecasting (n = 42) data sets. Runoff, dissolved P (DP), and total P (TP) were measured at nine sites distributed among three land uses (high maintenance, unmaintained, wooded), three positions in the watershed (near the outlet, midwatershed, upper watershed), and in the stream at the watershed outlet. The autocorrelation among runoff and P concentrations from the watershed landscape units (n = 9) and the covariance between measurements from the landscape units and measurements from the stream were calculated and used to predict the stream response. Models, validated using leave-one-out cross-validation and a forecasting method, were able to correctly capture temporal trends in streamflow and stream P chemistry (Nash-Sutcliffe efficiencies, 0.49-0.88). The analysis suggests that the covariance structure was consistent for all models, indicating that the physical processes governing runoff and P loss from these landscape units were stationary in time and that landscapes located in hydraulically active areas have a direct hydraulic link to the stream. This methodology provides insight into the impact of various urban landscape units on stream water quantity and quality.
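
    The Nash-Sutcliffe efficiencies (0.49-0.88) quoted above are a standard goodness-of-fit score for hydrologic predictions: 1 is a perfect match, 0 means the model is no better than predicting the observed mean. A minimal sketch of the standard formula (not the authors' code):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE(sim) / SSE(mean baseline).
    obs: observed values; sim: simulated values, same length."""
    mean_obs = sum(obs) / len(obs)
    sse_model = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sse_baseline = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse_model / sse_baseline
```

    Scores can go negative when the model does worse than the mean baseline, which is why a range like 0.49-0.88 signals genuinely skillful stream P and flow predictions.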

  15. A predictive nondestructive model for the covariation of tree height, diameter, and stem volume scaling relationships.

    PubMed

    Zhang, Zhongrui; Zhong, Quanlin; Niklas, Karl J; Cai, Liang; Yang, Yusheng; Cheng, Dongliang

    2016-08-24

    Metabolic scaling theory (MST) posits that the scaling exponents among plant height H, diameter D, and biomass M will covary across phyletically diverse species. However, the relationships between scaling exponents and normalization constants remain unclear. Therefore, we developed a predictive model for the covariation of H, D, and stem volume V scaling relationships and used data from Chinese fir (Cunninghamia lanceolata) in Jiangxi province, China to test it. As predicted by the model and supported by the data, normalization constants are positively correlated with their associated scaling exponents for D vs. V and H vs. V, whereas normalization constants are negatively correlated with the scaling exponents of H vs. D. The prediction model also yielded reliable estimations of V (mean absolute percentage error = 10.5 ± 0.32 SE across 12 model calibrated sites). These results (1) support a totally new covariation scaling model, (2) indicate that differences in stem volume scaling relationships at the intra-specific level are driven by anatomical or ecophysiological responses to site quality and/or management practices, and (3) provide an accurate non-destructive method for predicting Chinese fir stem volume.
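
    Scaling exponents and normalization constants of the kind discussed (e.g. for D vs. V or H vs. D) are conventionally estimated by least-squares regression in log-log space, where a power law y = b·x^a becomes a straight line with slope a and intercept log b. A generic sketch, not the authors' covariation model:

```python
import math

def fit_scaling(x, y):
    """Fit the power law y = b * x**a by ordinary least squares on
    log-transformed data. Returns (a, b): the scaling exponent and
    the normalization constant. All inputs must be positive."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    b = math.exp(my - a * mx)
    return a, b
```

    Fitting such pairs (exponent, constant) site by site is what lets one test whether the constants correlate positively or negatively with their exponents, as the abstract reports for D vs. V and H vs. D.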

  16. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated on another flood event. Results from this study indicate that BMA does not always outperform every member of the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
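
    The core of BMA is weighting each ensemble member by how well it reproduced training observations, then averaging predictions with those weights. A toy sketch with Gaussian likelihood weights (the fixed sigma and the data are hypothetical; operational BMA typically estimates both weights and per-model variances with an EM algorithm):

```python
import math

def bma_predict(new_preds, obs_train, preds_train, sigma=1.0):
    """Toy Bayesian model averaging. Each model's weight is its
    normalized Gaussian likelihood on a training event; the return
    value is the weighted average of the models' new predictions.

    new_preds:   one prediction per model for the new event
    obs_train:   observed stages for the training event
    preds_train: per-model prediction lists for the training event
    """
    logliks = [sum(-0.5 * ((o - p) / sigma) ** 2
                   for o, p in zip(obs_train, mp))
               for mp in preds_train]
    m = max(logliks)                       # stabilize the exponentials
    weights = [math.exp(ll - m) for ll in logliks]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * p for w, p in zip(weights, new_preds))
```

    A model that matched the training event closely dominates the average, while badly biased configurations contribute almost nothing, which is why BMA tends to be robust even when no single member is best everywhere.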

  17. Data assimilation in optimizing and integrating soil and water quality water model predictions at different scales

    USDA-ARS?s Scientific Manuscript database

    Relevant data about subsurface water flow and solute transport at relatively large scales that are of interest to the public are inherently laborious and in most cases simply impossible to obtain. Upscaling in which fine-scale models and data are used to predict changes at the coarser scales is the...

  18. Perceptual tools for quality-aware video networks

    NASA Astrophysics Data System (ADS)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem, owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, along with leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  19. Stressor-response modeling using the 2D water quality model and regression trees to predict chlorophyll-a in a reservoir system

    USDA-ARS?s Scientific Manuscript database

    In order to control algal blooms, stressor-response relationships between water quality metrics, environmental variables, and algal growth should be understood and modeled. Machine-learning methods were suggested to express stressor-response relationships found by application of mechanistic water qu...

  20. Development and analysis of air quality modeling simulations for hazardous air pollutants

    NASA Astrophysics Data System (ADS)

    Luecken, D. J.; Hutzell, W. T.; Gipson, G. L.

    The concentrations of five hazardous air pollutants were simulated using the Community Multiscale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results are shown for formaldehyde, acetaldehyde, benzene, 1,3-butadiene and acrolein. Photochemical production in the atmosphere is predicted to dominate ambient formaldehyde and acetaldehyde concentrations, and to account for a significant fraction of ambient acrolein concentrations. Spatial and temporal variations are large throughout the domain over the year. Predicted concentrations are compared with observations for formaldehyde, acetaldehyde, benzene and 1,3-butadiene. Although the modeling results indicate an overall slight tendency towards underprediction, they reproduce the episodic and seasonal behavior of pollutant concentrations at many monitors with good skill.